
references about the beauty of functional programming ?


Pietro Abate, Oct 8, 2006, 4:10:09 AM
Hi all,
I need a few references (books, scientific articles, ...) to justify the
following claims:

- Functional languages are known to be concise and programs written in
such languages are therefore easier to maintain.
- Functional languages are more resistant to problems related to memory
management and runtime errors.
- Using a purely functional programming language (or only using pure
techniques) can impact performance.
- Since modularity is the key to successful programming, the functional
programming style is vitally important to solve real world problems.
(This is actually a quote from Hughes. Do you agree?).
- Functional programs are easily verifiable using formal methods
techniques.
- Even if the methodological benefits of functional programming are
well known, the vast majority of programs are still written in
imperative languages such as C. This contradiction can be explained
on the one hand by the historical lack of performance of functional
programs and on the other, by the inherent conceptual difficulty of
writing functional programs.
- Even if functional languages are still slower than C/C++ in particular
domains, the difference in performance has been considerably reduced
in the last decade.

The majority of these claims came from a few years of reading books,
experience in the field and scientific papers... I believe what I've
written, but hard evidence is much better than what I think is true.

These are very much the kind of references that we find in
many introductions...

Moreover... are these references correct?

Prolog:
@book{prolog-book,
author = {W. F. Clocksin and C. S. Mellish},
title = {Programming in Prolog},
year = {1987},
isbn = {0-387-17539-3},
publisher = {Springer-Verlag New York, Inc.},
address = {New York, NY, USA},
}

Scheme:
@book{scheme-book,
author = "R. Kent Dybvig",
title = "The Scheme Programming Language, Second Edition",
publisher = "Prentice Hall",
year = "1996"
}

Haskell:
@techreport{haskell99a,
author = "Simon Peyton Jones and John Hughes (editors)",
title = "Haskell 98: A Non-strict, Purely Functional Language",
month = "February",
year = "1999"
}

What's SML?

thanks !
p

--
++ "All great truths begin as blasphemies." -George Bernard Shaw
++ Please avoid sending me Word or PowerPoint attachments.
See http://www.fsf.org/philosophy/no-word-attachments.html

Paul Rubin, Oct 8, 2006, 4:59:17 AM
Pietro Abate <doesn...@hotmail.com> writes:
> I need a few references (books, scientific articles, ...) to justify the
> following claims:
> - Functional languages are known to be concise and programs written in
> such languages are therefore easier to maintain.

Hudak's article about prototyping in Haskell is a start:
http://haskell.org/papers/NSWC/jfp.ps

> - Functional languages are more resistant to problems related to memory
> management and runtime errors.

Well, that's more of a general claim about strongly typed, garbage
collected languages.

> - Using a purely functional programming language (or only using pure
> techniques) can impact performance.

I think the grand hope is to be able to stay pure and still get good
performance, under some definition of purity that includes monads.

> - Since modularity is the key to successful programming, the functional
> programming style is vitally important to solve real world problems.
> (This is actually a quote from Hughes. Do you agree?).

I think Hughes' numerical examples are pretty neat but I'm not totally
convinced yet. Hughes is specifically talking about lazy evaluation.
Haskell guru Simon Peyton-Jones in "Wearing the Hair Shirt" argues
that purity is a more essential feature of Haskell than lazy evaluation is:

http://research.microsoft.com/~simonpj/papers/haskell-retrospective/index.htm

SICP has a fair amount of material on stream-based programming using
lazy evaluation in Scheme:

http://mitpress.mit.edu/sicp/

> - Functional programs are easily verifiable using formal methods
> techniques.

Well, the proofs are obviously more composable than for imperative
programs, but someone else will have to add to this, I'd like to find
out more about the subject but am pretty ignorant right now.

> - Even if the methodological benefits of functional programming are
> well known, the vast majority of programs are still written in
> imperative languages such as C. This contradiction can be explained
> on the one hand by the historical lack of performance of functional
> programs and on the other, by the inherent conceptual difficulty of
> writing functional programs.

This would seem to be true, not just about FP, but for example see
Paul Graham's rants about Lisp, e.g.

http://www.paulgraham.com/avg.html

and of course you should read SICP (above) if you haven't yet.

> - Even if functional languages are still slower than C/C++ in particular
> domains, the difference in performance has been considerably reduced
> in the last decade

Maybe even before that, if you count Scheme.

> What's SML?

Standard ML. See for example <http://mlton.org>.

Pascal Costanza, Oct 8, 2006, 5:30:45 AM
Pietro Abate wrote:
> Hi all,
> I need a few references (books, scientific articles, ...) to justify the
> following claims:
>
> - Functional languages are known to be concise and programs written in
> such languages are therefore easier to maintain.
> - Functional languages are more resistant to problems related to memory
> management and runtime errors.
> - Using a purely functional programming language (or only using pure
> techniques) can impact performance.
> - Since modularity is the key to successful programming, the functional
> programming style is vitally important to solve real world problems.
> (This is actually a quote from Hughes. Do you agree?).
> - Functional programs are easily verifiable using formal methods
> techniques.
> - Even if the methodological benefits of functional programming are
> well known, the vast majority of programs are still written in
> imperative languages such as C. This contradiction can be explained
> on the one hand by the historical lack of performance of functional
> programs and on the other, by the inherent conceptual difficulty of
> writing functional programs.
> - Even if functional languages are still slower than C/C++ in particular
> domains, the difference in performance has been considerably reduced
> in the last decade

See http://citeseer.ist.psu.edu/hudak94haskell.html and
http://www.norvig.com/java-lisp.html for some input.


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Pietro Abate, Oct 8, 2006, 5:54:23 AM
In comp.lang.functional, you wrote:
>> - Using a purely functional programming language (or only using pure
>> techniques) can impact performance.
> I think the grand hope is to be able to stay pure and still get good
> performance, under some definition of purity that includes monads.

any papers/surveys about comparisons? I can cite the shootout ... but ...

>> - Since modularity is the key to successful programming, the functional
>> programming style is vitally important to solve real world problems.
>> (This is actually a quote from Hughes. Do you agree?).
> I think Hughes' numerical examples are pretty neat but I'm not totally
> convinced yet. Hughes is specifically talking about lazy evaluation.
> Haskell guru Simon Peyton-Jones in "Wearing the Hair Shirt" argues
> that purity is a more essential feature of Haskell than lazy evaluation is:
> http://research.microsoft.com/~simonpj/papers/haskell-retrospective/index.htm
> SICP has a fair amount of material on stream-based programming using
> lazy evaluation in Scheme:
> http://mitpress.mit.edu/sicp/

I'll have a look at this book...

I think this is the paper where the quote comes from:
@article{hughes-matters,
AUTHOR = {J. Hughes},
TITLE = {{Why Functional Programming Matters}},
JOURNAL = {Computer Journal},
VOLUME = {32},
NUMBER = {2},
PAGES = {98--107},
YEAR = {1989},
url = {citeseer.ist.psu.edu/hughes84why.html}
}


>
>> - Functional programs are easily verifiable using formal methods
>> techniques.
> Well, the proofs are obviously more composable than for imperative
> programs, but someone else will have to add to this, I'd like to find
> out more about the subject but am pretty ignorant right now.

Jean-Christophe Filliatre published a paper about a neat methodology for
verifying OCaml programs by adding annotations for pre- and post-conditions
and then verifying them with a theorem prover.
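
Roughly the idea, as a minimal sketch using plain OCaml assertions (this is
only an illustration of pre- and post-conditions, not the actual annotation
syntax of his tool; the function below is a made-up example):

let isqrt n =
  (* precondition: the argument must be non-negative *)
  assert (n >= 0);
  let r = int_of_float (sqrt (float_of_int n)) in
  (* postcondition: r is the integer square root of n *)
  assert (r * r <= n && n < (r + 1) * (r + 1));
  r

A verification tool turns such conditions into proof obligations to be
discharged by a theorem prover rather than checked at run time.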

I'm sure that there are more extensive studies about this...

>> - Even if the methodological benefits of functional programming are
>> well known, the vast majority of programs are still written in
>> imperative languages such as C. This contradiction can be explained
>> on the one hand by the historical lack of performance of functional
>> programs and on the other, by the inherent conceptual difficulty of
>> writing functional programs.
> This would seem to be true, not just about FP, but for example see
> Paul Graham's rants about Lisp, e.g.
> http://www.paulgraham.com/avg.html

this was a nice read!

thanks
:)

Paul Rubin, Oct 8, 2006, 7:04:32 AM
Pietro Abate <doesn...@hotmail.com> writes:
> > I think the grand hope is to be able to stay pure and still get good
> > performance, under some definition of purity that includes monads.
>
> any papers/surveys about comparisons? I can cite the shootout ... but ...

Well, the ML languages are definitely trying to beat C, and their
impurity seems to me to be sort of an accident (they were designed
before the monadic approach was discovered). Peyton-Jones' talk
"Wearing the hair shirt" mentions that the next version of ML will be
pure, which I guess means they'll use monads.

> > This would seem to be true, not just about FP, but for example see
> > Paul Graham's rants about Lisp, e.g.
> > http://www.paulgraham.com/avg.html
>
> this was a nice read!

You might also like the musical accompaniment:

http://www.songworm.com/lyrics/songworm-parody/EternalFlame.html
http://www.prometheus-music.com/audio/eternalflame.mp3

;-)

Jon Harrop, Oct 8, 2006, 10:39:50 AM

The following points are addressed by my book on OCaml (see my .sig):

Pietro Abate wrote:
> - Functional languages are known to be concise and programs written in
> such languages are therefore easier to maintain.
> - Functional languages are more resistant to problems related to memory
> management and runtime errors.

> - Since modularity is the key to successful programming, the functional
> programming style is vitally important to solve real world problems.
> (This is actually a quote from Hughes. Do you agree?).

OCaml certainly scales better than C++, both to bigger projects and to
larger groups of programmers.

> - Even if the methodological benefits of functional programming are
> well known, the vast majority of programs are still written in
> imperative languages such as C. This contradiction can be explained
> on the one hand by the historical lack of performance of functional
> programs and on the other, by the inherent conceptual difficulty of
> writing functional programs.

This is just momentum, IMHO. Teachers teach what they were taught and don't
learn. Consequently, they are still teaching Fortran on the undergraduate
science courses at Cambridge university, for example. I think this is a
really awful situation and I'm trying to address it...

> - Even if functional languages are still slower than C/C++ in particular
> domains, the difference in performance has been considerably reduced
> in the last decade

For non-trivial tasks, OCaml is usually much faster than C++.

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists

ig...@yahoo.com, Oct 8, 2006, 11:06:23 AM

Jon Harrop wrote:
> The following points are addressed by my book on OCaml (see my .sig):
>
> Pietro Abate wrote:
> > - Functional languages are known to be concise and programs written in
> > such languages are therefore easier to maintain.
> > - Functional languages are more resistant to problems related to memory
> > management and runtime errors.
> > - Since modularity is the key to successful programming, the functional
> > programming style is vitally important to solve real world problems.
> > (This is actually a quote from Hughes. Do you agree?).
>
> OCaml certainly scales better than C++, both to bigger projects and to
> larger groups of programmers.

What experience/data do you base that assertion upon?

>
> > - Even if the methodological benefits of functional programming are
> > well known, the vast majority of programs are still written in
> > imperative languages such as C. This contradiction can be explained
> > on the one hand by the historical lack of performance of functional
> > programs and on the other, by the inherent conceptual difficulty of
> > writing functional programs.
>
> This is just momentum, IMHO. Teachers teach what they were taught and don't
> learn. Consequently, they are still teaching Fortran on the undergraduate
> science courses at Cambridge university, for example. I think this is a
> really awful situation and I'm trying to address it...
>
> > - Even if functional languages are still slower than C/C++ in particular
> > domains, the difference in performance has been considerably reduced
> > in the last decade
>
> For non-trivial tasks, OCaml is usually much faster than C++.

I read C/C++, not C++.

For non-trivial tasks (whatever that means) is OCaml usually much
faster than C? What experience/data do you base that assertion upon?

Marcin 'Qrczak' Kowalczyk, Oct 8, 2006, 11:37:51 AM
Jon Harrop <j...@ffconsultancy.com> writes:

> For non-trivial tasks, OCaml is usually much faster than C++.

Usually it is not.

--
__("< Marcin Kowalczyk
\__/ qrc...@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Thant Tessman, Oct 8, 2006, 11:42:55 AM
ig...@yahoo.com wrote:
> Jon Harrop wrote:

[...]

>> For non-trivial tasks, OCaml is usually much faster than C++.
>
> I read C/C++ not C++
>
> For non-trivial tasks (whatever that means) is OCaml usually much
> faster than C? What experience/data do you base that assertion upon?

Jon Harrop's statement is misleading. He should have said something
like: Given a limited amount of development time and a sufficiently
complex task, it is easier to write an efficient and reliable program in
OCaml than in C++. However, given enough time and effort, a programmer
can always make a C/C++ program go faster. (Reliability on the other
hand, is something C/C++ will never be really good at.)

-thant

Pietro Abate, Oct 8, 2006, 10:24:50 PM
On 2006-10-08, ig...@yahoo.com <ig...@yahoo.com> wrote:
>> > - Even if functional languages are still slower than C/C++ in particular
>> > domains, the difference in performance has been considerably reduced
>> > in the last decade
>> For non-trivial tasks, OCaml is usually much faster than C++.
> I read C/C++ not C++
> For non-trivial tasks (whatever that means) is OCaml usually much
> faster than C? What experience/data do you base that assertion upon?

This is very much my problem here. Making this kind of statement (in
particular in scientific papers) can stir up a bit of discussion, in
particular from people who have never used a functional language and
have blinkers on regarding performance and software development.
Usually these are old academics who learned how to program in C
on a VAX and then jumped on the OOP bandwagon in the 80s/90s.

The two references given in this thread, Java vs Lisp and the Haskell
Navy project, are good empirical evidence that can be cited. As I said,
the shootout is not very scientific (no peer review in the academic
sense). J. Harrop's book and his ray tracer (??) is one experiment, but
it's difficult to generalize from it.

What I'm looking for is a peer-reviewed paper with some empirical evidence
to support (or disprove) my thesis... You certainly don't need to
convince me... but I need this to convince others, and I'm definitely
too young to cite myself! :)

Jon Harrop, Oct 9, 2006, 12:24:15 AM
ig...@yahoo.com wrote:
>> OCaml certainly scales better than C++, both to bigger projects and to
>> larger groups of programmers.
>
> What experience/data do you base that assertion upon?

Developing applications by myself and as part of a team.

For example, I wrote a vector graphics library in C++ that became
unmaintainable because alterations that I wanted to make were prohibitively
difficult to implement without breaking the code. In the OCaml
implementation, I was able to make changes to the OCaml codebase and fix
everything easily thanks to its static checking. I've had similar
experiences with a variety of tasks.

When working as a team, I've often had problems working concurrently on C++
projects without breaking the codebase. With OCaml, I've worked as a team
for months with virtually no such problems.

>> For non-trivial tasks, OCaml is usually much faster than C++.
>
> I read C/C++ not C++
>
> For non-trivial tasks (whatever that means) is OCaml usually much
> faster than C?

Yes. I am referring to tasks sufficiently complicated that development time
limits optimisation.

> What experience/data do you base that assertion upon?

My vector graphics library (soft real time) is 5x faster in the worst case
(which is the important case) than the C++ version.

Mathematica is written in C and dozens of authors have spent years
optimising it. However, you can implement the core of Mathematica in only a
thousand lines of ML and it will run just as fast. With a little more work,
you can get it running much faster than the real thing.

Jon Harrop, Oct 9, 2006, 12:36:06 AM
Thant Tessman wrote:
> Jon Harrop's statement is misleading. He should have said something
> like: Given a limited amount of development time and a sufficiently
> complex task, it is easier to write an efficient and reliable program in
> OCaml than in C++. However, given enough time and effort, a programmer
> can always make a C/C++ program go faster.

A good enough programmer, maybe. You don't have to look far to find
relatively simple programs that a C/C++ programmer would have a tough time
optimising.

For example, try getting C/C++ programmers to rewrite the core of
the "n"th-nearest neighbour example from my OCaml book:

let rec nth_nn =
  let memory = Hashtbl.create 1 in
  fun n (i, io) ->
    try Hashtbl.find memory (n, i)
    with Not_found -> match n with
      0 -> AtomSet.singleton (i, io)
    | 1 ->
        let nn = bonds.(i - 1) in
        if io = zero then nn else
          let aux (j, jo) s = AtomSet.add (j, add_i io jo) s in
          AtomSet.fold aux nn AtomSet.empty
    | n ->
        let pprev = nth_nn (n-2) (i, io) in
        let prev = nth_nn (n-1) (i, io) in
        let aux j t = AtomSet.union (nth_nn 1 j) t in
        let t = AtomSet.fold aux prev AtomSet.empty in
        let t = AtomSet.diff (AtomSet.diff t prev) pprev in
        Hashtbl.add memory (n, i) t;
        t

What proportion of C/C++ programmers will beat the performance of that
OCaml? Not many, I'd wager.

Now look at more complicated examples (like most real-world programs)...

> (Reliability on the other
> hand, is something C/C++ will never be really good at.)

And correctness is always more important than performance. So, although
OCaml walks all up and down C++'s ass in terms of performance, people
should do functional programming because it is safer. ;-)

Jon Harrop, Oct 9, 2006, 12:42:17 AM
Pietro Abate wrote:
> J. Harrop book and his ray tracer (??) is one experiment, but
> it's difficult to generalize from it.

The final chapter of my book gives 5 different examples: MEM,
minimization, "n"th-nearest neighbours, eigen-problems and DWT.

> What I'm looking for is a peer-reviewed paper of some empirical evidence
> to support (or disprove) my thesis... You certainly don't need to
> convince me ... but I need this to convince others, and I'm definitely
> too young to cite myself ! :)

The ray tracer gives a side-by-side comparison of some of the languages.
Some of the examples from my book are also written in C++.

For example, the nth_nn function that I posted the OCaml for can be written
in C++ as:

const AtomSet nth_nn(int n, int i, const vector<int> io) {
  const Map::key_type k = make_pair(n, make_pair(i, io));
  Map::const_iterator m = memory.find(k);
  if (m != memory.end()) return m->second;
  AtomSet s;
  if (n == 0) {
    s.insert(make_pair(i, io));
    return s;
  } else if (n == 1) {
    const AtomSet &nn = bonds[i-1];
    for (AtomSet::const_iterator it = nn.begin(); it != nn.end(); it++) {
      int j = it->first;
      vector<int> jo = it->second;
      for (i = 0; i < d; i++)
        jo[i] += io[i];
      s.insert(make_pair(j, jo));
    }
    return s;
  } else {
    const AtomSet
      pprev = nth_nn(n-2, i, io),
      prev = nth_nn(n-1, i, io);
    for (AtomSet::const_iterator it = prev.begin(); it != prev.end(); it++) {
      const AtomSet t = nth_nn(1, it->first, it->second);
      s.insert(t.begin(), t.end());
    }
    for (AtomSet::const_iterator it = prev.begin(); it != prev.end(); it++) {
      AtomSet::iterator it2 = s.find(*it);
      if (it2 != s.end()) s.erase(it2);
    }
    for (AtomSet::const_iterator it = pprev.begin(); it != pprev.end(); it++) {
      AtomSet::iterator it2 = s.find(*it);
      if (it2 != s.end()) s.erase(it2);
    }
  }
  memory[k] = s;
  return memory.find(k)->second;
}

I've discussed elsewhere the reason why the FPL approach is much more
succinct and faster in this case.

Didier Verna, Oct 9, 2006, 3:23:50 AM
Pietro Abate <doesn...@hotmail.com> wrote:

> On 2006-10-08, ig...@yahoo.com <ig...@yahoo.com> wrote:
>>> > - Even if functional languages are still slower than C/C++ in particular
>>> > domains, the difference in performance has been considerably reduced
>>> > in the last decade

> What I'm looking for is a peer-reviewed paper of some empirical evidence


> to support (or disprove) my thesis... You certainly don't need to
> convince me ... but I need this to convince others, and I'm definitely
> too young to cite myself ! :)

You may be interested in my recent paper at ECOOP'06 Lisp workshop:

"Beating C in Scientific Computing Applications -- On the Behavior and
Performance of Lisp, Part I"

It's for a specific set of applications / operations, but I believe you'll
find it at least satisfactory on the "academic" plan ...

You can find it near the top of this page:

http://www.lrde.epita.fr/~didier/comp/research/publi.php

--
Check out my new jazz CD on http://www.didierverna.com/ !

Didier Verna EPITA / LRDE, 14-16 rue Voltaire Tel.+33 (1) 44 08 01 85
94276 Le Kremlin-Bicêtre, France Fax.+33 (1) 53 14 59 22

Torben Ægidius Mogensen, Oct 9, 2006, 7:01:36 AM
Paul Rubin <http://phr...@NOSPAM.invalid> writes:


> Well, the ML languages are definitely trying to beat C, and their
> impurity seems to me to be sort of an accident (they were designed
> before the monadic approach was discovered). Peyton-Jones' talk
> "Wearing the hair shirt" mentions that the next version of ML will be
> pure, which I guess means they'll use monads.

I think not. I don't recall Simon saying that the next ML will be
pure, but even so this doesn't imply monads. Anyway, more relevant to
the future of ML may be Greg Morrisett's talk "What will the next ML
look like?" at last year's ML workshop in Tallinn. He expected
type-and-effect systems to be used to manage effects rather than
encapsulating them in monads. He did, however, expect type classes to
make it into future ML-like languages. See also his ideas for Y0 on
his web-page (http://www.eecs.harvard.edu/~greg/).

Torben

Förster vom Silberwald, Oct 9, 2006, 9:05:04 AM

Jon Harrop wrote:

> OCaml certainly scales better than C++, both to bigger projects and to
> larger groups of programmers.

What will you say to this (link to the thread follows):

==
I will give you an example. An engineer had a data cloud with about
1000000 points. These points were represented as large integers, and our
problem was to read them, transform them into double floats, and Marshal
them. You could say that this is the ideal problem for Perl and Python.
However, neither language has a powerful compiler, and they cannot deal
with structures as large as the data cloud of the example. I thought about
OCaml, but the size of the problem put the compiler on its knees. Then I
tried Bigloo. I added an open parenthesis at the beginning of the file, a
close parenthesis at the end, opened the file, and read everything with a
single command:

(let ((pin (open-input-port "myfile.txt")))
  (process (read pin)))

A few lines were enough to add the double format, Marshal everything,
and output the solution.
==

http://groups.google.com/group/comp.lang.scheme/browse_frm/thread/c43f4a44349d9e32/e374666cfb51f45e?tvc=1&q=schneewittchen#e374666cfb51f45e

It is a pity that the poster in the thread hasn't gone deeper into
explaining what went wrong with OCaml on his particular problem.

Regards,
Schneewittchen

Marcin 'Qrczak' Kowalczyk, Oct 9, 2006, 9:27:19 AM
"Förster vom Silberwald" <chain...@hotmail.com> writes:

> It is a pity that the poster in the thread hasn't gone deeper in
> explaining what went wrong with OCaml at his particular problem.

I don't know. Anyway, OCaml on a 32-bit machine has 31-bit integers
with no error checking during arithmetic (plus a separate big integer
type, with separate operators and explicit conversions).

And arrays have a maximum size of 2^22 elements, or 4_194_304
(half that for float arrays). Again there is a separate library
with support for big arrays, with separate operations.

Jon Harrop, Oct 9, 2006, 10:05:32 AM
Didier Verna wrote:
> "Beating C in Scientific Computing Applications -- On the Behavior and
> Performance of Lisp, Part I"
>
> It's for a specific set of applications / operations, but I believe you'll
> find it at least satisfactory on the "academic" plan ...
>
> You can find it near the top of this page:
>
> http://www.lrde.epita.fr/~didier/comp/research/publi.php

Is the code available? I'd like to benchmark it myself...

Jon Harrop, Oct 9, 2006, 10:20:07 AM
Förster vom Silberwald wrote:
> I will give
> you an example. An engineer had a data cloud with about 1000000 points.
> These points were represented as large integers, and our problem was to
> read them, transform them into double float, and Marshal them.

Firstly, I was talking about "non-trivial" problems and this doesn't fall
into that category.

> I thought about
> OCaml, but the size of the problem put the compiler on its knees.

I'd take any OCaml advice in comp.lang.scheme with a pinch of salt. Here's
my OCaml:

let ch = open_in "foo";;
let data = ref [];;
(try while true do data := input_float ch :: !data done with End_of_file -> ());;
close_in ch;;
Marshal.to_channel (open_out "bar") !data [];;

Note that there are no big ints or arrays.

Torben Ægidius Mogensen, Oct 9, 2006, 10:42:16 AM
"Förster vom Silberwald" <chain...@hotmail.com> writes:

> I will give you an example. An engineer had a data cloud with about
> 1000000 points. These points were represented as large integers, and our
> problem was to read them, transform them into double floats, and Marshal
> them. You could say that this is the ideal problem for Perl and Python.
> However, neither language has a powerful compiler, and they cannot deal
> with structures as large as the data cloud of the example. I thought about
> OCaml, but the size of the problem put the compiler on its knees. Then I
> tried Bigloo. I added an open parenthesis at the beginning of the file, a
> close parenthesis at the end, opened the file, and read everything with a
> single command:
>
> (let ((pin (open-input-port "myfile.txt")))
>   (process (read pin)))
>
> A few lines were enough to add the double format, Marshal everything,
> and output the solution.

I don't see why you would need to read the entire set of points into
memory before processing them (unless you also needed to reorder
them). So a C-style read-process-print loop would seem logical, as
would using a lazy language like Haskell, where the program would be
something like

module Main where

main = interact (unwords . map process . words)

process = ...
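
An OCaml version of the same streaming idea might look roughly like this
(only a sketch; it assumes whitespace-separated integers on stdin and just
prints the converted values, so the exact I/O format is an assumption):

(* Read integers one at a time, convert each to a float and print it,
   without ever holding the whole data cloud in memory. *)
let () =
  try
    while true do
      Scanf.scanf " %d" (fun i -> Printf.printf "%f\n" (float_of_int i))
    done
  with End_of_file | Scanf.Scan_failure _ -> ()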


Torben

ig...@yahoo.com, Oct 9, 2006, 12:50:17 PM
Jon Harrop wrote:
> ig...@yahoo.com wrote:
> >> OCaml certainly scales better than C++, both to bigger projects and to
> >> larger groups of programmers.
> >
> > What experience/data do you base that assertion upon?
>
> Developing applications by myself and as part of a team.
>
> For example, I wrote a vector graphics library in C++ that became
> unmaintainable because alterations that I wanted to make were prohibitively
> difficult to implement without breaking the code. In the OCaml
> implementation, I was able to make changes to the OCaml codebase and fix
> everything easily thanks to its static checking. I've had similar
> experiences with a variety of tasks.

Perhaps you did a better implementation the second time, because it was
the second time?

>
> When working as a team, I've often had problems working concurrently on C++
> projects without breaking the codebase. With OCaml, I've worked as a team
> for months with virtually no such problems.

Ummm sometimes a seagull craps on my car but I don't think it's got
anything to do with the colour of my shoes. (Actually don't bother -
I'm really not interested in the problems you had working with C++)

>
> >> For non-trivial tasks, OCaml is usually much faster than C++.
> >
> > I read C/C++ not C++
> >
> > For non-trivial tasks (whatever that means) is OCaml usually much
> > faster than C?
>
> Yes. I am referring to tasks sufficiently complicated that development time
> limits optimisation.
>
> > What experience/data do you base that assertion upon?
>
> My vector graphics library (soft real time) is 5x faster for worst case
> (which is the important case) than the C++.
>
> Mathematica is written in C and dozens of authors have spent years
> optimising it. However, you can implement the core of Mathematica in only a
> thousand lines of ML and it will run just as fast. With a little more work,
> you can get it running much faster than the real thing.

Does that mean you /have/ re-implemented the core of Mathematica and
/it does/ run just as fast? (And do you have just /the core of
Mathematica/ in C to compare against?)

Jon Harrop, Oct 9, 2006, 8:53:24 PM
ig...@yahoo.com wrote:
> Jon Harrop wrote:
>> For example, I wrote a vector graphics library in C++ that became
>> unmaintainable because alterations that I wanted to make were
>> prohibitively difficult to implement without breaking the code. In the
>> OCaml implementation, I was able to make changes to the OCaml codebase
>> and fix everything easily thanks to its static checking. I've had similar
>> experiences with a variety of tasks.
>
> Perhaps you did a better implementation the second time, because it was
> the second time?

Absolutely. I wrote several better implementations in OCaml in less time
than it would have taken to write one good one in C++. That is why
developing in OCaml results in faster code. In this case, it was not clear
which of several possible approaches would be most efficient.

>> Mathematica is written in C and dozens of authors have spent years
>> optimising it. However, you can implement the core of Mathematica in only
>> a thousand lines of ML and it will run just as fast. With a little more
>> work, you can get it running much faster than the real thing.
>
> Does that mean you /have/ re-implemented the core of Mathematica and
> /it does/ run just as fast? (And do you have just /the core of
> Mathematica/ in C to compare against?)

Yes and yes.

Ulf Wiger, Oct 10, 2006, 4:55:53 AM
>>>>> "igouy" == igouy <ig...@yahoo.com> writes:

igouy> Jon Harrop wrote:
>> The following points are addressed by my book on OCaml (see my
>> .sig):
>>
>> Pietro Abate wrote:
>> > - Functional languages are known to be concise and programs
>> > written in such languages are therefore easier to maintain.

We have made a similar observation about Erlang.

>> > - Functional languages are more resistant to problems related
>> > to memory management and runtime errors.

>> > - Since modularity is the key to successful programming, the
>> > functional programming style is vitally important to solve
>> > real world problems. (This is actually a quote from
>> > Hughes. Do you agree?).

I agree.

>> OCaml certainly scales better than C++, both to bigger projects
>> and to larger groups of programmers.

I have no experience using OCaml in either small or large projects
(except for installing it in order to use the Felix language.)
Replacing OCaml with Erlang, I wholeheartedly agree.

Two concerns I would have using OCaml in a large project would be
(1) how to manage the types, and (2) the overall maturity of the
environment. The larger the project, the greater the risk that
you get snagged by some aspects of the development environment or
libraries that haven't yet been fully developed.

('Large' in my world is a million lines of code or more, and
a project of 100 people or more.)

igouy> What experience/data do you base that assertion upon?


John's statement does correspond well with our experience from
using Erlang in commercial development for the last 10+ years.
Many of the Erlang programmers I work with are ex C++ programmers,
and we've had several opportunities to conduct informal comparisons
with C++ projects. I've had the opportunity to grow a fairly
wide network of contacts, and have participated in quite a
number of project- and architecture reviews over the years,
albeit all in the same domain (commercial telecoms SW).
That's the basis of my assertions.

In my experience, while Erlang doesn't claim to match C/C++ in
speed (at least not in most micro benchmarks), we have long
since ceased to be surprised when Erlang-based applications show
superior performance compared to similar C++ applications.
It is not consistently so, and we do develop quite a lot of code
in C for speed. We see no point in using C++, and in fact have
some strikingly bad experiences using C++ in large projects.

I have had occasion to try to urge some of our programmers to
dig into some C++ application for the good of the project. Most
of them refuse - a peculiar trait perhaps of Swedish workplaces
is that they can readily do so. I can still recall some of the
heated discussions 10 years ago when we decided to go with
Erlang. The very same people - then hardcore C++ programmers -
were quite upset, but eventually converted. Now, some of them
will sooner resign than take up C++ again.

Of course, it should be noted that our applications contain
lots of concurrency, fault tolerance and distribution, and
Erlang was designed expressly to support this well. It's
still a topic of discussion whether it's the excellent
concurrency support in Erlang or the functional aspects
that pay the greatest dividends.

(You will undoubtedly be able to find excellent programmers
at Ericsson who do not agree with me at all. Such is the
nature of these comparisons.)

BR,
Ulf Wiger
--
Ulf Wiger, Senior Specialist,
/ / / Architecture & Design of Carrier-Class Software
/ / / Team Leader, Software Characteristics
/ / / Ericsson AB, IMS Gateways

George Neuner, Oct 10, 2006, 1:06:04 PM
On Mon, 09 Oct 2006 05:42:17 +0100, Jon Harrop <j...@ffconsultancy.com>
wrote:

>For example, try getting C/C++ programmers to rewrite the core of
>the "n"th-nearest neighbour example from my OCaml book:
>
>let rec nth_nn =
> let memory = Hashtbl.create 1 in
> fun n (i, io) ->
> try Hashtbl.find memory (n, i)
> with Not_found -> match n with
> 0 -> AtomSet.singleton (i, io)
> | 1 ->
> let nn = bonds.(i - 1) in
> if io = zero then nn else
> let aux (j, jo) s = AtomSet.add (j, add_i io jo) s in
> AtomSet.fold aux nn AtomSet.empty
> | n ->
> let pprev = nth_nn (n-2) (i, io) in
> let prev = nth_nn (n-1) (i, io) in
> let aux j t = AtomSet.union (nth_nn 1 j) t in
> let t = AtomSet.fold aux prev AtomSet.empty in
> let t = AtomSet.diff (AtomSet.diff t prev) pprev in
> Hashtbl.add memory (n, i) t;
> t

>the nth_nn function that I posted the OCaml for can be written


These two code snippets are not equivalent.

Leaving aside that the OCaml implementations of both Hashtbl and
AtomSet are unspecified and the implication that OCaml's hash tables
are equivalent to C++'s pair associative map (which I doubt), the 'n'
clause of your OCaml code simply returns the object whereas your C++
code performs an additional, unnecessary lookup. It may have been a
simple mistake, but it is costly mistake performance wise.

Then too, you might get much better performance using std::hash_map.
The STL is an *interface* library - very few guarantees are made about
the algorithmic complexity of the underlying implementation. Std::map
makes no guarantees other than the collection elements are sorted and
unique ... it is usually implemented as some form of balanced tree but
a linked list implementation is equally permissible. I would bet that
OCaml is using an ~O(1) array based implementation more similar to
hash_map.

You can, quite reasonably, say "in OCaml, the compiler chooses the
best map representation for me". That's great but it's also
irrelevant ... C++ doesn't choose for you and a programmer has a
responsibility to know his tools.

I'm not defending C++ or the STL ... I believe (as you do) that both
are too complex and present too many problems for the average
programmer. But I don't think it is fair to compare non-equivalent
code and use the results to form or justify opinions about performance
or utility.

George
--
for email reply remove "/" from address

George Neuner, Oct 10, 2006, 2:07:43 PM
On Tue, 10 Oct 2006 13:06:04 -0400, George Neuner
<gneuner2/@comcast.net> wrote:

>
>These two code snippets are not equivalent.
>

Forgot to mention also that OCaml simply abandons the hash table (to
GC) at the end of the function whereas C++ takes additional time to
deconstruct it.

Proponents of GC'd languages rarely include GC overhead in their
performance measurements. The immediate destructor calls in C++ can,
in isolation, make a naively coded C++ function seem slower than its
GC'd equivalent. This observation may or may not be true in the
context of the whole program. Furthermore, it is always possible to
code a C++ program to be strictly equivalent to the GC'd program, and
when this is done, the C++ program is usually faster.

Isaac Gouy, Oct 10, 2006, 2:07:27 PM

Jon Harrop wrote:
> ig...@yahoo.com wrote:
> > Jon Harrop wrote:
> >> For example, I wrote a vector graphics library in C++ that became
> >> unmaintainable because alterations that I wanted to make were
> >> prohibitively difficult to implement without breaking the code. In the
> >> OCaml implementation, I was able to make changes to the OCaml codebase
> >> and fix everything easily thanks to its static checking. I've had similar
> >> experiences with a variety of tasks.
> >
> > Perhaps you did a better implementation the second time, because it was
> > the second time?
>
> Absolutely. I wrote several better implementations in OCaml in less time
> than it would have taken to write one good one in C++. That is why
> developing in OCaml results in faster code. In this case, it was not clear
> which of several possible approaches would be most efficient.

Let's try that again - did you first write the C++ implementations and
then write the OCaml implementations? Perhaps you were smart enough not
to make the same mistakes with OCaml that you previously made with C++.

>
> >> Mathematica is written in C and dozens of authors have spent years
> >> optimising it. However, you can implement the core of Mathematica in only
> >> a thousand lines of ML and it will run just as fast. With a little more
> >> work, you can get it running much faster than the real thing.
> >
> > Does that mean you /have/ re-implemented the core of Mathematica and
> > /it does/ run just as fast? (And do you have just /the core of
> > Mathematica/ in C to compare against?)
>
> Yes and yes.

Does that mean you /have/ it running much faster than the real thing?

Isaac Gouy, Oct 10, 2006, 2:15:00 PM

Ulf Wiger wrote:
> >>>>> "igouy" == igouy <ig...@yahoo.com> writes:
>
> igouy> Jon Harrop wrote:
> >> The following points are addressed by my book on OCaml (see my
> >> .sig):
> >>
> >> Pietro Abate wrote:
> >> > - Functional languages are known to be concise and programs
> >> > written in such languages are therefore easier to maintain.
>
> We have made a similar observation about Erlang.
>
> >> > - Functional languages are more resistant to problems related
> >> > to memory management and runtime errors.
>
>
>
> >> > - Since modularity is the key to successful programming, the
> >> > functional programming style is vitally important to solve
> >> > real world problems. (This is actually a quote from
> >> > Hughes. Do you agree?).
>
> I agree.
>
> >> OCaml certainly scales better than C++, both to bigger projects
> >> and to larger groups of programmers.
>
> I have no experience using OCaml in either small or large projects
> (except for installing it in order to use the Felix language.)
> Replacing OCaml with Erlang, I wholeheartedly agree.

iirc I have heard people describe OCaml as their favourite /imperative/
language, so I'm not clear how well it demonstrates the beauty of
functional programming (I guess we'd need to look at the code).

Some might suggest it's comparing a domain specific high-level language
with a general purpose low-level language.

Paul Rubin, Oct 10, 2006, 2:29:47 PM
George Neuner <gneuner2/@comcast.net> writes:
> Furthermore, it is always possible to
> code a C++ program to be strictly equivalent to the GC'd program, and
> when this is done, the C++ program is usually faster.

In normal C++ style, this is not always possible, since in C++ you
usually have to separately free each object that needs to be reclaimed.
A copying GC never touches the garbage. Also, a typical C++ program
mallocs an object and then leaves it at the same memory location
through its lifetime, causing memory fragmentation and consequently
lousy paging and/or cache performance as other interspersed objects
get allocated and freed as the program runs. A copying GC compacts
the live objects in memory. For these reasons, GC systems can and
sometimes do outperform manual C-style memory management.

M E Leypold, Oct 10, 2006, 5:29:49 PM

Hi George,

George Neuner <gneuner2/@comcast.net> writes:

> On Tue, 10 Oct 2006 13:06:04 -0400, George Neuner
> <gneuner2/@comcast.net> wrote:
>
> >
> >These two code snippets are not equivalent.
> >
>
> Forgot to mention also that OCaml simply abandons the hash table (to
> GC) at the end of the function whereas C++ takes additional time to
> deconstruct it.
>
> Proponents of GC'd languages rarely include GC overhead in their
> performance measurements. The immediate destructor calls in C++ can,

One might argue that the absence of a garbage collector in an OO
language is a design defect in itself. Consider that no part of code
written with only a local view of things in mind (like, let's say, the
checkout queue of a web shopping system) can decide whether an object
is still needed.

The paradigm of the local view says that a part of the system can
only decide whether it (the part) needs a reference to the object any
more. Real deallocation (end of life) for an object is really a global
property (i.e. a more global view might reveal that shopping basket
objects are stored after checkout for later processing by the
daily-statistics module, which does the processing some time in the
night). In OO the local view is necessary for modularity, and decisions
on deallocation rooted in specific parts of the system hinder system
composition.

> in isolation, make a naively coded C++ function seem slower than it's
> GC'd equivalent. This observation may or may not be true in the

But I agree, to really do a comparison one has to run the very same
benchmark repeatedly within the same process. On the other hand, a
"single-shot" run is also a realistic benchmarking situation (i.e. for
tools invoked from other programs). Both kinds of benchmarks should be
done for a complete comparison, and they give different information
(e.g.: with OCaml you pay on startup (non-static definitions), with C++
on shutdown (immediate deallocation), and when running repeatedly in
the same process it might just depend on the algorithm in question
(I'd still bet on OCaml for some reasons, if the system is constructed
in a modular fashion)).

> context of the whole program. Furthermore, it is always possible to

I completely agree here.

> code a C++ program to be strictly equivalent to the GC'd program, and

I doubt that: as I said, in GC'ed environments the algorithms can be
constructed in a way that makes the decision on deallocation a non-local
property. That might be faster. Or not. The point is: without specific
examples it's hard to say, and the specific functional world view is
different enough from the imperative world view that it is usually
not possible to compare the implementations directly.

So what remains is anecdotal evidence that person X was faster solving
problem P in language L1 with algorithm A1 than in language L2 with
algorithm A2, or that the implementation in question was faster.

My impression is that there are no systematic studies beyond that
which compare functional and traditional languages / programming,
perhaps because the question is so ill-defined (what do we really want
to compare) or empirical studies would be so expensive in that area.

> when this is done, the C++ program is usually faster.

Regards -- Markus

Jon Harrop, Oct 10, 2006, 6:24:44 PM
Isaac Gouy wrote:

> Jon Harrop wrote:
>> Absolutely. I wrote several better implementations in OCaml in less time
>> than it would have taken to write one good one in C++. That is why
>> developing in OCaml results in faster code. In this case, it was not
>> clear which of several possible approaches would be most efficient.
>
> Let's try that again - did you first write the C++ implementations and
> then write the OCaml implementations?

Yes. Then they were equivalent. Then I thought of an improvement. I made the
improvement to the OCaml easily but was unable to make the improvement to
the C++ because the conceptual change required a large number of different
changes to the source code that could not be done incrementally and could
not be done simultaneously because C++ is too fragile.

> Perhaps you were smart enough not
> to make the same mistakes with OCaml that you previously made with C++.

They weren't mistakes. C++ code simply isn't as maintainable and cannot be
developed as quickly.

>> Yes and yes.
>
> Does that mean you /have/ it running much faster than the real thing?

Yes, exactly.

Jon Harrop, Oct 10, 2006, 6:25:36 PM
George Neuner wrote:
> These two code snippets are not equivalent.

They are equivalent in the sense that they compute the same thing.

> Leaving aside that the OCaml implementations of both Hashtbl and
> AtomSet are unspecified and the implication that OCaml's hash tables
> are equivalent to C++'s pair associative map (which I doubt), the 'n'
> clause of your OCaml code simply returns the object whereas your C++
> code performs an additional, unnecessary lookup. It may have been a
> simple mistake, but it is costly mistake performance wise.

Performance is limited by set union. The performance of the memoization and
extra lookup in the C++ are irrelevant so they were written for clarity.

> Then too, you might get much better performance using std::hash_map.
> The STL is an *interface* library - very few guarantees are made about
> the algorithmic complexity of the underlying implementation.

The STL goes to great lengths to specify the asymptotic algorithmic
complexity of most of its operations.

> Std::map
> makes no guarantees other than the collection elements are sorted and
> unique ... it is usually implemented as some form of balanced tree but
> a linked list implementation is equally permissible.

STL implementations must work in the specified complexities and a linked
list would not satisfy this.

> I would bet that
> OCaml is using an ~O(1) array based implementation more similar to
> hash_map.

Yes, exactly.

> You can, quite reasonably, say "in OCaml, the compiler chooses the
> best map representation for me". That's great but it's also
> irrelevant ... C++ doesn't choose for you and a programmer has a
> responsibility to know his tools.

OCaml's performance and brevity is due to its use of immutable balanced
binary trees to represent sets (AtomSet) which are provided with the
language. C++ and the STL do not provide this and, consequently, C++
solutions to this task are substantially longer and more complicated.
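
(For illustration, such an AtomSet is just an instance of the standard Set
functor, along these lines; the element type below is an assumption, not
necessarily the definition used in the book:

module Atom = struct
  type t = int * int array     (* atom index and periodic offset *)
  let compare = compare        (* generic structural comparison *)
end

module AtomSet = Set.Make (Atom)

Union, difference and fold over such immutable sets then come with the
standard library, which is what keeps the OCaml version short.)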

> I'm not defending C++ or the STL ... I believe (as you do) that both
> are too complex and present too many problems for the average
> programmer. But I don't think it is fair to compare non-equivalent
> code and use the results to form or justify opinions about performance
> or utility.

Comparing programs that do the same thing is the only meaningful and useful
form of equivalence.

Jon Harrop, Oct 10, 2006, 6:35:03 PM
George Neuner wrote:
> Forgot to mention also that OCaml simply abandons the hash table (to
> GC) at the end of the function whereas C++ takes additional time to
> deconstruct it.
>
> Proponents of GC'd languages rarely include GC overhead in their
> performance measurements. The immediate destructor calls in C++ can,
> in isolation, make a naively coded C++ function seem slower than it's
> GC'd equivalent. This observation may or may not be true in the
> context of the whole program.

I'm only interested in how long it takes to get a correct answer, including
the time taken to write the program.

> Furthermore, it is always possible to
> code a C++ program to be strictly equivalent to the GC'd program,

Please define "strictly equivalent".

> and when this is done, the C++ program is usually faster.

Guessing whether or not the C++ "equivalent" of an OCaml program will be
faster is not useful.

I claim that a C++ implementation is longer and either more obfuscated or
much slower than the OCaml.

Paul Rubin, Oct 10, 2006, 6:37:50 PM
Jon Harrop <j...@ffconsultancy.com> writes:
> I'm only interested in how long it takes to get a correct answer, including
> the time taken to write the program.

It's silly to write a C++ program that you only plan to run once.

M E Leypold, Oct 10, 2006, 6:56:26 PM

Paul Rubin <http://phr...@NOSPAM.invalid> writes:

Exactly. Whereas with a strongly typed and _flexible_ language like
OCaml you can do what elsewhere is termed "scripting" -- write small
throwaway programs on top of powerful libraries for just one specific
purpose. What better way for a programmer to interact with a computer?

Rather related: I always found it easier and faster to write a text
file and a short awk one-liner for singular (just-once) problems
than using a spreadsheet (which is also programming, but, well...
contorted).

I hope to achieve the same situation with something like OCaml for
other purposes as well (like "which users on that machine have a MySQL
account, belong to the Unix group foo, and have not logged in
for more than 2 weeks").

Regards -- Markus

Jon Harrop, Oct 10, 2006, 7:13:35 PM
Paul Rubin wrote:
> ... For these reasons, GC systems can and

> sometimes do outperform manual C-style memory management.

Yes. This is particularly true for real-time applications where C++-style
amortised deallocation can have a terrible effect on worst-case
performance.

The worst-case performance of my vector graphics library is 5x faster in
OCaml than in C++.

--

Isaac Gouy, Oct 10, 2006, 8:30:50 PM

Jon Harrop wrote:
> Isaac Gouy wrote:
> > Jon Harrop wrote:
> >> Absolutely. I wrote several better implementations in OCaml in less time
> >> than it would have taken to write one good one in C++. That is why
> >> developing in OCaml results in faster code. In this case, it was not
> >> clear which of several possible approaches would be most efficient.
> >
> > Let's try that again - did you first write the C++ implementations and
> > then write the OCaml implementations?
>
> Yes. Then they were equivalent. Then I thought of an improvement. I made the
> improvement to the OCaml easily but was unable to make the improvement to
> the C++ because the conceptual change required a large number of different
> changes to the source code that could not be done incrementally and could
> not be done simultaneously because C++ is too fragile.

Was it that C++ is too fragile, or that the C++ programs you wrote were
too fragile?


>
> > Perhaps you were smart enough not
> > to make the same mistakes with OCaml that you previously made with C++.
>
> They weren't mistakes. C++ code simply isn't as maintainable and cannot be
> developed as quickly.
>
> >> Yes and yes.
> >
> > Does that mean you /have/ it running much faster than the real thing?
>
> Yes, exactly.

I'm surprised that you haven't listed comparison timings.

Ulf Wiger, Oct 11, 2006, 4:12:29 AM
>>>>> "I.G." == Isaac Gouy <ig...@yahoo.com> writes:

I.G.> Ulf Wiger wrote:

>> Of course, it should be noted that our applications contain lots
>> of concurrency, fault tolerance and distribution, and Erlang was
>> designed expressly to support this well. It's still a topic of
>> discussion whether it's the excellent concurrency support in
>> Erlang or the functional aspects that pay the greatest
>> dividends.

I.G.> Some might suggest it's comparing a domain specific high-level
I.G.> language with a general purpose low-level language.

Indeed, and personally, I'm a great fan of domain-specific
languages. (:

Different application domains have different major challenges.
In our domain, concurrency and real-time (both hard and soft)
are the main killers. That, and the staggering amount of
functionality that needs to be implemented. But some things that
are important in many other applications (COM or SQL support, fancy
graphics, etc.) are quite _unimportant_. A good programming language
for telecoms would have to be general purpose, but some components
that are vital in other domains don't really need to be there in
telecoms. This is more of a component issue than a language issue.

I'd be hesitant to mix hard and soft real-time in the same language,
though. The differences are not well enough understood yet, I think.

So I would argue that Erlang is pretty general purpose,
and that CSP-like concurrency can be used as a very powerful
modeling technique in many applications.

(With a better approach to concurrency, perhaps MS Outlook wouldn't
hog the entire machine for 5 minutes while synching with the
Exchange server, and Opera 9 perhaps wouldn't crash whenever you
click too fast...)

But this is also part of the reason why we don't really see
any point to using C++. When we need a general purpose low-level
language, C fits the bill. C++ would presumably give better high-
level modeling support, but in this respect, we feel that Erlang
gives much better support. C++ also offers no help to speak of
in the area of concurrency, which in my domain is disastrous
(I've seen too many excellent programmers get it wrong, and
very few get it right - Erlang, for the most part, got it right,
and Java, for the most part, got it wrong.)

And, with the ever stronger trend towards multi-core architectures
and web-oriented programming, it's increasingly looking to be
a very bad approach. Microsoft is obviously not making the same
mistake with C#.

It's also our experience that C++ projects become much more
dependent on a few excellent programmers than Erlang projects.
Within my problem domain, it can be very costly if average-to-good
programmers can't make significant contributions to the product
without having performance plummet or introducing memory leaks
or wild pointers. This also tends to delay projects, as the
excellent programmers become a precious resource.

(One might argue that we're hiring the wrong type of programmers
then, but consider this: our products have a very long life
cycle, and about 80% of the programmer time is actually spent
in maintenance, rather than in new exciting development. While
this certainly doesn't call for _bad_ programmers, it means that
we will tend to favour longevity and predictable performance over
excellence. Overall, 'predictable' wins over 'excellent' in large
projects, where the No 1 challenge isn't necessarily to come
first, but to finish at all.)

Again, these are the experiences from my projects, within the
telecoms domain. Within this problem area, I believe myself to
have a very solid basis for my claims. Your own mileage will
of course vary.

BR,
Ulf W

Jon Harrop

unread,
Oct 11, 2006, 5:53:06 AM10/11/06
to
Isaac Gouy wrote:
> Jon Harrop wrote:
>> Yes. Then they were equivalent. Then I thought of an improvement. I made
>> the improvement to the OCaml easily but was unable to make the
>> improvement to the C++ because the conceptual change required a large
>> number of different changes to the source code that could not be done
>> incrementally and could not be done simultaneously because C++ is too
>> fragile.
>
> Was it that C++ is too fragile, or that the C++ programs you wrote were
> too fragile.

Objectively, forms of static checking provided by the OCaml language and not
by C++ facilitated the development.

>> >> Yes and yes.
>> >
>> > Does that mean you /have/ it running much faster than the real thing?
>>
>> Yes, exactly.
>
> I'm surprised that you haven't listed comparison timings.

The work was done under NDA.

Jon Harrop

unread,
Oct 11, 2006, 5:54:59 AM10/11/06
to
Ulf Wiger wrote:
> And, with the ever stronger trend towards multi-core architectures
> and web-oriented programming, it's increasingly looking to be
> a very bad approach. Microsoft is obviously not making the same
> mistake with C#.

F# is awesome, BTW. :-)

Joachim Durchholz

unread,
Oct 11, 2006, 6:01:06 AM10/11/06
to
M E Leypold schrieb:

> My impression is, that there are no systematic studies beyond that
> which compare functional and traditional languages / programming,
> perhaps because the question is so ill defined (what do we really want
> to compare) or empirical studies would be so expensive in that area.

Actually, no - there are several studies.

The oldest one is at http://citeseer.ist.psu.edu/hudak94haskell.html
(1993!), PDF at http://makeashorterlink.com/?G2EB121FD .
The highlight of that paper is that it goes beyond simple benchmark
results and points out reasons.

There are also the ICFP contests. While these aren't studies in the
strict sense, and the rankings are just benchmarks, the blogs of the
teams are full of interesting information about what they did and why,
and how it worked out.
It would be interesting to see a scientific study based on the next ICFP
contest. Just send an interviewer/observer to each team while they are
programming (could be interesting for the sociologists, too). Or
evaluate the revision control repositories of the teams, and check what
changes were checked in when, and categorize the project activities into
infrastructure, problem solving, and debugging.
I think the ICFP contests provide a *lot* of untapped raw data...

Regards,
Jo

Joachim Durchholz

unread,
Oct 11, 2006, 6:14:18 AM10/11/06
to
Paul Rubin schrieb:

I heard one can prototype quite quickly in C++ if one makes heavy use of
STL and similar libraries.
So it seems the answer to that claim is "it depends" (as is too often
the case).

Jon Harrop

unread,
Oct 11, 2006, 6:17:55 AM10/11/06
to
M E Leypold wrote:
> e.g.: With Ocaml you pay on startup (non static definitions),

You can do that in C++ using a constructor and a global.

Jon Harrop

unread,
Oct 11, 2006, 6:22:19 AM10/11/06
to
Joachim Durchholz wrote:
> There are also the ICFP contests. While these aren't studies in the
> strict sense, and the rankings are just benchmarks, the blogs of the
> teams are full of interesting information about what they did and why,
> and how it worked out.
> It would be interesting to see a scientific study based on the next ICFP
> contest. Just send an interviewer/observer to each team while they are
> programming (could be interesting for the sociologists, too). Or
> evaluate the revision control repositories of the teams, and check what
> changes were checked in when, and categorize the project activities into
> infrastructure, problem solving, and debugging.
> I think the ICFP contests provide a *lot* of untapped raw data...

A big problem with this and, for example, my ray tracer benchmark is the
limited size of the code base. High-level languages are relatively better
on larger code bases.

I had a first stab at measuring the effect by plotting verbosity vs
performance for "equivalent" ray tracers but I'd like to see studies that
examine the differences as a function of code size.

I've been interested in repeating my experiment using the task of
implementing an interpreter. However, the challenge of implementing even a
simple interpreter in C++ is just too daunting. ;-)

Jon Harrop

unread,
Oct 11, 2006, 6:30:54 AM10/11/06
to

Not at all. Most programs in science are disposable.

I did my PhD in theoretical physics. I had lots of crazy ideas for new forms
of analysis so I wrote a program for each one. I ran the program on data
(from experimentalists and simulators) and discussed the results.

Most of my ideas (~9/10) were bad, so most of my programs will never be run
again. Only a few ideas were good and I've made those into decent software
so that other people can use them:

http://www.ffconsultancy.com/products/CWT/

Without tools like Mathematica and OCaml I would have wasted most of my time
debugging programs that were never going to be used again.

Paul Rubin

unread,
Oct 11, 2006, 6:40:12 AM10/11/06
to
Jon Harrop <j...@ffconsultancy.com> writes:
> > It's silly to write a C++ program that you only plan to run once.
>
> Not at all. Most programs in science are disposable.
> ...

> Without tools like Mathematica and OCaml I would have wasted most of my time
> debugging programs that were never going to be used again.

That's what I mean: with some possible exceptions for
ultra-long-running calculations and/or very simple things that map
directly onto STL calls, if you just want to figure out an answer to
some science problem, why deal with the hassle of developing a C++
program when you can do it with much less effort using Mathematica?
Who cares if Mathematica spends 30 seconds crunching the numbers and
the C++ program can do it in 2 seconds, if you've saved days of
development? OCaml would be somewhere in between.

Torben Ægidius Mogensen

unread,
Oct 11, 2006, 6:43:06 AM10/11/06
to
Jon Harrop <j...@ffconsultancy.com> writes:

> Paul Rubin wrote:
>> Jon Harrop <j...@ffconsultancy.com> writes:
>>> I'm only interested in how long it takes to get a correct answer,
>>> including the time taken to write the program.
>>
>> It's silly to write a C++ program that you only plan to run once.
>
> Not at all. Most programs in science are disposable.

> ...


>
> Without tools like Mathematica and OCaml I would have wasted most of my time
> debugging programs that were never going to be used again.

The original remark was about writing one-use C++ programs, which I
agree is a bit silly. If you are going to run a program only once,
you normally don't care too much about running times -- why save a few
seconds running time if you can save a few hours development time by
using a better language?

I have, myself, written a lot of one-use programs in various languages.
Lately, this has been mostly in Haskell, as I find the list
comprehensions and rich set of predefined list functions useful for
making one-liner programs for combinatorial problems. Not fast
programs (I use Hugs), but quick to write. After writing a lot of
one-use programs to calculate probability distributions for various
die-roll mechanisms for games, I made a language for that, though, so
a lot of my one-use programs are now written in that language.

Torben

Jon Harrop

unread,
Oct 11, 2006, 7:00:33 AM10/11/06
to
Paul Rubin wrote:
> That's what I mean, with some possible exceptions for
> ultra-long-running calculations and/or very simple things that map
> directly onto STL calls, if you just want to figure out an answer to
> some science problem, why deal with the hassle of developing a C++
> program when you can do it with much less effort using Mathematica?
> Who cares if Mathematica spends 30 seconds crunching the numbers and
> the C++ program can do it in 2 seconds, if you've saved days of
> development? OCaml would be somewhere in between.

I agree entirely. However, a lot of physical scientists still write
disposable programs in C or Fortran.

The activation barrier is that they add the time taken to learn an extra
language to the time taken to write and run the program. The solution is to
teach science undergraduates how to use more appropriate tools.

Joachim Durchholz

unread,
Oct 11, 2006, 8:19:28 AM10/11/06
to
Isaac Gouy schrieb:

> Jon Harrop wrote:
>> Isaac Gouy wrote:
>> Yes. Then they were equivalent. Then I thought of an improvement. I made the
>> improvement to the OCaml easily but was unable to make the improvement to
>> the C++ because the conceptual change required a large number of different
>> changes to the source code that could not be done incrementally and could
>> not be done simultaneously because C++ is too fragile.
>
> Was it that C++ is too fragile, or that the C++ programs you wrote were
> too fragile.

Whatever. The net statement is that it's more difficult to write C++
code that's both robust and alterable. And that it was easier to write
OCaml code that's both (for Jon, at least - but Jon is not the only one
to report such findings, so I think this can be cautiously generalized).

> I'm surprised that you haven't listed comparison timings.

He had done so earlier, and promptly got flamed for "publishing
irrelevant benchmark times".

Please don't try to shoot down such results as irrelevant just because
they don't provide the exact metrics that you'd have liked to see. Take
them as anecdotal evidence - that's the best that's available in this
field anyway. Add other sources.

ICFP contest results show that C++ can compete - in the right hands. The
same result goes for FPLs.
One result that I find noteworthy is that in the wrong hands, a C++
solution tends to simply break (never get done, or fail with too many
bugs), whereas an FPL solution tends to be somewhat slower but still
have reasonable quality.
From that perspective, I see Jon's claims as interesting anecdotal
evidence, but I'm unsure whether it can be generalized.

Nevertheless, I do agree with his basic claim: that it's easier to write
modular code if it is free of side effects and/or can use automatic
garbage collection.
I also add a claim of my own: it's easier to write concise modular code
if you have higher-order functions, and higher-order functions are far
easier to write modularly if you don't have to worry about side effects
in functions passed to the HOFs.
Another claim: C++ has bad support for side-effect-free programming and
HOFs. (It also has far too many pitfalls, and avoiding them takes time
and experience, or a bulletproof template library.)
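
To make the HOF claim a little more concrete, here is a minimal OCaml
sketch - a toy of my own, unrelated to any benchmark discussed in this
thread; the names compose, clamp and normalise are all mine. Two tiny
pure functions are glued together into a new function on the fly, and
because nothing has side effects, the composition needs no thought
about evaluation order:

(* glue two pure functions together *)
let compose f g x = f (g x)

let clamp lo hi x = max lo (min hi x)

(* build a new function at run-time from smaller pure pieces *)
let normalise = compose (clamp 0. 1.) (fun x -> x /. 100.)

let () =
  List.iter (fun x -> Printf.printf "%g\n" (normalise x))
    [250.; 42.; 7.5]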

Regards,
Jo

Joachim Durchholz

unread,
Oct 11, 2006, 8:39:47 AM10/11/06
to
Ulf Wiger schrieb:

> I'd be hesitant to mix hard and soft real-time in the same language,
> though. The differences are not well enough understood yet, I think.

I think HRT is a bit of a straw man. Investing millions of dollars into
software that's HRT, just to find that the machinery may still break
from wear and tear, doesn't sound very helpful.
I also suspect that a lot of HRT design work is turning the hard
constraints into soft ones, by introducing fallback strategies in case a
deadline cannot be met. (The fallbacks may be in hardware.)

> Within my problem domain, it can be very costly if average-to-good
> programmers can't make significant contributions to the product
> without having performance plummet or introducing memory leaks
> or wild pointers. This also tends to delay projects, as the
> excellent programmers become a precious resource.

That's one of the important points.
It's also the point why I think that having ivory tower terminology is
disastrous. Erlang got it right here - it's the same concepts, but
nobody is talking about monads in an Erlang context.

> (One might argue that we're hiring the wrong type of programmers
> then, but consider this: our products have a very long life
> cycle, and about 80% of the programmer time is actually spent
> in maintenance, rather than in new exciting development. While
> this certainly doesn't call for _bad_ programmers, it means that
> we will tend to favour longevity and predictable performance over
> excellence. Overall, 'predictable' wins over 'excellent' in large
> projects, where the No 1 challenge isn't necessarily to come
> first, but to finish at all.)

Most current-day projects are large. You need not look at Linux
(which is huge); other FOSS projects will do: Subversion, Open Office,
Mozilla, even most games projects. These are all beyond the scope that a
few programmers can tackle, even if they are brilliant super-geniuses.

I think most of the things that a genius can do are already done. diff
and make already exist, all the other small-but-useful tools are already
rounded out. What's most needed are things like ergonomics, consistent
user interfaces, working interfaces with other software - stuff that
requires more diligence than brilliance.

There is still room for brilliance, of course. E.g. I'd like to see an
FPL that the average programmer can easily pick up and quickly get
productive with, or an OS that's immune to rootkits. However, in both
cases, I'd say that brilliance is just the first 1% of the task, the
remaining 99% are still diligence. I think the days of brilliance-only
projects are over.

Regards,
Jo

Paul Rubin

unread,
Oct 11, 2006, 8:46:16 AM10/11/06
to
Joachim Durchholz <j...@durchholz.org> writes:
> > Within my problem domain, it can be very costly if average-to-good
> > programmers can't make significant contributions to the product
> > without having performance plummet or introducing memory leaks or
> > wild pointers. This also tends to delay projects, as the excellent
> > programmers become a precious resource.
>
> That's one of the important points.
> It's also the point why I think that having ivory tower terminology is
> disastrous. Erlang got it right here - it's the same concepts, but
> nobody is talking about monads in an Erlang context.

But this is the usual selling point for Java, not FPL's. Tell a PHB
that a language has no assignment statements and no loops, and you
might as well start in on monads.

Ulf Wiger

unread,
Oct 11, 2006, 9:24:25 AM10/11/06
to
>>>>> "JD" == Joachim Durchholz <j...@durchholz.org> writes:

JD> Ulf Wiger schrieb:


>> I'd be hesitant to mix hard and soft real-time in the same
>> language, though. The differences are not well enough understood
>> yet, I think.

JD> I think HRT is a bit of a straw man. Investing millions of
JD> dollars into software that's HRT, just to find that the
JD> machinery may still break from wear and tear, doesn't sound very
JD> helpful. I also suspect that a lot of HRT design work is
JD> turning the hard constraints into soft ones, by introducing
JD> fallback strategies in case a deadline cannot be met. (The
JD> fallbacks may be in hardware.)

I'm an SRT person myself, so my opinion in this area is perhaps not
of particular weight, but I still think there are many applications
where one has to strive for hard real-time characteristics, even
though there are fallback strategies. A few examples:

- Echo cancellation in telephony. This is done in digital signal
processors, and the algorithm has to execute in an exact
number of CPU cycles - no more, no less. Failure to do so
doesn't result in people dying or cores melting, but it does
result in poor perceived quality.

- Fuel injection control in cars. Most fuel-injected cars have
"limp home settings" as fallback, where the engine is fed a
"fat" fuel mix that won't damage the engine, but is far from
optimal. It's not unusual, I gather, to let the regulation
software trigger on engine revolutions, such that the function
must complete before the next cycle begins, or the regulation
falls out of range, causing a fallback to LOS.

- Packet forwarding in routers. The aim is to handle most packets
in network processors, equipped with CAM memory etc. This is
called "fast path", or "wire speed" forwarding. Again, the
lookups and decisions needed to forward a packet must complete
within the limits set by the interface speed and size of
available buffers. The fallback alternatives are "slow path",
which means that the packet is forwarded to the control software
(typically a C program executing in a FreeBSD process or in
the FreeBSD kernel), or simply dropping the packet. Resorting
to the fallbacks too often leads to severely reduced performance,
which in its turn can lead to network clogging, poor service
quality, etc.

- Servo control of fighter airplanes. In Sweden, we had a famous
incident with the JAS fighter, when it was first paraded on
prime time television. The fighter lost control as it came in
for landing, and crashed before the cameras, accompanied by
loud exclamations by the expert commentators. Besides being
unusually entertaining television (the pilot walked away from the
crash), the crash was due to a feedback loop delay in the
control software (the JAS is inherently unstable and cannot
be flown by hand), causing the pilot to over-compensate.
When the incident was repeated in simulators _every_
experienced pilot who tried failed to recover the plane.
The only response that would have saved it was to let
go of the stick (presumably to cover your eyes and scream);
then the plane would have landed itself. No experienced pilot,
it seemed, had this particular reflex (go figure!)

I don't want to knock HRT at all. My pet peeve is that the
techniques used to handle such problems are not very useful
when it comes to SRT, and I'm wary of tools that try to cover
too many difficult domains at once.

/Ulf W

Isaac Gouy

unread,
Oct 11, 2006, 11:39:57 AM10/11/06
to

Didier Verna wrote:
> Pietro Abate <doesn...@hotmail.com> wrote:
>
> > On 2006-10-08, ig...@yahoo.com <ig...@yahoo.com> wrote:
> >>> > - Even if functional languages are still slower then C/C++ in particular
> >>> > domains, the difference in performance has been considerably reduced
> >>> > in the last decade
>
> > What I'm looking for is a peer-reviewed paper of some empirical evidence
> > to support (or disproof) my thesis... You certainly don't need to
> > convince me ... but I need this to convince others, and I'm definitely
> > too young to cite myself ! :)
>
> You may be interested in my recent paper at ECOOP'06 Lisp workshop:
>
> "Beating C in Scientific Computing Applications -- On the Behavior and
> Performance of Lisp, Part I"
>
> It's for a specific set of applications / operations, but I believe you'll
> find it at least satisfactory on the "academic" plan ...
>
> You can find it near the top of this page:
>
> http://www.lrde.epita.fr/~didier/comp/research/publi.php
>
> --
> Check out my new jazz CD on http://www.didierverna.com/ !
>
> Didier Verna EPITA / LRDE, 14-16 rue Voltaire Tel.+33 (1) 44 08 01 85
> 94276 Le Kremlin-Bicêtre, France Fax.+33 (1) 53 14 59 22

AFAICT those are 5-line micro-benchmarks.

Isaac Gouy

unread,
Oct 11, 2006, 11:45:58 AM10/11/06
to

Joachim Durchholz wrote:
> Isaac Gouy schrieb:
> > Jon Harrop wrote:
> >> Isaac Gouy wrote:
> >> Yes. Then they were equivalent. Then I thought of an improvement. I made the
> >> improvement to the OCaml easily but was unable to make the improvement to
> >> the C++ because the conceptual change required a large number of different
> >> changes to the source code that could not be done incrementally and could
> >> not be done simultaneously because C++ is too fragile.
> >
> > Was it that C++ is too fragile, or that the C++ programs you wrote were
> > too fragile.
>
> Whatever. The net statement is that it's more difficult to write C++
> code that's both robust and alterable. And that it was easier to write
> OCaml code that's both (for Jon, at least - but Jon is not the only one
> to report such findings, so I think this can be cautiously generalized).

Are the others who report such findings also selling OCaml services :-)

I've noticed that I can be quite selective in the way I present
language X compared to language Y - I do all manner of special
pleading, sometimes I seem quite capable of fooling myself - I suspect
other people can be similarly selective in the way they describe
something they've chosen to use.

I would even guess that we could go back and find people making similar
statements about C++. That's a problem we've been stuck with for so
many years - a multitude of competing claims, none of which seem to be
based on anything much.

>
> > I'm surprised that you haven't listed comparison timings.
>
> He had done so earlier, and promply got flamed for "publishing
> irrelevant benchmark times".
>
> Please don't try to shoot down such results as irrelevant just because
> they don't provide the exact metrics that you'd have liked to see. Take
> them as anecdotal evidence - that's the best that's available in this
> field anyway. Add other sources.
>
> ICFP contest results show that C++ can compete - in the right hands. The
> same result goes for FPLs.
> One result that I find noteworthy is that in the wrong hands, a C++
> solution tends to simply break (never get done, or fail with too many
> bugs), whereas an FPL solution tends to be somewhat slower but still
> have reasonable quality.
> From that perspective, I see Jon's claims as interesting anecdotal
> evidence, but I'm unsure whether it can be generalized.
>
> Nevertheless, I do agree with his basic claim: that it's easier to write
> modular code if it is free of side effects and/or can use automatic
> garbage collection.

I've quickly looked back through the postings and it doesn't seem Jon
made a claim about side effect free code. Maybe Jon could make clear if
he's using OCaml as a better imperative language or as a functional
language.

Isaac Gouy

unread,
Oct 11, 2006, 12:05:00 PM10/11/06
to

Ulf Wiger wrote:
-snip-

> And, with the ever stronger trend towards multi-core architectures
> and web-oriented programming, it's increasingly looking to be
> a very bad approach. Microsoft is obviously not making the same
> mistake with C#.

What are you referring to with "Microsoft is obviously not making the
same mistake with C#"?

Isaac Gouy

unread,
Oct 11, 2006, 12:15:37 PM10/11/06
to

Maybe it is one of the things people claim when they are selling Java -
that doesn't make it true ;-)

So tell the boss something he might be interested in hearing -
something about cheaper, quicker, on-time-every-time - or better yet,
demonstrate those things.

Paul Rubin

unread,
Oct 11, 2006, 12:30:06 PM10/11/06
to
"Isaac Gouy" <ig...@yahoo.com> writes:
> Maybe it is one of the things people claim when they are selling Java -
> that doesn't make it true ;-)

It really is true that Java programs are free of wild pointers,
usually free of memory leaks, etc. Performance is another issue, I
suppose. Java might be awful, but C++ is way worse ;-)

Jon Harrop

unread,
Oct 11, 2006, 1:18:17 PM10/11/06
to
Isaac Gouy wrote:

> Joachim Durchholz wrote:
>> Whatever. The net statement is that it's more difficult to write C++
>> code that's both robust and alterable. And that it was easier to write
>> OCaml code that's both (for Jon, at least - but Jon is not the only one
>> to report such findings, so I think this can be cautiously generalized).
>
> Are the others who report such findings also selling OCaml services :-)

You think that I like OCaml because I wrote a book on OCaml.

> I would even guess that we could go back and find people making similar
> statements about C++ that's a problem we've been stuck with for so
> many years - a multitude of competing claims, none of which seem to be
> based on anything much.

I've put my money where my mouth is. So have Microsoft (by joining the Caml
Consortium and by developing their own dialect of Caml).

>> Nevertheless, I do agree with his basic claim: that it's easier to write
>> modular code if it is free of side effects and/or can use automatic
>> garbage collection.
>
> I've quickly looked back through the postings and it doesn't seem Jon
> made a claim about side effect free code. Maybe Jon could make clear if
> he's using OCaml as a better imperative language or as a functional
> language.

Functional.

>> Another claim: C++ has bad support for side-effect-free programming and
>> HOFs. (It also has far too many pitfalls, and avoiding them takes time
>> and experience, or a bulletproof template library.)

Yes, the approach provided by C++ is far too obfuscated and error prone to
be useful.

George Neuner

unread,
Oct 11, 2006, 3:38:10 PM10/11/06
to
On Tue, 10 Oct 2006 23:25:36 +0100, Jon Harrop <j...@ffconsultancy.com>
wrote:

>George Neuner wrote:
>> These two code snippets are not equivalent.
>
>They are equivalent in the sense that they compute the same thing.
>

That's a ridiculous argument. Repeated addition computes the same
result as multiplication - that doesn't make it a practical
alternative.


>> Leaving aside that the OCaml implementations of both Hashtbl and
>> AtomSet are unspecified and the implication that OCaml's hash tables
>> are equivalent to C++'s pair associative map (which I doubt), the 'n'
>> clause of your OCaml code simply returns the object whereas your C++
>> code performs an additional, unnecessary lookup. It may have been a
>>simple mistake, but it is a costly mistake performance-wise.
>
>Performance is limited by set union. The performance of the memoization and
>extra lookup in the C++ are irrelevant so they were written for clarity.

The implementation of set union was not included in either code
snippet so I can't comment on it. However, I specifically gave two
trivial changes which would significantly speed up the C++ code you
did provide.


George
--
for email reply remove "/" from address

George Neuner

unread,
Oct 11, 2006, 4:06:17 PM10/11/06
to
On Tue, 10 Oct 2006 23:35:03 +0100, Jon Harrop <j...@ffconsultancy.com>
wrote:

>George Neuner wrote:
>
>> Furthermore, it is always possible to
>> code a C++ program to be strictly equivalent to the GC'd program,
>
>Please define "strictly equivalent".

Operating with the same semantics.


>I claim that a C++ implementation is longer and either more obfuscated or
>much slower than the OCaml.

C++ is more verbose than OCaml. So what? Verbose code has nothing
whatsoever to do with speed of execution.

I spent over 10 years writing hard real time process monitoring
applications in C++ ... all multithreaded and most with complex
dynamic allocation patterns. If there's any way to make C++ fast, I
have probably used it.

You don't like C++ ... fine! It's not my favorite either - I just
earn money with it. But you have been publicly running a language
"shootout" for over a year claiming interest in speed and yet you
reject really trivial code modifications that could dramatically
increase speed because you don't like the language they're written in.

You've demonstrated that you can't be bothered with the algorithmic
complexity of the functions you use and simply select them by name
association with functions from another language. And you are
obviously not interested in having others point out your mistakes.

There's no value in continuing this discussion any further because you
are clearly not interested in fair competition - only in competition
that shows your pet language is superior.

Matthias Blume

unread,
Oct 11, 2006, 4:16:16 PM10/11/06
to
George Neuner <gneuner2/@comcast.net> writes:

> On Tue, 10 Oct 2006 23:35:03 +0100, Jon Harrop <j...@ffconsultancy.com>
> wrote:
>
>>George Neuner wrote:
>>
>>> Furthermore, it is always possible to
>>> code a C++ program to be strictly equivalent to the GC'd program,
>>
>>Please define "strictly equivalent".
>
> Operating with the same semantics.

To back up this claim, could you, please, provide a formal semantics
for C++?

Jon Harrop

unread,
Oct 11, 2006, 4:42:22 PM10/11/06
to
George Neuner wrote:
> On Tue, 10 Oct 2006 23:25:36 +0100, Jon Harrop <j...@ffconsultancy.com>
> wrote:
>>George Neuner wrote:
>>> These two code snippets are not equivalent.
>>
>>They are equivalent in the sense that they compute the same thing.
>
> That's a ridiculous argument.

I'm only interested in practically important observations. Trying to mimic
the entire OCaml run-time every time I write a C++ program is not feasible.

> Repeated addition computes the same
> result as multiplication - that doesn't make it a practical
> alternative.

If you're comparing two languages and one forces you to write your own
ad-hoc, informally specified and bug-ridden implementation of
multiplication, I know which one I'd recommend.

>>Performance is limited by set union. The performance of the memoization
>>and extra lookup in the C++ are irrelevant so they were written for
>>clarity.
>
> The implementation of set union was not included in either code
> snippet so I can't comment on it.

The implementations come with the standard libraries of both languages.
However, OCaml's is asymptotically faster that the STL's so you have to
roll your own if you want decent performance from C++ (as I have done here,
although you didn't notice that because the C++ is correspondingly
obfuscated).

> However, I specifically gave two
> trivial changes which would significantly speed up the C++ code you
> did provide.

No, you gave two premature optimisations that would not affect overall
performance significantly.

Isaac Gouy

unread,
Oct 11, 2006, 8:00:31 PM10/11/06
to

My mistake, I thought you were talking about whether Java lets average
programmers make significant contributions. (Do Java and C++ have the
slightest thing to do with "the beauty of functional programming"?)

Isaac Gouy

unread,
Oct 11, 2006, 8:07:15 PM10/11/06
to

Jon Harrop wrote:
> Isaac Gouy wrote:
> > Joachim Durchholz wrote:
> >> Whatever. The net statement is that it's more difficult to write C++
> >> code that's both robust and alterable. And that it was easier to write
> >> OCaml code that's both (for Jon, at least - but Jon is not the only one
> >> to report such findings, so I think this can be cautiously generalized).
> >
> > Are the others who report such findings also selling OCaml services :-)
>
> You think that I like OCaml because I wrote a book on OCaml.

I don't think book authorship covers all your OCaml services, but
rather than list them here I think anyone interested in your services
can simply follow the URL that you helpfully include at the end of your
postings. LOL!

>
> > I would even guess that we could go back and find people making similar
> > statements about C++ that's a problem we've been stuck with for so
> > many years - a multitude of competing claims, none of which seem to be
> > based on anything much.
>
> I've put my money where my mouth is. So have Microsoft (by joining the Caml
> Consortium and by developing their own dialect of Caml).

And your mouth where your money is? :-)

I don't think there's anything wrong with open, partisan,
self-interested advocacy.


>
> >> Nevertheless, I do agree with his basic claim: that it's easier to write
> >> modular code if it is free of side effects and/or can use automatic
> >> garbage collection.
> >
> > I've quickly looked back through the postings and it doesn't seem Jon
> > made a claim about side effect free code. Maybe Jon could make clear if
> > he's using OCaml as a better imperative language or as a functional
> > language.
>
> Functional.
>
> >> Another claim: C++ has bad support for side-effect-free programming and
> >> HOFs. (It also has far too many pitfalls, and avoiding them takes time
> >> and experience, or a bulletproof template library.)
>
> Yes, the approach provided by C++ is far too obfuscated and error prone to
> be useful.

I'm sure the C++ fan club can come up with the same kind of empty name
calling.

Jon Harrop

unread,
Oct 11, 2006, 10:47:26 PM10/11/06
to
Isaac Gouy wrote:
> (Do Java and C++ have the
> slightest thing to do with "the beauty of functional programming"?)

Of course, "beauty" is relative so you need to compare FPLs to non-FPLs like
Java and C++.

Paul Rubin

unread,
Oct 12, 2006, 1:24:20 AM10/12/06
to
"Isaac Gouy" <ig...@yahoo.com> writes:
> My mistake, I thought you were talking about whether Java lets average
> programmers make significant contributions.

I believe it does. It's been described as the Cobol of the 1990's ;-)

> (Do Java and C++ have the slightest thing to do with "the beauty of
> functional programming"?)

Certainly not!

Adrian Hey

unread,
Oct 12, 2006, 4:34:27 AM10/12/06
to
Paul Rubin wrote:
> It's been described as the Cobol of the 1990's ;-)

Eeek! "weasel words" alert! :-)
(You can look that up on wikipedia too :-)

Regards
--
Adrian Hey

M E Leypold

unread,
Oct 11, 2006, 7:53:53 AM10/11/06
to

Ulf Wiger <etx...@cbe.ericsson.se> writes:

> But this is also part of the reason why we don't really see
> any point to using C++. When we need a general purpose low-level
> language, C fits the bill.

Sometimes I wish that there were a well-defined subset of my
favorite functional languages (your case: Erlang, mine: Haskell and
(S|OCA)ML), which could be compiled into C (i.e. without the necessity
of garbage collection) automatically. Those modules could then be linked
into C programs or used via an FFI as "native" extensions to a
functional language.

I'd call it system-ML (or system-ERLANG) or something like this.

One would write the whole program in the high-level language (testing,
harnessing it to simulators of its environment etc.) and then do
equivalence transformations on the modules/parts that would have been
implemented in C in a world without system-ML, until those parts fit
into the system-ML subset. Then the build system / make scripts are
changed to translate ML -> C+FFI -> object instead of ML -> bytecode
or ML -> object, and voila: one will never have to program C again.

Of course I'm dreaming. :-)

Bit-C is a bit like this.

Regards -- Markus

M E Leypold

unread,
Oct 11, 2006, 8:10:06 AM10/11/06
to

Jon Harrop <j...@ffconsultancy.com> writes:

> I've been interested in repeating my experiment using the task of
> implementing an interpreter. However, the challenge of implementing even a
> simple interpreter in C++ is just too daunting. ;-)

How's that? Which kind of interpreter? I'm a bit surprised here,
considering that most of the books on that subject I have read do all
their examples in C.

Regards -- Markus


M E Leypold

unread,
Oct 11, 2006, 9:43:07 AM10/11/06
to

So don't tell him. As far as the PHB is concerned, the "x <- yadda",
the let-expression and the tail-recursive calling of locally defined
functions just fit the bill.

let loop a b c =
...
...
in loop 12 [] "yes!"


How do FPLs not have loops, man? There is even the word "loop", look!
:-)
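
To spell that skeleton out, here is a complete (if silly) version - my
own toy, nothing the PHB needs to see. The recursive call is in tail
position, so the compiler turns it into an ordinary loop:

(* sum the integers 1..n with an explicit accumulator *)
let sum_to n =
  let rec loop acc i =
    if i = 0 then acc
    else loop (acc + i) (i - 1)
  in
  loop 0 n

let () = Printf.printf "%d\n" (sum_to 10)   (* prints 55 *)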

My impression is that the PHB will be more interested in which libraries
come with the language (XML: can I parse XML? Can we make a GUI? What
about HTTP support? We need to remote-control the office package.). It
depends how you play your cards here.

In most cases, though, the (often non-FPL) language is just a given. Few
shops change horses more than once in a decade. So the reluctance to
change might not be rooted in the putative newcomer being an FPL, but
would rather meet any attempt to change to any other language. I
think Java wasn't adopted _instead_ of other languages but happened to
meet a suitable vacuum in which to expand, in the form of developing
internet services etc. for which no language had been adopted
yet. Also, from 1995 to 2000 a huge number of new companies were founded,
and that is the lucky moment where a new language has its chance.


Regards -- Markus


M E Leypold

unread,
Oct 11, 2006, 8:07:52 AM10/11/06
to

Joachim Durchholz <j...@durchholz.org> writes:

> M E Leypold schrieb:
> > My impression is, that there are no systematic studies beyond that
> > which compare functional and traditional languages / programming,
> > perhaps because the question is so ill defined (what do we really want
> > to compare) or empirical studies would be so expensive in that area.
>
> Actually, no - there are several studies.
>
> The oldest one is at http://citeseer.ist.psu.edu/hudak94haskell.html
> (1993!), PDF at http://makeashorterlink.com/?G2EB121FD .
> The highlight of that paper is that it goes beyond simple benchmark
> results and points out reasons.

Ah, yes. That looks good. I'll have a look into it within the next
few weeks. My fear is, since they are talking about "Prototyping", that
they see FP only as a prototyping method whose prototypes will then have
to be translated into conventional systems.

Thanks for the reference.


>
> There are also the ICFP contests. While these aren't studies in the
> strict sense, and the rankings are just benchmarks, the blogs of the

Exactly. And that is why I called it anecdotal.

> teams are full of interesting information about what they did and
> why, and how it worked out.

This is very valuable, and personally I believe that FP _is_ far
superior to conventional development (if one has the right tools, like
bindings to system interfaces and GUI libraries etc) but the ICFP
folklore is hardly a quantitative study with, say, comparable test
groups.

> It would be interesting to see a scientific study based on the next

My point.

> ICFP contest. Just send an interviewer/observer to each team while
> they are programming (could be interesting for the sociologists,

No, sorry. You would probably measure the correlation between what
people choose as tools and how competent they are at solving the ICFP
problems. Something like: "ICFP problems are algorithmically oriented -
People who use FP are better at solving ICFP problems -- People who
dare to use FP are algorithmically oriented".

With "algorithmically oriented" I mean here that the problem can only
be solved by finding a useful algorithm and that this is the hard
part of the puzzle. Perhaps good puzzlers like to use functional
languages because they provide an intellectual challenge? -- I'm only
half serious here, but what I want to point out is: for a significant
study you cannot leave it to the programmers which language they
choose; this decision needs to be random.

> too). Or evaluate the revision control repositories of the teams, and
> check what changes were checked in when, and categorize the project
> activities into infrastructure, problem solving, and debugging.

Comparability is the problem here. Not the same people, not (quite)
the same software.

> I think the ICFP contests provide a *lot* of untapped raw data...

There you are certainly right.

Regards -- Markus


M E Leypold

unread,
Oct 11, 2006, 7:41:47 AM10/11/06
to

"Isaac Gouy" <ig...@yahoo.com> writes:

> Some might suggest it's comparing a domain specific high-level
> language with a general purpose low-level language.

Or is it comparing a general purpose high-level language with a
low-level language which has served its purpose?

:-)

Regards -- Markus

Paul Rubin

unread,
Oct 12, 2006, 4:42:03 AM10/12/06
to
M E Leypold <development-2006-8...@ANDTHATm-e-leypold.de> writes:
> Sometimes I wish that there would be a well defined subset of my
> favorite functional languages (you case: Erlang, mine: Haskell and
> (S|OCA)ML), which could be compiled into C (i.e. without the necessity
> of garbage collection automatically. Those module could then be linked
> into C programs or used with a FFI as "native" extensions to a
> functional language....

> Of course I'm dreaming. :-)
>
> Bit-C is a bit like this.

I think the bootstrap phases of some Lisp implementations have been
written that way. The same thing for PyPy, whose initial phase is
written in a Python subset called RPython. Usually they compiled
directly to machine code instead of C, but they could in principle use C.

M E Leypold

unread,
Oct 12, 2006, 9:14:56 AM10/12/06
to

George Neuner <gneuner2/@comcast.net> writes:

>
> You've demonstrated that you can't be bothered with the algorithmic
> complexity of the functions you use and simply select them by name
> association with functions from another language. And you are
> obviously not interested in having others point out your mistakes.

I don't really know who is right in this case, but -- do you really
expect Jon Harrop to invest time and work to do those changes just on
you say-so? My impression was, that he has given some reasons (right
or wrong), why he doesn't expect much from those changes. What you
could do, is just apply your suggested changes to Jon's program and
post the difference in runtime here. That would prove your point.

Nobody is "interested in having others point out [his] mistakes". Not
on Usenet, and it is not constructive. Just demonstrating an
improvement would be better.


> There's no value in continuing this discussion any further because you
> are clearly not interested in fair competition - only in competition
> that shows your pet language is superior.

Bah. If OCaml is a pet language, then C++ must be something like a
life-threatening addiction. One can't stop taking it after starting the
habit, but ultimately it gets you nowhere and very probably kills
you.

Regards -- Markus

M E Leypold

unread,
Oct 12, 2006, 9:17:38 AM10/12/06
to

Paul Rubin <http://phr...@NOSPAM.invalid> writes:

> M E Leypold <development-2006-8...@ANDTHATm-e-leypold.de> writes:
> > Sometimes I wish that there would be a well defined subset of my
> > favorite functional languages (you case: Erlang, mine: Haskell and
> > (S|OCA)ML), which could be compiled into C (i.e. without the necessity
> > of garbage collection automatically. Those module could then be linked
> > into C programs or used with a FFI as "native" extensions to a
> > functional language....
> > Of course I'm dreaming. :-)
> >
> > Bit-C is a bit like this.
>
> I think the bootstrap phases of some Lisp implementations have been
> written that way.

Scheme48, if I remember right.

> The same thing for PyPy, whose initial phase is written in a Python
> subset called RPython. Usually they compiled directly to machine
> code instead of C, but they could in principle use C.

How beautiful. :-)

Regards -- Markus


Ulf Wiger

unread,
Oct 12, 2006, 10:49:36 AM10/12/06
to
"Isaac Gouy" <ig...@yahoo.com> writes:

Well, googling on 'C# concurrency' already gives a hint that
concurrency support is a major feature of C# 3.0. What I referred to
as a very bad approach was to not have a well thought-out strategy
for concurrency in the language.

Here's one starting point:

http://wesnerm.blogs.com/net_undocumented/2005/06/concurrency_rev.html

Here's a quote from "Modern Concurrency Abstractions for C#"
by Benton, Cardelli and Fournet
(http://research.microsoft.com/Users/luca/Papers/Polyphony%20(TOPLAS).pdf)

"We believe that concurrency should be a language feature and a part
of language specifications. Serious attempts in this direction were
made beginning in the 1970's with the concept of monitors [Hoare 1974]
and the Occam language [INMOS Limited 1984] (based on Communicating
Sequential Processes [Hoare 1985]). The general notion of monitors
has become very popular, particularly in its current object oriented
form of threads and object-bound mutexes, but it has been provided at
most as a veneer of syntactic sugar for optionally locking objects
on method calls.
Many things have changed in concurrency since monitors were
introduced. Communication has become more asynchronous, and
concurrent computations have to be "orchestrated" on a larger scale.
The concern is not as much with the efficient implementation and
use of locks on a single processor or multiprocessor, but with
the ability to handle asynchronous events without unnecessarily
blocking clients for long periods, and without deadlocking. In
other words, the focus is shifting from shared-memory
concurrency to message- or event-oriented concurrency."

BR,

Ulf Wiger

unread,
Oct 12, 2006, 10:57:44 AM10/12/06
to
M E Leypold <development-2006-8...@ANDTHATm-e-leypold.de> writes:

> Ulf Wiger <etx...@cbe.ericsson.se> writes:
>
>> But this is also part of the reason why we don't really see
>> any point to using C++. When we need a general purpose low-level
>> language, C fits the bill.
>
> Sometimes I wish that there would be a well defined subset of my
> favorite functional languages (you case: Erlang, mine: Haskell and
> (S|OCA)ML), which could be compiled into C (i.e. without the necessity
> of garbage collection automatically. Those module could then be linked
> into C programs or used with a FFI as "native" extensions to a
> functional language.

For OCaml aficionados, www.felix.org comes pretty close to this
description, I think. I've been lurking on the mailing list for
some time now, and it's starting to pick up some speed (relatively
speaking). They push it as 'the smart upgrade from C++'...
but my interest in it is as a possible way to write linked-in
drivers for Erlang with much more safety and expressive power than
in C, but without really sacrificing the speed. It does have GC,
though. (:

For one thing, it has integrated regexps, lexing and GLR parsing in
the language. The lexer is very much ML-style, I gather.

M E Leypold

unread,
Oct 12, 2006, 12:18:56 PM10/12/06
to

Ulf Wiger <etx...@seasc0010.dyn.rnd.as.sw.ericsson.se> writes:

> M E Leypold <development-2006-8...@ANDTHATm-e-leypold.de> writes:
>
> > Ulf Wiger <etx...@cbe.ericsson.se> writes:
> >
> >> But this is also part of the reason why we don't really see
> >> any point to using C++. When we need a general purpose low-level
> >> language, C fits the bill.
> >
> > Sometimes I wish that there would be a well defined subset of my
> > favorite functional languages (you case: Erlang, mine: Haskell and
> > (S|OCA)ML), which could be compiled into C (i.e. without the necessity
> > of garbage collection automatically. Those module could then be linked
> > into C programs or used with a FFI as "native" extensions to a
> > functional language.
>
> For OCaml afficionados, www.felix.org comes pretty close to this

Thanks. :-)

You must mean http://felix.sourceforge.net/, though.

> description, I think. I've been lurking on the mailing list for
> some time now, and it's starting to pick up some speed (relatively
> speaking). They push it as 'the smart upgrade from C++'...
> but my interest in it is as a possible way to write linked-in
> drivers for Erlang with much more safety and expressive power than
> in C, but without really sacrificing the speed. It does have GC,
> though. (:

And that is exactly what I'm hoping^Wdreaming of getting rid of. Of
course that would probably require a certain amount of manual
intervention: the prototypes would be in full System-ML. Then various
"tricks" like simulating a malloc() or passing required storage from
outside would be applied (always preserving functional equivalence,
but not necessarily the interface), until the code could be compiled
as C.

I.e. instead of creating an array in an OCaml function and passing it
back from the function, one would pass back an array_pointer, which would
be defined as

type 'elem array_pointer = (('elem array) ref) option

(the option covers the null pointer case) and use (in the function) a
(mallocArray size) which the System-ML compiler would recognize/handle
as a wrapper around malloc(), whereas in the full ML environment this
would be an invocation of an ML function with an obvious implementation.
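
To make that a bit more tangible, here is how the "full ML environment"
side might look in today's OCaml - purely my own guess at the intent,
not part of any existing System-ML. Note that I give mallocArray an
extra init argument that the description above doesn't have, simply
because Array.make needs one:

type 'elem array_pointer = (('elem array) ref) option

(* in full ML this is an ordinary allocation; the imagined System-ML
   compiler would be free to recognise it and emit a malloc() instead *)
let mallocArray size init =
  if size <= 0 then None                     (* the null-pointer case *)
  else Some (ref (Array.make size init))

let () =
  match mallocArray 4 0 with
  | None -> print_endline "allocation refused"
  | Some cells -> Printf.printf "got %d cells\n" (Array.length !cells)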

The advantage would be to have full FP for testing and so on and no
disruption of the paradigm you're usually working in.

The transformations, whose equivalence preservation should be proved
(possibly by automated means), would guarantee correctness if that can
be proven in the less complex functional version.

I'm not sure that all is really feasible, but at that point I fully
admit that I'm only dreaming.

> For one thing, it has integrated regexps, lexing and GLR parsing in
> the language. The lexer is very much ML-style, I gather.

Regards -- Markus

Jon Harrop

unread,
Oct 12, 2006, 12:52:13 PM10/12/06
to
M E Leypold wrote:
> I don't really know who is right in this case, but -- do you really
> expect Jon Harrop to invest time and work to do those changes just on
> you say-so? My impression was, that he has given some reasons (right
> or wrong), why he doesn't expect much from those changes. What you
> could do, is just apply your suggested changes to Jon's program and
> post the difference in runtime here. That would prove your point.

His implication is really that all C++ programmers should be willing to
rewrite the parts of OCaml that they need, every time they need them.

You can lead a horse to water... ;-)

Isaac Gouy

unread,
Oct 12, 2006, 2:10:07 PM10/12/06
to

Ulf Wiger wrote:
> "Isaac Gouy" <ig...@yahoo.com> writes:
>
> > Ulf Wiger wrote:
> > -snip-
> >> And, with the ever stronger trend towards multi-core architectures
> >> and web-oriented programming, it's increasingly looking to be
> >> a very bad approach. Microsoft is obviously not making the same
> >> mistake with C#.
> >
> > What are you refering to with "Microsoft is obviously not making the
> > same mistake with C#"?
>
> Well, googling on 'C# concurrency' already gives a hint that
> concurrency support is a major feature of C# 3.0. What I refered to
> as a very bad approach was to not have a well thought-out strategy
> for concurrency in the language.
-snip-

Hmmm, I'd briefly looked at Comega, but it isn't clear to me how much of
that has moved into C# 3.0.

Jon Harrop

unread,
Oct 12, 2006, 3:56:21 PM10/12/06
to

Yipes! You should read some books on ML. That family of languages was
designed for interpreter and compiler writing. In fact, check out some of
my web pages first:

http://www.ffconsultancy.com/free/ocaml/interpreter.html

Try translating that into C...
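
For a flavour of why, here's a toy evaluator of my own (not the
interpreter from that page, and the type and function names are mine):
the variant type and pattern matching do essentially all the work.

type expr =
  | Num of float
  | Add of expr * expr
  | Mul of expr * expr

let rec eval = function
  | Num x -> x
  | Add (a, b) -> eval a +. eval b
  | Mul (a, b) -> eval a *. eval b

let () =
  Printf.printf "%g\n" (eval (Add (Num 1., Mul (Num 2., Num 3.))))  (* 7 *)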

Ken Rose

unread,
Oct 13, 2006, 2:01:49 PM10/13/06
to
Jon Harrop wrote:
> Yipes! You should read any books on ML. That family of languages was
> designed for interpreter and compiler writing. In fact, check out some of
> my web pages first:
>
> http://www.ffconsultancy.com/free/ocaml/interpreter.html
>
> Try translating that into C...
>

While you're browsing, look at
http://caml.inria.fr/pub/docs/oreilly-book/html/book-ora058.html for a
primitive Basic interpreter in 300 or so lines of OCaml.

- ken

Joachim Durchholz

unread,
Oct 13, 2006, 2:11:09 PM10/13/06
to
Paul Rubin schrieb:

> But this is the usual selling point for Java, not FPL's. Tell a PHB
> that a language has no assignment statements and no loops, and you
> might as well start in on monads.

I'd sell Erlang.
Joe Armstrong has repeatedly given *very* convincing arguments. Stuff
like "fivefold increase in productivity of our average programmers"
(with the proper caveats, but you don't give these to the PHBs). Or "the
AXP has roughly the same amount of C and Erlang code, with the C code
doing mainly device drivers and the Erlang code implementing roughly 80%
of the functionality".

Regards,
Jo

Joachim Durchholz

unread,
Oct 13, 2006, 2:20:42 PM10/13/06
to
M E Leypold schrieb:

> Sometimes I wish that there would be a well defined subset of my
> favorite functional languages (you case: Erlang, mine: Haskell and
> (S|OCA)ML), which could be compiled into C (i.e. without the necessity
> of garbage collection automatically.

Without automatic GC, you have to explicitly delete every intermediate
result. In other words, you can't simply compose an expression; you have
to keep the elements separate in variables because you need to reference
them a second time when deleting them.
Code with higher-order functions is affected as well. One of the more
useful applications of functional programming is constructing functions
that "do the right thing", on the fly at run-time. These tend to become
intermediate values once the abstraction levels are advanced enough. (I
think that's one of the reasons why all FPLs have automatic GC. FPLs
excel in abstraction, and having to delete values makes it awkward to
abstract.)
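
A small OCaml illustration of my own (using the |> pipeline operator
from current OCaml): every intermediate list below is an anonymous
temporary that the GC reclaims for us; without GC each one would need
its own variable plus an explicit free at exactly the right moment.

let result =
  [1; 2; 3; 4; 5]
  |> List.map (fun x -> x * x)            (* intermediate list no. 1 *)
  |> List.filter (fun x -> x mod 2 = 1)   (* intermediate list no. 2 *)
  |> List.fold_left (+) 0

let () = Printf.printf "%d\n" result      (* prints 35 *)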

I'd rather suggest building such a hybrid C/FPL system on top of the
Boehm-Demers-Weiser GC library.

Regards,
Jo

M E Leypold

unread,
Oct 12, 2006, 4:54:28 PM10/12/06
to

Jon Harrop <j...@ffconsultancy.com> writes:

> M E Leypold wrote:
> > I don't really know who is right in this case, but -- do you really
> > expect Jon Harrop to invest time and work to do those changes just on
> > you say-so? My impression was, that he has given some reasons (right
> > or wrong), why he doesn't expect much from those changes. What you
> > could do, is just apply your suggested changes to Jon's program and
> > post the difference in runtime here. That would prove your point.
>
> His implication is really that all C++ programmers should be willing to
> rewrite the parts of OCaml that they need, everytime they need them.
>
> You can lead a horse to water... ;-)


Any sufficiently complicated C or Fortran program contains an
ad-hoc, informally-specified bug-ridden slow implementation of
half of Common Lisp.

-- Philip Greenspun, often called Greenspun's Tenth Rule of Programming
(http://philip.greenspun.com/research)

s/Lisp/OCaml/;s/C|\(FORTRAN\)/C++/;

:-).

But seriously: what I find more offensive is the idea that he "points
out" other people's "mistakes" and sends them running, instead of
proving his point by just applying his changes.

As far as pure execution speed goes, I wouldn't bet that OCaml is
always faster than, or even comparably fast to, C++. But that is not the
point: C++ is simply unmaintainable, and programs that "evolve" rather
than being planned "right" from the beginning are ones I'd prefer to
write in a language with a reasonable type system, concentrating on
algorithms and dataflow instead of bothering with initialization, error
handling and resource management half of the time and in half of the
source code. The saved development time can always be spent on
optimization later, e.g. it might be useful to rewrite some hot spots
in C later. But I'm convinced that will really rarely
happen. Optimization only pays if the following inequality holds:


C_dev < N_seats * delta_C_better

where

C_dev - cost of development; a simple estimate would be time *
hourly rate.

N_seats - number of computers the program will be running on.

delta_C_better - the additional money the customer has to spend per
node to get a faster and better machine on which
the program runs "fast enough" (not a machine that
makes it as fast as the C++ version).

In many cases optimization is not really economical. So the whole
discussion is moot, and as far as maintainability goes ...
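
A back-of-the-envelope check of that inequality, with entirely made-up
numbers, just to show how it is meant to be read:

(* hypothetical figures, purely for illustration *)
let c_dev = 80. *. 100.       (* 80 hours of optimisation work at $100/hour *)
let n_seats = 10.
let delta_c_better = 300.     (* extra hardware cost per seat *)

let () =
  if c_dev < n_seats *. delta_c_better
  then print_endline "optimisation pays"
  else print_endline "buy faster machines instead"
  (* here 8000 < 3000 is false, so the optimisation does not pay *)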

Regards -- Markus


idk...@gmail.com

unread,
Oct 14, 2006, 3:46:54 AM10/14/06
to
M E Leypold wrote:
> Ulf Wiger <etx...@cbe.ericsson.se> writes:
>
> > But this is also part of the reason why we don't really see
> > any point to using C++. When we need a general purpose low-level
> > language, C fits the bill.
>
[snip]

> Bit-C is a bit like this.
>
> Regards -- Markus

Markus, can you provide a URL for Bit-C?

A cursory Google search failed for me.

Thanks much.

Paul Rubin

unread,
Oct 14, 2006, 3:51:09 AM10/14/06
to
"idk...@gmail.com" <idk...@gmail.com> writes:
> Markus, can you provide url to Bit-C?
> a cursory google failed for me.

See the links from here:

http://www.coyotos.org/docs/index.html

Tomasz Zielonka

unread,
Oct 14, 2006, 6:40:01 AM10/14/06
to
Ulf Wiger wrote:
> Here's a quote from "Modern Concurrency Abstractions for C#"
> by Benton, Cardelli and Fournet
> (http://research.microsoft.com/Users/luca/Papers/Polyphony%20(TOPLAS).pdf)
>
> "We believe that concurrency should be a language feature and a part
> of language specifications.

I think it's important to realise that one of the easiest ways to make
the language play well with concurrency is to make the core of the language
purely functional. That's the approach used in Erlang and Haskell.

There are other good reasons for starting with a pure core language, so
it's possible to come up with a concurrency friendly language "by
accident". I wonder if it wasn't the case with Haskell, at least
partly.

When you start with an imperative core, as in C#, there are so many more
things you can get wrong.

Best regards
Tomasz

M E Leypold

unread,
Oct 14, 2006, 8:49:35 AM10/14/06
to

"idk...@gmail.com" <idk...@gmail.com> writes:

http://www.coyotos.org/docs/bitc/spec.html

It's BitC not Bit-C (as I wrote). Sorry.

Regards -- Markus

Ulf Wiger

unread,
Oct 16, 2006, 6:53:39 AM10/16/06
to
>>>>> "M E" == M E Leypold <M> writes:

>> For OCaml afficionados, www.felix.org comes pretty close to this

M E> Thanks. :-)

M E> You must mean http://felix.sourceforge.net/, though.

Yes. Sorry.

>> description, I think. I've been lurking on the mailing list for
>> some time now, and it's starting to pick up some speed
>> (relatively speaking). They push it as 'the smart upgrade from
>> C++'... but my interest in it is as a possible way to write
>> linked-in drivers for Erlang with much more safety and
>> expressive power than in C, but without really sacrificing the
>> speed. It does have GC, though. (:

M.E.> And that is exactly what I'm hoping/dreaming of to get rid of
M.E.> Of course that would probably require a certain amount of
M.E.> manual intervention

The idea of Felix is to allow nearly seamless integration with
C++. This includes being able to use manual memory management
where needed.

See for example:
http://www.mail-archive.com/felix-l...@lists.sourceforge.net/msg00390.html

You can also define your own primitive types. You should be able
to define your own array type that bypasses the GC, if you
want.

I can't speak intelligently of how well it works out in practice,
since I only lurk on the list, and haven't tried anything but
the simplest of code in Felix.

idk...@gmail.com

unread,
Oct 17, 2006, 4:54:09 AM10/17/06
to

very cool, thanks much! :)

0 new messages