
Re: Alternatives to C: ObjectPascal, Eiffel, Ada or Modula-3?


fft1976

Jul 28, 2009, 4:57:05 PM
On Jul 18, 7:19 am, Andrea Taverna <a.tavs.NOS...@libero.it.invalid>
wrote:
> Hi folks!
>
> I'm a CS student and I often need to write number-crunching code dealing
> with combinatorial optimization problems.
> What I usually do is implement ad-hoc algorithms and test their
> performance against other previously-known solutions, including general
> solvers.
>
> In the past I used C, but now I have decided to change language.
> I'm looking for a "better" one.
>
> Here follow the features it should have, ranked approximately by relevance:
>
> 0) open-source support and an alive community
> 1) directly compiled to efficient code
> 2) statically typed and object-oriented, better if multi-paradigm
> 3) general-purpose libraries (possibly standardized, either de jure or
> de facto), including containers and some math abstractions.
> 4) garbage collected. As an alternative, provide memory management
> policies via libraries (e.g. memory pools and such)
> 5) optional run-time checks and some kind of control over compilation
> and low-level issues
> 6) "relatively simple and consistent"
>
> So I have considered these alternatives: FreePascal, Eiffel, Ada and
> Modula-3.
> I have taken a look at all of them and I'm still undecided. Below are
> the impressions I got for each language.
> Can you help me? Feel free to recommend other languages as well.
>
> TIA
>
> --> Impressions I got for each language
>
>      - FreePascal is a safe and modular alternative to C and C++,  but
> it is also close to the latter in terms of expressiveness. Moreover it
> doesn't seem to have the libraries I need.
> ==>Qualifies for 0,1,2,5. Not sure about 3 and 4
>
>      - Eiffel is geared toward application programming in
> medium/large-sized teams relying heavily on OO modelling. It is designed
> for (re)usability, correctness and efficiency in this order.  
> My needs are somewhat different though.
> The main gripe I have with Eiffel is the lack of a well-documented
> standard gpl'ed library.
> GOBO and EiffelBase seem to have incomplete or non-free documentation
> and I couldn't find tutorials; as such, I couldn't get a clear picture
> about them.
> ==> Qualifies for 0,1,2,4,5 and 6.  Not sure about 3.
>
>      - Ada is best suited for large teams and/or critical software, thus
> it may be overkill for my work; OTOH it could have anything I might
> happen to need.
> What holds me back from jumping onto Ada is its potential complexity.
> It would be interesting to hear the experience of other people learning
> Ada from a C/Java background.
> As for memory management (requirement 4), I heard there are different
> takes on the matter:
>  (a) Ada uses dynamic stack allocation a lot, and in a transparent way,
> reducing the need of manual management (MM)
>  (b) Ada libraries adopt idioms that further simplify MM issues
>  (c) Conservative garbage collectors such as Boehm's can be used with
> Ada, and they are supposed to work "better" with Ada than with unsafe
> languages such as C and C++
>
> So can MM be said to be easier in Ada than in C? I hope Ada-ers will
> mercifully shed some light on the issue.
>
> There seems to be a lot of free Ada95 documentation on the net; I guess
> it's suitable for Ada05 as well.
> ==> Qualifies for 0,1,2,3,5 and, partially, 4
>
>      - Modula-3 is simpler/smaller than Ada and has been successfully
> used for system/application programming.
> It seems to be the most consistent, simple and easy to grok, but I
> couldn't find any container/math library ready to use.
> ==> Qualifies for 0,1,2,4,5,6.

P.S. I was going to write a 3-sentence reply, but got carried away. I
hope this wasn't a troll...

My needs are similar to yours, and I've been looking for better
languages and learning them for years.

In summary: everything sucks, when you look close enough.

OCaml should probably be your #1 choice (about 2x slower than C
usually, single core). Has its own flaws (Google "Ocaml sucks")

Ada is also 2x slower, but less suitable for your purposes (verbose,
less memory safe than OCaml, free compilers produce GPL-only code)

Haskell is good for prototyping, and performance on par with C can be
achieved (but not reliably, at the cost of writing code 10x more
terrible than C: look at the crap in the shootout).

Java: 1.5x slower than C as a rule of thumb. Safe, verbose,
repetitive, overengineered. Some stuff you get for free with C++ and
OCaml ("clone") or in OCaml ("marshalling"), you have to write by hand
in Java for every single class.

C++: learning curve and safety are the main problems. I'm way past the
former, and I use Visual Studio Debug mode (I develop cross-platform
code) when there is any sign of memory problems (not frequent), but
it's still not completely safe.

Gambit-C Scheme (one of the best of LISPs, IMO): about 2x slower than
C (single core only), but you have to work to get within 2x (unlike
OCaml), and if you want it fast, it can't be safe (switch controlled).

The others you mention are dead, with all the implications.

(replaced dead NGs with more relevant ones)

Georg Bauhaus

Jul 28, 2009, 5:59:11 PM
fft1976 wrote:

> In summary: everything sucks, when you look close enough.
>
> OCaml should probably be your #1 choice (about 2x slower than C
> usually, single core). Has its own flaws (Google "Ocaml sucks")
>
> Ada is also 2x slower, but less suitable for your purposes (verbose,
> less memory safe than OCaml, free compilers produce GPL-only code)

Whatever the case may be, free Ada (and C++) compilers built from the
FSF's GCC sources produce executables that are not forced under the GPL.

Ludovic Brenta

Jul 28, 2009, 6:01:17 PM
fft1976 wrote:
> Ada is also 2x slower [than C], but less suitable for your purposes (verbose,
> less memory safe than OCaml, free compilers produce GPL-only code)

Correction: the Ada run-time library from GCC (from the Free Software
Foundation) is licensed under GPLv3 with run-time linking exception,
so does not cause the executables to be under GPL. But that wasn't
the OP's concern, anyway.

--
Ludovic Brenta.

Jon Harrop

Jul 28, 2009, 7:14:23 PM
fft1976 wrote:
> C++: learning curve and safety are the main problems. I'm way past the
> former, and I use Visual Studio Debug mode (I develop cross-platform
> code) when there is any sign of memory problems (not frequent), but
> it's still not completely safe.

If you're using VS then I highly recommend F# for numerical work, largely
because it makes parallelism so easy.

> Gambit-C Scheme (one of the best of LISPs, IMO): about 2x slower than
> C (single core only), but you have to work to get within 2x (unlike
> OCaml), and if you want it fast, it can't be safe (switch controlled).

Bigloo?

--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u

Oxide Scrubber

Jul 28, 2009, 8:40:01 PM
fft1976 wrote:
> My needs are similar to yours, and I've been looking for better
> languages and learning them for years.
>
> In summary: everything sucks, when you look close enough.

Except Clojure.

> Java: 1.5x slower than C as a rule of thumb.

I think it can achieve parity.

> Safe, verbose, repetitive, overengineered. Some stuff you get for free
> with C++ and OCaml ("clone") or in OCaml ("marshalling"), you have to
> write by hand in Java for every single class.

Nope. With Java, in most cases you can slap "implements Cloneable" on a
class and make the clone method public, or slap "implements
Serializable" on a class to make its instances marshallable.

Clojure is much better though: safe, non-verbose, non-repetitive, full
access to Java libraries. Clone you mostly don't need as most data
structures are immutable. Anything made purely with numbers, strings,
keywords, symbols, lists, vectors, sets, and maps can be written and
read using Clojure's reader, and can therefore be marshalled easily, and
to a human-editable text file to boot. (This doesn't, however,
interoperate with generic Java objects or Java serialization, and I'm
not sure it works with data structures with circularities. It won't work
with data structures with infinite sequences in them, but if you
represent such sequences symbolically it can.)

Last but not least, numerical Clojure code can fairly easily be tuned to
give comparable performance to C or even hand-tuned assembly, and
Clojure has strong support for parallelism and threading.

> Gambit-C Scheme (one of the best of LISPs, IMO): about 2x slower than
> C

Sucks compared to Clojure.

>(single core only)

Sucks compared to Clojure.

> but you have to work to get within 2x (unlike OCaml)

You need to work a bit to get the most speed out of Clojure too, but you
can then get C-like performance out of it in tight loops.

> and if you want it fast, it can't be safe (switch controlled).

You can have that cake and eat it too in Clojure, aside from giving
up protection against integer overflow and wrapping for that last bit of
speed in integer operations. (There are "unchecked" integer operations
equivalent to normal C/C++/Java arithmetic on int-like types, and "safe"
ones that promote as needed to int, long, even bignum.)

fft1976

Jul 28, 2009, 10:52:16 PM
On Jul 28, 5:40 pm, Oxide Scrubber <jharri...@hatlop.de> wrote:

Clojure is kind of cool, but many corrections are in order:

> fft1976 wrote:
> > Java: 1.5x slower than C as a rule of thumb.
>
> I think it can achieve parity.

I disagree. I don't think the JIT can do much about the memory layout
of the data structures. Compare a vector of complex numbers or of 3D
vectors (NOT of pointers to them) in C/C++ and Java. Run some tests. I
did and I looked at others'.

For me, 1.5x is a good trade-off for safety though.

> With Java, in most cases you can slap "implements Cloneable" on a
> class and make the clone method public,

Java's "clone" does a shallow copy only (check the docs). C++ default
copy constructors call copy constructors on the members.

> or slap "implements
> Serializable" on a class to make its instances marshallable.

Does this handle cycles and memory sharing among data structures?

> (This doesn't, however,
> interoperate with generic Java objects or Java serialization, and I'm
> not sure it works with data structures with circularities. It won't work
> with data structures with infinite sequences in them, but if you
> represent such sequences symbolically it can.)

But you'll need Java's data structures and mutations on Java's arrays
to compete with it in speed of numerical code, so this argument goes
out the window.

> Clojure has strong support for parallelism and threading.

Clojure's support for multithreading is good only as long as your code
is pure-functional. Let's see you add 1.0 to all diagonal elements of
a 1000x1000 matrix.

> You need to work a bit to get the most speed out of Clojure too, but you
> can then get C-like performance out of it in tight loops.

In theory, imperative Clojure code using Java arrays can be made as fast
as Java (which is slower than C), but in practice, experts seem to
agree that Clojure is 5-10 times slower than Java:

http://groups.google.com/group/clojure/msg/92b33476c0507478

Aside: do you remember to add -O3 when you are compiling C/C++? I use
"-server" when running the JVM.

Oxide Scrubber

Jul 29, 2009, 12:46:25 AM
fft1976 wrote:
> On Jul 28, 5:40 pm, Oxide Scrubber <jharri...@hatlop.de> wrote:
>
> Clojure is kind of cool, but many corrections are in order

No, they are not.

>> fft1976 wrote:
>>> Java: 1.5x slower than C as a rule of thumb.
>> I think it can achieve parity.
>
> I disagree. I don't think the JIT can do much about the memory layout
> of the data structures. Compare a vector of complex numbers or of 3D
> vectors (NOT of pointers to them) in C/C++ and Java. Run some tests. I
> did and I looked at others'.
>
> For me, 1.5x is a good trade-off for safety though.

The JIT can't, but the coder can. If you want a vector of N
double-precision complex numbers in Java that is contiguous in memory,
for example, you could use a Java array of 2xN doubles and index into it
appropriately.
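
For instance (rough sketch; the names and the main() driver are just
for illustration):

    // N complex numbers packed as [re0, im0, re1, im1, ...] in one
    // contiguous double[], with no per-number objects at all.
    class PackedComplex {
        // Multiply element i of v by (re, im) in place.
        static void mulInPlace(double[] v, int i, double re, double im) {
            double a = v[2 * i], b = v[2 * i + 1];
            v[2 * i]     = a * re - b * im;  // real part
            v[2 * i + 1] = a * im + b * re;  // imaginary part
        }

        public static void main(String[] args) {
            double[] v = new double[2 * 1000];  // 1000 complex numbers
            v[0] = 1.0; v[1] = 2.0;             // element 0 = 1 + 2i
            mulInPlace(v, 0, 0.0, 1.0);         // multiply by i
            System.out.println(v[0] + " + " + v[1] + "i"); // -2.0 + 1.0i
        }
    }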

This won't work with encapsulated-object bignums, but at that point
you're probably spending most of your time in individual bignum ops,
particularly multiplies, anyway, so traversal from one bignum to another
becomes an insignificant factor in run-time.

>> With Java, in most cases you can slap "implements Cloneable" on a
>> class and make the clone method public,
>
> Java's "clone" does a shallow copy only (check the docs).

True. If you want a deep copy you will have to implement it yourself. Or
you can make a deepCopy static utility method that exploits
"Serializable" to deep copy anything that's serializable, simply by
serializing and deserializing it. That can use memory, or a temp file,
and not much memory if you make an auxiliary class implement both
InputStream and OutputStream, with its OutputStream methods writing to a
growable buffer and its InputStream methods consuming (possibly blocking
if necessary) from same. Closing the OutputStream aspect EOFs the
InputStream aspect. Then wrap in ObjectInput/OutputStream. You can use a
BlockingQueue of byte arrays of length 8192 (or whatever), serialize to
one end and deserialize from the other in concurrent threads, and it
will tend to not use much memory. Limit the BlockingQueue length to 1
item and it will never use much more than 8 Kbyte (the OutputStream
aspect will block until the InputStream aspect consumes from the queue).
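
The simple all-in-memory version is only a few lines (a sketch; the
streaming variant described above just avoids buffering the whole
graph at once):

    import java.io.*;

    final class DeepCopy {
        // Deep-copies any Serializable object graph by serializing it
        // to an in-memory buffer and reading it back. ObjectOutputStream
        // tracks object identity, so cycles and shared sub-objects are
        // preserved in the copy.
        @SuppressWarnings("unchecked")
        static <T extends Serializable> T copy(T obj)
                throws IOException, ClassNotFoundException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(buf);
            out.writeObject(obj);
            out.close();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(buf.toByteArray()));
            return (T) in.readObject();
        }
    }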

>> or slap "implements
>> Serializable" on a class to make its instances marshallable.
>
> Does this handle cycles and memory sharing among data structures?

It does.

>> (This doesn't, however,
>> interoperate with generic Java objects or Java serialization, and I'm
>> not sure it works with data structures with circularities. It won't work
>> with data structures with infinite sequences in them, but if you
>> represent such sequences symbolically it can.)
>
> But you'll need Java's data structures and mutations on Java's arrays
> to compete with it in speed of numerical code

Not necessarily. It depends on what sort of numerical code.

If the bulk of the time is spent performing arithmetic operations on
just a few values, these inner loops can be tightened up and use only
local variables, no arrays or other structures at all.

If the arithmetic is bignum, the bulk of your time will be spent
performing individual bignum operations. Algorithmic smartness
(Karatsuba multiplication, memoizing or otherwise saving multiply-used
intermediate values, and exploiting concurrency) will win big while
speeding up the loops that invoke bignum operations won't.

If the arithmetic is on smallnums in arrays and the computations are
mostly mutations, then you may want to use Java arrays, and you may want
to use a Java method instead of a Clojure function to do the operation
(and may still call this from Clojure easily enough). Note above about
Java: if you want contiguous memory you'll have to give up using Java
objects for things like complexes. But you can still make quasi-methods
for quasi-objects (e.g. methods that take an array and an index and
treat the two array elements starting at that index as real and
imaginary parts of a complex numbers and perform a complex op on them).

With Clojure code, you can make macros that expand to complex operations
on adjacent pairs of values in a Java primitive array. In practice,
Clojure access to Java arrays, even primitive arrays, seems to be slow.
This may be addressed in a future version. Alternatively, use Java for
this. Macros can also be used to turn code that operates on notional
vector-like data structures of lengths known at compile time into code
that actually operates on large sequences of local variables. The
"vectors" should then end up as contiguous data on the *stack*. If the
lengths are not known until run-time, the same macros can be used along
with "eval" to compile functions on the fly to perform efficient
operations on them. In fact, "eval" can be useful whenever other aspects
of the computation are not known until run-time. If, for example, a value
unknown at compile time will be added to the diagonal entries of lots of
1000x1000 matrices, and at run time that value turns out to be pure real,
"eval" can emit specialized code that doesn't waste time adding thousands
of zeros to the imaginary parts of the matrix entries.

Last but certainly not least, ask if any such algorithms can be
rewritten to be based less on mutation and more on map-reduce type
operations.

>> Clojure has strong support for parallelism and threading.
>
> Clojure's support for multithreading is good only as long as your code
> is pure-functional. Let's see you add 1.0 to all diagonal elements of
> a 1000x1000 matrix.

Clojure has all of Java's support for multithreading too. You could on a
dual-core machine get two threads each mutating 500 of those diagonal
entries -- no locking required since the changes are independent of one
another. Just as you could in Java. Or do it in Java and call this from
Clojure.
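
Concretely, something like this sketch (plain Java threads; no locking
needed because the two halves of the diagonal don't overlap):

    // Add 1.0 to every diagonal entry of an n x n matrix using two
    // threads, each owning a disjoint half of the diagonal.
    class DiagonalAdd {
        public static void main(String[] args) throws InterruptedException {
            final int n = 1000;
            final double[][] m = new double[n][n];

            Thread lo = new Thread(new Runnable() {
                public void run() {
                    for (int i = 0; i < n / 2; i++) m[i][i] += 1.0;
                }
            });
            Thread hi = new Thread(new Runnable() {
                public void run() {
                    for (int i = n / 2; i < n; i++) m[i][i] += 1.0;
                }
            });
            lo.start(); hi.start();
            lo.join();  hi.join(); // join() also gives memory visibility

            System.out.println(m[0][0] + " " + m[n - 1][n - 1]); // 1.0 1.0
        }
    }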

>> You need to work a bit to get the most speed out of Clojure too, but you
>> can then get C-like performance out of it in tight loops.
>
> In theory, imperative Clojure code using Java arrays can be made as fast
> as Java (which is slower than C)

See above: not if you lay out your data right, and in Clojure (unlike
pure Java) you can use macros or similarly to emulate having object
operations on the data despite it really being in a flat array.

> Aside: do you remember to add -O3 when you are compiling C/C++? I use
> "-server" when running the JVM.

I am assuming both. Without -server, Clojure and Java will be
significantly slower than C/C++. With -server, I've seen numerical
calculations in both run at tuned-assembly speeds.

fft1976

Jul 29, 2009, 1:55:00 AM
On Jul 28, 9:46 pm, Oxide Scrubber <jharri...@hatlop.de> wrote:

> >> fft1976 wrote:
> >>> Java: 1.5x slower than C as a rule of thumb.
> >> I think it can achieve parity.
>
> > I disagree. I don't think the JIT can do much about the memory layout
> > of the data structures. Compare a vector of complex numbers or of 3D
> > vectors (NOT of pointers to them) in C/C++ and Java. Run some tests. I
> > did and I looked at others'.
>
> > For me, 1.5x is a good trade-off for safety though.
>
> The JIT can't, but the coder can. If you want a vector of N
> double-precision complex numbers in Java that is contiguous in memory,
> for example, you could use a Java array of 2xN doubles and index into it
> appropriately.

This is fanboy fantasy, not reality. If I have a library for dealing
with 3D vectors, I'm not going to lay out my data in flat arrays and
copy the vectors back and forth. Also, this trick won't work with non-
homogeneous records.

> >> With Java, in most cases you can slap "implements Cloneable" on a
> >> class and make the clone method public,
>
> > Java's "clone" does a shallow copy only (check the docs).
>
> True. If you want a deep copy you will have to implement it yourself.

That's what I wrote. What was the point of your comment?


> Or
> you can make a deepCopy static utility method that exploits
> "Serializable" to deep copy anything that's serializable, simply by
> serializing and deserializing it. That can use memory, or a temp file,
> and not much memory if you make an auxiliary class implement both
> InputStream and OutputStream, with its OutputStream methods writing to a
> growable buffer and its InputStream methods consuming (possibly blocking
> if necessary) from same. Closing the OutputStream aspect EOFs the
> InputStream aspect. Then wrap in ObjectInput/OutputStream. You can use a
> BlockingQueue of byte arrays of length 8192 (or whatever), serialize to
> one end and deserialize from the other in concurrent threads, and it
> will tend to not use much memory. Limit the BlockingQueue length to 1
> item and it will never use much more than 8 Kbyte (the OutputStream
> aspect will block until the InputStream aspect consumes from the queue).

Ouch...

> If the bulk of the time is spent performing arithmetic operations on
> just a few values,

We are talking number crunching here, not useless pissing contests,
like calculating the digits of Pi.

> If the arithmetic is bignum, the bulk of your time will be spent
> performing individual bignum operations.

Contrary to what the term sounds like, number crunching is rarely
about bignums.

> If the arithmetic is on smallnums in arrays and the computations are
> mostly mutations, then you may want to use Java arrays,

That's what I said.

> In practice,
> Clojure access to Java arrays, even primitive arrays, seems to
> be slow. This may be addressed in a future version.

Yeah, yeah...

I noticed that you decided to delete this, by the way:

In practice, experts seem to agree that Clojure is 5-10 times slower
than Java:

http://groups.google.com/group/clojure/msg/92b33476c0507478

(relevant follow-up set)

Elena

Jul 29, 2009, 3:50:22 AM
On 28 Jul, 20:57, fft1976 <fft1...@gmail.com> wrote:
> >      - Eiffel is geared toward application programming in
> > medium/large-sized teams relying heavily on OO modelling. It is designed
> > for (re)usability, correctness and efficiency in this order.  
> > My needs are somewhat different though.
> > The main gripe I have with Eiffel is the lack of a well-documented
> > standard gpl'ed library.
> > GOBO and EiffelBase seem to have incomplete or non-free documentation
> > and I couldn't find tutorials; as such, I couldn't get a clear picture
> > about them.
> > ==> Qualifies for 0,1,2,4,5 and 6.  Not sure about 3.

Don't forget that you can download a GPLed version of EiffelStudio,
the Eiffel IDE with all bells and whistles from ISE:

https://www2.eiffel.com/download/download_info.aspx?id=eiffelstudio&info=false&mirrors=eiffelstudio

Ray Blaak

Jul 29, 2009, 4:11:46 AM
fft1976 <fft...@gmail.com> writes:
> >      - Ada is best suited for large teams and/or critical software, thus
> > it may be overkill for my work; OTOH it could have anything I might
> > happen to need.

Ada is fine as far as it goes, and there is a GNU Ada compiler which helps a
lot.

My problem with it is the lack of garbage collection. That is just not
acceptable to me these days. If you manually allocate memory, then you
pretty much have memory bugs; it's as simple as that.

It's better than C/C++ for sure, but I remember plenty of memory exceptions all
the same.

Also, I find the OO notation a little quirky.

> Ada is also 2x slower, but less suitable for your purposes (verbose,
> less memory safe than OCaml, free compilers produce GPL-only code)

I am pretty sure the GPL-only thing is not true.

My own language choice now would be Java or C#. I am investigating Clojure
since I always have a fondness for Scheme/Lisps, and a modern Lisp on the JVM
solves a lot of problems.

OCaml is something I want to know better as well. Any of the ML-family
languages should improve your type theoretical skills.

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
rAYb...@STRIPCAPStelus.net The Rhythm has my soul.

Oxide Scrubber

Jul 29, 2009, 4:18:26 AM
fft1976 failed for the second time to respect the Followup-To header:

> On Jul 28, 9:46 pm, Oxide Scrubber <jharri...@hatlop.de> wrote:
>
>>>> fft1976 wrote:
>>>>> Java: 1.5x slower than C as a rule of thumb.
>>>> I think it can achieve parity.
>>> I disagree. I don't think the JIT can do much about the memory layout
>>> of the data structures. Compare a vector of complex numbers or of 3D
>>> vectors (NOT of pointers to them) in C/C++ and Java. Run some tests. I
>>> did and I looked at others'.
>>> For me, 1.5x is a good trade-off for safety though.
>> The JIT can't, but the coder can. If you want a vector of N
>> double-precision complex numbers in Java that is contiguous in memory,
>> for example, you could use a Java array of 2xN doubles and index into it
>> appropriately.
>
> This is fanboy fantasy, not reality.

Stop attacking me. I have done nothing to deserve your repeated flamage.
You must have the wrong target. Recheck and then go flame the correct
target.

> If I have a library for dealing with 3D vectors, I'm not going to lay
> out my data in flat arrays and copy the vectors back and forth.

If you have a library for dealing with X, in any language, you'll use
the data as laid out by that library.

A performant Java library for numerics should lay out its data in an
appropriate manner so as to achieve performance, of course.

> Also, this trick won't work with non-homogeneous records.

It will work for records of a primitive type, such as double. It won't
work if for some odd reason you want to mix ints and doubles in the same
vector, no. In that unusual case you may be better off using multiple
separate arrays, allocated one right after the other (so probably close
together in memory), one with all the ints and one with all the doubles,
or else resorting to JNI to implement some things.

>>>> With Java, in most cases you can slap "implements Cloneable" on a
>>>> class and make the clone method public,
>>> Java's "clone" does a shallow copy only (check the docs).
>> True. If you want a deep copy you will have to implement it yourself.
>

>> Or
>> you can make a deepCopy static utility method that exploits
>> "Serializable" to deep copy anything that's serializable, simply by
>> serializing and deserializing it. That can use memory, or a temp file,
>> and not much memory if you make an auxiliary class implement both
>> InputStream and OutputStream, with its OutputStream methods writing to a
>> growable buffer and its InputStream methods consuming (possibly blocking
>> if necessary) from same. Closing the OutputStream aspect EOFs the
>> InputStream aspect. Then wrap in ObjectInput/OutputStream. You can use a
>> BlockingQueue of byte arrays of length 8192 (or whatever), serialize to
>> one end and deserialize from the other in concurrent threads, and it
>> will tend to not use much memory. Limit the BlockingQueue length to 1
>> item and it will never use much more than 8 Kbyte (the OutputStream
>> aspect will block until the InputStream aspect consumes from the queue).
>
> Ouch...

No ouch. You'd only have to do this once, and then deepCopy would work
on every serializable type in Java, including ones that didn't exist
when you wrote deepCopy.

>> If the bulk of the time is spent performing arithmetic operations on
>> just a few values,
>
> We are talking number crunching here

Yes, I know.

>> If the arithmetic is bignum, the bulk of your time will be spent
>> performing individual bignum operations.
>
> Contrary to what the term sounds like, number crunching is rarely
> about bignums.

I never said otherwise.

>> If the arithmetic is on smallnums in arrays and the computations are
>> mostly mutations, then you may want to use Java arrays,
>
> That's what I said.

Now that the debate is over, please respect the followup-to header.

It would have been better, of course, had you chosen to end it *before*
demonstrating to the entire world that you're unimaginative and
ignorant. Ignorant of numerous performance tricks for Java and other JVM
languages, ignorant of performant functional algorithms for various
things, and ignorant of the actual facts from timing operations, which
show all three languages capable of achieving the same top speed in
numeric calculations. And unimaginative enough not only to be unable to
think up any of these things, but also unable to think up arguments that
aren't laced heavily with ad hominems and other rhetoric. "This is
fanboy fantasy, not reality," "What was the point of your comment?,"
"Ouch...," "We are talking number crunching here, not useless pissing
contests," "Yeah, yeah...," and so forth do not constitute rational
arguments.

fft1976

Jul 29, 2009, 5:11:29 AM
On Jul 29, 1:18 am, Oxide Scrubber <jharri...@hatlop.de> wrote:
> fft1976 failed for the second time to respect the Followup-To header:
>

Everyone who decides to reply to this joker, watch out: he's silently
adding "Followup-To: alt.olympics" to his messages, trying to trick
you into not posting here, so he would have the last word.

Is this what Clojure fanboys must resort to when they lose an
argument? I thought Haskell and Common Lisp had the worst fanboys till
today.

learn...@yourdesk.com

Jul 29, 2009, 5:57:04 AM
["Followup-To:" header set to comp.lang.ada.]
On 2009-07-29, Ray Blaak <rAYb...@STRIPCAPStelus.net> wrote:
> fft1976 <fft...@gmail.com> writes:

> My problem with it is the lack of garbage collection. That is just not
> acceptable to me these days. If you manually allocate memory, then you
> pretty much have memory bugs; it's as simple as that.

That is simply not true. If you don't know how to do resource management
properly, you're not ready to write commercial code in any environment.

Garbage collection is a throwback to interpreted languages and bloated
run-time systems. Normal compiled languages get along very well without any
such thing at all.

You need to understand your tools and not rely on the rubber crutches
overglorified scripting platforms like Java have taught people to put blind
faith in. What ever happened to competent coders? There is no idiot-proof
system. If you don't know how to manage storage, you shouldn't be allocating
it.

> Also, I find the OO notation a little quirky.

That's a tough proposition from someone advocating C++. C++ notation is
hideous, obfuscated, and error-prone. It's one of the least readable (maybe
the worst in that regard) of any of the languages in common use.

Ada is readable. It's clean, it's orderly, it's so much better and safer
than C++ that there isn't any comparison at all. But that is certainly all
lost on somebody who believes it's impossible to manage memory properly and
thinks garbage collection is a must-have for any language.

>> Ada is also 2x slower, but less suitable for your purposes (verbose,
>> less memory safe than OCaml, free compilers produce GPL-only code)

All utter nonsense. (Understood, this was not your comment; you were
replying to the previous poster...)

> I am pretty sure the GPL-only thing is not true.

Correct.

> My own language choice now would be Java or C#. I am investigating Clojure
> since I always have a fondness for Scheme/Lisps, and a modern Lisp on the
> JVM solves a lot of problems.

Well yes, if you don't know how to code and if you don't understand
fundamental aspects of software engineering like resource management, you
definitely shouldn't be writing code for commercial or industrial
environments and you probably should be using the "protect me from myself"
platforms like Java and C#. I can understand your post better now.

Oxide Scrubber

Jul 29, 2009, 7:13:30 AM
fft1976 wrote:
> On Jul 29, 1:18 am, Oxide Scrubber <jharri...@hatlop.de> wrote:
>> fft1976 failed for the second time to respect the Followup-To header:
>
> Everyone who decides to reply to this joker, watch out: he's silently
> adding "Followup-To: alt.olympics" to his messages

No, I am not. I am adding a followup-to for atl.olympics. (Note spelling.)

It's currently an empty newsgroup, so it seemed appropriate as a place
to redirect your useless and illogical flamage.

> Is this what Clojure fanboys must resort to when they lose an
> argument?

I wouldn't know, since that's never happened to me.

Oxide Scrubber

Jul 29, 2009, 7:38:53 AM
learn...@yourdesk.com wrote:
> ["Followup-To:" header set to comp.lang.ada.]

Sorry, no can do. You write complete nonsense in four newsgroups, you
get corrected in four newsgroups.

> On 2009-07-29, Ray Blaak <rAYb...@STRIPCAPStelus.net> wrote:
>> fft1976 <fft...@gmail.com> writes:
>
>> My problem with it is the lack of garbage collection. That is just not
>> acceptable to me these days. If you manually allocate memory, then you
>> pretty much have memory bugs; it's as simple as that.
>
> That is simply not true. If you don't know how to do resource management
> properly, you're not ready to write commercial code in any environment.

What utter balderdash. You make GC sound like training wheels, when in
fact it is very useful even for major production-code systems.

Consider memory management of an object that is shared and passed around
at need among many related parts of a program. Keeping track of when
it's no longer in use rapidly becomes nontrivial as the complexity of
the code using it goes up. Eventually, you'll be reference counting or
doing something else like that, and before long, you'll end up with an
ad-hoc, informally specified, slow, bug-ridden implementation of half of
a garbage collector. (This will probably in turn be a part of an ad-hoc,
informally specified, slow, bug-ridden implementation of half of Common
Lisp.)

Why not save yourself the trouble and use a real GC, then? Especially
since a properly-used GC will actually improve execution speed.

Typical C++ code has to spend time deallocating objects proportional to
the number of objects that need deallocating. GCs tend to spend time
deallocating objects proportional to the number of objects that
*survive* since the last GC. This is often a much smaller number.

On a modern JVM, including Hotspot, the amount of time spent on memory
management for objects that don't survive to be copied to a tenured
generation tends to be two.

Two instruction cycles per object, that is, one to copy a pointer and
one to bump a pointer.

Not even some fiddling around with a free-list on top of that.

The devil is in the details, but the time spent on objects that do
survive tends not to be much worse over time, especially for very long
lived objects, which get tenured once and then are very rarely dealt
with by the garbage collector, which sweeps the tenured space far less
frequently than it does the young-object space.

The new G1 collector is supposed to be even more efficient; TLABs become
boxes of objects, which fill up. Eventually the system needs fresh TLABs
and all the boxes are full, whereupon some of the oldest get garbage
collected. By then almost everything in them tends to be garbage, and
only a very few objects must be copied to empty most of those boxes
while filling a few with live objects. Those become a kind of ad-hoc
"tenured" generation. Or something like that. Sun's web site has
technical information about it, somewhere. A site-scoped google search
of it for "garbage first" should bear fruit.

> Garbage collection is a throwback to interpreted languages and bloated
> run-time systems.

Poppycock.

> Normal compiled languages get along very well without any such thing
> at all.

Many implementations of Common Lisp are compiled. All have GCs. Are none
of them "normal" compiled languages?

> You need to understand your tools and not rely on the rubber crutches
> overglorified scripting platforms like Java have taught people to put blind
> faith in.

Java? Scripting platform? Oh, PUH-LEEZE. You can't script in Java. Too
much static main this and class that boilerplate needed, plus you have
to *compile* it and everything. You can't just write a few lines of code
in a .java file, sic some interpreter on it, and away you go, unlike say
Python.

As for GC being "rubber crutches": see above.

> What ever happened to competent coders?

They've all seen C++ for the unholy mess it is and migrated to languages
like Java, Scala, and Clojure that let you get something done without
worrying about micromanaging memory? Now if only we could stop worrying
about streams and window handles and other such nonsense too and have
the computer automation take care of those sorts of niggling details
too, as is the computer's job. :)

> There is no idiot-proof system.

Hence your characterization of Java as one seems to be rather flawed.

> If you don't know how to manage storage, you shouldn't be allocating
> it.

Who says they don't know how to? Maybe they just don't *want* to, when
the computer is perfectly capable of doing it for them, reliably and
error-free.

Do you think they should also do all sorts of arithmetic manually
instead of having the computer do it faster and more reliably, too?

Take that to its logical conclusion and all computers "should" be used
for is playing Quake, while real work is done entirely by hands-on human
labor.

How positively Luddite of you.

>> Also, I find the OO notation a little quirky.
>
> That's a tough proposition from someone advocating C++. C++ notation is
> hideous, obfuscated, and error-prone. It's one of the least readable (maybe
> the worst in that regard) of any of the languages in common use.

Funnily enough, it's similar to Java notation. Of course, you might find
CLOS notation worse -- all those parentheses. Smalltalk too -- no actual
monolithic class file, just individual methods browsable from a list,
and possibly mixed in with methods of other classes when dumped to a
file. C# is like Java, with some funky extras. Am I missing any? Oh,
yeah, Modula 3. And don't get me started on Objective C...

> Ada is readable. It's clean, it's orderly, it's so much better and safer
> than C++ that there isn't any comparison at all.

And it's at least as verbose as Java. Eeeuw. If you can cope with deeply
nested parentheses, Lisp FTW. Otherwise maybe stick with C? :)

> But that is certainly all
> lost on somebody who believes it's impossible to manage memory properly

I doubt any of us do so. Indeed, a counterexample seems to be Sun's
Hotspot GC, which seems to manage memory properly. I've never known it
to make a mistake, so it *must* be possible.

> and thinks garbage collection is a must-have for any language.

I'm more worried about the wackos that think manual memory management is
a must-have.

>> My own language choice now would be Java or C#. I am investigating Clojure
>> since I always have a fondness for Scheme/Lisps, and a modern Lisp on the
>> JVM solves a lot of problems.
>
> Well yes, if you don't know how to code and if you don't understand
> fundamental aspects of software engineering like resource management, you

Horsefeathers.

> the "protect me from myself" platforms like Java

Codswallop.

> and C#.

Blatherskite.

Lew

Jul 29, 2009, 8:38:34 AM

fft1976 is the one who "silently" decided to drag this flamewar into
clj.programmer. We were doing just fine without it.

f/u set to comp.programming and please keep your flamewar out of clj groups.

--
Lew

learn...@yourdesk.com

Jul 29, 2009, 10:03:29 AM
["Followup-To:" header set to comp.lang.ada.]
On 2009-07-29, Oxide Scrubber <jhar...@hatlop.de> wrote:
> learn...@yourdesk.com wrote:
>> ["Followup-To:" header set to comp.lang.ada.]
>
> Sorry, no can do. You write complete nonsense in four newsgroups, you
> get corrected in four newsgroups.

Same to you. Besides, I was just responding to the silly post; if he hadn't
splattered his goo all over Usenet, my response would have been in one group
as well.

>> On 2009-07-29, Ray Blaak <rAYb...@STRIPCAPStelus.net> wrote:
>>> fft1976 <fft...@gmail.com> writes:
>>
>>> My problem with it is the lack of garbage collection. That is just not
>>> acceptable to me these days. If you manually allocate memory, then you
>>> pretty much have memory bugs; it's as simple as that.
>>
>> That is simply not true. If you don't know how to do resource management
>> properly, you're not ready to write commercial code in any environment.
>
> What utter balderdash. You make GC sound like training wheels

Exactly what it is, excellent characterization!

> when in fact it is very useful even for major production-code systems.

No, it's not useful or even necessary at all unless you have a virtual
machine or other runtime. I work on large systems and we write all our code
in assembler. We don't have GC, we just know how to code. Simple is good for
performance, for readability, and for just about everything else. That's
another reason I like Ada, at least through the 95 version.

I would hate to see the "major production-code system" that relied on GC. I
can tell you this with certainty, no bank, insurance company, airline, or
any other online realtime operation uses such nonsense. Of course they have
some bits here and there written in C++ but the code that keeps them online
and serving customers is written in COBOL, assembler or Ada and doesn't need
or have GC. That's production.

If you're talking about academic (know-nothing) or hobbyist languages I can
understand how GC would be a virtue along with all the other mind-numbing
"improvements" made over the years.

> Consider memory management of an object that is shared and passed around
> at need among many related parts of a program. Keeping track of when
> it's no longer in use rapidly becomes nontrivial as the complexity of
> the code using it goes up. Eventually, you'll be reference counting or
> doing something else like that, and before long, you'll end up with an
> ad-hoc, informally specified, slow, bug-ridden implementation of half of
> a garbage collector. (This will probably in turn be a part of an ad-hoc,
> informally specified, slow, bug-ridden implementation of half of Common
> Lisp.)

This is how most uninformed people deal with fundamental lack of knowledge
and discipline in the design and coding of systems of any significant size
and scope. They just go from bad to worse by using spit and bailing wire
instead of understanding the issues and avoiding fundamentally incorrect and
inappropriate practices and language implementations. Know your tools and
you can avoid these problems entirely.

Reference counting and other such silly schemes are as much rubber crutches
as GC. All of the need for this sort of rubbish is based on the basic
inability to design and implement properly. If you would just have control
and understand your platform, all of these problems would go away.

> Why not save yourself the trouble and use a real GC, then? Especially
> since a properly-used GC will actually improve execution speed.

Because we have never needed them. And nobody else does unless they want to
strap themselves into Java or other bloated run-time systems, as I
said. Traditional compiled languages have no need for GC.

>> Garbage collection is a throwback to interpreted languages and bloated
>> run-time systems.
>
> Poppycock.

And then you go on to give examples of interpreted languages with bloated
runtimes like Lisp, Smalltalk, Clojure, etc. We don't get fooled by VMs,
they're still interpreters. Compiled code runs on bare metal with no
runtime. That's the distinction.

>> Normal compiled languages get along very well without any such thing
>> at all.
>
> Many implementations of Common Lisp are compiled. All have GCs. Are none
> of them "normal" compiled languages?

No, they're still based on run-time systems and have fundamental flaws that
create the need for GC. At any rate, they're not used in large systems and
not where performance is essential. I don't have any interest in those sorts
of applications. Academic computing is only interesting to academics. I work
on code that has to work and has to perform. None of the languages you
mention will do for any serious sort of work.

>> You need to understand your tools and not rely on the rubber crutches
>> overglorified scripting platforms like Java have taught people to put
>> blind faith in. What ever happened to competent coders?
>
> They've all seen C++ for the unholy mess it is

On this we agree!

> and migrated to languages like Java, Scala, and Clojure that let you get
> something done without worrying about micromanaging memory?

Memory and resource management generally are fundamental aspects of software
engineering discipline. Java is not a language, it's a scripting platform on
a dedicated VM. It's not progress to dumb down programming to the point where
you need GC and a VM and then ask why we should worry about micromanaging.
It's all in the details. I think you ought to have total control over what
you're writing and how it works, and the languages that offer that better
than others are of course assembler, but they include Ada.

> Now if only we could stop worrying about streams and window handles and
> other such nonsense too and have the computer automation take care of
> those sorts of niggling details too, as is the computer's job. :)

That's a fundamental point of disagreement. I don't want anything done for
me. I'll tell the machine what to do, and that's what I want. The farther
you get from your hardware, the less efficient your executable, the less
control you have, and the more protection from yourself you need. I don't
want anybody to blame but myself.

If you don't agree, then why stop there? Just write application and code
generators and be done with it.

> Funnily enough, it's similar to Java notation.

That's not by accident, that's by ripoff.

> Of course, you might find CLOS notation worse -- all those
> parentheses. Smalltalk too -- no actual monolithic class file, just
> individual methods browsable from a list, and possibly mixed in with
> methods of other classes when dumped to a file. C# is like Java, with
> some funky extras. Am I missing any? Oh, yeah, Modula 3. And don't get
> me started on Objective C...

The pattern seems to be that OO implementations are necessarily inefficient
compared to their predecessors and as a result of a few years of relying on
their self-protection, coders have become less and less competent, further
away from the machine, and more dependent on nannying. I don't approve of
this. I don't want my car to drive itself to the store, I'll steer it, use
the gas and brakes, and make it do what I want, thanks very much.

I realize every so-called coder under the age of 30 has been brainwashed
into thinking OO is the silver bullet but that's simply not the case. This
misapplication and misimplementation of OO has done far more damage than
good and now we are seeing the results as people become more reliant on
self-protection and less and less capable of doing anything themselves
without 3rd party libraries, GC, and layer upon layer of middleware,
etc. Where does it stop?

>> Ada is readable. It's clean, it's orderly, it's so much better and safer
>> than C++ that there isn't any comparison at all.
>
> And it's at least as verbose as Java. Eeeuw. If you can cope with deeply
> nested parentheses, Lisp FTW. Otherwise maybe stick with C? :)

Ada is not verbose at all. I don't understand how you can make that
statement. Java is a sloppy mess like its cousins. Ada makes reading the
code simple and that was a design goal of Ada.

> I'm more worried about the wackos that think manual memory management is
> a must-have.

And why is that? Is personal responsibility somehow not relevant to coding?
Is understanding your hardware and making it do what you want no longer the
goal of programming?

Andrea Taverna

Jul 29, 2009, 11:06:48 AM
On 29 Jul, 09:50, Elena <egarr...@gmail.com> wrote:
> Don't forget that you can download a GPLed version of EiffelStudio,
> the Eiffel IDE with all bells and whistles from ISE:
>
> https://www2.eiffel.com/download/download_info.aspx?id=eiffelstudio&i...

Already done. It's cool, and compiling it felt like unwrapping
birthday gifts.
Still, Eiffel libraries are "complex" due to the use of MI and
descendant hiding and I couldn't find a good tutorial. Documentation,
considering such complexity and the difference between Eiffel and
lower-level languages like Ada, M3 and the like, is too scarce.

So far Eiffel seems to be the language I'd hope to code in for tasks
other than number crunching, where OO software engineering weighs more.

Andrea

Andrea Taverna

Jul 29, 2009, 11:19:07 AM
On 28 Jul, 22:57, fft1976 <fft1...@gmail.com> wrote:
> P.S. I was going to write a 3-sentence reply, but got carried away. I
> hope this wasn't a troll...
>
> My needs are similar to yours, and I've been looking for better
> languages and learning them for years.
>
> In summary: everything sucks, when you look close enough.

With respect to the other posters, that's what I'm thinking too, but I
believe it's unavoidable; that's reality.

> OCaml should probably be your #1 choice (about 2x slower than C
> usually, single core). Has its own flaws (Google "Ocaml sucks")

That's why it was discarded. It differs a lot from Algol's other
relatives and I don't have the time to check how much it (doesn't)
suck(s).

> Ada is also 2x slower,

Are you sure?


> but less suitable for your purposes (verbose,

I have to say that translating my C graph library to Ada, leaving
memory management aside, led to shorter and more readable code, something
I've been dreaming of for the past 3 years.

> C++: learning curve and safety are the main problems. I'm way past the
> former, and I use Visual Studio Debug mode (I develop cross-platform
> code) when there is any sign of memory problems (not frequent), but
> it's still not completely safe.

I have my own reason not to use C++ ;)

Andrea

Oxide Scrubber

Jul 29, 2009, 1:02:10 PM
learn...@yourdesk.com wrote:
> ["Followup-To:" header set to comp.lang.ada.]

Sorry, no can do. You post tripe to four newsgroups, you get corrected
in four newsgroups.

> On 2009-07-29, Oxide Scrubber <jhar...@hatlop.de> wrote:
>> learn...@yourdesk.com wrote:
>>> ["Followup-To:" header set to comp.lang.ada.]
>> Sorry, no can do. You write complete nonsense in four newsgroups, you
>> get corrected in four newsgroups.
>
> Same to you.

But I don't write nonsense, unlike you.

> Besides, I was just responding to the silly post, if he hadn't
> splattered his goo all over usenet my response would have been in one group
> as well.

Ray Blaak's post did not strike me as either "silly" or "goo".

>>> On 2009-07-29, Ray Blaak <rAYb...@STRIPCAPStelus.net> wrote:
>>>> fft1976 <fft...@gmail.com> writes:
>>>> My problem with it is the lack of garbage collection. That is just not
>>>> acceptable to me these days. If you manually allocate memory, then you
>>>> pretty much have memory bugs; it's as simple as that.
>>> That is simply not true. If you don't know how to do resource management
>>> properly, you're not ready to write commercial code in any environment.
>> What utter balderdash. You make GC sound like training wheels
>
> Exactly what it is

Bull.

>> when in fact it is very useful even for major production-code systems.
>
> No, it's not useful

Ridiculous.

> I work on large systems and we write all our code in assembler.

Then may God have mercy on your soul.

> Simple is good for performance, for readability, and for just about
> everything else.

Assembly is good for naught but performance. For readability? Ludicrous.

> I would hate to see the "major production-code system" that relied on GC.

Well too bad, because there's more and more of them every day and some
such confrontation is inevitable, probably the next time you use a web
server since so many of them use JSP or other JVM-based technologies.

> I can tell you this with certainty, no bank, insurance company, airline, or
> any other online realtime operation uses such nonsense.

Folderol. Lots of these use Java on their web sites.

> Of course they have some bits here and there written in C++ but the code
> that keeps them online and serving customers is written in COBOL, assembler
> or Ada

Rubbish. Dollars to doughnuts they all use ten times as much Java as Ada.

> That's production.

No, that's hokum, directly out of your very own fertile imagination.

> If you're talking about academic (know-nothing) or hobbyist languages I can
> understand how GC would be a virtue along with all the other mind-numbing
> "improvements" made over the years.

GC is hardly restricted to "academic or hobbyist languages". Java, for
one, is neither. There are also practical uses for Lisp and, though it's
really *not* a very good idea, technically for C# also.

>> Consider memory management of an object that is shared and passed around
>> at need among many related parts of a program. Keeping track of when
>> it's no longer in use rapidly becomes nontrivial as the complexity of
>> the code using it goes up. Eventually, you'll be reference counting or
>> doing something else like that, and before long, you'll end up with an
>> ad-hoc, informally specified, slow, bug-ridden implementation of half of
>> a garbage collector. (This will probably in turn be a part of an ad-hoc,
>> informally specified, slow, bug-ridden implementation of half of Common
>> Lisp.)
>
> This is how most uninformed people deal with fundamental lack of knowledge
> and discipline in the design and coding of systems of any significant size
> and scope. They just go from bad to worse by using spit and bailing wire
> instead of understanding the issues and avoiding fundamentally incorrect and
> inappropriate practices and language implementations. Know your tools and
> you can avoid these problems entirely.

The "spit and bailing wire" would be destructors, auto_ptr,
roll-your-own reference counted pointers, and so forth, of course.

> Reference counting and other such silly schemes are as much rubber crutches
> as GC.

There are significant applications that cannot be developed without one of:
1. Reference counting.
2. GC.
3. Memory leaks.

Smart programmers avoid #3, and really smart ones avoid #1 as well,
since it copes poorly with circular data structures. :)
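
A quick illustration of that last point (a sketch; System.gc() is only
a hint to the JVM, so this is demonstrative rather than guaranteed):

    import java.lang.ref.WeakReference;

    class CycleDemo {
        static class Node { Node next; }

        public static void main(String[] args) {
            Node a = new Node();
            Node b = new Node();
            a.next = b;   // a -> b
            b.next = a;   // b -> a: a reference cycle
            WeakReference<Node> probe = new WeakReference<Node>(a);

            a = null;     // drop all external references; a naive
            b = null;     // reference counter would leak the cycle

            System.gc();  // a tracing GC can reclaim the whole cycle
            System.out.println(probe.get() == null
                    ? "cycle collected" : "still alive");
        }
    }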

> All of the need for this sort of rubbish is based on the basic
> inability to design and implement properly.

Nonsense.

> If you would just have control and understand your platform, all of these
> problems would go away.

BS.

>> Why not save yourself the trouble and use a real GC, then? Especially
>> since a properly-used GC will actually improve execution speed.
>
> Because we have never needed them.

Speak for yourself.

> And nobody else does unless they want to strap themselves into Java or
> other bloated run-time systems, as I said.

Yes, you did say that, along with quite a lot of other tripe.

> Traditional compiled languages have no need for GC.

Utter crap. Common Lisp is a glaring counterexample; often compiled to
native code.

>>> Garbage collection is a throwback to interpreted languages and bloated
>>> run-time systems.
>> Poppycock.
>
> And then you go on to give examples of interpreted languages with bloated
> runtimes like Lisp, Smalltalk, Clojure, etc.

Codswallop.

1. All of those tend to be bytecode-compiled. I'm unaware of any
Smalltalk dialect that's interpreted, and the only implementation of
Clojure thus far also compiles to bytecode. And then, via JIT, to
native code. Many Lisps are directly compiled to native code.
2. The "bloated runtimes" you speak of are a result of having a large,
featureful standard library and using much of it, instead of the
impoverished, miserable excuse for one that C/C++ comes with. (What,
no graphics? Sound? Networking? Threads? Memory-mapped file I/O? Oh,
come ON! You can't even write a portable C++ app with a real user
interface FFS.) This has nothing whatsoever to do with either GC or
compiled vs. interpreted.
3. Some of the most bloated apps it's ever been my displeasure to
audit via top and ps were written in C++. DESPITE C++'s lack of a
really featureful standard library.


> We don't get fooled by VMs, they're still interpreters.

What a silly thing to say.

> Compiled code runs on bare metal with no runtime. That's the distinction.

Bunkum. If that were true, then the only "compiled code" in existence
would be operating system kernels. Which are often written in assembly,
NOT compiled.

Yeesh.

>>> Normal compiled languages get along very well without any such thing
>>> at all.
>> Many implementations of Common Lisp are compiled. All have GCs. Are none
>> of them "normal" compiled languages?
>
> No, they're still based on run-time systems and have fundamental flaws that
> create the need for GC.

Bullshit. Name one such "fundamental flaw".

> At any rate, they're not used in large systems and not where performance
> is essential.

Horsefeathers. Lisp has been used in a lot of performance-requiring
applications, including on NASA fucking space probes. Ever heard of
"Deep Space 1"? As for "large systems", some very large ecommerce
companies have used Lisp to implement very large databases or web apps.

Java is used even more in such situations (excluding NASA probes).

> I don't have any interest in those sorts of applications. Academic
> computing is only interesting to academics.

Large order-fulfillment systems are "academic computing" to you?

Then you truly are lost.

> I work on code that has to work and has to perform.

As do I.

> None of the languages you mention will do for any serious sort of work.

Balderdash.

>>> You need to understand your tools and not rely on the rubber crutches
>>> overglorified scripting platforms like Java have taught people to put
>>> blind faith in. What ever happened to competent coders?
>> They've all seen C++ for the unholy mess it is
>
> On this we agree!
>
>> and migrated to languages like Java, Scala, and Clojure that let you get
>> something done without worrying about micromanaging memory?
>
> Memory and resource management generally are fundamental aspects of software
> engineering discipline.

True. Although a little automation can go a long way, as it can in many
areas of endeavor.

> Java is not a language, it's a scripting platform

Ludicrous.

> I think you ought to have total control over what you're writing and how
> it works

Then you should not be posting anywhere but comp.lang.asm.*.

>> Now if only we could stop worrying about streams and window handles and
>> other such nonsense too and have the computer automation take care of
>> those sorts of niggling details too, as is the computer's job. :)
>
> That's a fundamental point of disagreement.

I would not be surprised if you disagreed with me if I said the Klein
4-group was abelian, so you'll pardon me if I don't consider your
statement here to be very informative or meaningful. :)

> I don't want anything done for me.

Fine. But not every programmer feels as you do.

> I'll tell the machine what to do, and that's what I want. The farther
> you get from your hardware, the less efficient your executable

Poppycock. I've got Clojure code right here that takes arithmetic
expressions, massages them in various ways, compiles up some Java byte
code, lets the Hotspot server JIT have at it, and winds up running at
the same speed as hand-tuned assembly. Except that it can cobble this
stuff up on the fly from Lisp s-expressions constructed or obtained at
run-time.

Try doing THAT with actual assembly.

Clojure sits on Java, sits on JVM bytecode, sits on C/C++ implementation
of JVM, sits on assembly, sits on x86 bare metal yet can do something as
efficiently as assembly -- self-modifying machine code even -- and yet
do so much more safely and from the comfort of a high level language
with garbage collection.

You lost this fight the instant you picked it, because you are, quite
simply, dead wrong.

> the less control you have

A properly designed HLL provides ways to gain back some control if need
be. Java has WeakReference, finalizers, and other means to gain back
some from the GC when necessary (which is rarely). It has JNI and JIT
for when you need assembly's speed or to interface to C libraries. Lisps
tend to have some form of foreign function interface (Clojure can easily
call Java code, so Java foreign functions are easy there; indirectly it
can call C foreign functions via Java code that uses JNI.)

> and the more protection from yourself you need.

Ridiculous. You need "more protection from yourself" when you're doing
raw pointer arithmetic and unchecked casts and crap like that. In other
words, C and assembly. And that's when you have the least protection.

> I don't want anybody to blame but myself.

Well, then you'll just have to go into the woods, use some vines and
sticks to make a crude shovel, dig up some dirt, make a fire, somehow
manage to forge silicon chips in the dirt, mine other materials with
your bare hands, slap together a ZX Spectrum kit or something from all
this, assemble it, and start toggling in an operating system and
compiler one opcode at a time.

Most of us prefer to use the fruits of industrial civilization, even
though sometimes that means getting hit by other peoples' bugs. *shrug*

> If you don't agree, then why stop there? Just write application and code
> generators and be done with it.

Be sure to let me know when they've solved strong AI, and then I'll do that.

>> Funnily enough, it's similar to Java notation.
>
> That's not by accident

Whoosh!

>> Of course, you might find CLOS notation worse -- all those
>> parentheses. Smalltalk too -- no actual monolithic class file, just
>> individual methods browsable from a list, and possibly mixed in with
>> methods of other classes when dumped to a file. C# is like Java, with
>> some funky extras. Am I missing any? Oh, yeah, Modula 3. And don't get
>> me started on Objective C...
>
> The pattern seems to be that OO implementations are necessarily inefficient
> compared to their predecessors

Piffle.

> and as a result of a few years of relying on their self-protection, coders
> have become less and less competent, further away from the machine, and more
> dependent on nannying.

Poppycock.

> I don't want my car to drive itself to the store, I'll steer it, use
> the gas and brakes, and make it do what I want, thanks very much.

Luddite.

> I realize every so-called coder under the age of 30 has been brainwashed
> into thinking OO is the silver bullet but that's simply not the case.

Indeed; functional is the silver bullet. OO is only bronze.

> This misapplication and misimplementation of OO has done far more damage
> than good and now we are seeing the results as people become more reliant
> on self-protection and less and less capable of doing anything themselves
> without 3rd party libraries, GC, and layer upon layer of middleware, etc.

Hogwash.

> Where does it stop?

With the signing of your civil commitment papers.

>>> Ada is readable. It's clean, it's orderly, it's so much better and safer
>>> than C++ that there isn't any comparison at all.
>> And it's at least as verbose as Java. Eeeuw. If you can cope with deeply
>> nested parentheses, Lisp FTW. Otherwise maybe stick with C? :)
>
> Ada is not verbose at all.

Fiddle-faddle. Unless it has syntactic abstraction, anyway.

> I don't understand how you can make that statement.

It's quite simple, really. First I typed an 'A', then an 'n', then a 'd'
and a space, then "it's", and "at", and "least" and "as", and "verbose",
and "as" again, and then, finally, "Java" and a period.

You really ought to try it sometime.

> Java is a sloppy mess like its cousins.

Yes, it is. Fortunately, Lisp isn't.

> Ada makes reading the code simple and that was a design goal of Ada.

They said that about BASIC too.

>> I'm more worried about the wackos that think manual memory management is
>> a must-have.
>
> And why is that? Is personal responsibility somehow not relevant to coding?

Non-sequitur. The one has nothing whatsoever to do with the other.
Unless you genuinely believe that the use of automation and personal
responsibility are mutually exclusive, in which case there's a
hunter-gatherer tribe out there somewhere that is missing its village idiot.

> Is understanding your hardware and making it do what you want no longer the
> goal of programming?

It still is if you're just a hobbyist hacker. We professionals are in
the business of understanding customers' hardware and making it do what
they want at their push of some buttons. :)

Ray Blaak

unread,
Jul 29, 2009, 1:22:01 PM7/29/09
to
learn...@yourdesk.com writes:
> Garbage collection is a throwback to interpreted languages and bloated
> run-time systems. Normal compiled languages get along very well without any
> such thing at all.

GC is an advancement of the state of the art. In general, GC manages memory
better than people do.

Is it always appropriate? No, it depends on what you are doing. Sometimes you
need precise control. Fair enough.

But the default of no GC forces the programmer to spend artificial effort on
the memory management problem that could be better spent elsewhere.

Hmm, this is also directed to comp.lang.ada. I recall debating this here
before. Just google "GC" and my name for another thread arguing about GC in
cla.

> > Also, I find the OO notation a little quirky.
>
> That's a tough proposition from someone advocating C++. C++ notation is
> hideous, obfuscated, and error-prone. It's one of the least readable (maybe
> the worst in that regard) of any of the languages in common use.

I was not advocating C++. I despise it too.

There is nothing "wrong" with the Ada notation. It is just that my way of
thinking prefers to conceptualize objects owning methods,
e.g. obj.doSomething() vs doSomething(obj).

That is just my preference.

Actually doesn't Ada 2005 allow obj.method notation in more circumstances?

> Well yes, if you don't know how to code and if you don't understand
> fundamental aspects of software engineering like resource management, you
> definitely shouldn't be writing code for commercial or industrial
> environments and you probably should be using the "protect me from myself"
> platforms like Java and C#. I can understand your post better now.

Your debating style sucks. No need to be insulting just because I don't agree
with you about GC. I have reasons and experience with using GC and not using
GC, and I can back up my positions. I can also see the arguments for
preferring explicit manual control.

And you know, we can still validly disagree.

Just don't be an asshole about things.

Jon Harrop

unread,
Jul 29, 2009, 3:24:51 PM7/29/09
to
fft1976 wrote:
> On Jul 28, 9:46 pm, Oxide Scrubber <jharri...@hatlop.de> wrote:
>> >> fft1976 wrote:
>> >>> Java: 1.5x slower than C as a rule of thumb.
>> >> I think it can achieve parity.
>>
>> > I disagree. I don't think the JIT can do much about the memory layout
>> > of the data structures. Compare a vector of complex numbers or of 3D
>> > vectors (NOT of pointers to them) in C/C++ and Java. Run some tests. I
>> > did and I looked at others'.

Right. Writing such algorithms generically incurs huge performance
degradation in Java and Clojure.

>> > For me, 1.5x is a good trade-off for safety though.
>>
>> The JIT can't, but the coder can. If you want a vector of N
>> double-precision complex numbers in Java that is contiguous in memory,
>> for example, you could use a Java array of 2xN doubles and index into it
>> appropriately.

That workaround sucks because you've either lost polymorphism or, if you
rewrite the entire compiler to use whole program optimizations and change
the calling convention globally, you've lost incremental compilation and
dynamic loading.

> This is fanboy fantasy, not reality.

Yes. Clojure has some nice features but its most serious deficiencies are
inherited from the JVM and there is nothing Clojure can do about it, e.g.
value types and TCO. Moreover, according to Sun employees on the JVM
languages group this is never likely to be fixed.

That is a major problem with all JVM-based languages in the context of
technical computing. Perhaps the best solution for technical users would be
for languages like Scala and Clojure to target MLVM instead of the JVM and
leverage its existing tail calls and then work towards getting value types
implemented as well.

Jon Harrop

unread,
Jul 29, 2009, 3:25:48 PM7/29/09
to
Andrea Taverna wrote:
>> OCaml should probably be your #1 choice (about 2x slower than C
>> usually, single core). Has its own flaws (Google "Ocaml sucks")
>
> That's why it was discarded.

From your options?

Martin

unread,
Jul 29, 2009, 2:44:48 PM7/29/09
to
On Jul 29, 9:11 am, Ray Blaak <rAYbl...@STRIPCAPStelus.net> wrote:

> fft1976 <fft1...@gmail.com> writes:
> > >      - Ada is best suited for large teams and/or critical software, thus
> > > it may be overkill for my work, OTH it could have anything I might
> > > happen to need.
>
> Ada is fine as far as it goes, and there is a GNU Ada compiler which helps a
> lot.
>
> My problem with it is the lack of a garbage collection. That is just not
> acceptable to me these days. If you manually allocate memory, then you pretty
> much have memory bugs, it's as simple as that.

There's little demand for GC from Ada users for a number of reasons:

1) You very rarely need to explicitly manage memory using Ada -
there's a shed load of predefined containers should you need lists,
vectors, sets, etc. (see the sketch below), and you rarely need to
dynamically allocate anything on a heap - there are just other ways
to do it.

2) The traditional non-deterministic nature of when and for how long a
GC was going to run ruled it out for a lot of Ada systems.

Real-time Java (see Aonix's PERC) seems to have got round the non-
deterministic problems, but that still doesn't mean there will be a
similar extension to Ada, as there are still ways to do it without
recourse to pointers et al.
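To make the first point concrete, here is a minimal Ada 2005 sketch
using the predefined containers (the procedure name is invented for
illustration); the vector grows on demand and its storage is reclaimed
automatically when it goes out of scope, with no explicit heap
management:

   with Ada.Containers.Vectors;
   with Ada.Text_IO;

   procedure Container_Demo is
      package Integer_Vectors is new Ada.Containers.Vectors
        (Index_Type => Positive, Element_Type => Integer);

      V : Integer_Vectors.Vector;
   begin
      for I in 1 .. 10 loop
         V.Append (I * I);  --  storage is managed by the container
      end loop;

      for I in V.First_Index .. V.Last_Index loop
         Ada.Text_IO.Put_Line (Integer'Image (V.Element (I)));
      end loop;
   end Container_Demo;
   --  V is finalized here; no Unchecked_Deallocation anywhere.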


> It's better than C/C++ for sure, but I remember plenty of memory exceptions all
> the same.
>
> Also, I find the OO notation a little quirky.

Obj.Method is now supported for classes.
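A minimal sketch of what that buys you (type and operation names are
invented for illustration):

   package Accounts is
      type Account is tagged private;
      procedure Deposit (A : in out Account; Amount : Natural);
      function Balance (A : Account) return Natural;
   private
      type Account is tagged record
         Total : Natural := 0;
      end record;
   end Accounts;

   --  Given "Mine : Accounts.Account;", Ada 2005 accepts both call forms:
   --
   --     Accounts.Deposit (Mine, 100);   --  traditional notation
   --     Mine.Deposit (100);             --  Object.Method prefix notation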

> > Ada is also 2x slower, but less suitable for your purposes (verbose,
> > less memory safe than OCaml, free compilers produce GPL-only code)
>
> I am pretty sure the GPL-only thing is not true.

FSF versions allow you to produce proprietary programs, GNAT GPL 200x
versions do not.

Cheers
-- Martin

Jon Harrop

unread,
Jul 29, 2009, 5:10:55 PM7/29/09
to
learn...@yourdesk.com wrote:
> I would hate to see the "major production-code system" that relied on GC.
> I can tell you this with certainty, no bank, insurance company, airline,
> or any other online realtime operation uses such nonsense. Of course they
> have some bits here and there written in C++ but the code that keeps them
> online and serving customers is written in COBOL, assembler or Ada and
> doesn't need or have GC. That's production.

That has not been true for decades. Here is a trivial counterexample of
Jane St. Capital using OCaml in the finance industry:

http://ocaml.janestreet.com/?q=node/61

My company specializes in the use of OCaml and F# for high performance
technical computing including scientific computing and finance. These
garbage collected languages are common there precisely because they make it
easy to implement complicated algorithms and data structures very
efficiently.

For example, I recently implemented QR decomposition via Householder
reductions in F# that was generic over the element type. I wrote it for fun
but it turned out to be 3x faster than the Intel MKL and 35x shorter than
the reference implementation in LAPACK.

Jon Harrop

unread,
Jul 29, 2009, 4:58:05 PM7/29/09
to
Martin wrote:
> There's little demand for GC from Ada users for a number of reasons...

The main reason is surely that they are self-selected: former Ada
programmers who wanted the benefits of garbage collection migrated to other
languages and do not demand GC for Ada.

Jon Harrop

unread,
Jul 29, 2009, 5:18:35 PM7/29/09
to
Dmitry A. Kazakov wrote:
> There are relationships between the object and its clients around the
> program which are far more complex and beyond "you die before me", the
> only relationship maintained by GC.

That is incorrect. You are describing reference counting.

> The point is, relationships between objects are a key part of OO design. To
> leave that to GC in hope that it will somehow sort things out is
> irresponsible. It did not, does not and will not do.

Note that you were long since disproven by the JVM and CLR.

>> Why not save yourself the trouble and use a real GC, then?
>

> Sure, by using scoped objects whenever possible. That is 90% of all cases.

Functional programming languages are trivial counterexamples. Scope alone
cannot even support first-class lexical closures.

Arne Vajhøj

unread,
Jul 29, 2009, 10:06:12 PM7/29/09
to
Oxide Scrubber wrote:

> fft1976 wrote:
>> Is this what Clojure fanboys must resort to when they lose an
>> argument?
>
> I wouldn't know, since that's never happened to me.

Thanks for that very informative comment !

:-)

Arne

tmo...@acm.org

unread,
Jul 29, 2009, 10:11:26 PM7/29/09
to
> Ada is also 2x slower,
Where'd you get that idea?
When comparing to C, you should remember to turn off all run-time
checking, but even with it all on, normal code shouldn't see more than a
10-15% slowdown. Note that GNAT uses the same code generator as GNU C, so
one would expect the same semantics to generate the same code and run at
the same speed. And if you are calling a library, of course, the results
would be the same. Note also that it's very easy, and safer, in Ada to
use multiple cores on those occasions when that would help; a sketch
follows.
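Here is a minimal sketch of the built-in tasking (names invented for
illustration); each task object runs concurrently, potentially on its
own core, and the main program does not finish until both complete:

   with Ada.Text_IO;

   procedure Two_Workers is
      task type Worker (Id : Natural);  --  a task type with a discriminant

      task body Worker is
      begin
         Ada.Text_IO.Put_Line ("Worker" & Natural'Image (Id) & " done");
      end Worker;

      A : Worker (1);
      B : Worker (2);
   begin
      null;  --  A and B are activated at this "begin" and run concurrently;
             --  the procedure waits here until both have terminated
   end Two_Workers;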

fft1976

unread,
Jul 29, 2009, 10:11:32 PM7/29/09
to
On Jul 29, 12:24 pm, Jon Harrop <j...@ffconsultancy.com> wrote:

> > This is fanboy fantasy, not reality.
>
> Yes. Clojure has some nice features but its most serious deficiencies are
> inherited from the JVM and there is nothing Clojure can do about it, e.g.
> value types and TCO.

Not as far as speed is concerned, in practice. If you give up 1.5x
speed by going from C++ to Java, and 5-10x by going from Java to
Clojure [1], then the latter is much more relevant.

I actually think 1.5x is a good trade for memory safety, as I stated.
It's beyond me why this "jhar...@hatlop.de" fella decided to argue
about it, when he obviously knows nothing about number crunching. What
a nut case.

[1] http://groups.google.com/group/clojure/msg/92b33476c0507478

Lew

unread,
Jul 29, 2009, 10:19:09 PM7/29/09
to
fft1976 wrote:
> I actually think 1.5x is a good trade for memory safety, as I stated.
> It's beyond me why this "jhar...@hatlop.de" fella decided to argue
> about it, when he obviously knows nothing about number crunching. What
> a nut case.

Flame war, flame war, flame war. Go on to comp.programming and have a great time.

--
Lew

fft1976

unread,
Jul 29, 2009, 10:34:02 PM7/29/09
to
On Jul 29, 7:11 pm, tmo...@acm.org wrote:
> > Ada is also 2x slower,
>
>   Where'd you get that idea?

http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=gnat&box=1

Of the languages I commented on, I actually know Ada the least, so
take it up with the shootout authors (no bitching to me, please). I
actually think the Ada code was run in the shootout with all safety OFF.

Paul Rubin

unread,
Jul 29, 2009, 10:48:37 PM7/29/09
to
fft1976 <fft...@gmail.com> writes:
> > Where'd you get that idea?
> http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=gnat&box=1

Given that Ada is such a verbose language, the generally smaller
source sizes of the Ada programs suggest that they weren't optimized
very carefully. I know that some C, Java, and functional-language
users treat the shootouts fairly competitively and tune their code
carefully, but I don't know if the Ada users are the same way. Ada
is unfortunately kind of a niche language these days.

fft1976

unread,
Jul 29, 2009, 11:04:04 PM7/29/09
to
On Jul 28, 3:01 pm, Ludovic Brenta <ludo...@ludovic-brenta.org> wrote:
> fft1976 wrote:
> > Ada is also 2x slower [than C], but less suitable for your purposes (verbose,

> > less memory safe than OCaml, free compilers produce GPL-only code)
>
> Correction: the Ada run-time library from GCC (from the Free Software
> Foundation) is licensed under GPLv3 with run-time linking exception,
> so does not cause the executables to be under GPL.  But that wasn't
> the OP's concern, anyway.

I've read somewhere that the quality of those FSF Ada tools/libraries
is not as good (if it were, what would keep the commercial vendors in
business?)

fft1976

unread,
Jul 29, 2009, 11:40:59 PM7/29/09
to
On Jul 29, 7:48 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:

> fft1976 <fft1...@gmail.com> writes:
> > >   Where'd you get that idea?
> >http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=gna...

>
> Given that Ada is such a verbose language, the generally smaller
> source sizes of the Ada programs suggest that they weren't optimized
> very carefully. I know that some C, Java, and functional-language
> users treat the shootouts fairly competitively and tune their code
> carefully, but I don't know if the Ada users are the same way. Ada
> is unfortunately kind of a niche language these days.

This is a hypothesis you are entertaining, right? Or did you notice
anything "suboptimal" in the Ada code?

Paul Rubin

unread,
Jul 29, 2009, 11:53:30 PM7/29/09
to
fft1976 <fft...@gmail.com> writes:
> This is a hypothesis you are entertaining, right?

Yes.

> Or did you notice anything "suboptimal" in the Ada code?

I haven't looked at the code. I do notice from the shootout that in
some examples, the Ada code is significantly smaller in both code size
and memory consumption than the C++ code, but the Ada code is slower.
Since Ada and C/C++ have pretty similar semantics, this suggests there
was a time/memory tradeoff that was resolved in different ways by
the programmers.

Stephen Horne

unread,
Jul 29, 2009, 11:58:57 PM7/29/09
to
On Wed, 29 Jul 2009 08:19:07 -0700 (PDT), Andrea Taverna
<a.t...@hotmail.it> wrote:

>> Ada is also 2x slower,
>Are you sure?

I'm not going to say yes or no, but it isn't unreasonable.

Ada compilers are required to do range checking on array bounds,
integer types etc, along with other checks that C simply doesn't
bother with. A lot of that is handled using static analysis, but where
static checks can't be sure, run-time checks are done instead.
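For instance, here is a minimal sketch (names invented) of a check
that must survive to run time because the index isn't known
statically; GNAT's -gnatp switch suppresses all such checks, which is
the "all safety OFF" mode mentioned upthread:

   procedure Check_Demo is
      type Buffer is array (1 .. 10) of Integer;
      B : Buffer := (others => 0);
      I : Integer := Integer'Value ("11");  --  known only at run time
   begin
      B (I) := 1;  --  raises Constraint_Error; C would silently write
                   --  past the end of the array
   end Check_Demo;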

>I have to say that translating my C graph library to Ada, leaving
>aside memory management, led to shorter and more readable code, something
>I've been dreaming of for the past 3 years

The transition from C to Ada 83 is pretty good, especially if (like
me) you used Pascal and/or Modula 2 before you used C. There are
run-time costs, but those costs are there for a reason.

If you want object oriented, Ada has OO features since Ada 95 - but
they are not what a C++/Java/whatever developer would expect. IIRC,
the original designer of the Ada language (back when it was called
Green) resigned over language design decisions for the Ada 95
standardisation.

Basically, you don't have classes, you have tagged types - ie variant
records - which support inheritance. That isn't a big deal. But you
don't have methods. You have normal non-member functions/procedures,
but some parameters can be designated as special 'classwide'
parameters - parameters that are run-time bound rather than
compile-time overloaded.
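A minimal Ada 95 sketch of that machinery (invented names): Area is a
primitive operation that is run-time bound when its operand is
classwide, while Describe takes a classwide parameter explicitly:

   package Shapes is
      type Shape is tagged record
         X, Y : Float := 0.0;
      end record;
      function Area (S : Shape) return Float;    --  primitive operation

      type Circle is new Shape with record
         Radius : Float := 1.0;
      end record;
      function Area (S : Circle) return Float;   --  overrides Area

      procedure Describe (S : Shape'Class);      --  classwide parameter
   end Shapes;

   with Ada.Text_IO;
   package body Shapes is
      function Area (S : Shape) return Float is
      begin
         return 0.0;
      end Area;

      function Area (S : Circle) return Float is
      begin
         return 3.14159 * S.Radius * S.Radius;
      end Area;

      procedure Describe (S : Shape'Class) is
      begin
         --  Area dispatches at run time on the tag carried by S
         Ada.Text_IO.Put_Line ("Area =" & Float'Image (Area (S)));
      end Describe;
   end Shapes;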

This approach *IS* justifiable, as it supports multiple dispatch. But
I haven't used this in Ada. By 1998 I had moved on to C++, I only used
Ada 83 professionally, and I never got very familiar with Ada 95.

That said, I have a DSL I have written that supports multiple
dispatch, targeting C++ as an output language, which is based on a
program called treecc that does very similar. My DSL translates
'nodes' (it is primarily aimed at AST handling) to C++ structs with
inheritance, but with a minimal set of methods. Multiple dispatch
operations are translated to C++ functions with dispatch resolution
code built in so that, as in Ada, you don't really have traditional
"methods".

IIRC one of the OOP gurus - Bertrand Meyer? - in the 90s described
classes as a combination of two concepts - records/structs (WRT data
members) and modules/packages (WRT method ownership). An additional
issue, however, of method "ownership" is that one parameter (in C++,
"this") is treated as special. Only this parameter participates in
run-time binding resolution. In fact, we often forget that it *is* a
parameter.

With multiple dispatch, the idea that classes "own" methods makes less
sense. Any or all parameters may influence the run-time binding
resolution. It still makes sense for a method to be placed in a module
for the usual modularity and data-hiding reasons, but the logic for
that module being the class itself is much more tenuous. In
particular, despite the old OOP theory claims, classes very often need
to work in closely-coupled subsystems. Having classes and functions
wrapped in a separate subsystem-level module, rather than the class
doing module/data-hiding duties, makes a lot of sense.

BTW - Multiple dispatch isn't just about ASTs. For example, using
multiple dispatch (and a sufficiently rich specification of what cases
each implementation of a function covers), the visitor pattern is
completely unnecessary.

Multiple dispatch is certainly more complex than single dispatch, but
remember that it isn't wasted effort. Using multiple dispatch is
simply a way of implementing a decision that would otherwise have to
be implemented manually, just like single dispatch but more so. Some
decisions are easier to handle using multiple dispatch (exploiting the
fact that the compiler is responsible for the low-level
decision-making code) whereas others are easier to handle manually. In
efficiency terms, if the compiler is doing a good job, there should be
little or no difference.

One problem with multiple dispatch, however, is that it isn't really
compatible with separate compilation. A new compilation unit may
include definitions that affect dispatch decisions in an already
compiled unit, by overriding some special cases. DSLs such as mine
handle this by being DSLs - by limiting their scope, basically, and
leaving larger scale issues to the main language. Presumably, Ada has
to delay the dispatch decision code generation to what is essentially
link-time, though the Ada build process is a bit non-conventional
anyway due to things like having types which are named in a package
specification, but are only defined privately in the body - a bit
better hidden than you get in C/C++, and it means that separate
compilation has what-size-is-that and similar issues.

Georg Bauhaus

unread,
Jul 30, 2009, 2:51:50 AM7/30/09
to

It is true that Ada programs have been ranking lower at the
Shootout than they did before, and the (two) reasons
are interesting.
Some time ago, many algorithms were supposed to use just the
language, and sequential programs, with few exceptions.
Now, with multicore CPUs everywhere, many Shootout programmers
have started to include threading libraries and thus
perform different algorithms, having their programs do
divide and conquer and such.

(One might wonder whether or not having concurrency support
built into the language will become the great new thing. :-)

Last time I looked, the Ada programs had not been
updated to use Ada's concurrent types to express the same
divide-and-conquer strategy, which seems to be allowed now...
The reported speedups for some C versions of the programs
can be used as an estimate of a statistical correction
to the (still sequential) Ada performance.
This would then explain why a 2x slowdown of Ada,
when compared to C, is not a realistic estimate.

A second reason why Ada has dropped at the Shootout is
that the systems they use have older interim (from the Ada
point of view) GCCs that are known to be broken.
This makes some perfectly normal Ada programs fail there.
As the code is available and did not fail when it was
first ranked, and does not fail when used with an apt
compiler like GCC 4.3.x, the Shootout is just showing its
information potential ;-)

fft1976

unread,
Jul 30, 2009, 3:52:31 AM7/30/09
to
On Jul 29, 11:51 pm, Georg Bauhaus <rm.tsoh.plus-

Try again. The link above is for single-threaded code.

>
> A second reason why Ada has dropped at the Shootout is
> that the systems they use have older interim (from the Ada
> point of view) GCCs that are known to be broken.
> This makes some perfectly normal Ada programs fail there.
> As the code is available and did not fail when it was
> first ranked, and does not fail when used with an apt
> compiler like GCC 4.3.x, the Shootout is just showing its
> information potential ;-)

How would this explain Ada's slow speed? I don't understand you.

Ludovic Brenta

unread,
Jul 30, 2009, 4:34:25 AM7/30/09
to
fft1976 wrote:

> Georg Bauhaus wrote:
>> A second reason why Ada has dropped at the Shootout is
>> that the systems they use have older interim (from the Ada
>> point of view) GCCs that are known to be broken.
>> This makes some perfectly normal Ada programs fail there.
>> As the code is available and did not fail when it was
>> first ranked, and does not fail when used with an apt
>> compiler like GCC 4.3.x, the Shootout is just showing its
>> information potential ;-)
>
> How would this explain Ada's slow speed? I don't understand you.

Remember that the Shootout does not compare languages; it compares
compilers (i.e. particular implementations of some languages) combined
with programs (i.e. particular implementations of some algorithms).
So, upgrading the compiler is likely to improve the performance of Ada
programs; similarly, spending time to hand-optimize the programs is
also likely to improve their performance.

--
Ludovic Brenta.

Stephen Horne

unread,
Jul 30, 2009, 4:58:37 AM7/30/09
to

Apologies for continuing this off-topic stuff - I hope it isn't too
annoying.

After replying, I got curious, and spent some time looking at the Ada
95 rationale. As a result...


On Thu, 30 Jul 2009 04:58:57 +0100, Stephen Horne
<sh006...@blueyonder.co.uk> wrote:

>The transition from C to Ada 83 is pretty good, especially if (like
>me) you used Pascal and/or Modula 2 before you used C. There are
>run-time costs, but those costs are there for a reason.

There are exceptions to this that I had forgotten about. Access types
(pointers) have quite severe restrictions, for instance - derestricted
somewhat in Ada 95, but the bad-pointer risk-aversion may still be a
touch excessive.

I have no idea what Ada 2005 adds to the mix.

>Basically, you don't have classes, you have tagged types - ie variant
>records - which support inheritance. That isn't a big deal. But you
>don't have methods. You have normal non-member functions/procedures,
>but some parameters can be designated as special 'classwide'
>parameters - parameters that are run-time bound rather than
>compile-time overloaded.
>
>This approach *IS* justifiable, as it supports multiple dispatch. But
>I haven't used this in Ada. By 1998 I had moved on to C++, I only used
>Ada 83 professionally, and I never got very familiar with Ada 95.

Actually, it doesn't support multiple dispatch. The term is used in
documentation, but refers to a coding trick in which a sequence of
single-dispatch calls gives the effect of multiple dispatch - an
approach which is sometimes used in C++ and other languages too, but
which has a significant manual coding overhead.
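For the record, the trick looks roughly like this in Ada 95 (invented
types): the first call dispatches on A, and the chosen body then makes
a second dispatching call on B:

   package Collisions is
      type Object is tagged null record;

      --  First dispatch: selects an implementation from A's tag.
      procedure Hit (A : Object; B : Object'Class);

      --  Second dispatch: used once A's type is known; selects on B's tag.
      procedure Hit_By_Asteroid (B : Object);

      type Ship is new Object with null record;
      procedure Hit_By_Asteroid (B : Ship);   --  Ship hit by Asteroid

      type Asteroid is new Object with null record;
      procedure Hit (A : Asteroid; B : Object'Class);
   end Collisions;

   package body Collisions is
      procedure Hit (A : Object; B : Object'Class) is
      begin
         null;  --  default: nothing happens
      end Hit;

      procedure Hit_By_Asteroid (B : Object) is
      begin
         null;  --  default: nothing happens
      end Hit_By_Asteroid;

      procedure Hit_By_Asteroid (B : Ship) is
      begin
         null;  --  "Ship hit by Asteroid" behaviour goes here
      end Hit_By_Asteroid;

      procedure Hit (A : Asteroid; B : Object'Class) is
      begin
         Hit_By_Asteroid (B);  --  B is classwide, so this call dispatches
      end Hit;
   end Collisions;

Two single dispatches, one per operand - and the boilerplate grows
with every new type, which is the manual overhead I mean.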

The syntax of Ada 95 would clearly support multiple dispatch, but the
semantics disallow it. Based on what the rationale says, the reason is
basically that there are insufficient features to manage the dispatch
decision specification properly.

My impression is that the Ada 95 syntax (if you simply remove the
semantic restrictions) would allow the same degree of control as the
treecc domain-specific language, though it has been about 18 months
since I used that. So - based on one tool that has apparently found a
niche - those semantic restrictions may have been a mistake.

That said, I wrote my own DSL because I didn't like the restrictions
in treecc, and because Ada 95 takes a different approach from both
treecc and my DSL WRT dispatch handling.

Treecc and my DSL use a distinction between what I call the operation
and the variants (different implementations) of that operation. In
implementation terms, and in my DSL, each variant maps to an inline
function containing the code for one case. Each operation maps to a
normal function which evaluates the dispatch decision, then calls the
appropriate variant-function inline. Variants are inline because they
only really exist as a way of getting the environment right (parameter
types etc) for the code fragments. Each variant inline is called from
precisely one location in the operation's dispatch handler.

Having this syntactic concept of a variant means I can have special
kinds of variants, including some that have no implementation code at
all. My DSL defines final variants (can't be overridden), noinherit
variants (handles exact match only), block variants (no implementation
- prevent higher level variants from being inherited, thus causing
compile-time errors if lower level cases aren't all covered), fail
variants (no implementation - if the case occurs, throw an exception)
and more.

Ada 95 tagged types support only early binding, but each tagged type
has an associated "classwide" type which is the true
variant-record-like entity, though Ada has a separate mechanism for
standard variant records. A "classwide" type can have the value of the
basis tagged type, or of any descendant tagged type (or any descendant
tagged type's classwide type, if I read it correctly).

Passing a classwide type to a normal Ada function doesn't trigger
run-time binding. The function sees the variant-record-style entity.
However, if I read correctly, a call using a classwide type to a
function that has a range of non-classwide tagged-type-accepting
overloads causes dispatch resolution to be done at the call site.

The point here is that the non-classwide tagged-type-accepting
overloads are just normal overloaded functions - they can be called
using early binding too. They aren't variants of an operation, as in
my DSL - they are full functions in their own right. The called
function isn't special - the call is special.

It's interesting but, truth told, disappointing. No true multiple
dispatch, at least in Ada 95 - don't know about Ada 2005.

>One problem with multiple dispatch, however, is that it isn't really
>compatible with separate compilation. A new compilation unit may
>include definitions that affect dispatch decisions in an already
>compiled unit, by overriding some special cases. DSLs such as mine
>handle this by being DSLs - by limiting their scope, basically, and
>leaving larger scale issues to the main language.

This may also be part of why Ada95 doesn't support true multiple
dispatch.

>though the Ada build process is a bit non-conventional
>anyway due to things like having types which are named in a package
>specification, but are only defined privately in the body - a bit
>better hidden than you get in C/C++, and it means that separate
>compilation has what-size-is-that and similar issues.

This is just wrong - my excuse being that it has been more than a
decade since I used Ada. An Ada package specification has a public
part in which a type can be declared as simply 'private', but if so,
the specification must also have a private part at the end which fills
in the detail of that type so that the compiler can work out the
instance size etc.
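In other words (a minimal sketch, with invented names):

   package Counters is
      type Counter is private;  --  clients see only the name
      procedure Increment (C : in out Counter);
      function Value (C : Counter) return Natural;
   private
      --  The full view lives in the private part of the *specification*,
      --  so the compiler can still work out the size of Counter objects.
      type Counter is record
         Count : Natural := 0;
      end record;
   end Counters;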

Colin Paul Gloster

unread,
Jul 30, 2009, 8:28:43 AM7/30/09
to
On Thu, 30 Jul 2009, tmo...@acm.org wrote:

|-----------------------------------------------------------------|
|"[..]                                                            |
|When comparing to C, you should remember to turn off all run-time|
|checking, [..]"                                                  |
|-----------------------------------------------------------------|

What would be the point of using Ada then? Ada is good, partially
because it has checks.

Martin

unread,
Jul 30, 2009, 7:49:20 AM7/30/09
to
On Jul 30, 1:28 pm, Colin Paul Gloster <Colin_Paul_Glos...@ACM.org>
wrote:

The shootout is /completely/ about speed / resource-usage "bragging
rights" and has nothing to do with how languages are actually used
day-to-day.

You could alternatively start a separate 'shootout' that emphasised
other aspects of programming to highlight strengths in other languages,
e.g. "Without recourse to non-language defined libraries, start 2
tasks that..." etc :-)

Cheers
-- Martin

Jon Harrop

unread,
Jul 30, 2009, 11:29:10 AM7/30/09
to
fft1976 wrote:
> On Jul 29, 12:24 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
>> > This is fanboy fantasy, not reality.
>>
>> Yes. Clojure has some nice features but its most serious deficiencies are
>> inherited from the JVM and there is nothing Clojure can do about it, e.g.
>> value types and TCO.
>
> Not as far as speed is concerned, in practice. If you give up 1.5x
> speed by going from C++ to Java, and 5-10x by going from Java to
> Clojure [1], then the latter is much more relevant.

Lack of value types can cost you a lot more than 1.5x though. Try writing an
FFT over boxed complexes in Java and compare with unboxed complexes in C99,
for example.

> I actually think 1.5x is a good trade for memory safety, as I stated.
> It's beyond me why this "jhar...@hatlop.de" fella decided to argue
> about it, when he obviously knows nothing about number crunching. What
> a nut case.

This goes way beyond number crunching though. Lots of applications benefit
enormously from value types. They are used extensively in the .NET
framework.

Georg Bauhaus

unread,
Jul 30, 2009, 12:10:23 PM7/30/09
to
fft1976 wrote:

> On Jul 29, 11:51 pm, Georg Bauhaus <rm.tsoh.plus-
> bug.bauh...@maps.futureapps.de> wrote:
>> fft1976 wrote:
>>> On Jul 29, 7:48 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
>>>> fft1976 <fft1...@gmail.com> writes:
>>>>>> Where'd you get that idea?
>>>>> http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=gna...

>> Last time I looked, the Ada programs had not been
>> updated to use Ada's concurrent types to express the same
>> divide-and-conquer strategy, which seems to be allowed now...
>> The reported speedups for some C versions of the programs
>> can be used as an estimate of a statistical correction
>> to the (still sequential) Ada performance.
>> This would then explain why a 2x slowdown of Ada,
>> when compared to C, is not a realistic estimate.
>
> Try again. The link above is for single-threaded code.

My fault. I had more general observations in mind
which don't apply here, sorry.

So I have looked at the fasta program as an example.
It runs at a ratio of roughly 1:2 in running time,
comparing C (No. 1) to Ada, at the Shootout. But the ratio
is easily improved without really changing the program:

First, I could confirm the 1:2 ratio on a GNU/Linux
machine.
Two changes then shifted the times from 6.6s vs ~12.5s
(which confirms the 1:2 ratio) to 6.3s vs ~8s, still in
favor of C.

Change 1: Add pragma Inline(Select_Random);

I did this because it seemed like the C compiler
would do just this. (Looking at the disassembly.)
Have the Ada compiler inline, too.
This made the Ada program run another 2s faster
(i.e. down another 2s from ~10s after Change 2).

Change 2: Turn off I/O in both programs.

While this defeats the purpose, it shows where
much of the difference comes from:

This change accounted for about 2s (down from ~12.5s)
for the Ada program and for about 0.3s (down from
~6.6s) for the C program. So the I/O part in C
is obviously high speed on Unix, not surprisingly,
and Ada.Text_IO is notoriously slow (since it is
a lot like an even more blown up printf(3)).

Conclusions
(so far, if the above can be reproduced):

A fair bit of Ada's disadvantage is remedied by using
Inline; another drag is Text_IO, which is indeed slow.
Real-world code might (and will) use I/O
routines that call OS functions, much like C on Unix does,
more or less.

The seemingly shaky results (if confirmed) also let me
think that without constant attention, the Shootout
can give a wrong impression (for any language :).


>> A second reason why Ada has dropped at the Shootout is
>> that the systems they use have older interim (from the Ada
>> point of view) GCCs that are known to be broken.
>> This makes some perfectly normal Ada programs fail there.
>> As the code is available and did not fail when it was
>> first ranked, and does not fail when used with an apt
>> compiler like GCC 4.3.x, the Shootout is just showing its
>> information potential ;-)
>
> How would this explain Ada's slow speed? I don't understand you.

In the overall rating, failing programs or missing programs
used to add to where a language was rated IIRC?

Isaac Gouy

unread,
Jul 30, 2009, 12:25:32 PM7/30/09
to


Authoritative and in this case wrong.

After the benchmarks game caught up with quad-core hardware there were
different sets of measurements - measurements of programs allowed to
use all the cores, and measurements of programs forced onto a single
core.

That URL linked to measurements of programs forced onto a single
core.

> A second reason why Ada has dropped at the Shootout is
> that the systems they use have older interim (from the Ada
> point of view) GCCs that are known to be broken.
> This makes some perfectly normal Ada programs fail there.
> As the code is available and did not fail when it was
> first ranked, and does not fail when used with an apt
> compiler like GCC 4.3.x, the Shootout is just showing its
> information potential ;-)


Authoritative and seemingly wrong again.

GNAT 4.3.3

http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=gnat&box=1#about


fft1976

unread,
Jul 30, 2009, 1:09:52 PM7/30/09
to
On Jul 30, 9:10 am, Georg Bauhaus <rm.dash-bauh...@futureapps.de>
wrote:

> A fair bit of Ada's disadvantage is remedied by using
> Inline;

Can Ada be asked to inline automatically?

By the way, can the latest FSF GNAT run on Windows (MinGW) and OSX?

Pascal Obry

unread,
Jul 30, 2009, 1:20:49 PM7/30/09
to fft1976
On 30/07/2009 19:09, fft1976 wrote:

> Can Ada be asked to inline automatically?

Yes. See gnatmake's options -gnatn and -gnatN. Some inlining is also
done at -O3, IIRC.

--

--|------------------------------------------------------
--| Pascal Obry Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--| http://www.obry.net - http://v2p.fr.eu.org
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver keys.gnupg.net --recv-key F949BD3B

Ludovic Brenta

unread,
Jul 30, 2009, 1:23:29 PM7/30/09
to
Georg Bauhaus wrote:
> A fair bit of Ada's disadvantage [compared to C] is remedied by using
> Inline; another drag is Text_IO, which is indeed slow.

I'm quite sure that replacing Text_IO with streams would make the Ada
program much faster.
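A minimal sketch of what I mean (untested here, but the interfaces
are standard): write raw characters through the stream attached to
standard output, bypassing Put_Line's page/column bookkeeping:

   with Ada.Text_IO;              use Ada.Text_IO;
   with Ada.Text_IO.Text_Streams; use Ada.Text_IO.Text_Streams;

   procedure Stream_Put is
      Output : constant Stream_Access := Stream (Standard_Output);
      Line   : constant String := "GATTACA";
   begin
      --  String'Write emits the characters only, with none of
      --  Text_IO's line/page machinery.
      String'Write (Output, Line & ASCII.LF);
   end Stream_Put;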

--
Ludovic Brenta.

fft1976

unread,
Jul 30, 2009, 1:28:35 PM7/30/09
to
On Jul 30, 10:20 am, Pascal Obry <pas...@obry.net> wrote:

> On 30/07/2009 19:09, fft1976 wrote:
>
> > Can Ada be asked to inline automatically?
>
> Yes. See gnatmake's options -gnatn and -gnatN. Some inlining is also
> done at -O3, IIRC.
>

I don't understand Georg's point then, unless C/C++ has explicit
"inline".

Isaac Gouy

unread,
Jul 30, 2009, 1:47:12 PM7/30/09
to
On Jul 30, 10:20 am, Pascal Obry <pas...@obry.net> wrote:
> On 30/07/2009 19:09, fft1976 wrote:
>
> > Can Ada be asked to inline automatically?
>
> Yes. See gnatmake's options -gnatn and -gnatN. Some inlining is also
> done at -O3, IIRC.


"Activate inlining for subprograms for which pragma inline is
specified" gives the impression that both a source code change -
pragma Inline(Select_Random); - and a compiler switch change would be
needed?

Isaac Gouy

unread,
Jul 30, 2009, 1:59:33 PM7/30/09
to
On Jul 30, 9:10 am, Georg Bauhaus <rm.dash-bauh...@futureapps.de>
wrote:
-snip-

> Conclusions
> (so far, if the above can be reproduced):
>
> A fair bit of Ada's disadvantage is remedied by using
> Inline; another drag is Text_IO, which is indeed slow.
> Real-world code might (and will) use I/O
> routines that call OS functions, much like C on Unix does,
> more or less.
>
> The seemingly shaky results (if confirmed) also let me
> think that without constant attention, the Shootout
> can give a wrong impression (for any language :).


"shaky results"? Didn't you confirm those results yourself?

Sure we might hope a different program would give better results, and
instructions for contributing better programs are given in the FAQ.


-snip-


> In the overall rating, failing programs or missing programs
> used to add to where a language was rated IIRC?

Not true for the last several years.

fft1976

unread,
Jul 30, 2009, 3:38:43 PM7/30/09
to
On Jul 30, 10:59 am, Isaac Gouy <igo...@yahoo.com> wrote:

> "shaky results"? Didn't you confirm those results yourself?

Perhaps he is referring to I/O being the bottleneck. I have to say, if
this is true, it undermines the usefulness of the benchmark to me.

Take it easy.

Isaac Gouy

unread,
Jul 30, 2009, 5:44:50 PM7/30/09
to


His measurements suggest about 1.7s of a 6s difference might be
accounted for by slow Text_IO in programs that write 245MB.

The benchmarks game measurements are made with stdout redirected to
/dev/null - we don't know if that was also the case for the
measurements reported by Georg Bauhaus.

Paul Rubin

unread,
Jul 30, 2009, 6:14:22 PM7/30/09
to
Isaac Gouy <igo...@yahoo.com> writes:
> His measurements suggest about 1.7s of a 6s difference might be
> accounted for by slow Text_IO in programs that write 245MB.
>
> The benchmarks game measurements are made with stdout redirected to
> /dev/null - we don't know if that was also the case for the
> measurements reported by Georg Bauhaus.

If Text_IO is slow, it could be that it burns a lot of cpu doing
format conversions, or maybe it uses less buffering than stdio and
therefore does more system calls. Redirecting to /dev/null wouldn't
make any difference to either of those.

Oxide Scrubber

unread,
Jul 30, 2009, 8:48:31 PM7/30/09
to
Jon Harrop wrote:
> fft1976 wrote:
>> On Jul 29, 12:24 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
>>>> This is fanboy fantasy, not reality.
>>> Yes. Clojure has some nice features but its most serious deficiencies are
>>> inherited from the JVM and there is nothing Clojure can do about it, e.g.
>>> value types and TCO.
>> Not as far as speed is concerned, in practice. If you give up 1.5x
>> speed by going from C++ to Java, and 5-10x by going from Java to
>> Clojure [1], then the latter is much more relevant.
>
> Lack of value types can cost you a lot more than 1.5x though. Try writing an
> FFT over boxed complexes in Java and compare with unboxed complexes in C99,
> for example.

Clojure has TCO; you just have to make your use of it explicit (and then
the compiler alerts you if it's not really in tail position).

I'm not sure what you mean by "value types". If you mean immutable types
with value-equality semantics remaining consistent over time, then
Clojure is chock full of them, even its collection types can be used as
value types, so as keys in maps for instance.

If you mean non-pointer types, Clojure has access to the full range of
Java non-pointer types: boolean, byte, short, int, long, float, double,
and char.

It can't currently pass them across function call boundaries without
boxing and unboxing, but you can work around that using definline or a
macro.

> This goes way beyond number crunching though. Lots of applications benefit
> enormously from value types. They are used extensively in the .NET
> framework.

The JVM has several non-pointer types too, and does not have the
Microsoft taint.

Georg Bauhaus

unread,
Jul 31, 2009, 6:27:04 AM7/31/09
to
Paul Rubin wrote:

I measured with redirection > /dev/null for both programs
and got the (expected) significant difference.

Text_IO is slow by design (it is built around page
control, column control, formatting, etc.; it's
not just moving chunks of Character objects).

Georg Bauhaus

unread,
Jul 31, 2009, 6:48:20 AM7/31/09
to
Isaac Gouy wrote:

> On Jul 30, 10:20 am, Pascal Obry <pas...@obry.net> wrote:
>> On 30/07/2009 19:09, fft1976 wrote:

>>
>>> Can Ada be asked to inline automatically?
>> Yes. See gnatmake's options -gnatn and -gnatN. Some inlining is also
>> done at -O3, IIRC.
>
>
> "Activate inlining for subprograms for which pragma inline is
> specified" gives the impression that both a source code change -
> pragma Inline(Select_Random); - and a compiler switch change would be
> needed?

In this case, yes, both pragma Inline and -gnatn/-gnatN were
used. The Annotated Ada RM says, though, that an implementation
may inline in any case (or not), provided inlining preserves
the semantics (6.3.2).
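So, for reference, the combination looks like this (Select_Random's
profile here is only a guess, for illustration):

   package Randoms is
      function Select_Random (Seed : Float) return Float;
      pragma Inline (Select_Random);  --  a request; may be ignored
   end Randoms;

   --  then build with cross-unit inlining enabled, e.g.:
   --     gnatmake -O2 -gnatn fasta.adb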

Georg Bauhaus

unread,
Jul 31, 2009, 7:13:27 AM7/31/09
to
fft1976 wrote:

I/O and a few other things are necessarily changing over time,
affecting (perceived) results. Without intending any complaint:
programs like the fine tests presented at the Shootout will
almost inevitably produce some variation along
several dimensions, and this should be expected:

- compilers change

- system libraries/services change

- I/O routines, string processing, matrix handling and
more are different between languages, and they are not
always strictly isolated into the test programs.
If someone wanted to do this kind of specific comparison,
different programs might profit from slightly different design
guidelines than those you get for free at the Shootout.

- interpretation of results is correspondingly brittle

An example case is asking (or not) which specific part of a
program is causing its overall running time.

When changing the regex-dna program for Ada some
time ago, I noticed that GNAT's SPITBOL pattern matching
is just as fast as the other fast pattern-matching specialists
at the time. However, the Ada program gets really busy when
reassembling the result string. So the less astute human
reader might (wrongly) interpret the regex-dna test result as an
indication that pattern matching using Ada is not that fast.

Right now the same regex-dna program is failing on the
new Shootout systems; I suspect that the ubiquitous stack-size
thing of newer GCCs may be causing this (dictating that stack frames
should be small, i.e. for non-Pascal languages, IIUC);
$ ulimit -s is only partly helpful. I'm looking into this for the
k-nucleotide test; a patch and more on the latter in c.l.ada.

Georg Bauhaus

unread,
Jul 31, 2009, 7:29:16 AM7/31/09
to
Isaac Gouy wrote:

> On Jul 30, 9:10 am, Georg Bauhaus <rm.dash-bauh...@futureapps.de>
> wrote:
> -snip-
>> Conclusions
>> (so far, if the above can be reproduced):
>>
>> A fair bit of Ada's disadvantage is remedied by using
>> Inline; another drag is Text_IO, which is indeed slow.
>> Real-world code might (and will) use I/O
>> routines that call OS functions, much like C on Unix does,
>> more or less.
>>
>> The seemingly shaky results (if confirmed) also let me
>> think that without constant attention, the Shootout
>> can give a wrong impression (for any language :).
>
>
> "shaky results"? Didn't you confirm those results yourself?

Shaky here should not refer to runs or relative runs of two
specific test programs; the results are stable. However, the
interpretations of Shootout test comparisons are
less stable, as should be expected (more in another
post). So when comparing programming languages in general,
more care needs to be taken when looking at the (list of)
Shootout programs. They _can_ be informative of language
features if looked at closely. (I.e., a ranking does not
suffice then.)


> Sure we might hope a different program would give better results, and
> instructions for contributing better programs are given in the FAQ.

Better program results are not that important here; what is important
is an expectation, namely that the specific programs might demonstrate
eternally frozen, irreplaceable language properties when one looks at
accumulated statistical results only.


> -snip-
>> In the overall rating, failing programs or missing programs
>> used to add to where a language was rated IIRC?
>
> Not true for the last several years.
>

Please accept my apologies for being authoritatively
wrong on several accounts. Sorry for misrepresenting
rules. BTW, I couldn't think of a better way to rank
a failed program run using compiler X and system Y
other than at list position n + k.

Isaac Gouy

unread,
Jul 31, 2009, 12:38:01 PM7/31/09
to
On Jul 31, 4:29 am, Georg Bauhaus <rm.dash-bauh...@futureapps.de>
wrote:

> Shaky here should not refer to runs or relative runs of two
> specific test programs; the results are stable. However, the
> interpretations of Shootout test comparisons are
> less stable, as should be expected (more in another
> post). So when comparing programming languages in general,
> more care needs to be taken when looking at the (list of)
> Shootout programs. They _can_ be informative of language
> features if looked at closely. (I.e., a ranking does not
> suffice then.)


If we're thinking, a glance at the wide overlaps on the boxplot should
make us question the usefulness of a simple ranking:


http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=all&box=1


> > Sure we might hope a different program would give better results, and
> > instructions for contributing better programs are given in the FAQ.
>
> Better program results are not that important here; what is important
> is an expectation, namely that the specific programs might demonstrate
> eternally frozen, irreplaceable language properties when one looks at
> accumulated statistical results only.


Is that a valid expectation?

Does the benchmarks game promote or subvert that expectation, or is
that a matter of how specific details are selected and highlighted in
other discussions?

> Please accept my apologies for being authoritatively
> wrong on several accounts. Sorry for misrepresenting
> rules.

Oh, your information was just out of date, so that needed correction. I
didn't intend to be so heavy-handed - those were ordinary mistakes.

-snip-

frankenstein

unread,
Aug 1, 2009, 3:46:46 PM8/1/09
to
On Jul 29, 12:14 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> fft1976 wrote:

> If you're using VS then I highly recommend F# for numerical work, largely
> because it makes parallelism so easy.
>
> > Gambit-C Scheme (one of the best of LISPs, IMO): about 2x slower than
> > C (single core only), but you have to work to get within 2x (unlike
> > OCaml), and if you want it fast, it can't be safe (switch controlled).
>
> Bigloo?


Bigloo is a good option for numerical work and sometimes beats my
Fortran 90 code. I am not a Fortran 90 expert, but the fact that Bigloo
stacks up well against Fortran says a lot.

Fact #1: We must forget about the language shootout page, because it is
and always has been a kind of Mickey Mouse benchmark (any idiot
who fancied himself an excellent programmer posted crappy code,
and once posted it is forever in Google's history, and a lot of
other idiots will use the results from the benchmark). RIP, language
shootout page.


Fact #2: The performance of Bigloo, especially for larger problems where
your simulations will consume 12 hours and more of processor time and
will use 2GB and more of main memory, does not come for free. You will
have to program your intended code with this in mind. HOWEVER, turning
Bigloo into a numerical workhorse for large data simulations is
straightforward:

a) use the native Bigloo operators *fx, *fl, ... from the very beginning
(it is VERY easy to use them, and the compiler will help you out a lot
in spotting type mismatches; you won't have to reach for your gun, as
you likely would with OCaml, shooting yourself to end the battle over
types).


b) use f32 or f64 arrays (I created my own array class) whenever
possible. Especially use f32 for large data simulations, since it makes
a whole lot of difference when your data take half the space in main
memory as 32-bit floats rather than 64-bit, even though internal
calculations are always automatically cast to Bigloo's C double type.

c) use classes to make code clearer: classes are very fast in Bigloo.

d) whenever you have to read in binary data (note there are some
issues with f32 bits; read the Bigloo mailing list), use or check for
the following undocumented functions and your code will fly:
(ieee-string->double), (ieee-string->float), (double->ieee-string),
(float->ieee-string), etc.

e) use the -Obench option when compiling; -Obench covers more or less
all the Bigloo-to-C and associated compiler options with speed in mind
(no bounds checking etc.).

f) add types to your code to make it readable for others, and for your
own pleasure in reading and understanding your code during your
debugging exercises:

==
(define (add-vector::vector vec1::vector vec2::vector name::bstring
                            x::bint y::my-class)
  (...))
==

I haven't released it yet, but I have fully fledged bindings, with a
whole lot of bells and whistles, to:

- full CLAPACK (the linear algebra package converted from Fortran to C
by f2c, freely downloadable from the net)
- full DISLIN (a high-level plotting library)
- a binding to a random number generator
- a binding to a Mie scattering code
- a matrix class (for creating n-dimensional f32 and f64 matrices,
mapping over n-dimensional matrices, pretty printing, slicing, etc.)
which does a fantastic job and is up to the task speed-wise.

NOTE: Translating code from an imperative language lets say Fortran to
Bigloo is easy. A lot of Fortran code consists of do loops which
Bigloo you might uses as well:

==
(do ((i 0 (+fx i 1))
((=fx i dim))
(do ... etc.
==

The only issue: Bigloo, like C, is 0-based, and in my case I always
think in row-major order instead of Fortran's column-major scheme.


If you use Bigloo without the recipes and suggestions posted above,
Bigloo is dog slow. However, it is really very simple and comes more
or less at no cost to tailor it into a bespoke workhorse.

Whenever anyone writes a binding to a well-respected external C
library that a lot of people might be interested in, please make it
public (yes, yes, I myself haven't done it yet for DISLIN, CLAPACK,
etc.), in the hope that scientists will stop using Mickey Mouse
languages like Matlab, or Python, which is a pain in the ass.

Bigloo can also compile to the JVM and make its classes available to
Java. However, I have no idea whether this works well, or whether
there are people out there using Bigloo on the JVM for numerical work.
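
For anyone who wants to try, my understanding is that the JVM back-end
is selected with a compiler flag rather than a separate tool; an
invocation along these lines (foo.scm is a placeholder file name)
should do it:

==
bigloo -jvm foo.scm    # emit JVM class files instead of going through C
==

I have not benchmarked numerical code on that back-end myself.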

That said, a big question mark: I haven't seen any detailed
description of how to employ threads in Bigloo to use dual-core or
multi-processor machines, even in numerical code.

If anyone would like to come forward, please do so and report your
experience using Bigloo on a multi-processor farm.

Thanks, Rumpelstilzchen

Paul Rubin

unread,
Aug 1, 2009, 5:56:44 PM8/1/09
to
frankenstein <klohm...@yahoo.de> writes:
> Fact #1: We should forget about the language shootout page, because
> it is and always has been a kind of Mickey Mouse benchmark (any idiot
> who thought he might pass for an excellent programmer posted crappy
> code; once posted it is forever in Google's history, and a lot of
> other people will use the results from the benchmark). RIP, language
> shootout page.

The shootout is reasonably informative when it's about languages that
have many active practitioners. If someone posts crappy slow code
that makes the language look bad for a while, someone else can come
along and post faster code. So there is ongoing competition between
GHC, OCaml, C++ and so forth. It's only for the languages with fewer
practitioners (these can still be perfectly good languages) that the
early crappy submissions don't get improved regularly.

Jon Harrop

unread,
Aug 1, 2009, 7:28:36 PM8/1/09
to

No, I contributed dozens of optimized programs to the shootout only to have
them rejected for subjective reasons because the benchmarks are poorly
designed. Some of the benchmarks still on there have "deoptimized by Isaac
Gouy" written in them, for example. Consequently, you cannot draw any
useful conclusions about languages from it.

frankenstein

unread,
Aug 2, 2009, 5:48:47 AM8/2/09
to
As an addendum, I shall post below some bits of my matrix class.
Sorry, there are no comments and it lacks all my pretty-printing,
slicing and mapping-over-matrices stuff. However, you should get an
idea. Scroll down until you reach the example of the matrix-matrix
multiplication. If you want to bench it against C, google for the C
code of a matrix-matrix multiplication.

Copy the code enclosed into a file, e.g. foo.scm, then on the command
line: bigloo -Obench foo.scm, and time ./a.out. If you want to
increase the problem size, use e.g. (do-main 1024).

Some basics on the class: read through it and you will encounter 3
classes: 64-bit (f64mat), 32-bit (f32mat) and realmat.

You create an n-dimensional matrix as follows:
  (mk-f64mat i j ... n val: 0.0e0)
You access values with:
  (f64m& m i j ... n)
You store values with:
  (f64m! m i j ... n value)

The same goes for f32 and realmat. Updating and accessing are done by
macros and should be sufficiently fast.
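
For instance, a minimal usage sketch based on the description above
(the matrix shape and the values are made up for illustration, and it
assumes the matrix module below has been loaded):

==
;; create a 2 x 3 x 4 matrix of doubles, initialized to 1.0
(define m (mk-f64mat 2 3 4 val: 1.0e0))

(f64m! m 0 1 2 42.0e0)     ; store 42.0 at index (0, 1, 2)
(print (f64m& m 0 1 2))    ; read it back => 42.0
==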

Raueber Hotzenplotz


==
(module matrix
   (export
      (class f64mat
         mat
         (rank::bint (default 1))
         (dims::pair-nil (default '(1)))
         (print-index?::bool (default #t)))
      (class f32mat
         mat
         (rank::bint (default 1))
         (dims::pair-nil (default '(1)))
         (print-index?::bool (default #t)))
      (class realmat
         mat
         (rank::bint (default 1))
         (dims::pair-nil (default '(1)))
         (print-index?::bool (default #t)))
      (inline make-matrix::obj op::pair-nil
         #!key (val 0.0e0) (type (lambda (x y) (make-vector x y))))
      (inline make-matrix-local::obj op::pair-nil
         #!key (val 0.0e0) (type (lambda (x y) (make-vector x y))))
      (inline mk-f64mat::f64mat #!rest op::pair-nil #!key (val 0.0e0))
      (inline mk-f32mat::f32mat #!rest op::pair-nil #!key (val 0.0))
      (inline mk-realmat::realmat #!rest op::pair-nil #!key (val 0.0))
      (macro aref-set-helper fun::obj x::obj i::bint . op::obj)
      (macro f64m& mat::f64mat . op::obj)
      (macro f32m& mat::f32mat . op::obj)
      (macro realm& mat::realmat . op::obj)
      (macro f64m!c mat::f64mat val::double . op::obj)
      (macro f64m! mat::f64mat . op::obj)
      (macro f32m!c mat::f32mat val::real . op::obj)
      (macro f32m! mat::f32mat val::real . op::obj)
      (macro realm!c mat::realmat val::real . op::obj)
      (macro realm! mat::realmat val::real . op::obj)))


(define-inline (.1st. x::pair-nil) (car x))
(define-inline (.2nd. x::pair-nil) (cadr x))
(define-inline (.3rd. x::pair-nil) (caddr x))

;; build a nested vector-of-vectors; the innermost level is allocated
;; by TYPE (plain vector, f32vector or f64vector)
(define-inline (make-matrix-local::obj op::pair-nil
                  #!key (val 0.0e0)
                        (type (lambda (x y) (make-vector x y))))
   (if (=fx 1 (length op))
       (type (car op) val)
       (let ([mx::vector (make-vector (car op))])
          (do [(oi 0 (+fx oi 1))]
              [(=fx oi (car op))]
              (vector-set! mx oi
                 (make-matrix-local (cdr op) val: val type: type)))
          mx)))

(define-inline (make-matrix::obj op::pair-nil
                  #!key (val 0.0e0)
                        (type (lambda (x y) (make-vector x y))))
   (if (=fx 1 (length op))
       (type (car op) val)
       (let ([mx::vector (make-vector (car op))])
          (do [(oi 0 (+fx oi 1))]
              [(=fx oi (car op))]
              (vector-set! mx oi
                 (make-matrix (cdr op) val: val type: type)))
          mx)))

;; constructor: (mk-f64mat dim1 dim2 ... val: init);
;; the trailing val: keyword is mandatory
(define-inline (mk-f64mat::f64mat #!rest op::pair-nil #!key (val 0.0e0))
   (let* ((indx::bint (-fx (length op) 1)))
      (if (<fx indx 2)
          (error "mk-f64mat in matrix.scm"
                 "you have to pass at least 1 dimension" op)
          (let* ((val::double (list-ref op indx))
                 (op::pair (take op (-fx indx 1))))
             ;(print op val indx)
             (instantiate::f64mat
                (mat (make-matrix-local op val: val type:
                        (lambda (x y) (make-f64vector x y))))
                (rank (length op))
                (dims op))))))

;; variant without the val: keyword; initializes to 0.0
(define-inline (mk-f64matr::f64mat #!rest op::obj)
   (instantiate::f64mat
      (mat (make-matrix-local op val: 0.0e0 type:
              (lambda (x y) (make-f64vector x y))))
      (rank (length op))
      (dims op)))

(define-inline (mk-f32matr::f32mat #!rest op::obj)
   (instantiate::f32mat
      (mat (make-matrix-local op val: 0.0 type:
              (lambda (x y) (make-f32vector x y))))
      (rank (length op))
      (dims op)))

(define-inline (mk-f32mat::f32mat #!rest op::pair-nil #!key (val 0.0))
   (let* ((indx::bint (-fx (length op) 1)))
      (if (<fx indx 2)
          (error "mk-f32mat in matrix.scm"
                 "you have to pass at least 1 dimension" op)
          (let* ((val (list-ref op indx))
                 (op::pair (take op (-fx indx 1))))
             (instantiate::f32mat
                (mat (make-matrix-local op val: val type:
                        (lambda (x y) (make-f32vector x y))))
                (rank (length op))
                (dims op))))))

(define-inline (mk-realmatr::realmat #!rest op::obj)
   (instantiate::realmat
      (mat (make-matrix-local op val: 0.0 type:
              (lambda (x y) (make-vector x y))))
      (rank (length op))
      (dims op)))

(define-inline (mk-realmat::realmat #!rest op::pair-nil #!key (val 0.0))
   (let* ((indx::bint (-fx (length op) 1)))
      (if (<fx indx 2)
          (error "mk-realmat in matrix.scm"
                 "you have to pass at least 1 dimension" op)
          (let* ((val (list-ref op indx))
                 (op::pair (take op (-fx indx 1))))
             (instantiate::realmat
                (mat (make-matrix-local op val: val type:
                        (lambda (x y) (make-vector x y))))
                (rank (length op))
                (dims op))))))

;; generic (untyped) accessors over nested vectors
(define-macro (m& x::obj i::bint . op::obj)
   (if (null? op)
       `(vector-ref ,x ,i)
       `(m& (vector-ref ,x ,i) ,@op)))

(define-macro (m! x::obj val::obj i::bint . op::obj)
   (if (null? op)
       `(vector-set! ,x ,i ,val)
       `(m! (vector-ref ,x ,i) ,val ,@op)))

(define-macro (f64m& mat::f64mat . op::obj)
   `(with-access::f64mat ,mat (mat)
       (aref-set-helper (lambda (xx yy) (f64vector-ref xx yy))
          mat ,@op)))

(define-macro (f32m& mat::f32mat . op::obj)
   `(with-access::f32mat ,mat (mat)
       (aref-set-helper (lambda (xx yy) (f32vector-ref xx yy))
          mat ,@op)))

(define-macro (realm& mat::realmat . op::obj)
   `(with-access::realmat ,mat (mat)
       (aref-set-helper (lambda (xx yy) (vector-ref xx yy))
          mat ,@op)))


;; the last element of OP is the value to store; VAL is bound at
;; macro-expansion time to a source expression, so it carries no type
;; annotation here
(define-macro (f64m! mat::f64mat . op::obj)
   (let* ((indx::bint (-fx (length op) 1))
          (val (list-ref op indx))
          (op::pair (take op indx)))
      `(with-access::f64mat ,mat (mat)
          (aref-set-helper (lambda (xx yy) (f64vector-set! xx yy ,val))
             mat ,@op))))

(define-macro (f32m! mat::f32mat . op::obj)
   (let* ((indx::bint (-fx (length op) 1))
          (val (list-ref op indx))
          (op::pair (take op indx)))
      `(with-access::f32mat ,mat (mat)
          (aref-set-helper (lambda (xx yy) (f32vector-set! xx yy ,val))
             mat ,@op))))

(define-macro (realm! mat::realmat . op::obj)
   (let* ((indx::bint (-fx (length op) 1))
          (val (list-ref op indx))
          (op::pair (take op indx)))
      `(with-access::realmat ,mat (mat)
          (aref-set-helper (lambda (xx yy) (vector-set! xx yy ,val))
             mat ,@op))))


;; recursive indexing helpers: peel off one vector level per index and
;; apply FUN at the innermost (typed) vector
(define-inline (aref-set-helper-new fun::obj x::obj i::bint op::obj)
   (if (null? op)
       (fun x i)
       (aref-set-helper-new fun (vector-ref x i) (car op) (cdr op))))

(define-macro (aref-set-helper fun::obj x::obj i::bint . op::obj)
   (if (null? op)
       `(,fun ,x ,i)
       `(aref-set-helper ,fun (vector-ref ,x ,i) ,@op)))


(define-macro (f64m!c mat::f64mat val::double . op::obj)
   `(with-access::f64mat ,mat (mat)
       (aref-set-helper (lambda (xx yy) (f64vector-set! xx yy ,val))
          mat ,@op)))

(define-macro (f32m!c mat::f32mat val::real . op::obj)
   `(with-access::f32mat ,mat (mat)
       (aref-set-helper (lambda (xx yy) (f32vector-set! xx yy ,val))
          mat ,@op)))

(define-macro (realm!c mat::realmat val::real . op::obj)
   `(with-access::realmat ,mat (mat)
       (aref-set-helper (lambda (xx yy) (vector-set! xx yy ,val))
          mat ,@op)))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Test: matrix-matrix multiplication
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; we create a matrix here:
;; (mk-f64mat rows cols val: 0.0e0)
;; will create a 64-bit matrix of size (rows x cols)
;;
;; we can update individual values by:
;; (f64m! matrix i j value)
;; matrix is the matrix
;; i, j are the indices (0-based, row index first)
;; value is the value
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(define-inline (mkmatrix::f64mat rows::bint cols::bint)
   (let ((mx::f64mat (mk-f64mat rows cols val: 0.0e0))
         (count::double 0.5e0))
      (do ((i 0 (+fx i 1)))
          ((=fx i rows))
          (do ((j 0 (+fx j 1)))
              ((=fx j cols))
              (f64m! mx i j count)
              (set! count (+fl count 0.5))))
      mx))


;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; we do the matrix-matrix multiplication
;;
;; we can access values in the matrix class by:
;; (f64m& matrix i j)
;; and we set values by: (f64m! matrix i j value)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(define-inline (mmult::f64mat rows::bint cols::bint
                  m1::f64mat m2::f64mat)
   (let ((m3::f64mat (mk-f64mat rows cols val: 0.0e0)))
      (do ((i 0 (+fx 1 i)))
          ((=fx i rows))
          (do ((j 0 (+fx 1 j)))
              ((=fx j cols))
              (let ((val::double 0.0e0))
                 (do ((k 0 (+fx 1 k)))
                     ((=fx k cols))
                     (set! val (+fl val
                                  (*fl (f64m& m1 i k)
                                       (f64m& m2 k j)))))
                 (f64m! m3 i j val))))
      m3))


(define (do-main size::bint)
   (let* ((m1::f64mat (mkmatrix size size))
          (m2::f64mat (mkmatrix size size))
          (mm::f64mat (mmult size size m1 m2)))
      ;; print the top-left element as a quick sanity check
      (let ((r0 (vector-ref (f64mat-mat mm) 0)))
         (print (f64vector-ref r0 0)))))


(do-main 512)
==

frankenstein

unread,
Aug 2, 2009, 6:07:16 AM8/2/09
to
On Aug 1, 10:56 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:


The shootout is completely useless when it comes to real life. I have
to deal with large data sets and simulations. My problem is that most
of my simulations are stuck reading in the data. Data from global
chemistry transport models and satellites quickly add up to 500 GB for
a year's worth of data and observations. Even when only processing day
after day, most of the 12 hours a simulation takes is spent reading in
the data.

Fortran by nature is fast, even when using object-oriented
programming. However, I am now porting my code to Bigloo. I have no
actual figures yet on whether my simulations (basically eigenvalues,
inverses and inversions from data and observations; I am using my
binding to CLAPACK for this) will be as fast as the ones written in
Fortran.

However, I made the observation that my class for reading the data
from the global chemistry transport model is faster than the original
code shipped with the model for reading the binary files the model
creates. I guess the reason is this: they read record after record
without ever jumping straight to the matching one. My class (based on
(ieee-string->float) and mmap) jumps around; e.g. the Fortran program
(with the -O3 option and the ifort compiler) takes 6 seconds to read
in an array, whereas my Bigloo code takes 4 seconds. Disclaimer:
(ieee-string->float) as it now stands has a bug in Bigloo, and I was
forced to use a software conversion (code for converting 4 bytes to a
float, posted here on comp.lang.scheme by Oleg a long time ago), which
puts the figure at 8 seconds. However, I am quite sure the Bigloo
developers will resolve the bug; note that my 4-second figure is based
on the broken conversion (it always gives 0.0 instead of the actual
value).

I also observed that during the read process my Bigloo program and the
Fortran one consume the same amount of real memory. I am not expecting
Bigloo to be more memory efficient, but I hope the garbage collector
will serve me well; allocating and deallocating (since the Fortran 90
standard) is very efficient in Fortran.
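
To illustrate the kind of reader I mean, here is a minimal sketch
under my own assumptions (the file name and the plain 8-byte record
layout are made up; (ieee-string->double) is the undocumented Bigloo
conversion mentioned earlier in the thread):

==
(module read-sketch)

;; read one IEEE double (8 raw bytes) from a binary port;
;; assumes the port holds at least 8 more bytes
(define (read-raw-double port)
   (let ((buf (make-string 8)))
      (do ((i 0 (+fx i 1)))
          ((=fx i 8))
          (string-set! buf i (read-char port)))
      (ieee-string->double buf)))

(let ((p (open-input-file "model-output.bin")))   ; file name made up
   (print (read-raw-double p))
   (close-input-port p))
==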


Frau Holle

Jon Harrop

unread,
Aug 2, 2009, 2:55:20 PM8/2/09
to
frankenstein wrote:
> Fortran by nature is fast, even when using object-oriented programming.

That was true many years ago but Fortran has fallen a long way behind now.
Applications like numerical methods from linear algebra operating on
matrices with single, double or single/double complex elements were
Fortran's last refuge for many years but even they have come under fire
now.

For example, I recently implemented QR decomposition in F#:

http://flyingfrogblog.blogspot.com/2009/07/ocaml-vs-f-qr-decomposition.html

Fortran is not only incapable of expressing the algorithm that generically,
it is also up to 3x slower than F#!

The reason is the efficiency of parallelism in F#, using the wait-free
work-stealing concurrent deques of the Task Parallel Library for the
efficient dynamic load balancing of fine-grained parallel work items.

If Fortran cannot even compete there, where it is strongest, then it
has no hope for other applications. Hence the de facto standard
libraries for things like FFTs have been written in a mix of
higher-level languages and C for many years now. For most scientific
computing, it is no longer practically feasible to use Fortran,
because the language is too cumbersome and inexpressive.
