
No call for Ada (was Re: Announcing new scripting/prototyping language)


Ludovic Brenta

Feb 7, 2004, 8:00:35 AM
"Carroll-Tech" <and...@carroll-tech.net> writes:

> I tell the students that I tutor that learning some Pascal or Ada
> would help them and they get scared. I mention doing a project in
> Ada and everyone looks at me like I'm out to punish myself. To me
> it isn't any easier to use C/C++, Java, Perl, Lisp or Prolog than it
> is to use Ada. How is it that Ada has this "super powerful", "super
> difficult", "there's not much call for it because it's too advanced
> and powerful" air about it when it's just another language? It's
> like saying that the machine code spit out of an Ada compiler has
> some mystical, magical properties that makes the Ada language more
> difficult to use.

I was thinking along the same lines last evening, and I came up with a
small theory that explains why so few people can be bothered to learn
Ada. It goes like this: there are three types of languages.

The first type of language says "we're going to make programming
easy". Of course, this is a lie, because programming is inherently
difficult and no language can make it easy. These languages fake it
by being simplistic. Java is the most prominent member of this family
of languages; most scripting languages also fall in this category.
Beginners tend to flock to these "easy" languages and never learn
proper programming skills (like e.g. memory management. If some Java
"guru" reads this, ask yourself this one question: how many threads
does your program have, and please justify the existence of each
thread).

The second type says "we will let you do anything, absolutely anything
you want, and the power is in the hands of the True Programmers".
Languages in this category include, among others, C and C++. Many
people take a foolish pride in being called a True Programmer, and
therefore like these languages. I myself once was in this category: I
would show off my skills by writing a single-line program that nobody
else could read. But humans write bugs, and these languages don't
lend a hand in finding them. Hence the famous buffer overflows.

The third type is what I would call the "zen master" type of
languages. They treat you like an apprentice, slapping you on the
hand each time you make a small mistake, and they scorn at you for
choosing the quick and easy path -- which leads to the Dark Side. If
you accept their teachings, you quickly become a Master yourself. If
you rebel against them, you will never achieve Enlightenment and will
always produce bugs. The "zen master" languages are Pascal, Modula,
Oberon, and, master of masters, Ada. The beauty of these languages is
that, once you are Enlightened, you can apply your wisdom to other
languages as well -- but often would prefer not to.

--
Ludovic Brenta.

David Rasmussen

Feb 7, 2004, 8:19:11 AM
Ludovic Brenta wrote:
> "Carroll-Tech" <and...@carroll-tech.net> writes:
>
>
>>I tell the students that I tutor that learning some Pascal or Ada
>>would help them and they get scared. I mention doing a project in
>>Ada and everyone looks at me like I'm out to punish myself. To me
>>it isn't any easier to use C/C++, Java, Perl, Lisp or Prolog than it
>>is to use Ada. How is it that Ada has this "super powerful", "super
>>difficult", "there's not much call for it because it's too advanced
>>and powerful" air about it when it's just another language? It's
>>like saying that the machine code spit out of an Ada compiler has
>>some mystical, magical properties that makes the Ada language more
>>difficult to use.
>

I don't know. Ada rocks.

/David

David Harmon

Feb 7, 2004, 9:56:42 AM
On 07 Feb 2004 14:00:35 +0100 in comp.lang.c++, Ludovic Brenta
<ludovic...@insalien.org> was alleged to have written:
>Newsgroups: comp.lang.ada,comp.lang.c,comp.lang.c++,comp.lang.java

You damned crossposting troll, there is _no_ subject on-topic in all
those newsgroups.

See the welcome message posted twice per week in comp.lang.c++ or
available at http://www.slack.net/~shiva/welcome.txt

Robert I. Eachus

Feb 7, 2004, 10:03:20 AM
Ludovic Brenta wrote:

> The third type is what I would call the "zen master" type of
> languages. They treat you like an apprentice, slapping you on the
> hand each time you make a small mistake, and they scorn at you for
> choosing the quick and easy path -- which leads to the Dark Side. If
> you accept their teachings, you quickly become a Master yourself. If
> you rebel against them, you will never achieve Enlightenment and will
> always produce bugs. The "zen master" languages are Pascal, Modula,
> Oberon, and, master of masters, Ada. The beauty of these languages is
> that, once you are Enlightened, you can apply your wisdom to other
> languages as well -- but often would prefer not to.

I think you are on the right track. When I am programming in Ada, I
often spend most of a day coding. If I am exhausted at the end of it, I
will put off compiling until the next day. Otherwise, I hand all the
code to the compiler, and I am not surprised to be handed back dozens of
error messages. Fix the syntax bugs, and now I get twice as many
semantic errors. Kill all those and I am surprised if the test
programs--often written between the package interface and the package
bodies--don't run correctly.

For example, I recently finished writing a library of matrix operations
which works with "views" that may be a submatrix of an existing matrix,
and supports operations like Add(A,B) where the result is written in A.
That's a bit tricky, but the real complex one is Mult(A,B) where the
amount of temporary storage space for two N by N matrices is N. (A
buffer that stores one row.)

Why am I mentioning this? After all the coding I had a bug in the Mult
routine that the compiler didn't catch. A wrong subscript inside a
loop. (Why am I doing this? To submit as a new benchmark for SPECfp.
All that stuff is just scaffolding for implementing Strassen's algorithm
efficiently.)

Since I don't take what the compiler tells me personally, I love the
ratio of a hundred to one or so between compile errors and run-time
bugs. Some people though look at a list of compiler error messages as
if each one was a major failing on their part. Me? I could proofread
the code carefully, but it is easier to let the compiler find out where
I typed a comma for a period, and so on. And IMHO it would be nice if
the compiler found all the typos, not just most of them. ;-)

Could I write the same code in C, C++, or Java? Sure. It is just much
easier to let the Ada compiler do the heavy lifting part of the
debugging, so I would still write in Ada, then modify the code to match
the C, C++, or Java syntax. So to me, all that frequent hand-slapping
is a major benefit.

--
Robert I. Eachus

"The war on terror is a different kind of war, waged capture by capture,
cell by cell, and victory by victory. Our security is assured by our
perseverance and by our sure belief in the success of liberty." --
George W. Bush

MSG

Feb 7, 2004, 2:24:47 PM
Ludovic Brenta <ludovic...@insalien.org> wrote in message news:<m3isij11...@insalien.org>...

[...]

> The "zen master" languages are Pascal, Modula,
> Oberon, and, master of masters, Ada. The beauty of these languages is
> that, once you are Enlightened, you can apply your wisdom to other
> languages as well -- but often would prefer not to.


Can you do the following in Ada:

1. Write *one* bubble-sort function that will work on different
types given an appropriate comparison function

2. If B is a subtype of A, can you pass it to any function that
takes A as an argument? (covariance)

3. If B is a subtype of A, and FA and FB are functions accepting A
and B as arguments, can you use FA wherever FB could be used?
(contravariance)

4. If B is a subtype of A, is list/array/vector/set/etc. of Bs a
subtype of list/array/vector/set/etc of As? (covariance)

Unless you can show us how to do this in a way that will keep Ada a
"safe" (third category) language you say it is, I will not believe
that it's a "master of of the masters", I'm afraid.

If you answer "yes" to any of the questions, post *compilable*
snippets: we don't want to learn Ada just to verify your claims,
we simply won't believe you.

BTW, the esteemed Mr. E. Robert Tisdale (ER for short) isn't
letting on about why Ada isn't used much at NASA any more.
Perhaps *you* have an explanation?

MSG

David Rasmussen

Feb 7, 2004, 2:32:29 PM
MSG wrote:
>
> Unless you can show us how to do this in a way that will keep Ada a
> "safe" (third category) language you say it is, I will not believe
> that it's a "master of of the masters", I'm afraid.
>
> If you answer "yes" to any of the questions, post *compilable*
> snippets: we don't want to learn Ada just to verify your claims,
> we simply won't believe you.
>
> BTW, the esteemed Mr. E. Robert Tisdale (ER for short) isn't
> letting on about why Ada isn't used much at NASA any more.
> Perhaps *you* have an explanation?
>

So you have theories about Ada, but you don't really know it? That's
credible...

Just for the record: I am posting from comp.lang.c++.

/David

Keith Thompson

Feb 7, 2004, 5:47:55 PM
msg...@yahoo.com (MSG) writes:
> If you answer "yes" to any of the questions, post *compilable*
> snippets: we don't want to learn Ada just to verify your claims,
> we simply won't believe you.

Feel free to post compilable Ada snippets in comp.lang.ada, but please
don't cross-post to comp.lang.c, comp.lang.c++, or comp.lang.java.
The last thing any of these newsgroups needs is yet another language
flame war. (I'm posting from comp.lang.c.)

I've redirected followups to comp.lang.ada.

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://www.sdsc.edu/~kst>
Schroedinger does Shakespeare: "To be *and* not to be"

tmo...@acm.org

Feb 7, 2004, 6:19:15 PM
>4. If B is a subtype of A, is list/array/vector/set/etc. of Bs a
> subtype of list/array/vector/set/etc of As? (covariance)
What do you mean by a "set" being a "subtype" of another "set"?

David Starner

Feb 7, 2004, 7:25:33 PM
On Sat, 07 Feb 2004 14:00:35 +0100, Ludovic Brenta wrote:

> Of course, this is a lie, because programming is inherently
> difficult and no language can make it easy.

That's exactly what the assembly language programmers said about the
first Fortran compiler, and it's equally wrong now. Sure, there are cases
where you need to run DSP code and coordinate with the home base thirty
million miles away using one space-hardened 386, and that's hard. Then
there's the cases where you need two lines of shell to simplify moving
files around, and that's something assembly or Fortran or Ada or Java
would make much more complex than it is.

> (like e.g. memory management. If some Java
> "guru" reads this, ask yourself this one question: how many threads
> does your program have, and please justify the existence of each
> thread).

In the Jargon file, there's a story of a man who bummed every cycle out of
a poker program, even the initialization code, who spurned assembly
language because it was too inefficient. How would you explain your choice
of programming language to him? Who cares if there's a couple extra
threads running? You make a big deal about languages that protect you
against buffer overflows, why not use a language that protects you against
memory leaks?

> The "zen master" languages are Pascal, Modula,
> Oberon, and, master of masters, Ada.

Pascal is hardly usable, unless you use one of a dozen proprietary
extensions. That's hardly "zen master".

Nick Landsberg

Feb 7, 2004, 9:54:55 PM

David Starner wrote:

> On Sat, 07 Feb 2004 14:00:35 +0100, Ludovic Brenta wrote:
>
>
>>Of course, this is a lie, because programming is inherently
>>difficult and no language can make it easy.
>
>
> That's exactly what the assembly language programmers said about the
> first Fortran compiler, and it's equally wrong now. Sure, there are cases
> where you need to run DSP code and coordinate with the home base thirty
> million miles away using one space-hardened 386, and that's hard. Then
> there's the cases where you need two lines of shell to simplify moving
> files around, and that's something assembly or Fortran or Ada or Java
> would make much more complex then it is.
>
>
>>(like e.g. memory management. If some Java
>>"guru" reads this, ask yourself this one question: how many threads
>>does your program have, and please justify the existence of each
>>thread).
>
>
> In the Jargon file, there's a story of a man who bummed every cycle out of
> a poker program, even the initialization code, who spurned assembly
> language because it was too inefficient. How would you explain your choice
> of programming language to him? Who cares if there's a couple extra
> threads running? You make a big deal about languages that protect you
> against buffer overflows, why not use a language that protects you against
> memory leaks?

"Who cares it there's an extra couple of threads running?" you ask.
I do! I work on systems which are required to process thousands
of requests per second. Unnecessary threads (just because the language
makes it easy to create them), waste precious CPU cycles. (But that's
OT in c.l.c, I think).

If your last statement refers to Java protecting you against memory
leaks, then you have a surprise in store for you.
If it protects you against memory leaks why is there a need for
a garbage collector in the first place?
I have found many situations where the garbage collector did
not clear unused memory. The garbage collector is just as
buggy as any other code. (This is also OT in the c.l.c group,
and I don't want to subscribe to c.l.java :)

>
>
>>The "zen master" languages are Pascal, Modula,
>>Oberon, and, master of masters, Ada.
>
>
> Pascal is hardly usable, unless you use one of a dozen proprietary
> extensions. That's hardly "zen master".
>

--
"It is impossible to make anything foolproof because fools are so
ingenious" - A. Bloch

Simon Wright

Feb 8, 2004, 7:12:44 AM
"Robert I. Eachus" <riea...@comcast.net> writes:

> I think you are on the right track. When I am programming in Ada, I
> often spend most of a day coding. If I am exhausted at the end of it,
> I will put off compiling until the next day. Otherwise, I hand all
> the code to the compiler, and I am not surprised to be handed back
> dozens of error messages. Fix the syntax bugs, and now I get twices
> as many semantic errors.

I don't know if I'm just too impatient, but I much prefer to implement
each subprogram body and then compile it (they are usually separates,
so GNAT is quite happy to do this from within Glide).

If nothing else, it means that any problems with specs or
inter-package relationships appear earlier.

> Kill all those and I am surprised if the
> test programs--often written between the package interface and the
> package bodies--don't run correctly.

--
Simon Wright 100% Ada, no bugs.

Martin Krischik

Feb 8, 2004, 7:57:07 AM
Ludovic Brenta wrote:

> Not automatically. You would define new containers explicitly for A
> and B, using generics; see for example the Booch components[1] or the
> Charles library[2], which is modelled after the C++ STL. If you want
> polymorphic containers, you store pointers in them.

> [1] http://www.pogner.demon.co.uk/components/bc/case-study.html
> [2] http://home.earthlink.net/~matthewjheaney/charles/

AdaCL (adacl.sf.net) has polymorphic containers without (exposed) pointers
- that is something C++ can't do ;-) because C++'s RTTI is only a cheap
excuse compared with Ada's tags.

For the C++ programmers: if you copy-construct a class in Ada, then Ada will
use the tag of the object to determine the actual child class and copy that
instead of the currently visible parent view of the class.
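
A minimal sketch of what that means in practice - purely illustrative, not
taken from AdaCL, and the type names are made up:

   type Shape is tagged record
      X, Y : Float := 0.0;
   end record;

   type Circle is new Shape with record
      Radius : Float := 1.0;
   end record;

   declare
      C : Circle;
      A : Shape'Class := Shape'Class (C);  -- class-wide object, tag is Circle
      B : Shape'Class := A;                -- copies the whole Circle
   begin
      null;
   end;

The assignment to B copies the complete Circle because the tag travels with
the object; nothing is sliced down to the Shape part as it would be in C++.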

With Regards

Martin
--
mailto://kris...@users.sourceforge.net
http://www.ada.krischik.com

Martin Krischik

Feb 8, 2004, 8:05:53 AM
MSG wrote:

> Ludovic Brenta <ludovic...@insalien.org> wrote in message
> news:<m3isij11...@insalien.org>...
>
> [...]
>
>> The "zen master" languages are Pascal, Modula,
>> Oberon, and, master of masters, Ada. The beauty of these languages is
>> that, once you are Enlightened, you can apply your wisdom to other
>> languages as well -- but often would prefer not to.
>
>
> Can you do the following in Ada:
>
> 1. Write *one* bubble-sort function that will work on different
> types given an appropriate comparison function

Sure you can. Ada invented templates long before C++ was even thought of.
Ada templates are far more powerful than C++'s.

> 2. If B is a subtype of A, can you pass it to any function that
> takes A as an argument? (covariance)

subtype as in object orientation:

type Parent is tagged null record;
type Child is new Parent with null record;

or subtype as in simple types:

type Day_of_Month is range 1 .. 31;
subtype Day_of_February is Day_of_Month range 1 .. 29;

Well, the answer is yes in both cases.
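
To make that "yes" concrete, here is a small self-contained sketch (names
invented for illustration, so treat it as a sketch rather than gospel):

   with Ada.Text_IO;

   procedure Demo is
      type Parent is tagged null record;
      type Child is new Parent with null record;

      type Day_of_Month is range 1 .. 31;
      subtype Day_of_February is Day_of_Month range 1 .. 29;

      procedure Draw (P : Parent'Class) is
      begin
         Ada.Text_IO.Put_Line ("drawn");
      end Draw;

      procedure Check (D : Day_of_Month) is
      begin
         Ada.Text_IO.Put_Line (Day_of_Month'Image (D));
      end Check;

      C : Child;
      F : Day_of_February := 28;
   begin
      Draw (C);   -- a Child passed where Parent'Class is expected
      Check (F);  -- a Day_of_February passed where Day_of_Month is expected
   end Demo;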

> 3. If B is a subtype of A, and FA and FB are functions accepting A
> and B as arguments, can you use FA wherever FB could be used?
> (contravariance)

Of course.



> If you answer "yes" to any of the questions, post *compilable*
> snippets: we don't want to learn Ada just to verify your claims,
> we simply won't believe you.

Others did that already. Besides, you would need an installed Ada compiler
to verify anyway.



> BTW, the esteemed Mr. E. Robert Tisdale (ER for short) isn't
> letting on about why Ada isn't used much at NASA any more.

He also won't tell you why Spirit died.

Josh Sebastian

Feb 8, 2004, 11:20:17 AM
On Sun, 08 Feb 2004 14:05:53 +0100, Martin Krischik wrote:

> MSG wrote:
>
>> Can you do the following in Ada:
>>
>> 1. Write *one* bubble-sort function that will work on different
>> types given an appropriate comparison function
>
> Shure you can. Ada invented templates long bevore C++ was even though of.

Factoid: Alex Stepanov started off his research that led to the STL in Ada.

> Ada templates are far more powerfull then C++.

Why do you say that? I haven't used Ada terribly much, but from what I
remember, I don't think Ada generics are Turing-complete, which C++
templates are.

Josh

Martin Krischik

Feb 8, 2004, 1:02:02 PM
Josh Sebastian wrote:

> On Sun, 08 Feb 2004 14:05:53 +0100, Martin Krischik wrote:
>
>> MSG wrote:
>>
>>> Can you do the following in Ada:
>>>
>>> 1. Write *one* bubble-sort function that will work on different
>>> types given an appropriate comparison function
>>
>> Shure you can. Ada invented templates long bevore C++ was even though of.
>
> Factoid: Alex Stepanov started off his research that led to the STL in
> Ada.

I am not quite sure what you want to say. As far as I know, generics have
been part of Ada since Ada 83. But of course without an STL.

>> Ada templates are far more powerfull then C++.
>
> Why do you say that? I haven't used Ada terribly much, but from what I
> remember, I don't think Ada generics are Turing-complete, which C++
> templates are.

Well, I have used C++ for 10 years and Ada for just 1 year - I can express my
intent better with Ada generics than with C++ templates.

Josh Sebastian

Feb 8, 2004, 2:06:11 PM
On Sun, 08 Feb 2004 19:02:02 +0100, Martin Krischik wrote:

> Josh Sebastian wrote:
>
>> On Sun, 08 Feb 2004 14:05:53 +0100, Martin Krischik wrote:
>>
>> Factoid: Alex Stepanov started off his research that led to the STL in
>> Ada.
>
> I am not quite shure what you want to say.

I was saying that the STL started off in Ada before it was moved to (and
completed in) C++. I wasn't disagreeing with you (yet :).

>>> Ada templates are far more powerfull then C++.
>>
>> Why do you say that? I haven't used Ada terribly much, but from what I
>> remember, I don't think Ada generics are Turing-complete, which C++
>> templates are.
>
> Well I have used C++ for 10 years and Ada for just 1 year - I can express my
> will better with Ada generics then with C++ templates.

Maybe you just weren't very good at C++ templates. I don't mean to be
insulting, but personal preferences do play a huge role here. Unless
someone can prove Ada's generics are Turing-complete, though (a quick
google doesn't turn up anything), I'd say that we'll have to call C++
templates more powerful.

Josh

Martin Ambuhl

Feb 8, 2004, 3:39:36 PM
Josh Sebastian and Martin Krischik are continuing their silly language
advocacy.
Perhaps comp.lang.ada cares about the preference of one for Ada;
perhaps comp.lang.c++. Who knows what the people in comp.lang.java care
about? I know for damn sure that comp.lang.c is not the place for your
silly discussion of Ada vs. C++.
I have set follow-ups to comp.lang.ada and comp.lang.c++. Even that might
be too much. In any event, take comp.lang.c off any future exchanges you
idiots have. Even better: take it to e-mail.

--
Martin Ambuhl

MSG

Feb 8, 2004, 6:25:26 PM
Martin Krischik <kris...@users.sourceforge.net> wrote in message news:<2460735.u...@linux1.krischik.com>...

> Others did that allready. Besides, you would need an installed Ada compiler
> to verify anyway.

time apt-get install gnat
=> 15 seconds

> >
> > Can you do the following in Ada:
> >
> > 1. Write *one* bubble-sort function that will work on different
> > types given an appropriate comparison function
>
> Shure you can. Ada invented templates long bevore C++ was even though of.
> Ada templates are far more powerfull then C++.

Example?

> > 2. If B is a subtype of A, can you pass it to any function that
> > takes A as an argument? (covariance)
>
> subtype as in object orientation:
>
> type Parent is tagged null record;
> type Child is new Parent with null record;

"null record" ? How about non-null ones? Is a 3D point (x, y, z) a
subtype of a 2D one (x, y) ?

BTW, does Ada have discriminated unions? (if you don't know what they
are, probably none of the languages you used had them)

Also, is it possible to corrupt memory in Ada?
Is it possible to leak memory in Ada?

> or subtype as in simple types:
>
> type Day_of_Month is range 1 .. 31;
> subtype Day_of_Febuary is Day_of_Month range 1 .. 29;

That's neat.



> > BTW, the esteemed Mr. E. Robert Tisdale (ER for short) isn't
> > letting on about why Ada isn't used much at NASA any more.
>
> He also won't tell you why Spirit died.

It's not dead, it's sleeping!

Cheers,
MSG

James Rogers

Feb 8, 2004, 7:11:53 PM
Given the quite reasonable objections presented by some in the
cross-posted news-groups I would like to carry on this conversation
with you via email. My return email is valid.

Jim Rogers

Ludovic Brenta

Feb 8, 2004, 8:24:58 PM
msg...@yahoo.com (MSG) writes:

> Martin Krischik <kris...@users.sourceforge.net> wrote:
>
> > Others did that allready. Besides, you would need an installed Ada compiler
> > to verify anyway.
>
> time apt-get install gnat
> => 15 seconds

Thanks, I made that package :)

> > >
> > > Can you do the following in Ada:
> > >
> > > 1. Write *one* bubble-sort function that will work on different
> > > types given an appropriate comparison function
> >
> > Shure you can. Ada invented templates long bevore C++ was even though of.
> > Ada templates are far more powerfull then C++.
>
> Example?

I gave one earlier on c.l.ada. I did not cross-post it to the other
newsgroups.

Ada templates are superior to C++ templates in several respects. For
one thing, you can pass procedures and functions as generic
parameters, and the generic formal parameters specify the signature
for them. Another thing is that the generic formal parameters can
specify several kinds of restrictions on the types that are acceptable.
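
To give the flavour, here is a rough, off-the-cuff sketch of such a generic
(illustration only, not the example I posted on c.l.ada, and untested as
written):

   generic
      type Element is private;
      type Index is (<>);
      type Vector is array (Index range <>) of Element;
      with function "<" (Left, Right : Element) return Boolean;
   procedure Bubble_Sort (V : in out Vector);

   procedure Bubble_Sort (V : in out Vector) is
      Temp : Element;
   begin
      for Pass in V'Range loop
         for J in V'First .. Index'Pred (V'Last) loop
            if V (Index'Succ (J)) < V (J) then
               Temp := V (J);
               V (J) := V (Index'Succ (J));
               V (Index'Succ (J)) := Temp;
            end if;
         end loop;
      end loop;
   end Bubble_Sort;

One instantiation per element type and ordering is all it takes:

   type Float_Vector is array (Positive range <>) of Float;
   procedure Sort is new Bubble_Sort (Float, Positive, Float_Vector, "<");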

> > > 2. If B is a subtype of A, can you pass it to any function that
> > > takes A as an argument? (covariance)
> >
> > subtype as in object orientation:
> >
> > type Parent is tagged null record;
> > type Child is new Parent with null record;
>
> "null record" ? How about non-null ones? Is a 3D point (x, y, z) a
> subtype of a 2D one (x, y) ?

   type Point_2D is tagged record
      X, Y : Float;
   end record;

   type Point_3D is new Point_2D with record
      Z : Float;
   end record;

> BTW, does Ada have discriminated unions? (if you don't know what they
> are, probably none of the language you used had them)

Yes, they are called variant records.
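
A rough sketch of one (illustrative only, the names are invented):

   type Number_Kind is (Int, Real);

   type Number (Kind : Number_Kind := Int) is record
      case Kind is
         when Int =>
            Int_Value : Integer;
         when Real =>
            Real_Value : Float;
      end case;
   end record;

The discriminant plays the role of the union's tag, and every reference to
Int_Value or Real_Value is checked against it, so you cannot read the wrong
member as you can with a C union.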

> Also, is it possible to corrupt memory in Ada?

Yes, if you try hard enough and use Unchecked_Conversion and
System.Address instead of the more usual stuff. Very hard in
practice. I once tried to do a buffer overflow in Ada, and managed
it, but the code to achieve this is rather ugly. Basically, you have
to go the extra mile to corrupt memory in Ada.

> Is it possible to leak memory in Ada?

Yes, just like in C. However, since Ada programmers do much less
dynamic memory allocation, this happens much less often than it does
in C. Note that there also exists a garbage collector in Ada, as part
of AdaCL.
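
The classic way to leak, sketched from memory (illustrative only): allocate
in a loop and never instantiate Ada.Unchecked_Deallocation for the type.

   declare
      type Buffer is array (1 .. 1024) of Character;
      type Buffer_Access is access Buffer;
      P : Buffer_Access;
   begin
      for I in 1 .. 1_000 loop
         P := new Buffer;  -- each pass orphans the previous allocation
      end loop;
   end;

Whether anything ever reclaims those buffers is implementation-defined, and
most Ada compilers do not collect them.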

> > or subtype as in simple types:
> >
> > type Day_of_Month is range 1 .. 31;
> > subtype Day_of_Febuary is Day_of_Month range 1 .. 29;
>
> That's neat.

If this is neat then dig this:

   for J in Day_Of_Month loop
      exit when Month = February and then J >= Day_Of_February'Last;
      Put_Line (Day_Of_Month'Image (J));
   end loop;

And think about the implications of these ranges on subprograms that
accept only a small set of integer constants as a parameter:

type Assessment is (Yes, No, Maybe, Maybe_Not);

procedure P (A : Assessment);

which is not possible in C:

   typedef enum { YES, NO, MAYBE, MAYBE_NOT } assessment_t;

   void P (assessment_t A) { }

   int main () {
       P (42); // no compiler check!
       return 0;
   }

I just tried the above with gcc -Wall and got no warning. This means
that only a human carefully reviewing this program will see the
mistake (you may also try lint, but this is a tool external to the
language and based on heuristics, not language rules). By contrast,
the Ada compiler catches the mistake automatically in seconds and
leaves you with the confidence that 100% of your code has been
screened for such stupid mistakes.

--
Ludovic Brenta.

Thomas Stegen CES2000

Feb 9, 2004, 6:20:15 AM
Josh Sebastian wrote:
> Maybe you just weren't very good at C++ templates. I don't mean to be
> insulting, but personal preferences do play a huge roll here. Unless
> someone can prove Ada's generics are Turing-complete, though (a quick
> google doesn't turn up anything), I'd say that we'll have to call C++
> templates more powerful.

Turing completeness is only one measure of power. And not a very
good one for measuring different systems of templates. If you use
templates for any sort of computational programming beyond some
very simple things you are stretching the rubber band beyond its
limit.

--
Thomas.

MSG

Feb 9, 2004, 9:26:33 PM
Thanks very much to everyone for the interesting info. It made me look
more closely at Ada. It looks like it is indeed one of the safest
languages among the ones that aren't garbage collected, which probably
makes it suitable for programming things like airplanes, etc.:

1. hard real-time
2. bug-averse
3. not very performance demanding (don't know about other compilers,
but they say GNAT produces slow executables)

However, it does not look like it's a good match for me, since my
needs are the exact opposite:

1. no real time
2. bugs welcome (but not wrong results) - lusers will not come near my
programs
3. performance is highly important


James Rogers <jimmaure...@att.net> wrote in message news:<Xns9489AEC0F8702...@204.127.36.1>...

Ed Falis

Feb 9, 2004, 9:37:52 PM
On 9 Feb 2004 18:26:33 -0800, MSG <msg...@yahoo.com> wrote:

> 3. not very performance demanding (don't know about other compilers,
> but they say GNAT produces slow executables)

"... but they say ..."

Great research!

;-)

James Rogers

Feb 9, 2004, 9:45:27 PM
msg...@yahoo.com (MSG) wrote in
news:54759e7e.0402...@posting.google.com:

> Thanks very much to everyone for the interesting info. It made me look
> more closely at Ada. It looks like it is indeed one of the safest
> languages among the ones that aren't garbage collected, which probably
> makes it suitable for programming things like airplanes, etc.:
>
> 1. hard real-time
> 2. bug-averse
> 3. not very performance demanding (don't know about other compilers,
> but they say GNAT produces slow executables)
>
> However, it does not look like it's a good match for me, since my
> needs are the exact opposite:
>
> 1. no real time
> 2. bugs welcome (but not wrong results) - lusers will not come near my
> programs
> 3. performance is highly important

I find your list of needs interesting.

How do you distinguish between bugs and wrong results? My experience
is that bugs are detected because they produce incorrect results.
If nothing goes wrong we do not declare the presence of a bug.

I think you will find, if you look into hard real-time systems,
that performance is critical. While it is true that GNAT has
produced relatively slow executables in the past, those same
executables are often 3 to 5 times faster than early Java
programs. I know that current JVMs have improved performance
significantly. I speak of JVMs from around the year 2000. Other
Ada compilers produce faster code than GNAT. Sometimes you get
what you pay for. (GNAT is a free compiler in the GNU compiler
chain).

What kind of performance measures do you use in your problem
domain? C programmers are fond of fast code execution and fast
compilation. C++ programmers have similar performance priorities,
but are willing to sacrifice some compiler speed for the
flexibility of templates. Java programmers frequently prize
speed of coding, with the clever use of the large set of API
libraries available to them. Ada programmers are fond of fast
code and early detection of coding defects.

Jim Rogers

Dmitry A. Kazakov

Feb 10, 2004, 5:05:20 AM
On 9 Feb 2004 18:26:33 -0800, msg...@yahoo.com (MSG) wrote:

>Thanks very much to everyone for the interesting info. It made me look
>more closely at Ada. It looks like it is indeed one of the safest
>languages among the ones that aren't garbage collected, which probably
>makes it suitable for programming things like airplanes, etc.:
>
>1. hard real-time
>2. bug-averse
>3. not very performance demanding (don't know about other compilers,
>but they say GNAT produces slow executables)
>
>However, it does not look like it's a good match for me, since my
>needs are the exact opposite:
>
>1. no real time

This is easy in Ada:

   loop
      null;
   end loop;
   -- The rest won't meet any deadline!

>2. bugs welcome (but not wrong results) - lusers will not come near my
>programs

It is easy to write a virus program scanning your source codes and
randomly sowing them with bugs, if you so enjoy them...

>3. performance is highly important

GNAT is a front end of GNU C...

--
Regards,
Dmitry A. Kazakov
www.dmitry-kazakov.de

David Rasmussen

Feb 10, 2004, 5:10:28 AM
MSG wrote:
> 3. not very performance demanding (don't know about other compilers,
> but they say GNAT produces slow executables)
>

Who says that? Ada can be at least as fast as C++.

/David, writing from comp.lang.c++

Martin Dowie

Feb 10, 2004, 6:08:16 AM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:vjah20tahj48fftkp...@4ax.com...

> >3. performance is highly important
>
> GNAT is a front end of GNU C...

Not quite...
...they share a back-end - GNAT does NOT translate Ada source into C as an
intermediate.


Martin Dowie

Feb 10, 2004, 6:13:06 AM
"David Rasmussen" <david.r...@gmx.net> wrote in message
news:c0aa3o$i3t$1...@news.net.uni-c.dk...

> MSG wrote:
> > 3. not very performance demanding (don't know about other compilers,
> > but they say GNAT produces slow executables)
> >
>
> Who says that? Ada can be at least as fast as C++.

Different implementations of any language will produce
different results. Also, where one compiler may do a
good job with floating-point arithmetic, it may be lousy at
optimising.

The important thing is that there is nothing in the language
definition that _requires_ it to produce 'slow' code. One
of the design aims for Ada95 was to introduce
new language constructs that would actually allow faster
code to be produced, while retaining the reliability, ease
of maintenance, etc.


Marin David Condic

Feb 10, 2004, 7:52:48 AM
MSG wrote:
> 3. not very performance demanding (don't know about other compilers,
> but they say GNAT produces slow executables)
>
I don't know how this comes up - I've used Gnat for non-realtime code
and found its performance to be as good as most other languages compiled
for PCs or workstations. It is, after all, just a different front end to
the gcc compiler and so the code generation is as good as for Gnu C and
the other languages it supports. (As always, you need to know how to use
the compiler to get optimal results. That's true no matter what language
you're talking about.)

BTW: I use Ada all the time for hard real time systems and its
performance is as good or better than other languages routinely used to
do similar jobs. I have very demanding timing requirements and very old,
slow processors. If you get a good quality embedded Ada compiler, it
works just fine. The "language" can't be slow if good quality
implementations exist to prove the opposite.

So before you go reacting to rumors, I'd suggest you actually look at
some facts about Ada. You might even get yourself some benchmark
algorithms and test it out. That would be the scientific thing to do.
Believing in unsubstantiated rumors is a little like believing in
fairies & pixies because you heard someone tell a story about them.

MDC
--
======================================================================
Marin David Condic
I work for: http://www.belcan.com/
My project is: http://www.jsf.mil/NSFrames.htm

Send Replies To: m o d c @ a m o g
c n i c . r

"Face it ladies, its not the dress that makes you look fat.
Its the FAT that makes you look fat."

-- Al Bundy

======================================================================

Dmitry A. Kazakov

Feb 10, 2004, 9:13:24 AM

Yes, of course, GNU C is just a name for a set of compilers. What is
important is that the back-end is the same, so it is unlikely for GNAT
to be slower than GNU C. Theoretically, Ada as a language should allow
better optimization than C.

Martin Dowie

Feb 10, 2004, 9:11:16 AM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:22jh209rupt9e957i...@4ax.com...

> >Not quite...
> >...they share a back-end - GNAT does NOT translate Ada source into C as
an
> >intermediate.
>
> Yes of course, GNU C is just a name of a set of compilers. Important
> is that the back-end is same, so it is unlikely for GNAT to be slower
> than GNU C. Theoretically Ada as a language should allow better
> optimization than C.

For total pedantry, GCC is the name for the set of compilers, GNU C is
one element in this set.

Robert I. Eachus

Feb 10, 2004, 11:46:59 AM
Martin Dowie wrote:

> The important thing is that there is nothing in the language
> definition that _requires_ it to produce 'slow' code. One
> of the design aims for Ada95 was to actually introduce
> new language constructs that would actually allow faster
> code to be produced, while retaining the reliability, ease
> of maintenance, etc.

Not quite true. There was an infamous feature in Ada 83 that seemed to
require "extra" copies of vectors on certain vector processing CPUs
without precise error checking.

I remember one time when a compiler developer for a manufacturer of such
vector processing supercomputers called with a complex question about
what was required and what wasn't. We had a long discussion and she
concluded that she could use the current Fortran back-end rules except
for one case.

After she hung up, what she had said percolated through my head, and I
e-mailed a short Fortran example. Sure enough it produced garbage. So
yes, Ada 83 required that you not produce garbage output. The Ada 95
rules may be somewhat different, but they still require that you make
temporary copies when the alternative is junk results. In fact, where
the Ada 83 and Ada 95 rules are different is that, in Ada 83, programs
that discarded their results were still required to get the right answer
in some cases. In Ada 95, you can compute wrong answers if the
externally visible behavior of the program doesn't change. ;-)

--
Robert I. Eachus

"The war on terror is a different kind of war, waged capture by capture,
cell by cell, and victory by victory. Our security is assured by our
perseverance and by our sure belief in the success of liberty." --
George W. Bush

Mark McIntyre

Feb 10, 2004, 3:49:32 PM
On Tue, 10 Feb 2004 14:11:16 +0000 (UTC), in comp.lang.c , "Martin Dowie"
<martin...@btopenworld.com> wrote:

stuff.

Can you take comp.lang.c off the crossposts? This has wandered miles away
from topicality, even if it were near in the first place....

--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.angelfire.com/ms3/bchambless0/welcome_to_clc.html>



August Derleth

Feb 10, 2004, 7:12:37 PM

GNU C is also used to refer to a nonstandard (extended and modified)
version of C compiled by the GNU Project's C compilers in a
non-conformant mode. A notable extension provided by GNU C over standard
C is the existence of functions that are private to another function,
and are defined within the function they are private to.

An example:

   int foo(int x)
   {
       int bar(int y)
       {
           return y % 2;
       }
       int z = bar(x);

       return z + 2;
   }

That is a compilable GNU C program, and it behaves such that bar() is
not visible outside foo(). It is not conformant to any relevant standard
(as far as I know).

--
My address is yvoregnevna gjragl-guerr gjb-gubhfnaq guerr ng lnubb qbg pbz
Note: Rot13 and convert spelled-out numbers to numerical equivalents.


MSG

Feb 10, 2004, 9:19:13 PM
Dmitry A. Kazakov <mai...@dmitry-kazakov.de> wrote in message news:<vjah20tahj48fftkp...@4ax.com>...

> On 9 Feb 2004 18:26:33 -0800, msg...@yahoo.com (MSG) wrote:
> >
> >1. no real time

[...]

> >2. bugs welcome (but not wrong results) - lusers will not come near my
> >programs

[...]

> It is easy to write a virus program scanning your source codes and
> randomly sowing them with bugs, if you so enjoy them...

What I meant was of course that I don't place as much emphasis on
these as, say, Boeing does, and so all other factors in language
choice become relatively more important to me.

> >3. performance is highly important
>
> GNAT is a front end of GNU C...

Can you write (*) a matrix multiplication routine in Ada, compile it
with GNAT and measure the number of CPU cycles per FLOP, and compare to a
similar routine in C?
The shootout seems to put GNAT closer to Perl and Java than to C/C++.

Cheers,
MSG

(*) Only if you think the one on the shootout page is inadequate.

James Rogers

Feb 11, 2004, 1:30:17 AM

> Can you write (*) a matrix multiplication routine in Ada, compile it


> with GNAT and measure the number CPU cycles per FLOP, compare to a
> similar routine in C?
> The shootout seems to put GNAT closer to Perl and Java than to C/C++.

The shootout numbers I saw put vc at .07, gcc at 0.10 and GNAT at .20.
Java was 0.73 and Perl was 34.31.

I do not see how .2 is closer to .7 or 34 than it is to .1.

Your mathematics seems seriously flawed.

Please explain your reasoning.

Jim Rogers

Dmitry A. Kazakov

Feb 11, 2004, 4:22:37 AM
On 10 Feb 2004 18:19:13 -0800, msg...@yahoo.com (MSG) wrote:

>Dmitry A. Kazakov <mai...@dmitry-kazakov.de> wrote in message news:<vjah20tahj48fftkp...@4ax.com>...
>> On 9 Feb 2004 18:26:33 -0800, msg...@yahoo.com (MSG) wrote:
>> >
>> >3. performance is highly important
>>
>> GNAT is a front end of GNU C...
>
>Can you write (*) a matrix multiplication routine in Ada, compile it
>with GNAT and measure the number CPU cycles per FLOP, compare to a
>similar routine in C?

There is a problem with that. C does not have arrays. Yet matrices,
you know, are two-dimensional ones. So any comparison here would be
suspicious. A program in C that is supposed to multiply matrices would
lack an ADT abstraction layer. It is quite possible to write something
similar in Ada, using pointers instead of arrays etc. (After all, a true
programmer can write a FORTRAN program in Pascal, if I quote the famous
sentence correctly.) Such a program, with all checks suppressed, will
take the same number of CPU cycles. But who would be interested in
such a comparison?

Note that the presence of an abstraction per se does not mean a
performance penalty. The effect could be quite the opposite. The
difference between C and Ada is that in C you almost cannot express
intention. It is too low-level a language. You just order the compiler
to do something and it obeys. This may result in slower code, because
the compiler has to deduce that:

char * t;
char * s;

while (*t++ = *s++);

is in fact a string copy. If it does, it can then apply a
corresponding target CISC machine instruction. In Ada it is easier for
the compiler:

T : String (...);
S : String (...);

T := S;

Even if you work at the array abstraction level:

for I in S'Range loop
T (I) := S (I);
end loop;

there is a lot of useful information for the compiler here, much more
than in pointer increments and dereferencings. Note also, that by
using pointers you commit yourself to only the objects which can be
referenced by a pointer. This might be a heavy burden on some
machines. Compare this with Ada, where you have to *explicitly*
specify that an object is aliased (a subject of referencing). If you
don't, the compiler is free to move such objects to registers, cache,
an external matrix processing unit etc.

Even if GNAT might not use all that, it is at best a GNAT problem, not
one of Ada.

Marin David Condic

Feb 11, 2004, 8:10:24 AM
Given that it is 100% legal Ada to build a procedure that contains
nothing but assembly language instructions, I'd be confident that one
could build Ada code that is just as fast as anything produced by any
compiler anywhere. So if one wants to get into high-speed shootouts
between languages, a ground rule has to be that you're comparing similar
code.
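
(For instance, a trivial sketch, assuming GNAT's System.Machine_Code and an
x86 target - not taken from any real project:

   with System.Machine_Code; use System.Machine_Code;

   procedure Do_Nothing is
   begin
      Asm ("nop", Volatile => True);
   end Do_Nothing;

Machine code insertions are implementation-defined, but most serious Ada
compilers provide something along these lines.)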

If an Ada example uses a high level abstraction of a matrix and C can't
do that sort of abstraction, then C can't play in that game. If the C
example uses some raw chunk of memory and address arithmetic, then the
Ada example would need to be coded up in that style as well (and yes,
that can be done - but nobody who uses Ada typically *wants* to. :-)
Only if you have similarly coded examples can you possibly hope to
determine if one compiler is more efficient than another.

MDC

Dmitry A. Kazakov wrote:
>
> There is a problem with that. C does not have arrays. Yet matrices,
> you know, are two-dimensional ones. So any comparison here would be
> suspicious. A program in C, supposed to multiply matrices would lack

--

Ole-Hjalmar Kristensen

Feb 11, 2004, 9:23:12 AM
Marin David Condic <nob...@noplace.com> writes:

> Given that it is 100% legal Ada to build a procedure that contains
> nothing but assembly language instructions, I'd be confident that one
> could build Ada code that is just as fast as anything produced by any
> compiler anywhere. So if one wants to get into high-speed shootouts
> between languages, a ground rule has to be that you're comparing
> similar code.
>
> If an Ada example uses a high level abstraction of a matrix and C
> can't do that sort of abstraction, then C can't play in that game. If
> the C example uses some raw chunk of memory and address arithmetic,
> then the Ada example would need to be coded up in that style as well
> (and yes, that can be done - but nobody who uses Ada typically *wants*
> to. :-)
> Only if you have similarly coded examples can you possibly hope to
> determine if one compiler is more efficient than another.
>
> MDC

Yes, but there are some caveats. Ada insists on getting floating point
arithmetic "right", so it will typically do it differently than C,
even though the Ada and C programs superficially look the same. For
floating-point intensive programs, this may result in quite a
performance hit. I recently ported a small ray-tracing kind of
application from C to Ada, keeping largely to the structure of the
original program, but using sensible Ada constructs where appropriate.
First I verified that it indeed worked the same as the original, then
turned off all checks and compiled both versions with -O3 and
-funroll-all-loops. The Ada version was slower by a factor of
2.

Profiling showed that much of the time was spent in sqrt() and other
math functions. Next, I imported the necessary functions from the C
library and used those instead. This resulted in Ada and C versions
which ran at the same speed. Some slight algorithmic optimizations
later, the Ada version was approximately 20% faster than the C
version. Profiling again, I found that about 10 seconds of a total of
40 seconds runtime was consumed in the truncate function. An inlined
assembler version of this reduced the time spent in truncate to 1-2
seconds, which was more reasonable.
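
For anyone curious, "importing from the C library" is a one-liner per
function. A minimal sketch (not my actual code):

   with Interfaces.C; use Interfaces.C;

   package C_Math is
      function Sqrt (X : double) return double;
      pragma Import (C, Sqrt, "sqrt");
   end C_Math;

After that, C_Math.Sqrt calls libm's sqrt directly, with no Ada-side
wrapping in between.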

Note that neither the C version nor the Ada version are anywhere close
to the limit in terms of speed, using the short vector instructions of
the processor, I could probably get an overall speedup by a factor of
2-3 for this particular application if coded in assembler.

>
> Dmitry A. Kazakov wrote:
> > There is a problem with that. C does not have arrays. Yet matrices,
> > you know, are two-dimensional ones. So any comparison here would be
> > suspicious. A program in C, supposed to multiply matrices would lack
>
> --
> ======================================================================
> Marin David Condic
> I work for: http://www.belcan.com/
> My project is: http://www.jsf.mil/NSFrames.htm
>
> Send Replies To: m o d c @ a m o g
> c n i c . r
>
> "Face it ladies, its not the dress that makes you look fat.
> Its the FAT that makes you look fat."
>
> -- Al Bundy
>
> ======================================================================
>

--
C++: The power, elegance and simplicity of a hand grenade.

Xenos

Feb 11, 2004, 11:06:03 AM

"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:qdqj20lomb5865p7q...@4ax.com...

> On 10 Feb 2004 18:19:13 -0800, msg...@yahoo.com (MSG) wrote:
>
> is in fact to copy a string. If it would, it could then apply a
> corresponding target CISC machine instruction. In Ada it is easier for
> the compiler:
>
> T : String (...);
> S : String (...);
>
> T := S;
>
> Even if you work at the array abstraction level:
>
> for I in S'Range loop
> T (I) := S (I);
> end loop;
>
But of course, the first example will only work if S'Length is equal to
T'Length or it will raise a constraint_error. The second will only work if
T'Length is greater than or equal to S'Length.

DrX


Preben Randhol

Feb 11, 2004, 11:47:10 AM
On 2004-02-11, Xenos <dont.s...@spamhate.com> wrote:
> But of course, the first example will only work if S'Length is equal to
> T'Length or it will raise a constraint_error. The second will only work if
> T'Length is greater than or equal to S'Length.

Yes, you won't get a buffer overflow.


Preben
--
"When Roman engineers built a bridge, they had to stand under it while
the first legion marched across. If programmers today worked under
similar ground rules, they might well find themselves getting much
more interested in Ada!" -- Robert Dewar

Xenos

Feb 11, 2004, 12:42:58 PM

"Preben Randhol" <randhol+valid_fo...@pvv.org> wrote in
message
news:slrnc2kn4d.aav.randhol+v...@k-083152.nt.ntnu.no...

>
> Yes, you won't get a buffer overflow.
>
You wouldn't have gotten a buffer overflow either way; it would have raised a
Constraint_Error.

DrX


Preben Randhol

Feb 11, 2004, 1:23:14 PM
["Followup-To:" header set to comp.lang.ada.]

On 2004-02-11, Xenos <dont.s...@spamhate.com> wrote:
>

Not in C.

But you cut that example out.

--
"Saving keystrokes is the job of the text editor, not the programming
language."

Rob Thorpe

Feb 11, 2004, 2:31:30 PM
Ludovic Brenta <ludovic...@insalien.org> wrote in message news:<m3fzdly...@insalien.org>...
...

> which is not possible in C:
>
> typedef enum { YES, NO, MAYBE, MAYBE_NOT } assessment_t;
>
> void P (assessment_t A) { }
>
> int main () {
> P (42); // no compiler check!
> return 0;
> }

In C enums are interchangeable with ints, so there is no error, though
maybe the compiler should give a warning.

A C++ compiler should check. I got:

bash-2.05b$ g++ -Wall fiddle.cpp
fiddle.cpp: In function `int main()':
fiddle.cpp:6: error: invalid conversion from `int' to `assessment_t'

Robert I. Eachus

Feb 11, 2004, 5:58:32 PM
MSG wrote:

> Can you write (*) a matrix multiplication routine in Ada, compile it
> with GNAT and measure the number CPU cycles per FLOP, compare to a
> similar routine in C?
> The shootout seems to put GNAT closer to Perl and Java than to C/C++.

As a matter of fact I am writing a fairly complex linear algebra package
in Ada that is designed for high-performance in supercomputer type
applications. I probably could write it in C and stay out of the
asylum, but it would be a close call. Why? It does things like A := A*B;
in place, with only a row sized temporary, and using Strassen's
algorithm with almost no copying. For efficiency the code works not
with an array type, but with a view that may share data with another
view. That way I can, for example, divide an existing matrix into four
smaller matrices in O(1) time and space. Eventually I will also have
code present to support both transposed views of matrices, in fact I
just finished the fast transpose code. That way I can transpose the
right argument, and avoid doing it more than once.
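
Purely to give a flavour of what I mean by a "view" - this is a made-up
sketch, not the actual package:

   type Matrix_Data is array (Positive range <>, Positive range <>) of Float;
   type Matrix_Ref is access Matrix_Data;

   type Matrix_View is record
      Data                : Matrix_Ref;   -- storage shared between views
      First_Row, Last_Row : Positive;
      First_Col, Last_Col : Positive;
      Transposed          : Boolean := False;
   end record;

Splitting a matrix into four quadrants then just means building four new
view records over the same Data, which is O(1) in time and space.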

But if I did write the same code in C, I would not expect performance to
be better. (There are a few cases where the parameter passing overhead
in C would be higher, so performance would be technically worse, but
only by a few instructions.)

> (*) Only if you think the one on the shootout page is inadequate.

Which shootout page, this one? http://dada.perl.it/shootout/matrix.html

If so, the only question I would have is why there are no default
initial values for the matrices to ensure consistency. (It would be
possible for any implementation to fail unless overflow checking is
turned off, and in Ada that can cause code to run slower.) Incidentally,
on this page, on this test gcc takes ten milliseconds and GNAT takes 20 ms.
Hardly in the same class as perl, 34.31 seconds, or even java, 73
milliseconds.

Also I just submitted a new version of the strcat routine to fix a
minor problem that resulted in the test being reported as failed. (The
length was printed with leading spaces, and the test harness didn't
expect that.) While I was at it I rewrote basically the whole thing:

with Ada.Command_Line;      use Ada.Command_Line;
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;
with Ada.Strings.Fixed;     use Ada.Strings;
with Ada.Text_Io;           use Ada.Text_Io;

procedure Strcat is
   N     : Integer;
   Hello : String := "hello" & Ascii.Lf;
begin
   if Argument_Count < 1
   then N := 10_000;
   else N := Integer'Value (Argument (1));
   end if;
   declare
      Buffer : Unbounded_String := N * Hello;
   begin
      Put_Line (Ada.Strings.Fixed.Trim
                  (Integer'Image (Length (Buffer)), Left));
   end;
end Strcat;

This results in about a 100x speed-up. Is this the right way to write
it, and was the code they had wrong? In one sense, that probably is true.
But if I use Bounded_String, the version on the website is slightly
faster, and both versions fall midway between the fast and slow
Unbounded versions. (A version that just uses String is actually faster
than the Unbounded String version. However, I think that version is a
bit of a cheat. ;-)

Robert I. Eachus

Feb 11, 2004, 6:29:33 PM
I wrote:

> After she hung up, what she had said percolated through my head, and I

> e-mailed a short Fortran example. Sure enough it produced garbage...

Hmmm. Trying to be terse, I may have inadvertently slandered Fortran, or
the (intentionally unnamed) system vendor.

The bug had nothing to do with Fortran as such. The Fortran standard
was just as clear as the Ada standard in this area. But the hardware
vendor was working on upgrading their compiler for new hardware with a
wider memory interface. So what Carol said on the phone bugged me until
I worked out an example that would determine whether there was a bug in
their compiler, or a misunderstanding. It turned out that the code
worked correctly on their existing hardware, but broke on the new
hardware. So she fixed the optimizer, and was able to share the code
between the Ada and Fortran front-ends. (Unless a user defined
floating-point type had explicit bounds. Then the Ada code would be
much slower, but you have to assume that the user did that intentionally.)

Incidentally, I don't know why Ada compiler implementors picked me to call
with that sort of question. (Or maybe they called everyone looking for
an answer they liked to some questions.) There were three areas in the
Ada 83 standard where everyone who was working on an Ada compiler--and
wasn't represented on the then LMC (now ARG)--called me with the exact
same questions.

The simplest was described as "for I in -1..10 loop..." Yes, you were
expected to reject that at compile time. ("for I in Integer(-1)..10
loop..." was okay.) The second, was that yes it was intentional that
the size of record objects with discriminants could change at run-time,
but only if the discriminants had default values. And the final one was
that yes, elaboration of Ada generics happens at run-time not compile
time. This causes some major issues if you want to implement generics
as textual substitutions. Not that you can't do it, but there are some
things that you have to work pretty hard to get right.

I used to joke that I could tell how the vendors were doing on their
compilers by when they called. ;-)

Marin David Condic

Feb 12, 2004, 7:49:15 AM
Ole-Hjalmar Kristensen wrote:
>
> Yes, but there are some caveats. Ada insists on getting floating point
> arithmetic "right", so it will typically do it differently than C,
> even though the Ada and C programs superficially look the same. For


Well, I *did* say that only if you had similarly coded examples could
you hope to do any comparison. Not that you couldn't do a comparison and
see a difference. ;-)

Secondly, one needs to insist that some code under evaluation must
produce a *correct* result. If a C coded example computes the wrong
answer at twice the speed of a similar Ada example that gets the answer
right, is it even worth discussing?

My final objection to the whole "Benchmark Wars" is that for 90% of the
uses of compilers, it just plain doesn't matter. If I build a program to
solve a matrix and it gets me an answer displayed on my screen in 10
seconds - but I re-code it in another language and the answer pops up in
8 seconds instead, what am I going to do with those extra two seconds?
Save them up for Christmas? People do this sort of math stuff all day
long in spreadsheets which are interpreting the answers at great
inefficiency and they spend lots of time not caring about it. So why do
programmers without a real performance constraint spend so much time
getting their panties in a bunch over something that never has a real
impact on what they're doing?

Keep in mind that I work with apps where milliseconds count, so I know
how to worry about compiler efficiency. I also know I can get good
compiler efficiency out of Ada (plus all of Ada's other benefits), so
when I *must* be efficient, I know I can get there. But when I go over
to a PC or workstation to develop some hacker tool I need, worrying
about a few extra CPU cycles is way down on my list of concerns.

Too many people spend too much time agonizing over "Compiler Efficiency"
- usually without any real scientific data to back up their perceptions
of what is fast and what is slow - and most of the time it just plain
doesn't matter. I'd bet that if we took most of the applications that
people use on a daily basis and inserted random delay statements
throughout them to double the amount of CPU cycles they use, nobody
would notice any difference in how they got their job done.

MDC

Preben Randhol

unread,
Feb 12, 2004, 8:54:29 AM2/12/04
to
["Followup-To:" header set to comp.lang.ada.]
On 2004-02-12, Marin David Condic <nob...@noplace.com> wrote:
>
> Too many people spend too much time agonizing over "Compiler Efficiency"
> - usually without any real scientific data to back up their perceptions
> of what is fast and what is slow - and most of the time it just plain
> doesn't matter. I'd bet that if we took most of the applications that
> people use on a daily basis and inserted random delay statements
> throughout them to double the amount of CPU cycles they use, nobody
> would notice any difference in how they got their job done.

Yes. It is like debating which car is best and safest by how far the
speedometer goes and not caring if it has a seat belt or not. C/C++ do
not have seat belts.

Ole-Hjalmar Kristensen

unread,
Feb 12, 2004, 10:37:00 AM2/12/04
to
Marin David Condic <nob...@noplace.com> writes:

> Ole-Hjalmar Kristensen wrote:
> >
> > Yes, but there are some caveats. Ada insists on getting floating point
> > arithmetic "right", so it will typically do it differently than C,
> > even though the Ada and C programs superficially look the same. For
>
>
> Well, I *did* say that only if you had similarly coded examples could
> you hope to do any comparison. Not that you couldn't do a comparison
> and see a difference. ;-)
>
> Secondly, one needs to insist that some code under evaluation must
> produce a *correct* result. If a C coded example computes the wrong
> answer at twice the speed of a similar Ada example that gets the
> answer right, is it even worth discussing?
>

It could well be. In the case of an interactive raytracer, minor
numerical errors do not really matter if you can get the results at
twice the speed. I imagine you can find other applications with
similar characteristics. But in general, I agree that for the
majority of applications the difference in speed between languages and
compilers is nothing to worry about.

<snip>

Marin David Condic

unread,
Feb 13, 2004, 8:11:21 AM2/13/04
to
Well, there are *always* exceptional cases and we could sit here all day
long dreaming up applications in which math errors matter or math errors
don't. We could also find lots of apps in which speed matters. The key
factor being that for most of the software that gets built in the world
(look at what's on your desktop for appropriate examples) and for most
of the processors on which they execute (again, look at the computer on
your desk for an appropriate example) the relative efficiency of most
compilers/languages is incredibly unimportant. The word processor I'm
using to type this could have been built in interpretive Basic
functioning at 10x the number of CPU cycles as an equivalent program in
some compiled language and I'd probably never see any difference from my
keyboard.

So rather than talk about language/compiler efficiency, it's probably more
productive for most apps to discuss what *else* a language/compiler
offers the developer. (Things like safety/reliability, ease of
understanding, developmental leverage, available tools & libraries, etc.)

MDC

Ole-Hjalmar Kristensen wrote:
>
>
> It could well be. In the case of an interactive raytracer, minor
> > numerical errors do not really matter if you can get the results at
> twice the speed. I imagine you can find other applications with
> similar characteristics. But in general, I agree that for the
> majority of applications the difference in speed between languages and
> compilers is nothing to worry about.
>
> <snip>
>


--

Ole-Hjalmar Kristensen

unread,
Feb 13, 2004, 11:41:21 AM2/13/04
to
Marin David Condic <nob...@noplace.com> writes:

> Well, there are *always* exceptional cases and we could sit here all day
> long dreaming up applications in which math errors matter or math
> errors don't. We could also find lots of apps in which speed
> matters. The key factor being that for most of the software that gets
> built in the world (look at what's on your desktop for appropriate
> examples) and for most of the processors on which they execute (again,
> look at the computer on your desk for an appropriate example) the
> relative efficiency of most compilers/languages is incredibly
> unimportant. The word processor I'm using to type this could have been
> built in interpretive Basic functioning at 10x the number of CPU
> cycles as an equivalent program in some compiled language and I'd
> probably never see any difference from my keyboard.
>
> So rather than talk about language/compiler efficiency, it's probably
> more productive for most apps to discuss what *else* a
> language/compiler offers the developer. (Things like
> safety/reliability, ease of understanding, developmental leverage,
> available tools & libraries, etc.)
>
> MDC

Actually, I'm not exactly dreaming up such cases, since I spent the
last two years developing software for seismic visualization. Speed
matters very much, in that if you can double your speed, you can
handle a survey twice the size on the same machine.
The trick is to know where you need to be accurate and where not to be.
And yes, that particular application ran on a desktop PC.

But my mind is probably bent from too many years of programming
graphics and soft real-time databases.

I'm not arguing against discussing what else a language/compiler
offers, just pointing out that for some applications, the need for
speed is very real.

>
> Ole-Hjalmar Kristensen wrote:
> > It could well be. In the case of an interactive raytracer, minor
> > numerical errors do not really matter if you can get the results at
> > twice the speed. I imagine you can find other applications with
> > similar characteristics. But in general, I agree that for the
> > majority of applications the difference in speed between languages and
> > compilers is nothing to worry about.
> > <snip>
> >
>
>
> --
> ======================================================================
> Marin David Condic
> I work for: http://www.belcan.com/
> My project is: http://www.jsf.mil/NSFrames.htm
>
> Send Replies To: m o d c @ a m o g
> c n i c . r
>
> "Face it ladies, its not the dress that makes you look fat.
> Its the FAT that makes you look fat."
>
> -- Al Bundy
>
> ======================================================================
>

--

Marin David Condic

unread,
Feb 14, 2004, 3:53:31 AM2/14/04
to
I agree. I do it all the time (and in Ada). My engine controls have to
react with very real, very hard deadlines and I'd better have a compiler
that squeezes out every last instruction it possibly can. My point is
that *most* apps *don't* have that kind of requirement, so designers of
those types of apps shouldn't get wrapped around the axle over
evaluation of the relative speed of compilers and languages. In other
words, if Ada's critics were correct that "Ada is slow..." (it isn't), it
would still be suitable for probably 90% of the software development
done in the world.

MDC

Ole-Hjalmar Kristensen wrote:
>
> I'm not arguing against discussing what else a language/compiler
> offers, just pointing out that for some applications, the need for
> speed is very real.
>

--

Jerry Coffin

unread,
Feb 14, 2004, 4:54:20 PM2/14/04
to
In article <402A29B4...@noplace.com>, nob...@noplace.com says...

> Given that it is 100% legal Ada to build a procedure that contains
> nothing but assembly language instructions, I'd be confident that one
> could build Ada code that is just as fast as anything produced by any
> compiler anywhere. So if one wants to get into high-speed shootouts
> between languages, a ground rule has to be that you're comparing similar
> code.
>
> If an Ada example uses a high level abstraction of a matrix and C can't
> do that sort of abstraction, then C can't play in that game. If the C
> example uses some raw chunk of memory and address arithmetic, then the
> Ada example would need to be coded up in that style as well (and yes,
> that can be done - but nobody who uses Ada typically *wants* to. :-)
> Only if you have similarly coded examples can you possibly hope to
> determine if one compiler is more efficient than another.

IMO, this produces a benchmark that is so far departed from the real
world that, while it may produce results that are accurate (for some
definition of the word), they're utterly devoid of relationship with
reality, and therefore of any real meaning.

If you want to do a comparison, you need to compare things how they're
really used. There are certainly variations among programmers, but to
be meaningful the test code should fall well within the range of normal
variations. We all know that "real programmers can write Fortran in any
language", but writing Fortran in Ada, C++, Java, etc., doesn't really
accomplish much, and the performance of such code is meaningless at
best, and more likely to be downright misleading.

--
Later,
Jerry.

The universe is a figment of its own imagination.

Marin David Condic

unread,
Feb 15, 2004, 9:15:40 AM2/15/04
to
Well, from my experience with benchmarking for realtime systems, we
generally drew on sample code that was typical of our control systems.
Compiler A might do a real good job of optimizing one algorithm while
Compiler B was better at another. This was done for purposes of
selecting which Ada compiler we wanted to use for the given target - not
for selecting a language.

We never attempted language-to-language benchmarking because we pretty
much figured it was pointless. Too many variables to really get a
meaningful result and we knew we could get realtime quality out of most
languages usually used for the purpose. So we picked the language based
on other factors (error reduction, improved productivity, etc) and then
benchmarked the competitors in that category. The key to doing any
evaluation in a scientific manner is to hold all other things equal,
and when it comes to code in different languages with different
compilers, you have a tough time doing this.

A key result of our tests is that there are seldom any clear winners. It
depends a lot on what your real-world code is going to look like. Some
languages may be ruled out for lack of a competing implementation (if
nobody makes an embedded compiler for your target, the game is over) but
usually the "conventional" players are around. Then - depending on the
specific compiler - there are various ways of getting optimal code for
the algorithms you're interested in and usually you have to learn that
along the way. It's seldom an exact science. Sooner or later you pick
something and then get on with getting the job done. So long as you
didn't pick something hopelessly inefficient, you usually find a way to
get reasonable results with what you picked.

I'd offer again my observation that for probably 90% of the software
development that goes on in the world, relative compiler/language
inefficiency is a total non-issue and people ought not to sweat over it.
Even if there was any truth to the "Ada is slow..." rumor (and a given
compiler is twice as slow as some highly optimized C example?) you'll
never even see it in most applications. One ought to then focus in on
other important factors that go along with language selection such as
available compilers, reliable implementations, improvements in error
rates and/or productivity, available tools, available libraries,
time-to-market issues, etc.

MDC


Jerry Coffin wrote:
>
> IMO, this produces a benchmark that is so far departed from the real
> world that, while it may produce results that are accurate (for some
> definition of the word), they're utterly devoid of relationship with
> reality, and therefore of any real meaning.
>
> If you want to do a comparison, you need to compare things how they're
> really used. There are certainly variations among programmers, but to
> be meaningful the test code should fall well within the range of normal
> variations. We all know that "real programmers can write Fortran in any
> language", but writing Fortran in Ada, C++, Java, etc., doesn't really
> accomplish much, and the performance of such code is meaningless at
> best, and more likely to be downright misleading.
>


--

Jerry Coffin

unread,
Feb 17, 2004, 4:19:52 AM2/17/04
to
In article <402F7EFC...@noplace.com>, nob...@noplace.com says...

> Well, from my experience with benchmarking for realtime systems, we
> generally drew on sample code that was typical of our control systems.

That doesn't sound like anything I'd depend on for a realtime system --
to be meaningful in a realtime context, you normally need to look at a
worst case, not a typical one.

> We never attempted language-to-language benchmarking because we pretty
> much figured it was pointless.

I agree that it usually is -- I didn't intend to advocate comparing
languages at all, but merely to offer my opinion that the method being
advocated would render the results definitely pointless instead of only
probably pointless.

[ ... ]

> I'd offer again my observation that for probably 90% of the software
> development that goes on in the world, relative compiler/language
> inefficiency is a total non-issue and people ought not to sweat over it.

Quite true.

> Even if there was any truth to the "Ada is slow..." rumor (and a given
> compiler is twice as slow as some highly optimized C example?) you'll
> never even see it in most applications.

I suppose that depends on the applications you spend your time writing.
I agree that with the typical office applications (for example) a factor
of 2 (or even 10) in speed will rarely be noticed. OTOH, I've worked on
code for doing MPEG encoding. Back when I was working on it, a 1 GHz
(or so) Pentium III was about the state of the art, and with that my
code took around 3 to 3 1/2 hours to encode one hour of video. Most of
the other code I was aware of at the time was closer to 5 hours on the
same hardware. I suspect even with today's faster hardware this is
still over an hour -- and in a case like this, a factor of 2 is clearly
quite a big win.

I'm the first to admit that most applications aren't this compute-
intensive, but I'll also point out that MPEG encoding isn't exactly
unheard-of either, and there are a number of other tasks that are even
more so (e.g. cryptanalysis in many cases).

> One ought to then focus in on
> other important factors that go along with language selection such as
> available compilers, reliable implementations, improvements in error
> rates and/or productivity, available tools, available libraries,
> time-to-market issues, etc.

Generally quite true.

Marin David Condic

unread,
Feb 17, 2004, 7:23:20 AM2/17/04
to

Jerry Coffin wrote:
> In article <402F7EFC...@noplace.com>, nob...@noplace.com says...
>
>>Well, from my experience with benchmarking for realtime systems, we
>>generally drew on sample code that was typical of our control systems.
>
>
> That doesn't sound like anything I'd depend on for a realtime system --
> to be meaningful in a realtime context, you normally need to look at a
> worst case, not a typical one.
>

"Typical" in the sense that "Typically, we read an A/D converter, apply
some Y = MX + B code to it, check it for validity, rate and range limit
it, etc..." I didn't mean "Typical" in the sense of "Usually it goes
down this path so that's all we need to evaluate..."
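
Something like this, in sketch form (the types, names, and numbers are
all hypothetical):

procedure Sample_Channel is
   type Counts is range 0 .. 4095;        --  assume a 12-bit converter
   type Volts  is digits 6;

   Gain   : constant Volts := 5.0 / 4095.0;
   Offset : constant Volts := 0.0;

   Raw   : constant Counts := 2048;       --  stand-in for a device read
   Value : Volts;
begin
   Value := Gain * Volts (Raw) + Offset;  --  Y = M*X + B

   --  Range-limit the engineering value before anyone else sees it.
   if Value < 0.0 then
      Value := 0.0;
   elsif Value > 5.0 then
      Value := 5.0;
   end if;
end Sample_Channel;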

MDC
