http://farside.ph.utexas.edu/teaching/329/lectures/node7.html
"Scientific programming languages
What is the best high-level language to use for scientific programming?
This, unfortunately, is a highly contentious question. Over the years,
literally hundreds of high-level languages have been developed. However, few
have stood the test of time. Many languages (e.g., Algol, Pascal, Haskell)
can be dismissed as ephemeral computer science fads. Others (e.g., Cobol,
Lisp, Ada) are too specialized to adapt for scientific use.
......
The remaining options are FORTRAN 77 and C. I have chosen to use C "
I find this strange, because I think Ada can be the best programming
language for numerical work. So, I do not know why the author above thinks
Ada is "too specialized to adapt for scientific use". Is there something in
Ada which makes it hard to use for scientific programming?
The main problem I see with Ada for scientific use is that it does not have
nearly as many packages and functions ready to use out of the box for this.
Other than that, I think the language itself is better than Fortran and C
for scientific work.
(The above quote is from a course on Computational Physics at the University
of Texas at Austin. Maybe I should write to the professor and ask him why he
said that, but I am not sure I'll get an answer; my experience is that most
professors do not answer email :)
--Nasser
> I find this strange, because I think Ada can be the best programming
> language for numerical work. So, I do not know why the author above thinks
> Ada is "too specialized to adapt for scientific use". Is there something in
> Ada which makes it hard to use for scientific programming?
>
There are lots of things that make it better: guaranteed accuracy
(including for the mathematical library), convenient manipulation of
arrays, and concurrency, to name a few.
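A small sketch of one of those points, convenient array manipulation; all names here are illustrative, not from any particular library:

```ada
--  Array bounds travel with the array ('Range, 'First, 'Last), and
--  slices are built into the language.  Illustrative sketch only.
procedure Features_Demo is
   type Vector is array (Positive range <>) of Float;

   function Sum (V : Vector) return Float is
      S : Float := 0.0;
   begin
      for I in V'Range loop        --  bounds carried by the array itself
         S := S + V (I);
      end loop;
      return S;
   end Sum;

   A       : constant Vector (1 .. 100) := (others => 0.5);
   Partial : constant Float := Sum (A (1 .. 10));  --  slices work too
begin
   null;  --  Partial now holds the sum of the first ten elements
end Features_Demo;
```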
This kind of remark generally comes from hearsay, from people who have never
had a serious look at Ada. Unfortunately, it is easier to repeat a rumor
than to investigate seriously...
--
---------------------------------------------------------
J-P. Rosen (ro...@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr
This falls into the category: I want to use C, so I say whatever is
necessary to justify my choice scientifically.
> Is there something in
> Ada which makes it hard to use for scientific programming?
Nothing I can think of.
Pascal.
--
--|------------------------------------------------------
--| Pascal Obry Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--| http://www.obry.net - http://v2p.fr.eu.org
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver keys.gnupg.net --recv-key F949BD3B
> I was browsing the net for scientific software written in Ada, and came
> across this strange statement:
>
> http://farside.ph.utexas.edu/teaching/329/lectures/node7.html
>
> "Scientific programming languages
> What is the best high-level language to use for scientific programming?
> This, unfortunately, is a highly contentious question. Over the years,
> literally hundreds of high-level languages have been developed. However, few
> have stood the test of time. Many languages (e.g., Algol, Pascal, Haskell)
> can be dismissed as ephemeral computer science fads. Others (e.g., Cobol,
> Lisp, Ada) are too specialized to adapt for scientific use.
This statement comes purely from ignorance. Any person seriously
interested in good scientific programming who would take the time to
learn what Ada has to offer would find that it is superb for scientific
programming.
Has anyone written a paper "Ada for Scientific Programming"? I
envision such a paper as having all of the tasking-related stuff
stripped out and a heavy emphasis on the numerical issues. Probably the
distributed programming stuff could be eliminated too; I'm not sure.
Paul Hilfinger comes to mind.
>
> ......
>
> The remaining options are FORTRAN 77 and C. I have chosen to use C"
Both of which can be used to create programs that continue to
happily compute in the presence of incorrect data. Sigh.
>
> I find this strange, because I think Ada can be the best programming
> language for numerical work. So, I do not know why the author above thinks
> Ada is "too specialized to adapt for scientific use". Is there something in
> Ada which makes it hard to use for scientific programming?
>
> The main problem I see with Ada for scientific use is that it does not have
> as nearly as many packages and functions ready to use output of the box for
> this, other than that, the language itself I think is better than Fortran
> and C for scientific work.
>
> (the above quote is from a course on Computational Physics at University of
> Texas at Austin, may be I should write to the professor and ask him why he
> said that, but I am not sure I'll get an answer, my experience is that most
> professors do not answer email :)
I don't know how universal that is, but it is true in my limited
experience.
Charlie
--
All the world's a stage, and most
of us are desperately unrehearsed. Sean O'Casey
In my infinitely small experience with Ada, as a CS student and self-taught
practitioner, I have to say that's mostly "FUD".
There's a lot of pressure to make you use C or its derivatives (C++,
Java/C#) or the language du jour, e.g. Python, because everyone believes
programming must be done one of the following ways: C's (hard, cryptic,
fast), Java's (quick, tedious, slow), or Python's (quicker,
so-good-that-it-can't-be-serious, and slower). Other alternatives must be
significantly worse than one of these, no exceptions.
This is not the reason for which the languages above are used, but
it's the explanation given for not trying the alternatives.
<rant>
The only true reason why Ada and other languages aren't used is, as you
said, the amount of available software directly usable in those languages,
which depends on the popularity of the language itself, which, in turn,
depends on the ease with which the language can be implemented on popular
architectures (the x86 PC). This more or less dates back to Unix and C
being the ultimate computer viruses (cf. "The Unix Haters Handbook")
... </rant>
No. I use Ada every day in my personal research and it is outstanding.
I can choose any language that I want and I chose Ada.
I have used other languages for research, mainly Pascal, Fortran--a
long time ago, commercial packages starting with the letter "M", Igor
Pro, SuperCollider, ChucK, and others. Some of these remain in my bag
of tricks and as most on this list will agree, you need knowledge of
several languages and you should pick the best for the job. For
everyday general technical computing, that, for me, is Ada.
There have been discussions on this list in the past about the
relative lack of technical libraries for Ada. I find (despite
criticism from some quarters) that Numerical Recipes is excellent: it
gets the job done. (There is an Ada version that has been used as a
demonstration of the P2Ada language converter. I don't know about the
copyright issues of this, but I own rights to the Pascal version, so
I'm covered, I suppose.) In any case, numerical code is usually pretty
simple (structurally), and you can easily do your own conversion from
Pascal or Fortran or C.
There are also Ada bindings to LAPACK and BLAS. Indeed, I believe that
that is the official implementation in GNAT for the new-to-Ada 2005
numerical functions. LAPACK and BLAS have been around for so long they
are probably bullet-proof by now.
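A minimal sketch of the Ada 2005 vector/matrix support mentioned above (Annex G.3 of the standard); the choice of Long_Float and the 2x2 system are just illustrative:

```ada
--  Ada.Numerics.Long_Real_Arrays is the predefined Long_Float
--  instantiation of Generic_Real_Arrays; GNAT implements these
--  operations on top of BLAS/LAPACK.
with Ada.Numerics.Long_Real_Arrays; use Ada.Numerics.Long_Real_Arrays;
with Ada.Text_IO;

procedure Solve_Demo is
   A : constant Real_Matrix (1 .. 2, 1 .. 2) := ((2.0, 1.0),
                                                 (1.0, 3.0));
   B : constant Real_Vector (1 .. 2) := (3.0, 5.0);
   X : constant Real_Vector := Solve (A, B);   --  solves A * X = B
begin
   for I in X'Range loop
      Ada.Text_IO.Put_Line (Long_Float'Image (X (I)));
   end loop;
end Solve_Demo;
```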
There is also the GNU Scientific Library for which the Ada binding is
sparse, last time I checked. I think it would be an excellent project
to get this up to where it is generally useful.
Jerry
"No. I use Ada every day in my personal research and it is outstanding.
I can choose any language that I want and I chose Ada."
That is good to know.
It would be good if there were a web page specializing in information on
Ada for scientific work (links, etc.). When searching the web for Ada for
scientific/numerical programming, there is very little information (a few
links, but very few examples that I found showing Ada used for numerical
work).
But one thing I would guess is good in Ada for numerical work is the
ability to define an array which can start from zero or from one. Many of
the formulas in textbooks assume zero as the starting index of the array,
but when using a language whose arrays start at 1, this was always a
source of off-by-one errors in the implementation.
It does not help if the language's arrays start from 0 either, because if
a formula is defined to start from 1, the same problem occurs in reverse.
I think Ada solves this nicely by allowing one to define the type to match
the problem. Of course one can say that one could just define a new class
in Java/C++/etc. to do this as well, but I think the Ada solution is
better, as it is part of the language itself and is probably safer also.
This is only one out of hundreds of things where I can see an advantage
for Ada in numerical work (which is full of array and matrix types).
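The point about index ranges can be sketched in a few lines; the type names below are purely illustrative:

```ada
--  In Ada the index range is part of the array type, so it can match
--  the textbook convention directly, whichever one the formula uses.
procedure Index_Demo is
   type Zero_Based is array (0 .. 9) of Float;    --  formula starts at 0
   type One_Based  is array (1 .. 10) of Float;   --  formula starts at 1
   A : Zero_Based := (others => 0.0);
   B : One_Based  := (others => 0.0);
begin
   A (0) := 1.0;   --  first element of A
   B (1) := 1.0;   --  first element of B
   --  A (10) or B (0) would be rejected at compile time or raise
   --  Constraint_Error at run time, catching off-by-one mistakes early.
end Index_Demo;
```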
If anyone knows of an advanced course in a computational
scientific/engineering field at a US university which uses Ada, I'd like to
know about it.
--Nasser
On Apr 4, 6:46 am, "Nasser M. Abbasi" <n...@12000.org> wrote:
> I was browsing the net for scientific software written in Ada, and came
> across this strange statement:
>
> http://farside.ph.utexas.edu/teaching/329/lectures/node7.html
>
> "Scientific programming languages
> What is the best high-level language to use for scientific programming?
> This, unfortunately, is a highly contentious question. Over the years,
> literally hundreds of high-level languages have been developed. However, few
> have stood the test of time. Many languages (e.g., Algol, Pascal, Haskell)
> can be dismissed as ephemeral computer science fads. Others (e.g., Cobol,
> Lisp, Ada) are too specialized to adapt for scientific use.
Let me add just my 2.0e-2... I do research in communication and signal
processing, and this requires lots of programming (although I am not a
professional programmer).
To be honest, most of my "number crunching" stuff is done in Matlab, since
it is usually "fast and dirty" tests of new algorithms, and Matlab has lots
of numerical algorithms ready off the shelf. However, for my long-lived
projects Ada is without doubt my first choice (which I impose on my
students too... :-]). I must confess also that my third favorite language
(for fast-and-dirty text-crunching scripts) is something almost opposite
to Ada in terms of philosophy: Ruby. ;-)
Back to the main topic: maybe the only defect of Ada in number crunching
is the lack of an extensive numerical library (but it does not seem to me
that C, C++ or Java are especially good on this...)
The text, by Professor Fitzpatrick, Physics, 2006, appears to be a
fine example of quite useful justification rhetoric (of the kind well
explained by, e.g., Leon Festinger or Erving Goffman).
It is full of false facts of the sort that offer an author the comforts
of blissful ignorance and a convincing appearance at the same time.
(Who has not been in this situation? A quick proactive defence of one's
standing is more tempting than the alternative, which is to shut up.)
C99 (note the year) has complex types; the text says C hasn't. Well, it
hadn't, at some point in the last century.
Inexpensive compilers for Fortran 90 were available that year (2006),
AFAIR, from Intel or the FSF. NAG has academic pricing for its
Fortran compilers, at least now, possibly earlier. There are more.
The info about C++ is rather dated for 2006: templates and
interfaces would be closer to its focus, I'd think. The remark
seems to draw its persuasive power from a brevity that only
repeating hearsay can offer, as someone noted.
During the last few months I had a chance of hearing about Fortran
versus Ada in scientific computing where the subject is concurrent
execution on many processors, with some communication. It turns
out that there is no reason to dismiss either Ada or Fortran,
judging by the results: tasking can be as good as MPI.
There is, however, reason to believe that OpenMP does not scale
well. (From a superficial glance at OpenMP 3.0, I see so many
words sounding familiar in an Ada context (task, barrier, shared,
parallel regions, ...). Are they performing mostly the same experiments
that, I think, were done in the 1970s? I'd speculate that the
parallel constructions aren't novelties in an HPF world, either?)
As Dmitry Kazakov has recently said, when Ada run-time systems
start addressing the properties of multicore hardware,
there is hope that Ada could really shine: not just because concurrent
sequential processes are so simple to express using Ada tasks
(and you'd be using only the language, not a mix of libraries,
preprocessors, specialized compilers, programming conventions,
etc.), but also in case the fine-grained complexity of OpenMP 3.0
can be bridled by a simple language and a good run-time system.
At little cost.
> Back to the main topic, maybe the only defect of Ada in number
> crunching is the lack
> of some extensive numerical library (but it does not seem to me that
> C, C++ or Java are especially good on this...)
There is a problem specific to Ada. A quality Ada library should deal with
all real types. (Other languages do not have this problem because they are
too primitive.) Technically this results in a bunch of generic packages
instantiated with some actual real type. Apart from being quite boring for
the user, this approach has a numerical problem. How would you specify and
provide the accuracy for all possible precisions implicitly defined by the
formal parameter T? Say, the function f(X) should yield a result accurate
within T'Small or, maybe, within [f(X)'Pred, f(X)'Succ], etc. The accuracy
should depend on the precision. You would also certainly need some type
larger than T'Base in order to carry out intermediate calculations. How
would you get such a type in a generic unit?
Ada is an excellent language for number crunching by C's or FORTRAN's
standards. But that is not good enough by its own standard.
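The generic scheme described above can be sketched as follows; the package and function names are illustrative, not from any existing library:

```ada
--  One generic body must serve every possible precision of the formal
--  type, which is exactly the difficulty raised above.
generic
   type Real is digits <>;   --  any floating point type
package Generic_Special_Functions is

   function F (X : Real) return Real;
   --  How accurate must F be?  Real'Digits, Real'Model_Epsilon, etc.
   --  are all the body has to go on, and there is no standard way to
   --  obtain a type wider than Real'Base for intermediate results.

end Generic_Special_Functions;

--  A user instantiates once per precision, e.g.:
--    package F32 is new Generic_Special_Functions (Float);
--    package F64 is new Generic_Special_Functions (Long_Float);
```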
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Unfortunately, the C99 standard has not yet been universally adopted.
Very few compilers fully support it. Many support most of it,
but I understand that Microsoft's compiler still supports only C90
(with maybe a handful of C99-specific features).
Which means that as soon as you write "#include <complex.h>", you've
limited the portability of your program.
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
> I was browsing the net for scientific software written in Ada, and
> came across this strange statement:
>
> http://farside.ph.utexas.edu/teaching/329/lectures/node7.html
>
> "Scientific programming languages
> What is the best high-level language to use for scientific
> programming?
Just asking "what is best" in any selection process
is going to be contentious except for the simplest
of things. Somewhat like asking "what is the best
laptop?" Too many inputs to consider.
> This, unfortunately, is a highly contentious question.
> Over the years, literally hundreds of high-level languages have been
> developed. However, few have stood the test of time. Many languages
> (e.g., Algol, Pascal, Haskell) can be dismissed as ephemeral computer
> science fads. Others (e.g., Cobol, Lisp, Ada) are too specialized to
> adapt for scientific use.
..
> The remaining options are FORTRAN 77 and C. I have chosen to use C "
>
> I find this strange, because I think Ada can be the best programming
> language for numerical work. So, I do not know why the author above
> thinks Ada is "too specialized to adapt for scientific use".
Surely he expanded on this? How is it too specialized (in his
opinion)?
> Is there
> something in Ada which makes it hard to use for scientific
> programming?
I recently tried to introduce a co-worker to the fact that
Ada is free and easy to install (cygwin gcc-ada) yada yada.
He started reading a couple of free online (PDF) books and
said his eyes glazed over and that he has no further interest.
I suppose things might be different if there was a job market
for that skill here (Toronto). So that factor seems to be one
important reason for some folks.
Library support would be another factor, I would think. People
don't want to become experts in interfacing to C (for example).
But as I do more and more of this, I am finding this to
be less of an issue. Maybe a good "The Art of Binding [GNAT]
Ada" document would help people with this.
I do feel that Ada does require you to "engineer" your
project more, which is a good thing. But this is something
that some folks seem to dislike. It probably impacts the
inexperienced programmers the most. This is the very
crowd that needs to be won over.
Otherwise, it sounds as if the author has too quickly
dismissed the language, without much familiarity with it.
> The main problem I see with Ada for scientific use is that it does not
> have as nearly as many packages and functions ready to use output of
> the box for this, other than that, the language itself I think is
> better than Fortran and C for scientific work.
> --Nasser
One huge improvement I see in Ada 2005 is the built-in
inclusion of Ada.Containers. Having them included saves
me from having to instruct downloaders about where to get,
and which version of, the Booch components (or was it
Charles?) etc. Having them included means that my project
should always "work" the same (in theory at least),
regardless of the version of the compiler used.
I am making big use of these containers in the rewrite of
my basic interpreter project. I have got far enough along
now to believe that this is a very practical language
change. I love the higher level of abstraction without
having to forego "representation" when it is required.
My only worry remains on compiler portability to HPUX,
Solaris and AIX. But at this point, I am going on faith
that this will not be a huge open source problem
going forward. It is encouraging to see the results of
gcc-ada making it more universally available, and
hopefully increasingly well tested.
I feel that the benefits of an Ada rewrite now seem
to outweigh the risk of compiler availability.
One beauty of open source is that I don't have to
justify the rewrite. But I could easily do that,
as the design is much improved now (I also expect
a big performance improvement later). Rewrites allow
the lessons learned to be applied and Ada is also a
big factor.
It's funny how I got started on this rewrite: it
was the farthest thing from my mind in January. I was
starting to think about threads and matrix operations
that might benefit etc. The Ada tasking model came
into the thought process at some point. So now I'm
looking forward to designing that into the
interpreter when it gets further along.
Just my own nearly at par $0.02 Cdn.
Warren
>
> I am making big use of these containers in the rewrite of
> my basic interpreter project. >
I looked at the Ada 2005 LRM (Containers.Vectors, for example), and I have
a small question.
http://www.adaic.org/standards/05rm/html/RM-A-18-2.html
Given this below
function Element (Container : Vector;
Index : Index_Type)
return Element_Type;
Then to get the element at index 5, one needs to write something like
Element(V,5).
Is there a way to redefine this so one needs to write only V(5)? If I
have large equations, it is clearer to write V(i) than Element(V, i)
everywhere. Did you find this "longer" syntax an issue for you?
I have not used Containers before. But the list of functions looks
impressive.
ps. is your interpreter project somewhere on the net to look at?
thanks,
--Nasser
>> I am making big use of these containers in the rewrite of
>> my basic interpreter project. >
>
> I looked at the Ada 2005 LRM, Containers.Vectors for example, and And
> I have small question
>
> http://www.adaic.org/standards/05rm/html/RM-A-18-2.html
>
> Given this below
>
> function Element (Container : Vector;
> Index : Index_Type)
> return Element_Type;
>
>
> Then to get the element at index 5, one needs to write something like
> Element(V,5).
>
> Is there a way to redefine this so one need to only write V(5) ?
You could use V.Element(5), but that doesn't really
shorten the notation.
You could declare an internal function for that purpose, though
there are probably better solutions:
procedure My_Proc (args...) is
   package P is new Ada.Containers.Vectors (...);
   V : P.Vector;
   function XV (X : Index_Type) return Element_Type is
   begin
      return V.Element (X);
   end XV;
begin
   ... := XV (5);
Note however (as I found out, snickers), you can't use the
V.Element notation when the input value is a cursor. That
kept tripping me up until I clued in: the cursor specifies
both the container and the position internally. So you must
use the Element (Csr) form of the call instead, for cursor values.
> because If I have large equations, it is more clear to write V(i) than
> Element(V,i) everywhere? Did you find this "longer" syntax an issue
> for you?
Not really, because when dealing with several containers (Maps mostly
in my case), I tend to spell out the full package hierarchy anyway,
for clarity and to avoid hiding caused by "use".
One thing I did do however, was create a SimpMap package that
took care of all of the compare function definitions for me.
I got real tired of redeclaring equality functions for every
new integer/modular type involved.
> I have not used Containers before. But the list of functions looks
> impressive.
>
> ps. is your interpreter project somewhere on the net to look at?
>
> thanks,
> --Nasser
My Ada rewrite is still in the early stages, so it is not up on
SourceForge yet. If you want to experiment with the C version,
or look at the documentation, you can visit:
https://sourceforge.net/projects/bdbbasic
or for language and other documentation:
https://sourceforge.net/apps/mediawiki/bdbbasic/index.php?title=Main_Page
But if you email me directly, I can also send you the tar-balled
Ada version of the project. You'll see a lot of container examples
in it, which might be useful.
Warren
The only reason I can think of is pure fashion: Ada has not been taken up
as a popular programming language for scientific use, therefore it is
deemed unsuitable. But choosing a language based on popularity is not the
best approach, although it does have some validity, provided you recognise
that popular today does not mean popular when you *really* need
maintenance on the software.
Dismissing Algol as ephemeral ignores its influence and its continuing use
as a base for pseudo-codes. Important numerical libraries were first
implemented in Algol, and later translated to Fortran when Algol's
momentum faltered. But that again confuses the usefulness of a language
for scientific programming with popularity. Ada is heavily influenced by
Algol, and I can see nothing in Ada that would prohibit its wider uptake,
other, again, than fashion. It was a language designed to promote
re-usability, maximise correctness, and include efficiency and
portability, and it still has a variety of compilers available, so I can't
see any reason *not* to use it, if you are proficient in its use.
> Then to get the element at index 5, one needs to write something like
> Element(V,5).
>
> Is there a way to redefine this so one need to only write V(5) ?
Not yet.
There is a proposal for this for Ada 2012.
And some other useful syntactic sugar.
- Bob
No, they were first implemented in machine code,
and later rewritten in Algol and FORTRAN.
The numerical procedures of the General Interpretive Programme
were written in machine code, from 1955.
> > Is there a way to redefine this so one need to only write V(5) ?
>
> Not yet.
>
> There is a proposal for this for Ada 2012.
Interesting. Is this proposal publicly available?
The ability to overload the function call and indexing operators in C++
is one of the most important language features, and its presence in
Ada would be a very welcome addition.
It is only a pity that there would be no way to overload a function
call operator for the parameterless case. ;-)
--
Maciej Sobczak * http://www.inspirel.com
YAMI4 - Messaging Solution for Distributed Systems
http://www.inspirel.com/yami4
http://www.ihr.uni-stuttgart.de/forschung/ada/resources_on_ada/
look into "SEE Ada"
>No, they
Who is "they"? Note the lack of a universal quantifier. Are you claiming
that all algorithms were developed first in machine code, much less all
algorithms developed in the 1960's and 1970's? For that matter, do you
know of *any* algorithm that was first developed in machine code? I'm sure
that there were some, but I'd expect them to be rare as hen's teeth and
mostly limited to the 1950's and very early 1960's.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>
Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to spam...@library.lspace.org
OTOH, scientific programs would require the best use of your
computer's resources, wouldn't they? So
(1) why run scientific programs on an OS (still largely written in C,
AFAIK) that by default makes a herd of programs and services keep
your computer really busy without your program running, and
(2) why not use a better C compiler (if it has to be C) even on
MS Windows, such as the ones listed below?
(I should add that the MS OS is purchased at a higher price
than most alternatives, too; price was listed as an issue.)
But indeed, even though there is C in Windows NT,
"Thanks for taking the time to send us your suggestion. Currently, there are
no plans to implement C99 support in VS2010. Once we complete this product
cycle, we will be reviewing all customer suggestions, including this one, for
our future planning.
"Thanks,
Mark Roberts
Visual C++ Team"
http://connect.microsoft.com/VisualStudio/feedback/details/485416/support-c99
So for scientific computing, MS C will be a less attractive choice
than GNU C or Intel C, or Comeaucomputing's C on top of MS C adding
C99 to MS C, or ...
Or less attractive than compilers for one of the other
languages such as Ada or Fortran or ... that support both fairly recent
standards and computing with complex numbers.
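For comparison with the C99 discussion: complex arithmetic has been in standard Ada since Ada 95 (Annex G). A minimal sketch, using the predefined Float instantiations:

```ada
--  Ada.Numerics.Complex_Types and Complex_Elementary_Functions are
--  the predefined Float instantiations of the Annex G generics.
with Ada.Numerics.Complex_Types;
use  Ada.Numerics.Complex_Types;
with Ada.Numerics.Complex_Elementary_Functions;
use  Ada.Numerics.Complex_Elementary_Functions;

procedure Complex_Demo is
   Z : constant Complex := (Re => 0.0, Im => 1.0);   --  i
   W : constant Complex := Sqrt (Z * Z);             --  sqrt (-1)
begin
   null;  --  W is (0.0, 1.0) within rounding: the principal root is i
end Complex_Demo;
```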
> This kind of remark comes generally from hear-say of people who never
> had a serious look at Ada. Unfortunately, it is easier to repeat a rumor
> than to investigate seriously...
What's really sad is when a physics professor does that. Aren't
scientists supposed to be the ones who test and investigate things?
-- Adam
Because you cut the sentence and the one before it,
you lost the significance.
Restoring it, we have:
"| Dismissing Algol as ephemeral ignores its influence and continuing usage
"| as a base of pseudo-codes. Important numerical libraries were first
"| implemented in ALgol,
"No, they were first implemented in machine code,
"and later rewritten in Algol and FORTRAN."
you can see that it is patently obvious that "they" refers
to "Important Numerical libraries".
| Are you claiming
| that all algorithms were developed first in machine code,
You will also realize that it's referring to important ones,
and that it's disputing the claim that such libraries were first implemented in Algol.
| much less all
| algorithms developed in the 1960's and 1970's? For that matter, do you
| know of *any* algorithm that was first developed in machine code? I'm sure
| that there were some, but I'd expect them to be rare as hen's teeth and
| mostly limited to the 1950's and very early 1950's.
Restoring the immediately following sentence that you also cut out,
we see that I said:
"The numerical procedures of the General Interpretive Programme
"were written in machine code, from 1955."
which means that the procedures of "General Interpretive Programme"
were written in machine code, from 1955 --
which predates Algol by several years, does it not?
As for your supercilious question, do I <<know of *any* algorithm that was
first developed in machine code?>>:
Had you actually read my post, you would have seen that I gave a
reference to an important numerical library.
As for numerical algorithms developed before Algol, you may have heard of
J. H. Wilkinson's work on numerical algorithms, for which he wrote
machine code from 1947. In his other early work, he wrote programs
(machine code) to solve simultaneous equations back in about 1951.
They didn't even take advantage of C's own type system.
Everything flows through a WORD or DWORD. The Win API
is so very lame because of this. The C++ layer is
better, but..
> Or less attractive than compilers for one of the other
> languages such as Ada or Fortran or ... that support both fairly
> recent standards and computing with complex numbers.
Obviously Fortran persists because of the existing code base and
those that only "know" that. But egads, the current rendition
of Fortran seems to have so many "bags on the side" and is
downright "butt ugly". Why anyone would want to continue
to wallow in that swill is beyond me. Ada as a language, OTOH,
is so nice and clean by comparison.
Warren
> Obviously Fortran persists because of existing code base and
> those that only "know" that. But egads, the current rendition
> of Fortran seem to have so many "bags on the side" and is
> downright "butt ugly". Why anyone would want to continue
> to wallow in that swill, is beyond me. Ada as a language OTOH,
> is so nice and clean by comparison.
I won't even start with your puny attempts at a language crusade;
suffice it to say that all the niceness and cleanness is quite unusable
if you don't have a compiler. And on most supercomputers where serious
number crunching is performed, you do not have an Ada compiler, and even
building GNAT would be a very major pain (bootstrapping ...).
Regards,
Sebastian
What is the objection to using the C++ complex library?
> Warren <ve3...@gmail.com> writes:
>
>> Obviously Fortran persists because of existing code base and
>> those that only "know" that. But egads, the current rendition
>> of Fortran seem to have so many "bags on the side" and is
>> downright "butt ugly". Why anyone would want to continue
>> to wallow in that swill, is beyond me. Ada as a language OTOH,
>> is so nice and clean by comparison.
>
> I won't even start with your puny attempts at a language crusade,
....
> Sebastian
Wooo-oooo, aren't we snubby today.
Warren
(Or, in other circumstances, objections to using a library such
as Leda maybe.)
I'll speculate about two major reasons for not hoping for the C++
complex library to replace Fortran function libraries any time soon.
At least in some domains...
One reason would be successful tradition: a researcher has successfully
written a scientific program using knowledge available with Fortran 77;
moving to Fortran 90 has improved the solution. Why switch to
non-Fortran? The post-hoc fallacy aside, if non-Fortran is C++, then to
use C++ effectively it takes learning a language that integrates very many
parts in far-reaching and novel ways (from the researcher's perspective).
Most parts need to be well understood in order to bridle the compiler.
To him or her, what is the indisputable advantage of C++ over, say, a
modern subset of recent Fortran? Maybe support for physical unit checks
at compile time is an example. But the mechanisms behind
template-specialization-based C++ computation are not that easy to
grasp, are they? At least hardly easier than just moving to Fortran 95
or later and checking units manually, by paying attention.
Remembering professor Fitzpatrick's published remark that started this
thread, a researcher's job is probably focused on computing scientific
results rather than optimizing language use. So Fortran 90 it is, or
C---until a new generation of researchers and research problems
gives rise to a new tradition of similarly forced attire using another
language. Technical arguments involving language properties beyond
immediate necessity are subordinate, as ever. After all,
we continue to pay them for this style of scientific software! ;-)
[end of speculation]
I guess it is this one:
http://www.ada-auth.org/cgi-bin/cvsweb.cgi/ai05s/ai05-0139-2.txt
And this differs from Ada'05 how? How many compilers support it? More
importantly (to me), how many non-compiler tools support it?
--Bg
Thanks for the link. Yes, I see this:
"Essentially we are allowing a container to be "indexed" in the same way as
an array is indexed, with the key (or potentially various different key
types) being arbitrary. "
http://www.ada-auth.org/cgi-bin/cvsweb.cgi/ai05s/ai05-0139-2.txt?rev=1.5
This sounds good. Any guess as to when GNAT will have this? 2012, 2013, 2014?
--Nasser
Look, you were whining about "MS C" not implementing a complex data type.
Well Visual C++ 2008, which is the only "MS C" in current production,
most assuredly DOES implement a standards-compliant complex data type,
so I don't really understand the point of your complaint.
> Look, you were whining about "MS C" not implementing a complex data type.
Did I? I didn't. I remember saying that even in 2006 (from which
the note in question dates) there were enough compilers
supporting C99 on Windows NT.
If VC2010 doesn't support C99, as reported, then this perceived
lack would still not have been a reason to dismiss C just for lack of a
complex data type. And in fact, VS2005, which was available in 2006,
does not have <complex.h> for C. VC++ does support <complex>,
but enough harm has been done in assuming that writing C using
a C++ compiler is a good idea.
> Well Visual C++ 2008, which is the only "MS C" in current production,
> most assuredly DOES implement a standards-compliant complex data type,
> so I don't really understand the point of your complaint.
My complaint, or observation, is that more than one researcher
talking about programming languages tends to act as a show man
when he or she does not really (need to) know what he or she is
talking about. This creates gossip, perpetuates hearsay, and,
by imitation, drives the choice of programming language for
research. Obviously then, decisions to use this or that language
will not be as informed as could be. Chances are that program
quality suffers. I hope this observation can be shown to be wrong.
> It is only a pity that there would be no way to overload a function
> call operator for the parameterless case. ;-)
Heck, when I think more about this it seems that even this would be
possible by analyzing the context of the given expression. That would
work in a similar way as overloading on return types.
For example, assuming that My_Magic_Type has an overloaded procedure
call operator and My_Other_Magic_Type has an overloaded function call
operator:
declare
X : My_Magic_Type;
Y : My_Other_Magic_Type;
begin
X; -- parameterless procedure call on X
A := Y; -- parameterless function call on Y
X (1, 2); -- procedure call on X with two params
B := Y (3); -- function call on Y with one param
end;
But I'm not really sure if that would be actually useful in the
context of other Ada features. Especially - the only place where it
would make sense is with generics, but then their specification syntax
would blow up even more.
You cannot have everything, I guess...
> On 6 Kwi, 10:07, Maciej Sobczak <see.my.homep...@gmail.com> wrote:
>
>> It is only a pity that there would be no way to overload a function
>> call operator for the parameterless case. ;-)
>
> Heck, when I think more about this it seems that even this would be
> possible by analyzing the context of the given expression. That would
> work in a similar way as overloading on return types.
> For example, assuming that My_Magic_Type has an overloaded procedure
> call operator and My_Other_Magic_Type has an overloaded function call
> operator:
>
> declare
> X : My_Magic_Type;
> Y : My_Other_Magic_Type;
> begin
> X; -- parameterless procedure call on X
A reader would expect this procedure to be idempotent. Consider a program:
X; X; X; -- What does this do?
> A := Y; -- parameterless function call on Y
I would prefer the ":=" (A, Y); interpretation, here.
> X (1, 2); -- procedure call on X with two params
I played with the idea of adding procedural operators. E.g.
X + 1;
meaning a call to
procedure "+" (X : in out Foo; Increment : Integer);
> B := Y (3); -- function call on Y with one param
Better it be ":="(B, "index" (Y, 3));
> end;
>
> But I'm not really sure if that would be actually useful in the
> context of other Ada features.
The things you could do with your proposal could probably be achieved in
other ways. For example, I considered a "touch" primitive operation which,
similarly to Adjust, is to be called each time you access a volatile object in
order to get its value. This could be useful for tracing, interlocking,
garbage collection, persistency layer purposes, etc.
By (1) the number of years between standard publication and
support of basic data types and (2) that complex is much older in
both Fortran and Ada than it is in C99. When I said that C99 *is*
available in 2006, Keith Thompson noted that MS C (not MS C++)
is among those implementations that do not *fully* support C99.
If only there was a 3rd edition of K&R. I'd hope that (since
almost everyone is still relying on C in spite of everything) this
new edition could draw attention to at least the new and better
type stuff, even if it remains as suboptimal as C arrays.
> How many compilers support it?
Fewer than the total number of compilers (Ada 95 or Ada 2005)
available, TTBOMK.
> More
> importantly (to me), how many non-compiler tools support it?
Don't know. Syntax tools have few new things to deal with.
X-language tools might even be ahead if they had already supported
multiple inheritance of interfaces. Other tools for source code
analysis announce support for Ada 2005. Some makers depend on customer
demand and either fade or grow.
WRT scientific computing, there is one noteworthy development:
an Ada subset called SPARK, which I guess is sharing perspective
with Fortran subset F in a sense.
In part, SPARK brings back some of the spirit of original Ada 83,
even when including newer language features into a reasonably small
subset. I think that this subset includes stuff very
valuable in scientific programming, insofar as the latter will
profit from data types and array indexing proven mathematically to
be correct, so, for example, leaving out bounds checking is no
longer an adventure but becomes a justified consequence.
Depends on your cost-model. If your man-hours for writing the code
don't count, go on with C or Fortran. If they are a factor, maybe
it's worth spending a couple of thousand to get support from a
compiler vendor to port GNAT.
Thanks btw. for showing quite clearly that it's not only the
"Ada-Guys" who are rude.
Marc
> A reader would expect this procedure to be idempotent. Consider a program:
>
> X; X; X; -- What does this do?
This is the same as today with regular parameterless procedures:
procedure X (Spacing : in Positive_Count := 1)
renames Ada.Text_IO.New_Line;
Now, what would the reader expect from your example?
> > A := Y; -- parameterless function call on Y
>
> I would prefer the ":=" (A, Y); interpretation, here.
As I've pointed out, that would be resolved in the same way as
overloading by return type. The ":=" (A, Y) interpretation might not
match, whereas the overloaded function call on My_Other_Magic_Type
might return the type that is appropriate for assignment to A.
It's regular overload resolution stuff.
> > B := Y (3); -- function call on Y with one param
>
> Better it be ":="(B, "index" (Y, 3));
Except that the notion of "index" might not be appropriate. Function
is a more general term (indexing is a kind of function, but not the
other way round).
> The things you could do with your proposal could probably be achieved in
> other ways. For example, I considered a "touch" primitive operation which,
> similarly to Adjust, is to be called each time you access a volatile object in
> order to get its value. This could be useful for tracing, interlocking,
> garbage collection, persistency layer purposes, etc.
Except that with the overloaded function call operator, you would not
need "touch", as the function body would be already a right place to
put all such tracing.
So what?
> VC++ does support <complex>,
> but enough harm has been done in assuming that writing C using
> a C++ compiler is a good idea.
What "harm" is this? And in point of fact, VS2005 has no C compiler
except the C++ compiler that you say should not be used for writing C.
What you are calling a "C compiler" is in fact a command line switch
applied to the C++ compiler.
>> Well Visual C++ 2008, which is the only "MS C" in current production,
>> most assuredly DOES implement a standards-compliant complex data type,
>> so I don't really understand the point of your complaint.
>
> My complaint, or observation, is that more than one researcher
> talking about programming languages tends to act as a show man
> when he or she does not really (need to) know what he or she is
> talking about. This creates gossip, perpetuates hearsay, and,
> by imitation, drives the choice of programming language for
> research. Obviously then, decisions to use this or that language
> will not be as informed as could be. Chances are that program
> quality suffers. I hope this observation can be shown to be wrong.
My complaint is that you seem to be complaining to be complaining. If
you're using a C++ compiler then write C++, don't whine because its C
support is half-assed.
> On 7 Kwi, 10:24, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>
>> A reader would expect this procedure to be idempotent. Consider a program:
>>
>> X; X; X; -- What does this do?
>
> This is the same as today with regular parameterless procedures:
>
> procedure X (Spacing : in Positive_Count := 1)
> renames Ada.Text_IO.New_Line;
>
> Now, what would the reader expect from your example?
The reader knows that New_Line is a shortcut for
New_Line (Standard_Output);
It is bad style to hide the effects of a procedure. New_Line is a rare
exception to this rule.
>>> A := Y; -- parameterless function call on Y
>>
>> I would prefer the ":=" (A, Y); interpretation, here.
>
> As I've pointed out, that would be resolved in the same way as
> overloading by return type.
It must be a type different from My_Other_Magic_Type then. But the reader
sees:
Y : My_Other_Magic_Type;
so what is the type of Y? If it effectively is not the type declared, then
this does not look like a good idea.
>>> B := Y (3); -- function call on Y with one param
>>
>> Better it be ":="(B, "index" (Y, 3));
>
> Except that the notion of "index" might not be appropriate. Function
> is a more general term (indexing is a kind of function, but not the
> other way round).
They do not intersect. Function has the syntax f(x,y,z). Index has the
syntax x(y,z).
>> The things you could do with your proposal could probably be achieved in
>> other ways. For example, I considered a "touch" primitive operation which,
>> similarly to Adjust, is to be called each time you access a volatile object in
>> order to get its value. This could be useful for tracing, interlocking,
>> garbage collection, persistency layer purposes, etc.
>
> Except that with the overloaded function call operator, you would not
> need "touch", as the function body would be already a right place to
> put all such tracing.
That depends on how you define the function "Y". Is it
function "Y" (This : My_Other_Magic_Type) return My_Other_Magic_Type;
or
function "Y" return My_Other_Magic_Type;
The latter is not "touch", the former is ambiguous:
Foo (Y); -- Is it Foo(Y), Foo("Y"(Y)), Foo("Y"("Y"(Y)))?
If there is no C99, only MS, and C with scientific programming is
required, this means you can only write C++ programs using MS
tools if you want objects of standard complex types. (Or choose
Ada or Fortran or ...) But C++ was not mentioned as an option.
>> VC++ does support <complex>,
>> but enough harm has been done in assuming that writing C using
>> a C++ compiler is a good idea.
>
> What "harm" is this? And in point of fact, VS2005 has no C compiler
> except the C++ compiler that you say should not be used for writing C.
> What you are calling a "C compiler" is in fact a command line switch
> applied to the C++ compiler.
C++ overlaps C to a large extent. But the compilers
must handle the parts of each language that lie outside the
other. However small one might think these
differences are, ignoring them can lead to errors and to portability
trouble.
MS C and MS C++ cannot therefore, strictly speaking, be the
same compiler. But, referring to more than a command-line switch,
Microsoft compilers for many languages use some common MS
translation technology. That does not make the input languages
the same. Likewise, an Intel C++ compiler and an Intel Fortran
compiler share some machinery, AFAIK, yet this does not make
C++ and Fortran interchangeable. GCC can be made to translate
a number of languages. That does not make the languages basically
the same, and not even does it make the dialects of C the same:
GCC with -std=c99 and with -std=c89 accept a different set of programs.
Even when the effective compiler "program" is changed "merely" by a
switch. You might call this nitpicking, but observing the little
differences contributes to program quality, IMO. If the latter
does not count, then why bother to consider language properties
in the first place?
> My complaint is that you seem to be complaining to be complaining. If
> you're using a C++ compiler then write C++, don't whine because its C
> support is half-assed.
Fitzpatrick wanted to write C, not C++, and he wanted standard
complex types. So why should he be using a C++ compiler with
half-assed support for C99 without complex? (He, not me.)
Writing C using a C++ compiler creates, in addition to other things,
the hurdle of having to understand C++ in order to make sense of
error messages. (But OTOH, the C++ error messages of some compilers
*can* be a lot better than C's in some situations.) The unfortunate
notion behind "C/C++" incidentally creates a business opportunity for
those who wish to be consultants, recognizing the "pragmatically"
blurred approach to language use. Note that this is not the same as
integrating modules written in C and other parts of a program written
in C++.
But this is moving off topic.
This is off-topic, but ...
I'm sure the C++ compiler implements C++'s complex type. Does it
support C99 complex types when invoked as a C compiler? They're
defined quite differently; they have to be, since standard C doesn't
have operator overloading.
Here's a test case, a complete translation unit that should compile
without error with a conforming C99 compiler:
double _Complex new;
C and C++ are two different languages.
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
> As for your supercilious question, do I <<know of *any* algorithm that
> was first developed in machine code?>> --
Wasn't Ada Augusta's first program an algorithm to compute Fibonacci
numbers? That would certainly have been in machine code.
And Alan Turing thought in machine code ...
Precisely. And Microsoft is not touting any of their current projects
as a C compiler so why should they support C99?
Look, you have a choice, you can use C or you can use Microsoft
compilers, but if you're expecting state of the art C from Microsoft
you've come to the wrong shop.
I just don't understand what's so great about C that one MUST use it in
preference to C++.
Surely not until there is a firm proposal (all that exists now is an outline
with no details). Beyond that, I doubt AdaCore will be announcing firm dates
until it's actually available. (They already have some 2012 stuff
implemented, so it might not be a long wait.)
Randy.
> Has anyone written a paper "Ada for Scientific Programming"? I
> envision such a paper as having all of the tasking-related stuff
> stripped out and a heavy emphasis on the numerical issues. Probably the
> distributed programming stuff could be eliminated too; I'm not sure.
> Paul Hilfinger comes to mind.
A few years ago, as a scientist, I used and programmed for a complex
physical simulation system written mostly in Ada, but with some modules
written in other languages (mainly old Fortran or Pascal models of
physical systems that were interfaced to the simulation system).
Ada is IMO quite suitable for scientific applications.
Rgds
Denis McMahon
> > As I've pointed out, that would be resolved in the same way as
> > overloading by return type.
>
> It must be a type different from My_Other_Magic_Type then. But the reader
> sees:
>
> Y : My_Other_Magic_Type;
>
> so what is the type of Y? If it effectively is not the type declared, then
> this does not look like a good idea.
This is true and this is a result of the lack of parens. If Ada
adopted the convention of empty parens for parameterless functions,
then Y and Y () would not be as confusing.
> >>> B := Y (3); -- function call on Y with one param
>
> >> Better it be ":="(B, "index" (Y, 3));
>
> > Except that the notion of "index" might not be appropriate. Function
> > is a more general term (indexing is a kind of function, but not the
> > other way round).
>
> They do not intersect. Function has the syntax f(x,y,z). Index has the
> syntax x(y,z).
You are not consistent. Index has the syntax x(y,z), which is
interchangeable with "index"(x,y,z). Function has the syntax f(x,y,z),
which might be an overloaded operator with syntax x(y,z). This makes
them overlapping.
> > Except that with the overloaded function call operator, you would not
> > need "touch", as the function body would be already a right place to
> > put all such tracing.
>
> That depends on how you define the function "Y".
"Y" is not a function, it is an object.
However, it can be used in the context where a different type is
expected that can be delivered by:
function "call" (This : My_Other_Magic_Type) return T;
where "call" is a new special operator name that I just invented.
> The latter is not "touch", the former is ambiguous:
>
> Foo (Y); -- Is it Foo(Y), Foo("Y"(Y)), Foo("Y"("Y"(Y)))?
It is not ambiguous if Foo expects T and T /= My_Other_Magic_Type - in
that case this is basic overload resolution.
(Sorry, I only meant to retain this paragraph:)
>>>> Unfortunately, the C99 standard has not yet been universally adopted.
>>>> Very few compilers fully support it.
>> And this differs from Ada'05 how?
>
...
>
>
>> How many compilers support it?
>
> Fewer than the total number of compilers (Ada 95 or Ada 2005)
> available, TTBOMK.
>
Is it more than 1? I don't remember hearing anything about other support.
>
>> More
>> importantly (to me), how many non-compiler tools support it?
>
> Don't know. Syntax tools have few new things to deal with.
> X-language tools might even be ahead if they had already supported
> multiple inheritance of interfaces. Other tools for source code
> analysis announce to support Ada 2005. Some makers depend on customer
> demand and either fade or grow.
>
And some demand that customers pay them to develop the upgrade, and
then pay again for the privilege of using it.
If you want sustainable software, you can't rely on languages/versions
that are not widely supported. It may not have been that many years
since Ada'05 (what, basically 3 years, in essence?), but Ada'1z is
already in the works. When it's finished, what will the ratio of '05 to
'95-only tools be?
Sorry, getting off of soapbox now. Maybe I ought to put wheels on it
and take a ride downhill.
He did, because he wrote programs (including subroutines)
back in 1945 when he designed the Automatic Computing Engine.
> Georg Bauhaus wrote:
>> BrianG schrieb:
>>> How many compilers support it?
>> Fewer than the total number of compilers (Ada 95 or Ada 2005)
>> available, TTBOMK.
>>
> Is it more than 1? I don't remember hearing anything about other support.
Well, if you count each host/target pair as a compiler, there are quite a lot
of Ada compilers supporting the full Ada 2005 language. All
of those are based on GNAT, as far as I know. There are also
non-GNAT ones that have partial support.
- Bob
Can't speak for the makers, and my copies of non-GNAT compilers
would need an update. However, some hints. Even a few years ago
a non-GNAT front end had some messages saying something to the
effect that "this feature is only available in Ada 2005";
I bet that the front-end maker that focuses on analyzing
program text more thoroughly than is needed to just compile
it will have some support for the pre/post/inv features of 201Z,
an important addition to the language IMHO.
Likewise, it seems almost necessarily true to me that Janus/Ada
has Ada.Containers. This is still speculation, but it would
picture the Ada situation as quite similar to what you find
to be the case for other multi-vendor languages.
However, the recent Ada standards (2005) are very attractive. It is
possible to interface software from other languages (MPI, Metis,
UMFPACK) and the tool from gnat g++ -c -fdump-ada-spec ... is quite
exciting. The containers are also useful. I have used an instantiation
of Ada.Containers.Ordered_Maps to contain sparse matrices.
Some of the applications are finite element software for Maxwell's
equations.
> On 7 Kwi, 15:44, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>
>>> As I've pointed out, that would be resolved in the same way as
>>> overloading by return type.
>>
>> It must be a type different from My_Other_Magic_Type then. But the reader
>> sees:
>>
>> Y : My_Other_Magic_Type;
>>
>> so what is the type of Y? If it effectively is not the type declared, then
>> this does not look like a good idea.
>
> This is true and this is a result of the lack of parens. If Ada
> adopted the convention of empty parens for parameterless functions,
> then Y and Y () would not be as confusing.
What is wrong with +Y or abs Y? If you have an operator ("+", "abs" or "()")
applied to an object, then it is visually a different case.
>>>>> B := Y (3); -- function call on Y with one param
>>
>>>> Better it be ":="(B, "index" (Y, 3));
>>
>>> Except that the notion of "index" might not be appropriate. Function
>>> is a more general term (indexing is a kind of function, but not the
>>> other way round).
>>
>> They do not intersect. Function has the syntax f(x,y,z). Index has the
>> syntax x(y,z).
>
> You are not consistent. Index has the syntax x(y,z), which is
> interchangeable with "index"(x,y,z). Function has the syntax f(x,y,z),
> which might be an overloaded operator with syntax x(y,z). This makes
> them overlapping.
I think you are conflating syntax and semantics. Syntactically, a function
call is not an operator, and neither is indexing. Semantically, all three are
just subprograms.
>>> Except that with the overloaded function call operator, you would not
>>> need "touch", as the function body would be already a right place to
>>> put all such tracing.
>>
>> That depends on how you define the function "Y".
>
> "Y" is not a function, it is an object.
> However, it can be used in the context where a different type is
> expected that can be delivered by:
>
> function "call" (This : My_Other_Magic_Type) return T;
>
> where "call" is a new special operator name that I just invented.
OK, I think there is a name for this: "implicit type conversion." Is it
what you are proposing? IMO, arbitrary type conversions are unsafe. If I
were to introduce something like it, I would simply use interface
inheritance, which Ada lacks so badly. E.g.
type My_Other_Magic_Type is ... and interface T;
-- T's interface is inherited
...
private
type My_Other_Magic_Type is ...;
function To_T (This : My_Other_Magic_Type) return T;
for Y as T use To_T;
Now Y implements the interface of T (i.e. is a member of T'Class), but has
a representation independent of T. All operations of T are implemented by
the composition of To_T with the corresponding operation of T (if not
overridden, of course).
For in/out operations you will also need a backward conversion:
function From_T (This : T) return My_Other_Magic_Type;
for Y as T use From_T;
>> The latter is not "touch", the former is ambiguous:
>>
>> Foo (Y); -- Is it Foo(Y), Foo("Y"(Y)), Foo("Y"("Y"(Y)))?
>
> It is not ambiguous if Foo expects T and T /= My_Other_Magic_Type - in
> that case this is basic overload resolution.
I meant the case when the result is My_Other_Magic_Type.
just my 2 (numerical) eurocents
I have been developing some Fortran (77) code during my PhD (17 years ago).
It was a basic Monte Carlo code for simulation of electron-positron
collisions.
1001 SLOC (physical Source Lines Of Code, measured with sloccount)
Not big; most of the work went into chiseling the analytical low-level
expressions so that the numerical approach would be used to its fullest.
No dedicated simplification or hardware related optimisation beyond what
the compiler provides with -O3.
Not much software engineering: only using Fortran 77 with COMMON thought
as objects, and a minimum modular approach.
The result was fast, but it would be arrogant boasting to describe how
much ;-)
Time passed and then OpenMP appeared. I wanted to test my trusty old
reference code on my brand new two-core CPU, without much effort, I thought.
To ease the move to parallel code, and to learn more about the now
unavoidable Fortran 90 (which simply copied the easiest part of Ada 83),
I first translated my code to F90 (not simply compiling the same code
with an F90 compiler).
I turned the COMMONs into modules (a.k.a. packages) with a type within each
module, and according to the famous equation:
module + type = class
it resulted in an object-oriented code ;-) .
1150 SLOC
Time-wise, I was paying only a 20% abstraction penalty with the same
compiler. Not bad.
The objectification process was made in C++ at the same time. A quite
painful translation indeed, but C++ was my way to earn money at that time...
1259 SLOC.
Not only was the translation painful, but the compilation gave me results
flawed with memory faults that I had to debug. In the end, I finally had
the same accuracy as Fortran 77, but...
The result looks like a good start for a flame war: C++ was 7.5 times
slower than Fortran 77. With the same compiler (g++ versus g77/gfortran).
I do not think it is that significant in fact, as my algorithm relies
heavily on complex numbers, which are well integrated into Fortran but
pay the full abstraction penalty in C++... The good way to do it in C++
would be to use libraries such as Boost or Blitz.
We can do it, but then do we really want to go through the hassle of
extra layers?
My program wasn't still parallelized, and Ada became the next candidate
for the test.
1120 SLOC.
The translation was swift, and when the compiler finally let me go, the
code ran (no debugging needed) and delivered the same accuracy as
Fortran 77, with only a factor of 2 speed loss (compared to 7.5 for
C++...). With GNAT (to stay in the same compiler family).
Given that complex numbers are not hard-wired into Ada, they should pay
the abstraction price, but they keep the price low.
Later, I could capitalize on the tasks and protected objects to
parallelize my program in Ada, which is as yet still not done in
Fortran... But this is another story.
I plan to do it in Java as well, but I expect almost the same thing as
for C++: it will not be testing the strong points of the language, but
really poking at its weakest point: complex numbers.
So, as far as performance is concerned, Fortran was, IN THIS CASE, the
winner. Ada was a strong contender.
I will not elaborate too much on the ease of translation: when you
have worked for many months with such a short program, and converted it
to 2 other languages, the 3rd translation cannot be that hard.
I believe that if I had to translate (or even write) other more
significant codes, I would use the ability of Ada to interface with
other languages: I would keep my low level routines in fast Fortran, and
have the general flow of the computation driven by Ada, to make the
whole picture clearer and more efficient.
In between, I try to code more in Fortran 90, which is a close nephew of
Ada 83.
* I did the same kind of multi-language port for my education, with a
quasi-random number generator, starting from a C++ version,
and I have not yet found a way as elegant, and memory-wise as efficient,
to store an upper triangular matrix as in C++.
* I also miss some syntactic sugar of Fortran that allows referring to
row I of a matrix M as a vector with M(I, *)
=> If someone would like to discuss these, I will be glad to exchange on them.
Best regards,
Vincent
PS: as for language crusades, I believe that the problem of Fortran is
"Fortran users", who are mostly not computer scientists. I hear the Ada
fans boast "Ada doesn't make assumptions about the programmer: he can be
uneducated, but the compiler will save him", but I strongly doubt the
actual population of Ada coders is AS uneducated (or
software-engineering-unconscious) as the actual population of Fortran
coders (averaged over the last 50 years).
>Because you cut the sentence and the one before it,
>you lost the significance.
No. You still don't get the significance of what you replied to.
>Restoring it, we have:
Bupkis.
>"| Dismissing Algol as ephemeral ignores its influence and continuing
>usage "| as a base of pseudo-codes. Important numerical libraries were
>first "| implemented in ALgol,
Read it carefully this time and note what words it doesn't contain.
>"No, they were first implemented in machine code,
>"and later rewritten in Algol and FORTRAN."
>you can see that it is patently obvious that "they" refers
>to "Important Numerical libraries".
Then does "No" also refer to them? Because that "No" is dead wrong.
>You will also realize that it's referring to important ones,
Who decides what's important? Do you believe that no important algorithms
were written in the late 1950's, the 1960's and the 1970's?
>and that it's disputing the claim that such libraries were first
>implemented in Algol.
Yes, because you're confusing existential quantifiers with universal
quantifiers.
>Restoring the immediately following sentence that you also cut out,
Because it was irrelevant.
>we see that I said:
> "The numerical procedures of the General Interpretive Programme
> "were written in machine code, from 1955."
Which has nothing to do with the point in dispute.
>Had you actually read my post,
ROTF,LMAO. Too bad you didn't read your own post before replying to mine.
>you would have seen that I gave reference to a important numerical
>library.
Strangely enough, I also noticed that it was a library, not an algorithm.
I also noticed that the algorithms in it were not the only algorithms ever
to be developed.
>Come to think of any numerical algorithm developed before Algol, you may
>have heard of J. H. Wilkinson's work on numerical algorithms, for which
>he wrote machine code from 1947.
Algorithms that were developed on dead trees. Translations of existing
algorithms are not what is in dispute.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>
Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to spam...@library.lspace.org
>Wasn't Ada Augusta's first program an algorithm to compute Fibonacci
>numbers? That would certainly have been in machine code.
But was it a new algorithm, or merely a transcription of an algorithm that
she already knew? And, more important, do you know for a fact that *Robin*
knew about it? Note carefully what I asked and what I didn't ask.
> Given that complex numbers are not hard-embedded in Ada, they should pay
> the abstraction price, but they keep the price low.
An example of Ada's complex type (a record) being less efficient
than something else can be seen in the following Mandelbrot programs;
the difference between the two Ada entries is in part caused by
one of them using type Complex:
http://shootout.alioth.debian.org/u64q/performance.php?test=mandelbrot
I guess that (some) computations involving objects of type Complex will be
faster when compilers generate SSE instructions for complex arithmetic.
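The abstraction-price question can be made concrete. The per-point iteration those benchmark entries perform can be sketched two ways, with a built-in complex type and with hand-expanded real/imaginary components. The sketch below is Python, purely as an illustration (the shootout entries themselves are Ada); the two versions are mathematically identical, so any speed difference is exactly the cost, or not, of the complex abstraction:

```python
def escapes_complex(c, limit=50):
    """Mandelbrot escape test using the language's complex type."""
    z = 0 + 0j
    for _ in range(limit):
        z = z * z + c
        if abs(z) > 2.0:
            return True
    return False

def escapes_manual(cr, ci, limit=50):
    """The same iteration with explicit real/imaginary parts, as a
    hand-optimized entry might write it."""
    zr = zi = 0.0
    for _ in range(limit):
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
        if zr * zr + zi * zi > 4.0:
            return True
    return False
```

Both versions agree on every point; a compiler that maps the complex type onto SSE register pairs can in principle close the gap entirely.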
> However, the recent Ada standards (2005) are very attractive. It is
> possible to interface software from other languages (MPI, Metis,
> UMFPACK) and the tool from gnat g++ -c -fdump-ada-spec ... is quite
> exciting. The containers are also useful. I have used an instantiation
> of Ada.Containers.Ordered_Maps to contain sparse matrices.
Speaking of 2005, I wouldn't mind acquiring a book on
the essential elements of the 2005 features in Ada,
without trudging through dry RM-style prose. However,
it seems that these new books are quite pricey,
even used. Normally I can find a suitable deal on
abebooks.com, but have come up empty so far.
There must be a digestible summary on the net somewhere.
Resources?
Warren
<snip>
>
> > Given this below
>
> > function Element (Container : Vector;
> > Index : Index_Type)
> > return Element_Type;
>
> > Then to get the element at index 5, one needs to write something like
> > Element(V,5).
>
> > Is there a way to redefine this so one need to only write V(5) ?
>
> You could use V.Element(5), but that doesn't really
> shorten the notation.
>
> You could declare an internal function for that purpose, though
> there are probably better solutions:
>
> procedure My_Proc(args...) is
>
> package P is new Ada.Containers.Vectors(...);
> V : P.Vector;
>
> function XV(X : Index_Type) return Element_Type is
> begin
> return V.Element(X);
> end;
> begin
> ... := XV(5);
>
<snip>
I guess you could also use a renames clause:
package Vec is new Vectors (Index_Type => Positive,
Element_Type => Integer);
use Vec;
function X_At (Container : Vector;
Index : Positive) return Integer renames
Vec.Element;
but you lose the object.method notation:
Put ( X_At (V, 5));
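For comparison only (this is Python, not Ada): languages that let a type overload the indexing operation give the short V(5)-style spelling directly, which is what the wrapper function and the renaming both try to approximate. A hypothetical sketch:

```python
class Vec:
    """Hypothetical minimal container for illustration only.
    Defining __getitem__ is what makes the short v[i] spelling legal,
    playing the role that Element(V, I) plays for the Ada container."""
    def __init__(self, items):
        self._items = list(items)
    def __getitem__(self, index):
        return self._items[index]

v = Vec([10, 20, 30, 40, 50, 60])
print(v[5])   # the short form; prints 60
```

(Ada itself gained this later: the Ada 2012 Constant_Indexing aspect lets you write Container (5) directly.)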
> Speaking of 2005, I wouldn't mind acquiring a book on
> the essential elements of the 2005 features in Ada,
> [...]
>
> There must be a digestible summary on the net somewhere.
> Resources?
The Ada Rationale would be one such resource.
http://www.adaic.org/standards/rationale05.html
Thanks. Someone else emailed me about that as well,
and so I went back and took a more serious look at
it (my laziness; my bad). There is indeed a good summary
of the changes there.
Some very happy changes in there!
Warren
|---------------------------------------------------------------------|
|"[..] |
| |
|As Dmitry Kazakov has recently said, when Ada run-time systems |
|starts addressing the properties of multicore hardware |
|there is hope that it could really shine: Not just because concurrent|
|sequential processes are so simple to express using Ada tasks |
|---and you'd be using only language, not a mix of libraries, |
|preprocessors, specialized compilers, programming conventions, |
|etc. But also in case the fine grained complexity of OpenMP 3.0 |
|can be bridled by simple language and a good run-time system. |
|At little cost." |
|---------------------------------------------------------------------|
I met someone today who described himself as "an ordinary FORTRAN
programmer" who advocated C for the practical reason that libraries
are designed for C. He claimed that small tasks are good for multicore
and large tasks are good for GPUs.
|------------------------------------------------------------------------------|
|In my infinitely small experience with Ada as a CS student and self- |
|taught practitioner I have to say that's mostly "FUD"." |
|------------------------------------------------------------------------------|
Agreed.
|------------------------------------------------------------------------------|
|"[..] |
|This is not the reason for which the languages above are used, but |
|it's the explanation given for not trying the alternatives. |
| |
|<rant> |
|The only true reason for which Ada or other languages aren't used is, |
|as you said, the amount of available software directly usable in those |
|languages, which depends on the popularity of the language itself, |
|which, in turn, depends on the ease with which the language can be |
|implemented in popular architectures (x86 PC). This more or less dates |
|back to Unix and C being the ultimate computer viruses (cfr. "The Unix |
|Haters Handbook") ... </rant>" |
|------------------------------------------------------------------------------|
No. Someone who works predominantly as something else (such as a
physicist) lacks the confidence, time, motivation, background,
understanding, and skills to waste time learning another language. It
would be better that the one language which an incidental programmer
did not become completely scared of was Ada, but few incidental
programmers would be taught such a good language to begin with, and
few incidental programmers will try a second language.
That's irrelevant.
I already pointed out that important algorithms were first written
in machine code in the 1950s; in fact, a whole suite of them --
all before they were written in Algol.
And the Euclidean Algorithm was written in Greek several thousand years
before there was such a thing as "machine code".
I think you're conflating algorithms and programs. An algorithm is a
procedure for doing something. A program implements that algorithm on a
particular set of hardware. Most development of algorithms is done with
pencil and paper, not a programming language.
> I think you're conflating algorithms and programs. An algorithm is a
> procedure for doing something. A program implements that algorithm on a
> particular set of hardware. Most development of algorithms is done with
> pencil and paper, not a programming language.
An algorithm is a program running on the hardware of the human brain,
"programmed" in some more or less formal system. Some algorithms can be
translated into other systems (computer programming languages) for other
hardware (computers), which is part of what we call programming.
> On Sun, 4 Apr 2010, Andrea Taverna suggested:
>
> |--------------------------------------------------------------------------|
> | "On 4 Apr, 06:46, "Nasser M. Abbasi" <n...@12000.org> wrote:
> | <rant>
> | The only true reason for which Ada or other languages aren't used is, as
> | you said, the amount of available software directly usable in those
> | languages, which depends on the popularity of the language itself,
> | which, in turn, depends on the ease with which the language can be
> | implemented in popular architectures (x86 PC). This more or less dates
> | back to Unix and C being the ultimate computer viruses (cfr. "The Unix
> | Haters Handbook") ... </rant>"
> |--------------------------------------------------------------------------|
>
> No. Someone who works predominantly as something else (such as a
> physicist) lacks the confidence; time; motivation; background;
> understanding; and skills to waste time learning another language. It
> would be better that the one language which an incidental programmer did
> not become completely scared of was Ada, but few incidental programmers
> would be taught such a good language to begin with, and few incidental
> programmers will try a second language.
I agree with the point of your response but not your choice of
words. Most of us here would not use the phrase "waste time to learn
another language [Ada]." :-)
Charlie
--
All the world's a stage, and most
of us are desperately unrehearsed. Sean O'Casey
Okay, to put it another way: a computer scientist would not waste time
learning another language. Someone who is never going to be good at
programming because it is a marginal issue for THAT person does not
have enough of a reason to try to learn another language. For example,
I use a pen and a keyboard, and I do not do much fancy writing, so I do
not have enough of a reason to learn to be a calligrapher. I do not
give many speeches, so I have not taken lessons on oration. A
politician might substantially benefit from lessons on oration, and
though he might find a better language in Ada, he would not really need
to be much of a software developer. I strained to listen to Tullio
Vardanega trying to speak when he was giving a presentation at a
conference. It would be good for him to take lessons on oration: but
if he had to choose between learning to be audible or defining
RAVENSCAR, which would you have preferred him to do?
> Unfortunately, the C99 standard has not yet been universally adopted.
> Very few compilers fully support it. Many support most of it,
> but I understand that Microsoft's compiler still supports only C90
> (with maybe a handful of C99-specific features).
SPEC2006 contains a benchmark which needs C99 complex values (or some
variant of that), so you had better support it, or you can't get a
score.
If you consider what humans do to be "running a program".
On the contrary, I substantiated it twice.
Not only did you not substantiate it, you didn't even instantiate it!
Now the thread is back on-topic!
>I already pointed out that important algorithms were first written in
>machine code in the 1950s
I know what you claimed; you have neither substantiated it nor shown its
relevance to the points in dispute. Which part of "all" don't you
understand? Why do you believe that "all" is present in sentences that
clearly lack it?
>That's irrelevant.
The dispute is about the development of algorithms, not about their
transcription. The question of whether Ada Lovelace actually developed the
Fibonacci algorithm is highly relevant to that question.
>On the contrary, I substantiated it twice.
No, you twice made totally irrelevant claims. Nothing that you have
written has any bearing on whether algorithms were developed in Algol 60,
and you haven't even substantiated the claim that important algorithms
were *DEVELOPED* (NOT TRANSLATED INTO) in machine code.
Sorry, can't be bothered. Especially since this thread was all about
algorithms being *implemented*, not *developed*.
Had you actually read what I wrote in my first post in this thread,
you would have comprehended that I said "first IMPLEMENTED in machine code"
(emphasis added).
And I twice substantiated my claim.
>Had you actually read what I wrote in my first post in this thread,
I did; it was both irrelevant and unsubstantiated.
>you would have comprehended that I said "first IMPLEMENTED in machine
>code"
See above.
>And I twice substantiated my claim.
No; you neither identified the algorithms to which you were referring nor
demonstrated that they had not previously been implemented on, e.g., dead
trees, mechanical calculators.
>Sorry, can't be bothered. Especially since this thread was all about
>algorithms being *implemented* not *developed*.
How do you develop an algorithm without implementing it, if only with a
pencil and a dead tree? Would you claim that, e.g., Euclid's Algorithm
was not developed until the advent of computers?
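Euclid's procedure, as it happens, survives transcription into any notation essentially unchanged; a Python rendering, purely for illustration:

```python
def gcd(a, b):
    """Euclid's algorithm (Elements, Book VII): repeatedly replace the
    pair (a, b) by (b, a mod b) until the remainder vanishes."""
    while b != 0:
        a, b = b, a % b
    return a
```

For example, gcd(1071, 462) walks through remainders 147, 21, 0 and returns 21.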
You're wrong on both counts.
| >you would have comprehended that I said "first IMPLEMENTED in machine
| >code"
|
| See above.
|
| >And I twice substantiated my claim.
|
| No; you neither identified the algorithms to which you were referring
What don't you understand about "General Interpretive Programme"?
That's the algorithm. It's the one I identified. Four times now.
| nor
| demonstrated that they had not previously been implemented on, e.g., dead
| trees, mechanical calculators.
That's irrelevant.
But if you want an example of that, try computer-produced music.
That is irrelevant. It is not what I claimed.
| 2. You cited a book describing multiple algorithms; you refused to
| identify specific algorithms about which you were making claims.
What don't you understand about the word "Programme"?
It is a computer program.
"General Interpretive Programme" is the name of the program,
and also, incidentally, the name of a book.
>You're wrong on both counts.
1. You have not addressed the question of whether Algol was used
to develop algorithms. Even had you *shown* that other languages
had been used earlier or more often, that would not have addressed
the issue in dispute.
2. You cited a book describing multiple algorithms; you refused to
identify specific algorithms about which you were making claims.
3. You refused to show that the unspecified algorithms about which
you made claims had not already been in use.
--
No it isn't.
But if you want original development, try
1. Compilers, typically first written in the 1950s in machine code.
2. Nuclear codes.
3. Computer-generated music
4. Random number generation.
| not about their
| transcription. The question of whether Ada Lovelace actually developed the
| Fibonacci algorithm is highly relevant to that question.
That's completely irrelevant.
I think that's a very solid case--there was no need for such a thing
before there was machine code so there was no incentive for anybody to
even look for the necessary algorithms, although some of the pieces may
have had prior development, and you can't use a high level language
until you have a working compiler for it (although I understand that in
some cases the "compiler" was a grad student).
> 2. Nuclear codes.
Were the algorithms they used developed to be used on computers or were
they computer implementations of the hand and card-machine algorithms
that were used during the development of the bomb? Los Alamos didn't
have a mechanical computer you know--"computer" at Los Alamos was a job
title--but they did have a room full of punch-card machines and a group
of teenagers doing amazing things with them.
> 3. Computer-generated music
Don't know anything about that.
> 4. Random number generation.
How were random numbers generated before computers? Did they not have
viable algorithms for the purpose?
I think the "Chem Rubber Bible" has a table of random numbers you can
use; just pick a spot to start. OTOH, that begs the question of how
they were generated in the first place. I have visions of a roomful of
people flipping coins.
Just take any bad-quality resistor, Zener diode, or a number
of other electronic components, amplify the noise, and use it
with a bit of hardware to produce an endless stream of random numbers.
No computers needed.
Dick Hendrickson
Well, you need at least some digital logic to convert it
into a number. There is a paper by Intel on their design for
a random number generator based on such noise sources.
-- glen
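On the algorithmic (as opposed to hardware) side, pseudo-random generation does date from the machine-code era: Lehmer's multiplicative congruential method is from around 1949. A sketch of that scheme, in Python purely for illustration; the modulus 2^31 - 1 and the MINSTD multiplier 48271 are parameter choices of mine, not from the thread:

```python
def lehmer(seed, n, m=2**31 - 1, a=48271):
    """Lehmer-style multiplicative congruential generator:
    x_{k+1} = (a * x_k) mod m.  Deterministic, so the same seed
    always reproduces the same stream."""
    out = []
    x = seed
    for _ in range(n):
        x = (a * x) % m
        out.append(x)
    return out
```

Being deterministic, it is reproducible on demand, which a noisy Zener diode is not; whether that makes it more or less "random" is the usual argument.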
> Excellent time to trim nonessential newsgroups
That would be all of them in this case. :-)
--
Richard Maine | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle | -- Mark Twain
>No it isn't.
When you deny that important numerical algorithms were developed in a
particular language, how is the dispute not about the development of
algorithms?
>But if you want original development, try
You are still dodging the point in dispute. Nobody claimed that everything
was developed in Algol, that most algorithms were developed in Algol, or
that Algol was the first language used to develop algorithms. Your
attempts to change the subject remind me more and more of your friend
David Frank.
For the record, algorithms in the following areas were
first written in machine code, from the mid-1940s,
and run from the early 1950s.
differential equations
solving linear equations
latent roots
matrix multiplication
calculating determinants
matrix transposition
matrix inversion
linear programming
multiple linear regression
statistical tabulations
input and output
floating-point arithmetic (software)
Most were part of the General Interpretive Programme,
developed at National Physical Laboratory by 1955.
In particular, 129 simultaneous equations solved on Pilot ACE in 1952.
Simultaneous equations and second-order differential equations were solved,
and are documented in proceedings of 1953 NPL symposium.
Aircraft flutter design calculations were done from 1952 with the
introduction of jet aeroplanes, and were made compulsory in 1954 following
investigations of the Comet crashes (the investigation itself required
extensive computer calculations).
Crystallography calculations from 1954.
Just to mention a few ...
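For a sense of what "solving linear equations" means as code, here is a bare Gaussian-elimination sketch with partial pivoting, in Python purely as an illustration (the Pilot ACE versions were of course hand-coded in machine code, and at far larger sizes than this toy):

```python
def solve(a, b):
    """Solve the linear system a*x = b by Gaussian elimination with
    partial pivoting.  a is a list of rows, b the right-hand side;
    inputs are copied, not modified."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    for k in range(n):
        # Pivot: bring the largest remaining entry in column k to row k.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate column k below the pivot.
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
            b[i] -= f * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x
```

Solving 129 simultaneous equations this way in 1952 meant on the order of a million arithmetic operations, all orchestrated in machine code.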
Sampling speed is another critical factor. If sampling
exceeds the bit flip rate, then it becomes less "random" ;-)
Warren
| Dismissing Algol as ephemeral ignores its influence and continuing usage
| as a base of pseudo-codes. Important numerical libraries were first
| implemented in Algol, and later translated to Fortran when Algol's
| momentum faltered.
Here's another example that I came across today:
Don Shell published his algorithm in machine code.
(A High-Speed Sorting Procedure, CACM, July 1959, p. 30-32.)
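Shell's procedure itself is short enough to restate; a Python sketch of the diminishing-increment idea, for illustration, using the original n/2, n/4, ... gap sequence:

```python
def shell_sort(a):
    """Shell's diminishing-increment sort (1959): insertion-sort the
    elements gap apart, halving the gap until it reaches 1."""
    a = a[:]                      # work on a copy
    gap = len(a) // 2
    while gap > 0:
        for i in range(gap, len(a)):
            v = a[i]
            j = i
            # Insertion sort within the gap-strided subsequence.
            while j >= gap and a[j - gap] > v:
                a[j] = a[j - gap]
                j -= gap
            a[j] = v
        gap //= 2
    return a
```

The final gap-1 pass is plain insertion sort; the earlier coarse passes just leave it very little work to do.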
Here's another example.
| I met someone today who described himself as "an ordinary FORTRAN
| programmer" who advocated C for the practical reason that libraries
| are designed for C. He claimed that small tasks are good for multicore
| and large tasks are good for GPUs.
I think you will find that libraries are also designed for Fortran.
>Here's another example.
No.
>Don Shell published his algorithm in machine code.
No. Probably CAGE. Possibly SAP. Either you didn't read the article or
you have no idea of what machine code is.
They certainly are. He uses code based on LAPACK. If you are aware of
Fortran bindings to GPUs which you would care to inform me of, then I
could mention them to him. Maybe he already knows about them, maybe not,
but I have already informed you of the reason he gave for advocating
C.