Therefore we'd like to know precisely what the reasons for this are.
Naively one would say, compile the Fortran version, append an underscore
(when appropriate) to routine names, and just link those libraries to
your object files.
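Concretely, the naive approach would amount to something like this
sketch (assuming a Fortran compiler that appends a single underscore
and whose INTEGER matches C's int; the exact conventions are
platform-dependent):

#include <stdlib.h>

/* LAPACK's DGESV, compiled from the Fortran source; every
   argument is passed by reference, Fortran-style */
extern void dgesv_(int *n, int *nrhs, double *a, int *lda,
                   int *ipiv, double *b, int *ldb, int *info);

/* solve A x = b, where a holds n*n doubles in column-major
   order and b holds the n-long right-hand side */
int solve(int n, double *a, double *b)
{
    int nrhs = 1, info;
    int *ipiv = malloc(n * sizeof *ipiv);
    if (ipiv == NULL) return -1;
    dgesv_(&n, &nrhs, a, &n, ipiv, b, &n, &info);
    free(ipiv);
    return info;    /* 0 means success */
}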
What are the gotchas, and what are the real stumbling blocks here?
Please spell it out in as much detail as you can muster. This situation
really needs to improve.
V.
--
email: lastname at cs utk edu
homepage: www cs utk edu tilde lastname
The reasons may be more political than technical. At my company, the IT department
is Fortran-phobic, so if someone in the research department hands them a
code with mixed C++ and Fortran 77, they will want the F77 part to be f2c'ed.
They believe that FORTRAN is unmaintainable because they don't know FORTRAN.
I can use Fortran to get work done, but only if I stay out of their sight.
I hate IT people.
I know of at least one other company in my industry (financial services)
with a policy that all production code must be in C++. Researchers can experiment
in Matlab etc., but eventually code must be migrated.
I find it odd that you think people ought to use Fortran LAPACK rather than
clapack, when it appears from http://www.fortranstatement.com/cgi-bin/signatures.pl
that you signed a "petition to retire Fortran", which contains lies such
as "FORTRAN2003 Adds No Functionality".
As a person who used to use CLAPACK until I found it was just as easy
to link to the FORTRAN libs, I would dare say there are those who use
CLAPACK because they don't realize there are other alternatives. Also,
CLAPACK comes with the header file needed to use either the FORTRAN or
CLAPACK libraries, whereas if you download the pre-built libraries from
http://www.netlib.org/lapack/archives/, the header files are not
present. This is a small issue, but could be a problem for people who
are less familiar with the library structure or the semantics for
calling a FORTRAN library from C/C++. Just my thoughts...
On a slightly related note, is there any effort (or even thought) being
put into the development of a "standardized" C++ object-oriented LAPACK
library? Something to wrap around the BLAS and LAPACK, perhaps?
Best,
Ryan
> I find it odd that you think people ought to use Fortran LAPACK rather
> than clapack, when it appears from
> http://www.fortranstatement.com/cgi-bin/signatures.pl that you signed a
> "petition to retire Fortran", which contains lies such as "FORTRAN2003
> Adds No Functionality".
Hee. I forgot about that. And I don't think it's relevant for my
question.
Lapack/Scalapack is 500,000 lines of tremendously useful legacy code,
and it's not going to go away anytime soon. At the moment my only
ambition is to make that code as cleanly usable from other languages as
possible. Using automatically translated versions is at the least a
support nightmare.
Thanks for your response,
> On a slightly related note, is there any effort (or even thought) being
> put into the development of a "standardized" C++ object-oriented LAPACK
> library? Something to wrap around the BLAS and LAPACK, perhaps?
The thought has occurred to many people. I mean, passing LDA into a C++
routine is the programming equivalent of turning a crank handle on the
space shuttle. Now, what to do about it....
I've started looking at Sidl/Babel to get a more native looking
interface in other languages than F77. Not sure to what extent that will
satisfy you.
>The authors of lapack/scalapack are starting to work on a new release of
>these packages. One of the things we want to address is the silliness
>that C users, instead of linking to the binaries of the Fortran version,
>use an automatically translated C version.
>
>Therefore we'd like to know precisely what the reasons for this are.
>
>Naively one would say, compile the Fortran version, append an underscore
>(when appropriate) to routine names, and just link those libraries to
>your object files.
>
>What are the gotchas, and what are the real stumbling blocks here?
>Please spell it out in as much detail as you can muster. This situation
>really needs to improve.
The issue that springs to mind is that Fortran and C store matrices
differently: Fortran does it column-wise (the first index varies fastest),
C does it row-wise (the last index varies fastest).
The fact that Fortran array indexing is 1-based (by default) and C array
indexing is 0-based shouldn't cause any problems when using the compiled
Fortran code from C.
Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Dan...@ifh.de
Currently looking for a job in the European Union
The reasons that folks translate are more at the level of urban legends
and about as well informed as your wanting to retire Fortran.
The urban legends may relate to the problems of doing mixed languages.
You might require run time libraries for both but only have one
compiler. There is surely some amount of pure bigotry which may
be passed along by PHBs (Pointy Haired Boss - see Dilbert for a full
explanation). Policy may dictate that full source must be available
even if it is unreadable and unmaintainable in practice. This allows
the source to be recompiled on new and yet unknown machines. The new
machine may actually be the same old machine but with a new version
of its system software.
Each of these is of the "enduring nonsense" variety: there is a small
amount of technical justification which might apply in some circumstance
but typically has no relevance to the situation in which it is now
applied.
> V.
> The issue that springs to mind is that Fortran and C store matrices
> differently: Fortran does it column-wise (the first index varies fastest),
> C does it row-wise (the last index varies fastest).
>
> The fact that Fortran array indexing is 1-based (by default) and C array
> indexing is 0-based shouldn't cause any problems when using the compiled
> Fortran code from C.
These sorts of issues make me wonder why clapack is considered so great:
it does not provide arrays in the C-natural order, and in some places
the 1-based indexing is exposed, such as in pivoting of the
factorization.
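For example, the pivot vector returned by the LU factorization is
1-based, so a C caller who wants to apply it to his own 0-based data
has to shift it. A sketch, assuming the usual underscored binding:

extern void dgetrf_(int *m, int *n, double *a, int *lda,
                    int *ipiv, int *info);

/* factor an n-by-n column-major matrix in place, then express
   the row interchanges in C's 0-based terms */
int factor(int n, double *a, int *ipiv)
{
    int info, k;
    dgetrf_(&n, &n, a, &n, ipiv, &info);
    for (k = 0; k < n; k++)
        ipiv[k] -= 1;    /* now: row k was swapped with row ipiv[k] */
    return info;
}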
When we write a more C-native interface we'll try to cover these issues.
It will mean some minimal bit of glue code that will hardly impact
performance.
Thanks for your response.
The following are just my opinions based on my experience.
One reason is that clapack's installation instructions are crystal
clear. I mean it is very easy and gives step-by-step instructions as to
what to do.
But on the other hand, lapack gave me a hell of a lot of trouble while
compiling. Compiling with the Intel compiler gave some IEEE errors
(can't recall them exactly, but if you are interested, they could be
found in the c.l.f archives). The lapack/lapack95 instructions are
cryptic and probably can be understood only by experts. But now I have
the Intel MKL library available for free, so I don't have to compile
them myself.
raju
Dan Pop wrote:
(snip regarding lapack and clapack)
> The issue that springs to mind is that Fortran and C store matrices
> differently: Fortran does it column-wise (the first index varies fastest),
> C does it row-wise (the last index varies fastest).
> The fact that Fortran array indexing is 1-based (by default) and C array
> indexing is 0-based shouldn't cause any problems when using the compiled
> Fortran code from C.
Some people who write numerical algorithms in Fortran exchange
the subscripts from the mathematical description, and others don't.
With cache memory and virtual memory one should always be sure that
the loops correspond to the memory layout in the appropriate language.
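For instance, with a column-major (Fortran-order) array accessed from
C, the first index belongs in the inner loop to get stride-1 access:

/* sum an m-by-n column-major array: the inner loop runs down a
   column, touching consecutive memory locations */
double sum_colmajor(int m, int n, const double *a, int lda)
{
    double s = 0.0;
    int i, j;
    for (j = 0; j < n; j++)
        for (i = 0; i < m; i++)
            s += a[i + j*lda];
    return s;    /* swapping the loops would stride by lda instead */
}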
-- glen
Hi Victor,
For the last few years I have been working on a
set of C++ classes to do linear algebra. I was trying to
achieve most of the functionality of Lapack/Blas but
in a very user-friendly fashion. The classes center
around a general-purpose vector class along with a dense rectangular
Matrix class and then some more complicated classes like a vector of
vectors and a vector of matrices.
All of them use operator overloading so you can write
A = B + C * D where A, B, C, D are matrices, and all of the operators
work for both real and complex matrices. All of the basic linear algebra
functions are supported,
i.e. solution of equations, decompositions, SVD, eigenvectors, etc.
The entire library is less than 5000 lines of code
and compiles into a static library of ~3 megabytes.
You don't have to compile it but you will need either a
Microsoft VC6 or Linux GCC compiler to use it.
I have a demo version available, and if anyone in the group wants to play
with it and has a compatible compiler, send me a line. Once you've used it
you will never go back to LAPACK.
pe...@cminet.net
regards...Bill
> For the last few years I have been working on a
> set of C++ classes to do linear algebra. I was trying to
> achieve most of the functionality of Lapack/Blas but
> in a very user-friendly fashion. The classes center
> around a general-purpose vector class along with a dense rectangular
> Matrix class and then some more complicated classes like a vector of
> vectors and a vector of matrices.
Bill,
please mail me a pointer to your software.
However, without wanting to be disrespectful of your software, I wonder
if this is such a wise approach. Lapack is a high quality code base that
supports several data formats: dense, symmetric, banded, tridiagonal.
There is use for that, so before you can state
> Once you've used it
> you will never go back to LAPACK
you have to duplicate a lot of that functionality.
Then, Lapack has (especially in the eigenvalue part) state of the art
algorithms. The people coming up with the best algorithms are
implementing them straight in Lapack. What eigenvalue algorithms do you
use? Condition estimation, iterative refinement, et cetera.
What I have often wished for was a C implementation that represented the
same quality and attention to detail as the FORTRAN version.
I have no idea what you could do about any of that as long as FORTRAN is the
highest priority language to support, the "mother language", if you will.
Obviously FORTRAN is still heavily used in some niches. I'm not questioning
this, nor passing any kind of value judgement on it, but I would definitely
look very hard at the question of what the priority *should* be, basing this
on what you are trying to accomplish with a new version of LAPACK. You
might get one answer if, for example, you were primarily interested in
supercomputing or are only planning a relatively minor incremental update,
and you might reach a different conclusion if you wanted use of the library
to be easy and convenient to incorporate for the widest possible audience.
--
Mike
"Victor Eijkhout" <see...@for.addy> wrote in message
news:1gokq2z.1qtze21fbcwaoN%see...@for.addy...
Hi Victor,
I need an e-mail address to send you the algebra package;
it's not on the Internet.
Do you want the Microsoft VC6 or the Linux GCC 2.7
version? The package is about 750 kilobytes. I wonder
how big Lapack/Blas is?
Sure, the Lapack people might have some more sophisticated algorithms, but
only a few problems need them. The advantages of working in C++ and having
all its tremendous libraries available to you are overwhelming.
The eigenvalue methods I use are the old standbys:
reduction to either tridiagonal or bidiagonal form, then QR iteration. For
real general matrices I use reduction to Hessenberg form, then QR
iteration for the eigenvalues and inverse iteration for the eigenvectors.
Very interesting response. Thanks.
I'm not sure that what we're doing right now is going to help you any.
However, Lapack has traditionally had a very clean, rigidly enforced
coding style, so I imagine that translating automatically will
always stay a possibility.
>Michael Hosea <mho...@mathworks.com> wrote:
(stuff)
Can you please stop crossposting to comp.lang.c, as this is off-topic here.
thanks
--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>
Victor Eijkhout wrote:
> The authors of lapack/scalapack are starting to work on a new release of
> these packages. One of the things we want to address is the silliness
> that C users, instead of linking to the binaries of the Fortran version,
> use an automatically translated C version.
>
> Therefore we'd like to know precisely what the reasons for this are.
I am developing a suite of C functions consisting of the
Symmetric Eigenvalue problem, the SVD and the Block Reflector.
I have used Lapack as a reference to write a Divide and Conquer
solver for the Symmetric Eigenvalue problem and am now doing
the D&C SVD. I am quite well informed.
Many people nowadays perform numerical computations in C
or C++ which are just as suitable as Fortran.
There are many engineering applications with substantial
number crunching which are written in C.
Most individuals and many smaller Engineering companies do
not want to be perpetually burdened with having to purchase
a license for a Fortran compiler for their workstations and
would rather spend a few man days converting the Lapack
modules which their application requires to C.
Have you ever seen those bills for a Fortran compiler for an
AIX machine? Have you ever had to fight SUN Fortran?
Those mentioned would love to have the main factorizations in
C code rather than being burdened with having to purchase a
license for a Fortran compiler if one is at all available,
and having to cross language link yet another huge library
of which only a fraction is used.
I estimate there must be a fair contingent of people who
have extracted the CLapack modules required by their
application, found out what auxiliary modules are required
and absorbed these into a C application.
The problem with Lapack is that you can't read the code,
that it is neither commented nor documented with a Technical
Manual and that the algorithms referred to are detailed in
obscure internal reports at certain universities.
This is fine if there is reaaaallly no maintenance required
ever but this is never the case.
And Fortran-to-C conversion makes the code even worse.
Although it is certainly possible to write code of almost the
same quality as good C in Fortran, the Lapack code uses 60's
style programming:
No dynamic memory allocation but the eternal WORK and IWORK
arrays which are partitioned up in intricate ways.
Does Lapack really need to cater to prehistorical compilers
which do not support dynamic memory allocation?
The functions generally are too large with the familiar gobs
of variables at the start and no attempt at restricting scope.
There are no Modules containing related functions but every
function is separate.
The single precision versions are useless in my opinion but
others may disagree, at the very least relegate them to
recognizable modules so users who do not want them can leave
them out.
There is more: insightful remarks on how to re-use arrays to
conserve memory, fear of copying arrays resulting in a millipede
matrix-multiply functionality, and so on.
"Lame comments" and huge headers abound but the intricacies of
the algorithms are not mentioned.
If you want to sink more time into the Fortran Lapack, you
might want to look into the temporary storage issue, allocating
the memory instead so the user is not burdened with having to
calculate the "work storage" required.
Next get rid of the single precision versions or at least
modularize them and generally aggregate the code into a few
dozen modules of related functions.
For the QR type methods, store the Givens transformations and
allow the user to pass a function to select Eigenvectors or
Singular Vectors for computation rather than having to estimate
their indices beforehand, which really sucks.
Just some pointers.
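To illustrate that last suggestion, such a selection callback might
look something like this (purely hypothetical names; no such
interface exists in Lapack today):

#include <stdio.h>

/* hypothetical: return nonzero if the eigenvector belonging to
   eigenvalue lambda (number i of n) should be computed */
typedef int (*select_fn)(int i, int n, double lambda, void *ctx);

static int negative_only(int i, int n, double lambda, void *ctx)
{
    (void)i; (void)n; (void)ctx;
    return lambda < 0.0;
}

int main(void)
{
    /* a solver would invoke the callback once per eigenvalue and
       compute only the selected vectors; here we just exercise it */
    select_fn want = negative_only;
    printf("%d %d\n", want(0, 2, -1.5, NULL), want(1, 2, 2.0, NULL));
    return 0;
}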
Lapack's use of a BLAS is great but Lapack is tailored to the
traditional BLAS rather than the other way around and the
functionality of these BLAS functions is not optimal.
Simple example: BLAS provides the DNRM2 and DSCAL functions, but
90-plus percent of the use of these is to unitize a vector, which should
be a single function. Currently this requires two function calls,
between which the all-important norm is truncated from 80 bits of
accuracy to 64 on a PC, for example. Likewise with
the computation of the Householder vector. Is this a big deal?
Well, there are quite a few of these instances, and in the end
it all adds up. The point is that you might want to look beyond
the traditional BLAS, which is just a historical accident.
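The kind of fused routine I mean would look something like this
sketch of the idea (not a proposed interface; a production version
would also need the overflow-safe scaling that DNRM2 does
internally):

#include <math.h>

/* normalize x in place and return its 2-norm; norm computation
   and scaling happen in one routine, so the norm never has to
   pass through a 64-bit return value between two BLAS calls */
double dunit(int n, double *x)
{
    long double s = 0.0L;    /* extended precision where available */
    long double nrm;
    int i;
    for (i = 0; i < n; i++)
        s += (long double)x[i] * (long double)x[i];
    nrm = sqrtl(s);
    if (nrm > 0.0L)
        for (i = 0; i < n; i++)
            x[i] = (double)((long double)x[i] / nrm);
    return (double)nrm;
}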
Lapack is quite accurate and fast although the accuracy of some
of the order two factorizations and some of the orthogonal
transformations can still be improved here and there.
> Naively one would say, compile the Fortran version, append an underscore
> (when appropriate) to routine names, and just link those libraries to
> your object files.
>
> What are the gotchas, and what are the real stumbling blocks here?
> Please spell it out in as much detail as you can muster. This situation
> really needs to improve.
I have spelled it out; this is not intended as an attack but
as constructive criticism.
Lapack is free and has been useful to me as a reference and
for accuracy testing my own code.
I realize that a lot of people have spent considerable effort
in its design and I am grateful for that. And it works.
Thanks,
> V.
Jentje Goslinga
my home site is:
http://www.numerical-algorithms.com/
> Currently this requires two function calls,
> between which the all-important norm is truncated from 80 bits of
> accuracy to 64 on a PC, for example.
Wow. I can't believe anyone would post this to sci.math.num-analysis!
I thought that non-explicit use of extended precision variables was
universally considered to be a dumb idea. If you want explicit use
of extended precision with LAPACK, you need extra functions.
BTW, modern compilers on PCs generally default to using SSE2
instructions on modern PC cpus, hence no more extended precision
unless you ask for it. This became true soon after the Pentium 4 was
released, which was a few years ago.
-- greg
"Mark McIntyre" <markmc...@spamcop.net> wrote in message
news:bl9kr0dm6la8r0uus...@4ax.com...
Fortran 95 still has many advantages over C and C++:
(1) true multidimensional arrays, with a wide range of intrinsic
functions like SUM and MAXVAL and the ability to use array sections
like a(1:10,:,2:10:2).
(2) the ability to pass one- and multi-dimensional arrays to functions
in a SIMPLE manner and to have functions returning arrays.
(3) arrays that can start at 0, 1, or any other user-specified integer
(4) intrinsic functions like SIN that are generic to single, double,
and often quadruple precision, for both scalar and array arguments
(5) user-defined PURE and ELEMENTAL functions, to facilitate
parallelization and reduce the need for separate functions handling
scalar and array arguments
(6) better handling of complex numbers
(7) a form of DO loop that prevents the user from inadvertently
changing the loop variable. WHILE loops and a DO-ENDDO loop with an
EXIT can be used when more flexibility is needed.
(8) the ability to EXIT from nested loops
(9) superior facilities for reading and writing formatted tables of numbers
(10) a cleaner, less error-prone syntax. It is natural for the
end-of-line to terminate a statement by default, and to allow
continuation lines when needed. A misplaced semi-colon won't
drastically change the meaning of a Fortran code.
>There are many engineering applications with substantial
>number crunching which are written in C.
In my company substantial number crunching is done in Excel, because
some people refuse to learn anything else. Many C programmers seem to
have similar blinders.
>Most individuals and many smaller Engineering companies do
>not want to be perpetually burdened with having to purchase
>a license for a Fortran compiler for their workstations and
>would rather spend a few man days converting the Lapack
>modules which their application requires to C.
>Have you ever seen those bills for a Fortran compiler for an
>AIX machine? Have you ever had to fight SUN Fortran?
Much scientific computation that used to be done on commercial UNIX
platforms is now being done on Linux, where there are many commercial
Fortran 95 compilers. The Lahey/Fujitsu Linux compiler costs as little
as $250, and the Intel Fortran compiler is free for non-commercial use.
G95 at http://www.g95.org is 99.9% of a free, open-source Fortran 95
compiler for Linux, Windows, Mac OS X, and other platforms. Gfortran is
getting there.
>The problem with Lapack is that you can't read the code,
>that it is neither commented nor documented with a Technical
>Manual and that the algorithms referred to are detailed in
>obscure internal reports at certain universities.
Lapack documentation is at http://www.netlib.org/lapack/lug/index.html
and is also available as a printed book.
>the Lapack code uses 60's style programming:
>No dynamic memory allocation but the eternal WORK and IWORK
>arrays which are partitioned up in intricate ways.
>Does Lapack really need to cater to prehistorical compilers
>which do not support dynamic memory allocation?
Fortran 90/95 has dynamic memory allocation.
>The functions generally are too large with the familiar gobs
>of variables at the start and no attempt at resticting scope.
Fortran 90/95 functions and subroutines can have OPTIONAL arguments, so
that the user may only need to specify a few to get the default
behavior. The LAPACK95 interface to the LAPACK library already exists,
and using it, a subroutine call such as
call SGELSD(M, N, NRHS, A, LDA, B, LDB, S, RCOND, RANK, WORK, LWORK,
     IWORK, INFO)
can be replaced with just
CALL LA_GELSD (A, B)
>There are no Modules containing related functions but every function
>is separate.
Fortran 90/95 has something called a "MODULE" that does exactly this.
To get compile-time checking of procedure interfaces, the
"pre-historic" #include statements of C/C++ are not needed.
>The single precision versions are useless in my opinion but
>others may disagree, at the very least relegate them to
>recognizeable modules so users who do not want them can leave
>them out.
The single precision codes of LAPACK start with the letter S, as
described at http://www.netlib.org/lapack/individualroutines.html , and
are thus easy to recognize.
Ideally, each of the BLAS calls could be replaced by calls to Fortran
90/95 intrinsic functions or array operations, improving readability.
You have indirectly made a good case for writing the next version of
LAPACK in Fortran 95.
In general, one should try to gain a current understanding of a
subject, in this case Fortran, before making recommendations about it
to others. There are Fortran 90/95 tutorials at
http://www.dmoz.org/Computers/Programming/Languages/Fortran/Tutorials/Fortran_90_and_95/
> The single precision codes of LAPACK start with the letter S, as
> described at http://www.netlib.org/lapack/individualroutines.html , and
> are thus easy to recognize.
What I would prefer regarding the single precision versions would
be a single unified version, with numeric inquiry functions, where
necessary, accounting for the differences. The kind type parameter
in the functions would be inherited from the host module. Another
module could rename the procedures to their single- and double-
precision names and create also generic names. The utility of the
resulting code would be that of C++ templates in that all the code
would be maintained in one place and could be used generically. The
only comparative awkwardness of this approach is the intermediate
module(s) that do the renaming, but I don't think they would be
changed a whole lot in the maintenance phase. I have previously
posted several examples of this technique, but it doesn't seem to
have caught on.
> You have indirectly made a good case for writing the next version of
> LAPACK in Fortran 95.
My reading of the previous post was that the intent was more direct.
But maybe I'm just reading others' posts more optimistically.
--
write(*,*) transfer((/17.392111325966148d0,6.5794487871554595D-85, &
6.0134700243160014d-154/),(/'x'/)); end
I don't think that is the issue, since the clapack translation (as I
understand it) is not an idiomatic translation into C (actually, that
would be much too confusing, if you think about it). It's a perfectly
literal translation, which preserves Fortran-style array indexing by
brutal mangling of C notation.
Allin Cottrell
Wake Forest University
beli...@aol.com wrote:
>>Many people nowadays perform numerical computations in C
>>or C++ which are just as suitable as Fortran.
>
>
> Fortran 95 still has many advantages over C and C++:
> (1) true multidimensional arrays, with a wide range of intrinsic
> functions like SUM and MAXVAL and the ability to use array sections
> like a(1:10,:,2:10:2).
That is great; now tell me why the parts of Lapack I have looked at,
at least the functions for eigenvalues and SVD, do not use
any of these.
> (2) the ability to pass one- and multi-dimensional arrays to functions
> in a SIMPLE manner and to have functions returning arrays.
> (3) arrays that can start at 0, 1, or any other user-specifed integer
> (4) intrinsic functions like SIN that are generic to single, double,
> and often quadruple precision, for both scalar and array arguments
> (5) user-defined PURE and ELEMENTAL functions, to facilitate
> parallelization and reduce the need for separate functions handling
> scalar and array arguments
> (6) better handling of complex numbers
That is great but I rarely need those and can write the required
operations in an hour.
> (7) a form of DO loop that prevents the user from inadvertently
> changing the loop variable. WHILE loops and a DO-ENDDO loop with an
> EXIT can be used when more flexibility is needed.
Come on, I am talking to programmers.
> (8) the ability to EXIT from nested loops
> (9) superior facilities for reading and writing formatted tables of numbers
> (10) a cleaner, less error-prone syntax. It is natural for the
> end-of-line to terminate a statement by default, and to allow
> continuation lines when needed. A misplaced semi-colon won't
> drastically change the meaning of a Fortran code.
Sigh.
>>There are many engineering applications with substantial
>>number crunching which are written in C.
>
>
> In my company substantial number crunching is done in Excel, because
> some people refuse to learn anything else. Many C programmers seem to
> have similar blinders.
I never called myself exclusively a C programmer and may have
written a lot more Fortran code than you have.
>>Most individuals and many smaller Engineering companies do
>>not want to be perpetually burdened with having to purchase
>>a license for a Fortran compiler for their workstations and
>>would rather spend a few man days converting the Lapack
>>modules which their application requires to C.
>>Have you ever seen those bills for a Fortran compiler for an
>>AIX machine? Have you ever had to fight SUN Fortran?
>
>
> Much scientific computation that used to be done on commercial UNIX
> platforms is now being done on Linux, where there are many commercial
> Fortran 95 compilers. The Lahey/Fujitsu Linux compiler costs as little
> as $250, and the Intel Fortran compiler is free for non-commercial use.
> G95 at http://www.g95.org is 99.9% of a free, open-source Fortran 95
> compiler for Linux, Windows, Mac OS X, and other platforms. Gfortran is
> getting there.
It all depends on the application. Nowadays "systems" usually
have a database component and sometimes internet connectivity
and much more and it is natural for them to be composed of
modules in various different languages. I have seen many
engineering applications in the C and C++ languages.
I have written a large system with a Windows user interface,
database and internet connectivity in C++.
The beauty of C++ is that the notion that the code will have
to connect to modules written in other languages has been an
intrinsic part of the design of the language from the beginning.
>>The problem with Lapack is that you can't read the code,
>>that it is neither commented nor documented with a Technical
>>Manual and that the algorithms referred to are detailed in
>>obscure internal reports at certain universities.
>
>
> Lapack documentation is at http://www.netlib.org/lapack/lug/index.html
> , also as a printed book.
I haven't looked there but assume these are user manuals
for people like you. I am talking about design here.
>>the Lapack code uses 60's style programming:
>>No dynamic memory allocation but the eternal WORK and IWORK
>>arrays which are partitioned up in intricate ways.
>>Does Lapack really need to cater to prehistorical compilers
>>which do not support dynamic memory allocation?
>
>
> Fortran 90/95 has dynamic memory allocation.
Precisely, that is what my post implies.
>>The functions generally are too large with the familiar gobs
>>of variables at the start and no attempt at resticting scope.
>
>
> Fortran 90/95 functions and subroutines can have OPTIONAL arguments, so
> that the user may only need to specify a few to get the default
> behavior. The LAPACK95 interface to the LAPACK library already exists,
> and using it, a subroutine call such as
>
> call SGELSD(M, N, NRHS, A, LDA, B, LDB, S, RCOND, RANK, WORK,LWORK,
> IWORK, INFO)
>
> can be replaced with just
>
> CALL LA_GELSD (A, B)
>
>
>>There are no Modules containing related functions but every function
>>is separate.
>
> Fortran 90/95 has something called a "MODULE" that does exactly this.
> To get compile-time checking of procedure interfaces, the
> "pre-historic" #include statements of C/C++ are not needed.
>
>
>>The single precision versions are useless in my opinion but
>>others may disagree, at the very least relegate them to
>>recognizeable modules so users who do not want them can leave
>>them out.
>
>
> The single precision codes of LAPACK start with the letter S, as
> described at http://www.netlib.org/lapack/individualroutines.html , and
> are thus easy to recognize.
Yes, I know that. There is rarely an excuse for duplicating code,
for different precisions for example; this is really a
design problem. Beware... before you make that copy.
> Ideally, each of the BLAS calls could be replaced by calls to Fortran
> 90/95 intrinsic functions or array operations, improving readability.
>
> You have indirectly made a good case for writing the next version of
> LAPACK in Fortran 95.
I am making lots of useful and constructive comments.
> In general, one should try to gain a current understanding of a
> subject, in this case Fortran, before making recommendations about it
> to others.
Exactly, that's why I sometimes take the liberty to post.
I have programmed in Fortran since 1980, in C since 1985 and
in C++ and Assembler since 1995.
> There are Fortran 90/95 tutorials at
> http://www.dmoz.org/Computers/Programming/Languages/Fortran/Tutorials/Fortran_90_and_95/
Enough!
The purpose of my post was not to discredit Fortran, and
I never did; this is not another language war.
My article makes some genuinely useful and well-researched
suggestions for improvement which are based on a lot of
thinking. You may think otherwise.
Jentje Goslinga
Because it is written in Fortran 77, which did not have these features. A
new LAPACK should be in Fortran 95 IMO.
> It all depends on the application. Nowadays "systems" usually
> have a database component and sometimes internet connectivity
> and much more and it is natural for them to be composed of
> modules in various different languages. I have seen many
> engineering applications in the C and C++ languages.
> I have written a large system with a Windows user interface,
> database and internet connectivity in C++.
> The beauty of C++ is that the notion that the code will have
> to connect to modules written in other languages has been an
> intrinsic part of the design of the language from the beginning.
I can't find the section in Bjarne Stroustrup's book where he discusses
interoperability with other languages. Would you care to enlighten me?
cheers,
Rich
--
Dr Richard H D Townsend
Bartol Research Institute
University of Delaware
[ Delete VOID for valid email address ]
Rich Townsend wrote:
> Jentje Goslinga wrote:
>
>> It all depends on the application. Nowadays "systems" usually
>> have a database component and sometimes internet connectivity
>> and much more and it is natural for them to be composed of
>> modules in various different languages. I have seen many
>> engineering applications in the C and C++ languages.
>> I have written a large system with a Windows user interface,
>> database and internet connectivity in C++.
>> The beauty of C++ is that the notion that the code will have
>> to connect to modules written in other languages has been an
>> intrinsic part of the design of the language from the beginning.
>
>
> I can't find the section in Bjarne Stroustrup's book where he discusses
> interoperability with other languages. Would you care to enlighten me?
Yes, the "extern C" syntax is discused in Chapter 9
Section 9.2.4 Linkage to Non-C++ code,
The C++ programming Language, third edition page 205.
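For reference, the mechanism boils down to the familiar shared-header
idiom:

/* mylib.h -- one header usable from both C and C++ */
#ifdef __cplusplus
extern "C" {
#endif

/* these declarations get C linkage: no C++ name mangling, so C
   and C++ callers see the same binary symbol */
double dot(const double *x, const double *y, int n);

#ifdef __cplusplus
}
#endif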
There is also a useful discussion about the design of large
systems in Chapter 23, Development and Design, worth reading.
This is a great book.
> cheers,
Thanks for saying something positive for a change.
Jentje
> Rich
That's interop with C (which is almost redundant, C++ originally
being an extension of C, it's hardly impressive that it can interop
with C). I still don't see the part where it's originally intended
to interop with, say, Ada, Pascal, Lisp, Haskell, and yes: Fortran.
--
J. Giles
"I conclude that there are two ways of constructing a software
design: One way is to make it so simple that there are obviously
no deficiencies and the other way is to make it so complicated
that there are no obvious deficiencies." -- C. A. R. Hoare
Duh!, dummy Andy Capp from U of Delaware, with C++, who needs Fortran or
any other language? There are several Fortran compilers (sorry, you have
to buy them, a known problem for cheapskates like you) that work quite
harmoniously with C/C++ but you're not ready for them, you being way too
dimwitted.
--
You're Welcome,
Gerry T.
______
"Ah, Klinger, my constant reminder that Darwin was right!." -- Maj Charles
Winchester III, the 4077th M*A*S*H
Page 524 (r.7.4.) in my second edition (I assume we are talking about
"The C++ Programming Language"). But in all seriousness, you can't claim
that 'extern' is in any way comparable with the sort of language
interoperability we see nowadays. The meaning of the string literal following
the 'extern' is implementation-dependent, therefore this mechanism for
interoperability is wholly non-portable. Fortran has had similar
non-portable language interoperability *long* before C++ even existed --
there are few mainstream f77 compilers that aren't interoperable with C
on a platform-dependent basis.
Therefore, your claim about C++ being designed ab initio with
interoperability in mind is a little misleading; an equally meaningless
claim could be made of Fortran, of Pascal, of Forth, of Basic etc etc.
>
> There is also a useful discussion about the design of large
> systems in Chapter 23, Development and Design, worth reading.
>
> This a great book.
Indeed; but I find the style a little dry. Metcalf & Reid is closer to
my preferred pedagogical style -- not as flippant as Numerical Recipes,
not as dense as Knuth.
>
>> cheers,
>
>
> Thanks for saying something positive for a change.
Out of curiosity, me in particular, or just people in general?
> No dynamic memory allocation but the eternal WORK and IWORK
> arrays which are partitioned up in intricate ways.
> Does Lapack really need to cater to prehistorical compilers
> which do not support dynamic memory allocation?
Ok, it's clear where this use of temporaries comes from. However, it's
not clear that dynamic allocation is the best solution. If your routine
is called once, with a large dataset, then it could do its own
allocation. However, if the routine is called a large number of times
with the same small problem size, then you want to do the allocation
outside it.
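One common compromise is to let the caller pass workspace and
allocate only when he doesn't; in C it would look something like this
(a hypothetical routine, just to show the pattern):

#include <stdlib.h>

/* needs n doubles of scratch: a caller in a tight loop passes a
   preallocated buffer, a casual caller passes NULL and the
   routine allocates (and frees) its own */
int compute(int n, double *x, double *work)
{
    double *w = work ? work : malloc(n * sizeof *w);
    if (w == NULL) return -1;
    /* ... use w as scratch space while working on x ... */
    if (work == NULL) free(w);    /* free only what we allocated */
    return 0;
}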
We probably have to take a step back to the application level, and see
which routines can be part of a computationally intensive inner loop.
Your comments very much appreciated.
> Because it is written in Fortran 77, which did not have these features. A
> new LAPACK should be in Fortran 95 IMO.
I'm sorry, that's too much code to rewrite.
James Giles wrote:
> Jentje Goslinga wrote:
>
>>Rich Townsend wrote:
>>
>>>Jentje Goslinga wrote:
>>
> ...
>
>>>>The beauty of C++ is that the notion that the code will have
>>>>to connect to modules written in other languages has been an
>>>>intrinsic part of the design of the language from the beginning.
>>>
>>>
>>>I can't find the section in Bjarne Stroustrup's book where he discusses
>>>interoperability with other languages. Would you care to enlighten me?
>>
>>Yes, the "extern C" syntax is discused in Chapter 9
>>Section 9.2.4 Linkage to Non-C++ code,
>>The C++ programming Language, third edition page 205.
>
>
>
> That's interop with C (which is almost redundant, C++ originally
> being an extension of C, it's hardly impressive that it can interop
> with C).
It is not so trivial with overloading and name decoration.
> I still don't see the part where it's originally intended
> to interop with, say, Ada, Pascal, Lisp, Haskell, and yes: Fortran.
You are right, that part doesn't exist; I had forgotten that,
but what's the difference, isn't C almost the same as Fortran?
Interoperability between languages is a gray area where one
needs to consider the following:
1 Ability to suppress C++ name decoration, knowledge of other
conventions such as underscores appended or prepended to name
2 Knowledge of linker conventions such as upper case conversion
and maximum name length
3 Ability for the programmer to specify parameter passing by
address and by value which C and C++ have
4 Ability for the user to use parameter passing from left to
right or vice versa
5 Conventions on stack based parameter passing notably padding
and stack alignment
6 Decide on who resets the stack pointer, caller or callee.
I agree there may be differences between languages on
items 4-6, although they are usually not irreconcilable since
the operating systems supporting these languages must use
cross language linking too.
I believe none of these are even specified by the C or C++
standards either, although parameter passing from the right
to left is said to be necessary to accommodate a variable
number of parameters.
But you are right.
Jentje
You have said it.
>>
>> There is also a useful discussion about the design of large
>> systems in Chapter 23, Development and Design, worth reading.
>>
>> This a great book.
>
>
> Indeed; but I find the style a little dry. Metcalf & Reid is closer to
> my preferred pedagogical style -- not as flippant as Numerical Recipes,
> not as dense as Knuth.
>
>>
>>> cheers,
>>
>>
>>
>> Thanks for saying something positive for a change.
>
>
> Out of curiousity, me in particular, or just people in general?
No, you, thanks for saying something positive. This newsgroup
stuff can be quite draining after a while especially if one's
expertise is continually questioned.
I would not post if I didn't think I had something to contribute
based on experience.
Jentje
> cheers,
>
> Rich
>
Well, not really. Historically it was not at all infrequent to find
that many languages could not interoperate easily with C, even
though they could easily interoperate with each other. C often
tended to be different, sometimes for no discernible reason: just
to be different, I guess.
Also you mentioned "name decoration" or "name mangling" as
it's more often called. (It is, of course, the spawn of the devil.)
That's one of the cases in which many other languages could
interoperate, and no "mangling" was needed, while C
implementations, again often for no real reason, insisted
that non-C names *had* to be mangled. I never saw name
mangling (and I did site support of production linkers) before
C entered general service.
Note: I understand the reason for name mangling, but that doesn't
mean I have to like it. It's a lazy implementation choice that shifts
some of the burden elsewhere, including the users, when it should
be managed more cleanly by the compilers and linkers.
Victor Eijkhout wrote:
> Jentje Goslinga <gosl...@telus.net> wrote:
>
>
>>No dynamic memory allocation but the eternal WORK and IWORK
>>arrays which are partitioned up in intricate ways.
>>Does Lapack really need to cater to prehistorical compilers
>>which do not support dynamic memory allocation?
>
>
> Ok, it's clear where this use of temporaries comes from. However, it's
> not clear that dynamic allocation is the best solution. If your routine
> is called once, with a large dataset, then it could do its own
> allocation. However, if the routine is called a large number of times
> with the same small problem size, then you want to do the allocation
> outside it.
That is very true but in many cases it may be moved a level
below the user.
> We probably have to take a step back to the application level, and see
> which routines can be part of a computationally intensive inner loop.
>
> Your comments very much appreciated.
Thank you for saying that; I am trying to be constructive
and freely admit to having benefitted from the availability
of Lapack during the testing of my own code.
I also agree with you that it would be desirable to integrate
the single/double precision functions, if possible, preferably
prior to the 16-byte reals appearing on the scene.
> V.
Jentje
Bravo, you've skillfully engaged the pissartists of c.l.f. at their
unrelenting, vacuous, ill-founded, and unsustainable pseudosuperiority over
all languages nonfortran.
Word to the wise, take the dog for a romp, the kids to Baskin Robbins,
surprise your wife with fresh-cut flowers, and relegate these superannuated
fortran fogies and their antiquated prejudices to cultural and technical
obscurity by ignoring F66/90/05/200+ and subsequent inflictions they impose
on a dwindling, undiscerning, and increasingly ancient audience whose
self-appointed arrogance entitles them to implement their fuckups on
posterity. No thanks.
Fuck them all, the long, the short, and the tall,
Gerry TO.
> Wow. I can't believe anyone would post this to sci.math.num-analysis!
> I thought that non-explicit use of extended precision variables was
> universally considered to be a dumb idea. If you want explicit use
> of extended precision with LAPACK, you need extra functions.
> BTW, modern compilers on PCs generally default to using SSE2
> instructions on modern PC cpus, hence no more extended precision
> unless you ask for it. This became true soon after the Pentium 4 was
> released, which was a few years ago.
As far as I know, about half the time the extra precision helps,
and about half the time it hurts. (It hurts when things
round the wrong way, or when quantities that should be equal
are not equal.)
-- glen
If you do write a shell in Fortran 95 it would be easy to have an
optional WORK parameter that allows the user to do his/her own
allocation when needed, and the shell will allocate its own otherwise
before calling the inner code (be it legacy F77 or newly rewritten F95
or whatever).
Cheers
Salvatore
<snip>
>I also agree with you that it would be desirable to integrate
>the single/double precision functions, if possible, preferably
>prior to the 16-byte reals appearing on the scene.
16-byte reals have already appeared on the scene. Several Fortran 95
compilers will compile the following code, which declares 4, 8, and
16-byte real variables. (It would be better to use the
SELECTED_REAL_KIND intrinsic function.)
real (kind=4) :: xs
real (kind=8) :: xd
real (kind=16) :: xq
REAL and DOUBLE PRECISION variables are usually stored in 4 and 8
bytes, but I believe the standard does not require this. The Numerical
Recipes in Fortran 90 library does not declare variables as REAL or
DOUBLE PRECISION but uses the KIND specifier instead. A new LAPACK
probably ought to do the same, which is another reason to write it in
Fortran 95.
> 16-byte reals have already appeared on the scene. Several Fortran 95
> compilers will compile the following code, which declares 4, 8, and
> 16-byte real variables. (It would be better to use the
> SELECTED_REAL_KIND intrinsic function.)
At least since the 360/85 in about 1968. Also, CDC machines
with 60-bit words had 120-bit double precision.
> real (kind=4) :: xs
> real (kind=8) :: xd
> real (kind=16) :: xq
> REAL and DOUBLE PRECISION variables are usually stored in 4 and 8
> bytes, but I believe the standard does not require this. The Numerical
> Recipes in Fortran 90 library does not declare variables as REAL or
> DOUBLE PRECISION but uses the KIND specifier instead. A new LAPACK
> probably ought to do the same, which is another reason to write it in
> Fortran 95.
Well, the standard tends to require that default REAL have
the same size as default INTEGER, and that DOUBLE PRECISION
have twice that size. (I don't believe it requires all the
bits to actually be used, but the storage used must agree.)
-- glen
> If you do write a shell in Fortran 95 it would be easy to have an
> optional WORK parameter that allows the user to do his/her own
> allocation when needed, and the shell will allocate its own otherwise
> before calling the inner code (be it legacy F77 or newly rewritten F95
> or whatever).
That's a cool idea. Thanks for the suggestion.
(Hope your trip home was uneventful?)
I am very interested in learning what problems you
are having with Sun Fortran. Please let me know what
they are. Please also let me know which release of
Sun Fortran you are using.
Yours truly,
Robert Corbett
In one of the least squares routines, the work array dimension is
very difficult to figure out. As other people suggested it would be
very nice to have some wrapper routines available.
We use CVF and Intel Fortran on both Windows and Linux, and one of the
most annoying things related to the lapack routines is the string
arguments. Because we use both C++ and Fortran and there is no
standard for string implementations, I have to write wrapper routines
to deal with that, like passing integers instead.
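For what it's worth, the usual (but compiler-dependent!) convention
is that each CHARACTER argument carries a hidden length argument
appended after the visible ones, so a C prototype for, say, DPOTRF
comes out as something like:

/* the trailing int is the hidden length of the CHARACTER argument
   UPLO; its position and even its type (int vs. long) differ
   between Fortran compilers */
extern void dpotrf_(const char *uplo, int *n, double *a,
                    int *lda, int *info, int uplo_len);

int cholesky(int n, double *a, int lda)
{
    int info;
    dpotrf_("U", &n, a, &lda, &info, 1);    /* 1 = strlen("U") */
    return info;
}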
Another complaint I have is that a few routines are not thread safe
because some variables have to be computed and saved. It took me quite
some time to search for the SAVE statements and eventually wrote a
routine to call these routines before threads are created. It would
be nice for LAPACK to provide such routine.
Thanks for your response.
> It took me quite
> some time to search for the SAVE statements and eventually wrote a
> routine to call these routines before threads are created. It would
> be nice for LAPACK to provide such routine.
Yeah, SAVE statements are on the "must get rid of" list.
> <hc...@thermawave.com> wrote:
>>It took me quite
>>some time to search for the SAVE statements and eventually wrote a
>>routine to call these routines before threads are created. It would
>>be nice for LAPACK to provide such routine.
> Yeah, SAVE statements are on the "must get rid of" list.
As far as I know, compilers are allowed to SAVE data,
even without a SAVE statement. Traditionally, Fortran
used static storage (though with some complications due to
overlays) and wouldn't be appropriate for multithreaded code.
But yes, SAVE should only be used for data that really needs
to be static, a favorite example being the seed for a
random number generator.
-- glen
Well, if you really ask for it: I have used Sun C and
Fortran from 1985-1990.
The transcendentals were only ten decimals accurate and
reported bugs were not fixed but maybe that was not in
Fortran. The whole experience with the hardware and the
software was traumatic throughout.
In 2000 I had the misfortune to have to work on Sun machines
again, this time in C.
The simulation model I was using gave quite different
results on different machines with different versions
of the libraries. I simply don't trust Sun floating point.
Sun likes to call their boxes "workstations" but I
consider an Intel PC a much superior workstation for
floating point computations, and a lot cheaper.
Jentje Goslinga
> The only sensible way would be to have two functions: one which does the
> allocation, another which does not. The problem is, that sometimes one
> can flexibly choose a work space area, like in (D,S)SYEV routine. Small
> workspace conserves memory but large workspace gives better convergence.
> Which options should the self-allocating version of (D,S)SYEV choose?
Modern Fortran allows optional arguments, so the "sensible" way in
that language would be to specify the workspace with an optional
array. If specified during the call, then it is used, otherwise the
subroutine allocates its own workspace. Of course, this puts
additional burdens on other languages in cross-language projects to
support the fortran calling conventions. BTW, fortran also allows
the call to use a generic name (e.g. call syev() in the above
example), which is resolved to the specific precision based on the
data types of its arguments, in order to eliminate the (D,S)
portability problems when moving code between machines.
$.02 -Ron Shepard
Ron Shepard wrote:
> In article <slrncroaij.ti...@theta1.cft.edu.pl>,
> Roman Werpachowski <"r o m a nNOSPAM"@theta1.cft.edu.pl> wrote:
>
>
>>The only sensible way would be to have two functions: one which does the
>>allocation, another which does not. The problem is, that sometimes one
>>can flexibly choose a work space area, like in (D,S)SYEV routine. Small
>>workspace conserves memory but large workspace gives better convergence.
>>Which options should the self-allocating version of (D,S)SYEV choose?
>
>
> Modern Fortran
?
> allows optional arguments, so the "sensible" way in
> that language would be to specify the workspace with an optional
> array. If specified during the call, then it is used, otherwise the
> subroutine allocates its own workspace. Of course, this puts
> additional burdens on other languages in cross-language projects to
> support the fortran calling conventions.
They already have to deal with the calling conventions, many
of which are not even defined by the Fortran language.
> BTW, fortran also allows
> the call to use a generic name (e.g. call syev() in the above
> example), which is resolved to the specific precision based on the
> data types of its arguments, in order to eliminate the (D,S)
> portability problems when moving code between machines.
Hmmm.
> $.02 -Ron Shepard
I am sorry, but this advice is not worth two cents in my opinion.
Functions should be self-contained and allocate local storage.
If they really perform so little actual work that this is a
concern, then maybe the program is not structured optimally.
Variable arguments are another dubious language feature which
made its way into the C language because of the requirement of
the formatted print functions, but it causes problems.
Esoteric language "features" like variable arguments and "multiple
returns" should be avoided if at all possible.
Jentje Goslinga
Gerry Thomas wrote:
> "Jentje Goslinga" <gosl...@telus.net> wrote in message
> news:41BA59EC...@telus.net...
>
> :
> :
>
>>Enough!
>>
>>The purpose of my post was not to discredit Fortran and
>>I never did, this is not another language war.
>>My article makes some genuinely useful and well researched
>>suggestions for improvement which are based on a lot of
>>thinking. You may think otherwise.
>>
>>
>>Jentje Goslinga
>>
>
>
> Bravo, you've skillfully engaged the pissartists of c.l.f. at their
> unrelenting, vacuous, ill-founded, and unsustainable pseudosuperiority over
> all languages nonfortran.
I don't think people are motivated by superiority in
this case. Humans are generally motivated by fears.
There are those "dusty decks" which are still used after
their original authors have retired; heaven knows what's
in those; it is usually said in political wording thus:
"There is a Large Body of Mathematical and Technical
Software which cannot easily be transcribed".
This "large body of software" is also written using a
programming style which defies attempts at maintenance
and reconstruction, which has nothing to do with Fortran.
No technical manuals, lame comments, gigantic "subroutine
headers", no concept of variable scope or of possible
recursion (SAVE ALL).
No understanding of how computers function, which is
apparent from clever attempts at optimization such as:
I1 = I+1
Let's say the compiler already has "I" in a register
and now has to allocate a stack variable (in cache memory)
to hold the extra variable, and move this integer from
the stack, all to save a single cycle a few times.
In 1990 I worked on a new version of a Pipeline Network
Simulation model which had been written in Fortran some
15 years earlier.
It took a team of highly paid consultants consisting of
myself (a technical mathematician), two thermodynamicists
and several programmers maybe 8 months to reverse engineer
the code; to find out what algorithms and empirical
relations were used, what values of the physical constants
were buried in the code and so on. All in Imperial Units,
they are so much fun.
Let's say one million dollars which could have been saved
if the original programmers had spent some time commenting
the code and writing a technical manual.
> Word to the wise, take the dog for a romp, the kids to Baskin Robbins,
> surprise your wife with fresh-cut flowers, and relegate these superannuated
> fortran fogies and their antiquated prejudices to cultural and technical
> obscurity by ignoring F66/90/05/200+ and subsequent inflictions they impose
> on a dwindling, undiscerning, and increasingly ancient audience whose
> self-appointed arrogance entitles them to implement their fuckups on
> posterity. No thanks.
>
> Fuck them all, the long, the short, and the tall,
> Gerry TO.
People will be programming in the Fortran pastures
"Until the Dinosaurs come home"
But it is slowly changing. I have even had a manager who knew
C++, I am not kidding you.
Jentje Goslinga
Bill Shortall wrote:
> "Victor Eijkhout" <see...@for.addy> wrote in message
> news:1gokv1s.16zyrkl1ytczkbN%see...@for.addy...
>
>>Tino <tin...@yahoo.com> wrote:
>>
>>
>>>On a slightly related note, is there any effort (or even thought) being
>>>put into the development of a "standardized" C++ object-oriented LAPACK
>>>library? Something to wrap around the BLAS and LAPACK, perhaps?
>>
>>The thought has occurred to many people. I mean, passing LDA into a C++
>>routine is the programming equivalent of turning a crank handle on the
>>space shuttle. Now, what to do about it....
>>
>>I've started looking at Sidl/Babel to get a more native looking
>>interface in other languages than F77. Not sure to what extent that will
>>satisfy you.
>>
>>V.
>>--
>>email: lastname at cs utk edu
>>homepage: www cs utk edu tilde lastname
>
>
> Hi Victor,
>
> For the last few years I have been working on a
> set of C++ classes to do linear algebra. I was trying to
> achieve most of the functionality of Lapack/Blas but
> in a very user-friendly fashion. The classes center
> around a general-purpose vector class along with a dense rectangular
> Matrix class and then some more complicated classes like a vector of
> vectors and a vector of matrices.
> All of them use operator overloading so you can write
> A = B + C * D where A, B, C, D are matrices, and all of the operators
> work for both real and complex matrices.
This is not a very common operation which I have only
encountered when working with block matrices.
C++ is a powerful language but the simple algebra of
matrices, vectors and scalars is sufficiently complex
that using C++ shorthand notation is not always possible.
Have you looked at OON, the Object Oriented Numerics
Group?
> All of the basic linear algebra
> functions are supported,
> i.e. solution of equations, decompositions, SVD, eigenvectors, etc.
> The entire library is less than 5000 lines of code
You must be kidding, by the time I am done with Schur, SVD
and Block Reflector, I will have 25-30k lines of C and a few
thousand lines of Intel Assembler. And that is without Linear
Equations or Complex Data type.
And that includes testing modules for everything.
> and compiles into a static library of ~3 megabytes.
Isn't that a bit large? My executable which contains all of
the said factorizations and comprehensive testing software
weighs in at 76 kilobytes. You generate 40 times the object
code with one-fifth the source, a factor of 200!
> You don't have to compile it but you will need either a
> Microsoft VC6 or Linux GCC compiler to use it.
> I have a demo version available and if anyone in the group wants to play
> with it and has a compatible compiler send me a line. Once you've used it
> you will never go back to LAPACK
> pe...@cminet.net
> regards...Bill
Have you tested those functions by constructing pathological
matrices from their eigenvalues (for symmetric matrices)
or from their singular values (in the case of the SVD) and
likewise for Linear Equations and evaluated the accuracy?
Jentje Goslinga
Jentje Goslinga wrote:
>
>
> Gerry Thomas wrote:
>
>> "Jentje Goslinga" <gosl...@telus.net> wrote in message
>> news:41BA59EC...@telus.net...
[snip]
>
> No understanding of how computers function which is
> apparent from clever attempts at optimization such as:
>
> I1 = I+1
>
> Let's say the compiler already has "I" in a register
> and now has to allocate a stack variable (in cache memory)
> to hold an extra variable and move this integer from
> the stack to save a single cycle a few times.
>
Perhaps what they were optimizing was the number of compiler
error messages. It is well known that programs are much more
efficient when the number of compiler error messages is zero.
When dinosaurs were current the statement
do 10 i = 1, n + 1
drew a diagnostic about bad syntax so the result was
n1 = n + 1
do 10 i = 1, n1
The underlying cause was an attempt by the compiler writers, in their
role as language designers, to make use of the machines which they had
access to. Those machines had very fast "caches", since memory and
processor speeds were about the same: no extra cost for memory latency,
because the cache did not get in the way. The cache was quite inexpensive,
as it consisted of a length of copper wire on the backplane.
[snip]
>
> Jentje Goslinga
>
Jentje Goslinga wrote:
...
> Functions should be self-contained and allocate local storage.
> If they really perform so little actual work that this is a
> concern, then maybe the program is not structured optimally.
Maybe, maybe not. Define "optimal structure". Lots of things
are structured better when procedures do little work. Whatever
makes the code more legible and easier to maintain, is usually
quite superior. Contrary to the state of things in the '50s, human
time is usually now more valuable than the computer's time.
If allocating workspace outside of the procedures retains
the advantages of legibility while improving computer efficiency,
it's a valid technique. (Nor can I think of any way to prohibit
the technique with a language design change. So this is independent
of the language you use.)
> Variable arguments are another dubious language feature which
> made its way into the C language because of the requirement of
> the formatted print functions, but it causes problems.
C's feature is abominably defined. But, as long as you permit
generic procedures, it's hard to prohibit Fortran's form of optional
arguments. Fortran just has a notation that makes them more
convenient to write than as generics. But, from the point of
view of the user of the procedure, F(X, Other) vs. F(X) is the
same whether F is really two procedures that are generic (have
the same name, but different type signatures) or a single procedure
with an optional argument. (So, once again, you could actually
use this technique independent of what language you use.)
Mostly because the way C "supports" such a thing is so ridiculously
baroque.
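To make that equivalence concrete, here is a minimal C++ sketch (the
names are hypothetical): an overload pair and a default argument both
give the caller the same F(X) / F(X, Other) surface.

#include <iostream>

// Two overloads: to the caller this looks like one procedure
// with an optional second argument.
double scale(double x)            { return x; }
double scale(double x, double by) { return x * by; }

// The same effect via a default argument: one definition, with
// the "optional" part handled by the language.
double scale2(double x, double by = 1.0) { return x * by; }

int main() {
    std::cout << scale(3.0)  << ' ' << scale(3.0, 2.0)  << '\n';  // 3 6
    std::cout << scale2(3.0) << ' ' << scale2(3.0, 2.0) << '\n';  // 3 6
}

Either way, the call sites are indistinguishable, which is the point
made above.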
> Esoteric language "features" like variable arguments and "multiple
> returns" should be avoided if at all possible.
Variable arguments, keyword arguments, and multiple value returns are
extremely useful tools if your language supports such things in a
reasonable manner. Poor implementations of these ideas shouldn't be
taken as proof that they aren't.
hi Jentje,
Bill wrote
> > class and then some more complicated classes like a vector of vectors and
> > vector of matrices.
> > All of them use operator overloading so you can write
> > A = B + C * D where A,B,C, D are matrices and all of the operators
> > work for both real and complex matrices.
Jentje wrote
> This is not a very common operation which I have only
> encountered when working with block matrices.
> C++ is a powerful language but the simple algebra of
> Matrices, vectors and Scalars is sufficiently complex
> that using C++ shorthand notation is not always possible.
I believe that operator overloading is one of the primary advantages of
C++. The ability to write simple equations like A = B + C*D makes the
underlying mathematics stand out and the code easy to debug and remember.
I haven't had many problems with the shorthand notation. It is, however,
challenging to do in complex variables.
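For readers who have not seen the technique, a minimal sketch of how
such overloading works; this is a toy Mat type of my own, not Bill's
classes.

#include <cassert>
#include <cstddef>
#include <vector>

struct Mat {
    std::size_t n;              // square, for brevity
    std::vector<double> a;      // row-major storage
    explicit Mat(std::size_t n_) : n(n_), a(n_ * n_, 0.0) {}
    double& operator()(std::size_t i, std::size_t j)       { return a[i * n + j]; }
    double  operator()(std::size_t i, std::size_t j) const { return a[i * n + j]; }
};

Mat operator*(const Mat& x, const Mat& y) {
    assert(x.n == y.n);
    Mat r(x.n);
    for (std::size_t i = 0; i < x.n; ++i)
        for (std::size_t k = 0; k < x.n; ++k)   // i-k-j order for locality
            for (std::size_t j = 0; j < x.n; ++j)
                r(i, j) += x(i, k) * y(k, j);
    return r;
}

Mat operator+(const Mat& x, const Mat& y) {
    assert(x.n == y.n);
    Mat r(x.n);
    for (std::size_t i = 0; i < x.a.size(); ++i) r.a[i] = x.a[i] + y.a[i];
    return r;
}

int main() {
    Mat A(2), B(2), C(2), D(2);
    B(0,0) = 1; C(0,0) = 2; D(0,0) = 3;
    A = B + C * D;              // reads like the mathematics
    return A(0,0) == 7.0 ? 0 : 1;
}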
Bill wrote
> > ie solution of equations, decompositions, SVD eigenvectors etc. etc.
> > The entire library is less than 5000 lines of code
Jentje wrote
> You must be kidding, by the time I am done with Schur, SVD
> and Block Reflector, I will have 25-30k lines of C and a few
I am not kidding !!!
Remember I am writing templated code, so that one function description
can in most cases handle float, double, complex<float>, and complex<double>;
it's all the same function... this reduces code length by a factor of 4.
Also remember that I am using the Standard Template Library, which handles
the pointers and data types. As an example:
#include <algorithm>   // std::transform
#include <functional>  // std::plus

template <class T> inline
SMatrix<T>& SMatrix<T>::operator+=(const SMatrix<T> &B) {
    SYS_ASSERT((rows() == B.rows()) && (cols() == B.cols()),
               "error += matrices not equal size\n");
    std::transform(begin(), end(), B.begin(), begin(), std::plus<T>());
    return *this;
}
Here, in a handful of lines, we have the code to do an update-add (+=)
for all data types. It also checks for proper matrix size. The C code
for this would be twice as long.
I also reuse the code in SVD's and eigenvectors to save space.
Bill
> and compiles into a static library of ~~ 3 megabytes
Jentje
> Isn't that a bit large? My executable which contains all of
> the said factorizations and comprehensive testing software
Remember that my lib file contains instances of the code for all data types.
For a particular problem only a small fraction of this is used and appears
in the object file of a particular app.
Jentje
> Have you tested those functions by constructing pathological
> matrices from their eigenvalues (for symmetric matrices)
> or from their singular values (in the case of the SVD) and
> likewise for Linear Equations and evaluated the accuracy?
Bill
I have tested all functions up to size 500 x 500 using
matrices filled with random numbers. I haven't done anything
with pathological matrices yet.
I hope that this encourages others to consider using C++ for
their linear algebra problems
Regards...Bill Shortall
>
> Yes, I had understood that, but don't you have to write
> separate implementations of those template functions for
> complex numbers for example. OK, I understand now, the
> complex float and complex double are the same code and the
> real float and double are another code, understood.
Now I don't think I get it. The Fortran version needs two
separate codes to handle the 6 possibilities: a real code
for single, double, and quad precision real and a complex
code for single, double, and quad precision complex. But
the previous poster seems to be saying that in the C++
version, a single code suffices for the four cases of
(single vs. double precision) and (real vs. complex).
There are at least some level 1 blas subprograms where
there are two complex versions and only one real version,
e.g. cdotc/zdotc/ydotc & cdotu/zdotu/ydotu for complex
data but only sdot/ddot/qdot for real data. It seems to
me that you aren't quite going to get a 4X code reduction
factor for several of the blas subprograms.
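A small sketch of the two complex dot products glen mentions (toy code
of my own, not the reference BLAS):

#include <complex>
#include <cstddef>
#include <vector>

using cplx = std::complex<double>;

// Unconjugated: sum of x_i * y_i  (the *dotu family)
cplx dotu(const std::vector<cplx>& x, const std::vector<cplx>& y) {
    cplx s = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) s += x[i] * y[i];
    return s;
}

// Conjugated: sum of conj(x_i) * y_i  (the *dotc family)
cplx dotc(const std::vector<cplx>& x, const std::vector<cplx>& y) {
    cplx s = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) s += std::conj(x[i]) * y[i];
    return s;
}

int main() {
    std::vector<cplx> x{{0.0, 1.0}}, y{{0.0, 1.0}};  // x = y = (i)
    // i*i = -1, conj(i)*i = +1: the two really are different functions.
    return dotu(x, y) == cplx(-1.0, 0.0) && dotc(x, y) == cplx(1.0, 0.0) ? 0 : 1;
}

For real data the two collapse into one, which is why there is only
sdot/ddot on the real side.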
--
write(*,*) transfer((/17.392111325966148d0,6.5794487871554595D-85, &
6.0134700243160014d-154/),(/'x'/)); end
Your concerns seem oddly out of step, now that clc's own gatekeeper
declared them *off topic*. I agree, they should concentrate on doing
what C is good for -- numerical computing is not one of them.
Consequently, don't dilute your efforts to accommodate a technically
illiterate crowd that could neither appreciate nor effectively use what
you have to offer.
You lack historical perspective. Coming from a DEC world, I was surprised
to learn that there existed systems where mixing source from different
languages in one application was not the norm, but rather the rarest of
exceptions, fraught with difficulties and pain - and all of that for no
perceptible reason at all but perhaps a bad case of NIH syndrome.
Although I must admit they didn't deem C++ important enough to re-write
the linker from scratch to avoid name mangling. However, the support tools
were quite good in that respect.
Jan
Ah yes, every C or C++ code comes with automatic documentation and
useful comment - is that what you think? This is a problem of programmers,
not programming languages.
> gigantic "subroutine headers"
Could do without that in VAX Fortran some decades ago, but admittedly
an extension of the then-standard Fortran... when C still had a decade to
go to actually _be_ standardized as a language.
> no concept of variable scope
C has variable scope? Oh yes, since C99. Do all compilers, especially
the free ones, support it properly or at all?
> No understanding of how computers function which is
> apparent from clever attempts at optimization such as:
>
> I1 = I+1
>
> Let's say the compiler already has "I" in a register
> and now has to allocate a stack variable (in cache memory)
> to hold an extra variable and move this integer from
> the stack to save a single cycle a few times.
Your knowledge of what compilers do nowadays with such code is astounding
in its ignorance.
> Let's say one million dollars which could have been saved
> if the original programmers had spent some time commenting
> the code and writing a technical manual.
So the original programmers weren't paid to produce proper documentation?
Tough luck.
> But it is slowly changing. I have even had a manager who knew
> C++, I am not kidding you.
It's actually possible to know or understand C++? I hadn't thought
it possible.
Jan
Just so. What I did in 187.facerec was to provide the storage by the caller
in the form of an allocatable array*, and the called routine checked whether it
was 1) allocated and 2) of the proper size; if either of those conditions
was not met, it would (re-)allocate at that proper size.
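A rough C++ analogue of that pattern (names hypothetical; Jan's code
was Fortran):

#include <cstddef>
#include <vector>

// The caller owns the workspace; the routine (re-)allocates only when
// it is missing or too small, so repeated same-size calls are free of
// allocation cost.
void process(const double* x, std::size_t n, std::vector<double>& work) {
    const std::size_t needed = 2 * n;        // hypothetical requirement
    if (work.size() < needed) work.resize(needed);
    // ... use work[0..needed) as scratch storage while processing x ...
    (void)x;
}

int main() {
    std::vector<double> work;                 // empty on the first call
    std::vector<double> x(100, 1.0);
    for (int call = 0; call < 1000; ++call)
        process(x.data(), x.size(), work);    // allocates exactly once
}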
Also, I believe some interfaces of that era - for instance, FFTPACK - used
the work arrays to store data structures describing the operation, or some
tables of constants. These have more natural solutions in F95
that are much easier to use.
* I used POINTERs because most compilers didn't implement the ALLOCATABLE TR
at the time - I'm not even sure it was an official TR yet... Nowadays, using
ALLOCATABLE dummy arguments is the method of choice in almost all cases, IMO.
Jan
Then you should clearly get a new job, as you don't understand what
"numerical analysis" is about.
For the record, there's nothing wrong about FP on Sun machines, in general.
In some parts, they're even too close to the IEEE standard for predictable
performance.
Jan
I have also developed a C++ library which acts as nothing more than a
wrapper for the BLAS/LAPACK or Intel MKL libraries. In my case, I
claim only to support a subset of the BLAS/LAPACK functionality (though
it is 100% of the functionality that I use). I also support all of the
different matrix formats (dense, banded, symmetric), though only in
double precision (no complex matrices). The reason I was interested in
a more "official" effort is that I believe that LAPACK's popularity
and utility lie in its generality, and I would welcome a sort of
standardization of the C++ interface, as there seem to be dozens if not
hundreds of them floating around.
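For the flavor of such a wrapper, a minimal sketch: the trailing
underscore is the usual f77/gfortran binding convention (it varies by
compiler), and the solve() face here is mine, not Ryan's library.

#include <vector>

// LAPACK's dgesv solves A*X = B by LU with partial pivoting; A and B
// are column-major and are overwritten.
extern "C" void dgesv_(const int* n, const int* nrhs, double* a,
                       const int* lda, int* ipiv, double* b,
                       const int* ldb, int* info);

// A thin C++ face for the contiguous square case: hides ipiv, nrhs
// and the leading dimensions.
int solve(int n, std::vector<double>& a, std::vector<double>& b) {
    std::vector<int> ipiv(n);
    int nrhs = 1, info = 0;
    dgesv_(&n, &nrhs, a.data(), &n, ipiv.data(), b.data(), &n, &info);
    return info;   // 0 on success, following LAPACK's convention
}

Link against the Fortran library (e.g. -llapack) and that is the whole
interface for one routine; the real work is in covering all the matrix
formats.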
Ryan
Yes...the writers of LAPACK should write a C++
version. After all, they are the experts. It might even
be fun for them !
Bill ............. pe...@cminet.net
>
> > The simulation model I was using gave quite different
> > results on different machines with different versions
> > of the libraries. I simply don't trust Sun floating point.
>
> Then you should clearly get a new job, as you don't understand what
> "numerical analysis" is about.
Harsh. Not all numerical algorithms automatically damp out
perturbations, and sometimes you just have to live with that.
> No understanding of how computers function which is
> apparent from clever attempts at optimization such as:
>
> I1 = I+1
I'm mostly staying out of this thread because, regardless of
stated intentions, it has at least a taste of flame war about
it (though a fairly mild one so far), and the subject matter
is certainly prime flame-war territory. I'll restrict my
comment solely to the above snippet.
That style was exceedingly common in f66 days. The reason
had nothing to do with optimization, but rather with the
language of thee times. There were quite a few places where
f66 *REQUIRED* a simple variable. You could not substitute
an expression, even such a simple expression as that. I distinctly
recall it as being one of the things I liked about f77 was that
you could use expressions in most of the "obvious" places. Probably
the most common case was DO loops.
A loop that might today be written as
do k = i+1, j-1
...
end do
would have, of necessity, been written in f66 something more like
ip1 = i+1
jm1 = j-1
if (ip1 .gt. jm1) goto 200
do 100 k = ip1 , jm1
...
100 continue
200 continue
I didn't actually go back and find old code of mine to copy from,
but I probably could find almost exactly those lines in some of
my old code somewhere (well, ok, in upper case). My style has
certainly changed over the decades, but the biggest difference
in this case is in what the language allows.
Many people who learned in f66 days kept the coding habits even
after the language changed. In most cases, I doubt this had
anything to do with attempts at optimization.
Thus, the deprecatory comments about the people who coded this
way seem to me likely to reflect a lack of understanding of
the matter. Perhaps there was too quick a rush to judgement
without much attempt at first understanding.
--
Richard Maine | Good judgment comes from experience;
email: my first.last at org.domain | experience comes from bad judgment.
org: nasa, domain: gov | -- Mark Twain
Sure. But then put the blame on the hardware, instead of the
algorithm? That doesn't make any sense at all to me.
Jan
> I distinctly
> recall it as being one of the things I liked about f77 was that
> you could use expressions in most of the "obvious" places.
The other big simplifications were Algol-type control structures and
list-directed I/O.
For f90, automatic arrays are the only new feature I can see that
significantly simplifies small numerical programs (of the sort found in
Numerical Recipes).
Richard E Maine wrote:
> Jentje Goslinga <gosl...@telus.net> writes:
>>No understanding of how computers function which is
>>apparent from clever attempts at optimization such as:
>>I1 = I+1
(snip)
> That style was exceedingly common in f66 days. The reason
> had nothing to do with optimization, but rather with the
> language of thee times. There were quite a few places where
> f66 *REQUIRED* a simple variable. You could not substitute
> an expression, even such a simple expression as that. I distinctly
> recall it as being one of the things I liked about f77 was that
> you could use expressions in most of the "obvious" places. Probably
> the most common case was DO loops.
I/O list elements were another one. Maybe even more than
list-directed output (especially for debugging) the ability to
put expressions in write statements was a great addition to F66.
Note that the WATFIV compiler, based on the F66 standard, had many
F77 features years before 1977. Expressions in DO statements and
I/O lists, and also CHARACTER variables, were some of my favorites.
-- glen
kia wrote:
(snip)
> Your concerns seem oddly out of step, now that clc's own gatekeeper
> declared them *off topic*. I agree, they should concentrate on doing
> what C is good for -- numerical computing is not one of them.
> Consequently, don't dilute your efforts to accommodate a technically
> illiterate crowd that could neither appreciate nor effectively use what
> you have to offer.
Topicality in comp.lang.c is very different from that in some similar
groups, and almost has to be. Even so, there are too many posts
to read in a reasonable time. Note that the discussion of programs
written in Fortran is considered on topic in comp.lang.fortran, but the
discussion of C programs is not for comp.lang.c. That is, just being
written in C isn't enough. If the question is about the C language,
and even though it may require a sample program to understand the
question, then it is likely on topic.
Similarly, the comp.arch.powerpc newsgroup discusses the architecture
of the PowerPC processor, and not general questions related to macintosh
software as run on Apple machines using that processor. That doesn't
seem to stop people from posting them.
-- glen
Your bitter and sarcastic attack serves no useful purpose.
This thread started with a request for suggestions for Lapack
and questions about the use of CLapack.
I have made some very useful suggestions for improvement of
Lapack which have been acknowledged by the OP.
I am developing C code for the most important decompositions
because I want them to be available in the C language in code
which can be easily maintained, and also because I use them in
my other work.
I have designed a recursive D&C solver for the Symmetric
Eigenvalue problem and am working on one for the SVD.
I have read several papers by Cuppen, Gu and Eisenstat,
Tisseur and Dongarra, and Gu, Demmel and Dhillon.
If I did not think I could write somewhat faster and somewhat
more accurate code than Lapack, I would not be doing this.
Does this mean that I am a "competitor" of Lapack?
No, I do this because I like to do it and will decide some
day what to do with it, and will probably end up giving it
away.
I really don't care whether some Jan Vorbruggen thinks that
I am neither a programmer nor a numerical analyst.
> Jan
Jentje Goslinga
Jan Vorbrüggen wrote:
>> No technical manuals, lame comments,
>
>
> Ah yes, every C or C++ code comes with automatic documentation and
> useful comment - is that what you think? This is a problem of
> programmers,
> not programming languages.
True, that has not changed much with the introduction of new
languages. The lame comments, the surprised looks when you
ask for the technical manual.
>
>> gigantic "subroutine headers"
>
>
> Could do without that in VAX Fortran some decades ago, but admittedly
> an extension of the then-standard Fortran... when C still had a decade to
> go to actually _be_ standardized as a language.
>
> > no concept of variable scope
>
> C has variable scope? Oh yes, since C99. Do all compilers, especially
> the free ones, support it properly or at all?
>
>> No understanding of how computers function which is
>> apparent from clever attempts at optimization such as:
>>
>> I1 = I+1
>>
>> Let's say the compiler already has "I" in a register
>> and now has to allocate a stack variable (in cache memory)
>> to hold an extra variable and move this integer from
>> the stack to save a single cycle a few times.
>
>
> Your knowledge of what compilers do nowadays with such code is astounding
> in its ignorance.
That is simply not true; I have studied compiler-generated
assembler code for many years and am an Assembler programmer.
I have written a BLAS and could write a compiler.
>> Let's say one million dollars which could have been saved
>> if the original programmers had spent some time commenting
>> the code and writing a technical manual.
>
>
> So the original programmers weren't paid to produce proper documentation?
> Tough luck.
>
>> But it is slowly changing. I have even had a manager who knew
>> C++, I am not kidding you.
>
>
> It's actually possible to know or understand C++? I hadn't thought
> it possible.
It takes a long time to get a basic understanding of C++ and much
longer to attain expert level. I do not claim to be an expert.
It is much more difficult than Fortran.
Maybe it could be taught better. It took some time for the
concepts of differentiation and integration to come down from
the rarefied atmosphere where Newton and Weierstrass lived,
but it is now taught in high school: maybe the same is true
of C++.
> Jan
Jentje
Victor Eijkhout wrote:
> Jan Vorbrüggen <jvorbrue...@mediasec.de> wrote:
>
>
>>>The simulation model I was using gave quite different
>>>results on different machines with different versions
>>>of the libraries. I simply don't trust Sun floating point.
>>
>>Then you should clearly get a new job, as you don't understand what
>>"numerical analysis" is about.
>
>
> Harsh. Not all numerical algorithms automatically damp out
> perturbations, and sometimes you just have to live with that.
This actually was a transient simulation model in C and it
does not take much for these simulations to diverge on
different platforms. On the other hand, I have tested complex
transient models between Intel and AMD processors and the
results were exactly the same.
Maybe the Sun platform is better now than at the time I used
it, which is a long time ago.
Maybe the Lapack test suites should be expanded upon somewhat
and used to test the accuracy and speed of numerical linear
algebra between platforms, using a set of difficult matrices
and a portable random number generator.
> V.
Jentje Goslinga
I would note in addition that the consequences of such temporary
variables on optimization using modern compilers should be
essentially nil. The compiler is free to notice that such variables
are defined, used immediately afterward, and not ever used again
(or are redefined before their next use). As a consequence, the
compiler can keep the value in a register the whole time it's
needed and not even allocate memory space for it.
Did I say "modern compiler"? The technique has been completely
understood since the '60s.
As for the language of thee times, thou spelt it wrong?
It isn't insofar as performance is concerned - a significant degradation
occurs just by introducing dynamic allocation. If that's acceptable, then
the second part is a no-brainer - always allocate inside, and only if the
size has changed; no need to treat large or small, frequent or glacial
calls separately.
I admit that Sun's single-precision transcendental functions
are not accurate to ten decimal digits. They are accurate
to slightly less than seven decimal digits. Since IEEE
single-precision floating-point numbers are accurate to slightly
less than seven decimal digits, it would be hard to do any
better.
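(For concreteness: IEEE single precision carries a 24-bit significand,
and 24 * log10(2) is about 7.22, so "slightly less than seven" decimal
digits is exactly what the format holds; double's 53 bits give about
15.95 digits, matching the 16-digit figure below.)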
Sun's double-precision transcendental functions are typically
accurate to about 16 decimal digits. I would be very interested
in seeing an example where one of our double-precision
transcendental functions produces a result that is not accurate to
at least fourteen digits.
Yours truly,
Robert Corbett
While I might agree with you regarding C89, C90, or C95,
I disagree regarding C99. The facilities for numerical
computing in C99 are pretty much on a par with those in
Fortran, and in some cases are better. In particular,
C99 includes imaginary floating-point data types, which
are quite handy for some applications. It is still
easier to deal with arrays in Fortran than C, but C now
matches Fortran in areas where Fortran once had the edge,
such as aliasing and complex arithmetic.
Sincerely,
Bob Corbett
>>Well, if you really ask for it: I have used Sun C and
>>Fortran from 1985-1990.
>>The transcendentals were only ten decimals accurate and
>>reported bugs were not fixed but maybe that was not in
>>Fortran.
> I admit that Sun's single-precision transcendental functions
> are not accurate to ten decimal digits. They are accurate
> to slightly less than seven decimal digits. Since IEEE
> single-precision floating-point numbers are accurate to slightly
> less than seven decimal digits, it would be hard to do any
> better.
> Sun's double-precision transcendental functions are typically
> accurate to about 16 decimal digits. I would be very interested
> in seeing an example where one of our double-precision
> transcendental functions produces a result that is not accurate to
> at least fourteen digits.
1985 was pre-SPARC as far as I remember, though it came
out sometime before 1990. In the 680x0 days Sun had a variety
of different floating point systems including some type
of hardware accelerator. Also, as well as I remember it
the 68881 had too few guard bits on some transcendental
functions and so was slightly less accurate than it should
have been.
In any case, none of that is relevant to SPARC systems.
-- glen
Work on SPARC began in 1984. Fujitsu made the first
SPARC chips in 1986. The first SPARC workstations shipped
in 1987. There was a bug in the square root instruction
in early SPARC chips, and so for years Sun did not use the
square root instruction.
>In the 680x0 days Sun had a variety
>of different floating point systems including some type
>of hardware accelerator. Also, as well as I remember it
>the 68881 had too few guard bits on some transcendental
>functions and so was slightly less accurate than it should
>have been.
IIRC, the 68881, like the Intel 8087, used an 80-bit register
representation for floating-point numbers. The effects of
missing guard digits would therefore be unlikely to cause
any significant loss of precision in the transcendental
functions.
Sincerely,
Bob Corbett
I have no Sun, but maybe they have improved in the meantime;
it is quite some time ago.
Intel has had extended precision well over 19 digits in
most transcendentals from 1980 maybe with some glitches
which were fixed with the 386.
They have also published some albeit scant information
about the methods used but basically claim maximum
accuracy, or +- 1 ULP - I do not quite recall.
I have tested some selected functions myself and have no
reason to disbelieve their claims.
Maybe Sun should publish some information on their
methods too.
With Lapack being the de-facto standard in Numerical
Linear Algebra, it would be nice to have a set of Lapack
speed and accuracy tests with known results for a few
selected realistic size problems.
There already are such tests using Mathematica (which
uses Lapack), which however only compare speed.
If comparing large outputs would be a problem, the
numbers might be condensed: for the Eigenvalue
problem, the maximum ULP of Q^T.Q - I, of Q^T.A.Q - Lambda,
and of Lambda* - Lambda, something of that nature.
This is the type of test I normally use, it condenses
the information into a human readable form and it can
be very revealing.
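A minimal sketch of one such condensed measure (my own illustration,
not Jentje's code): the largest entry of Q^T.Q - I, scaled by machine
epsilon, for a column-major n x n Q.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

double orth_residual(const std::vector<double>& q, std::size_t n) {
    double worst = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            double s = 0.0;
            for (std::size_t k = 0; k < n; ++k)   // (Q^T Q)(i,j)
                s += q[k + i * n] * q[k + j * n];
            if (i == j) s -= 1.0;
            worst = std::max(worst, std::fabs(s));
        }
    return worst / std::numeric_limits<double>::epsilon();
}

A well-behaved factorization keeps this within a modest multiple of n;
a blow-up is immediately visible as a single number.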
Jentje Goslinga
> It is still easier to deal with arrays in Fortran than C
and arrays of reals are what scientific computing is all about.
Really? The VAX Fortran compiler, ca. 1984, would have performed the
obvious optimization on the above code, making it perfectly equivalent
to substituting the addition inline. It even eliminated the memory for
the dead variable, which then resulted in a modification to the debugger
so it could tell you that storage for I1 had been eliminated. Imagine
that, debugging of optimized code - that is almost totally unheard of
nowadays, and it happened 20 years ago!
Oh, and that compiler did some other surprising optimizations such as
strength reduction in loops and rearrangement of basic blocks.
I stand by my opinion - you have no idea what an old, let alone a modern,
optimizing compiler does to your code.
Jan
> Maybe Sun should publish some information on their
> methods too.
Perhaps something like "Numerical Computation Guide", Sun Part
800-3555-10, Revision A of 16 March 1990? Though much of it is a bit
dated (lots of stuff about older chips), it still has a place on my
bookshelf within easy reach without getting up from my chair.
>Imagine
>that, debugging of optimized code - that is almost totally unheard of
>nowadays, and it happened 20 years ago!
gcc allows it.
Robert Corbett wrote:
> In article <41BD36A3...@sparrow.com>, kia <k...@sparrow.com> wrote:
>
>>Your concerns seem oddly out of step, now that clc's own gatekeeper
>>declared them *off topic*. I agree, they should concentrate on doing
>>what C is good for -- numerical computing is not one of them.
>
>
> While I might agree with you regarding C89, C90, or C95,
> I disagree regarding C99. The facilities for numerical
> computing in C99 are pretty much on a par with those in
> Fortran, and in some cases are better. In particular,
> C99 includes imaginary floating-point data types, which
> are quite handy for some applications. It is still
> easier to deal with arrays in Fortran than C, but C now
<snip>
> Sincerely,
> Bob Corbett
I agree with what you are saying and have a few more notes
on this issue for whoever is interested.
Without saying anything negative about Fortran, which is
a very comprehensive language, I have not found any
technical problem where I felt in any way hindered by
using the C language.
I ask your patience for some examples:
I have written an industrial simulation model for Petroleum
Reservoir Simulation which solves three coupled diffusion
equations in a cuboid-type grid.
The matrix is block 3D Laplacian and sparse, a regular Laplacian
3-5-7 point stencil, but it is a Block matrix whose elements
themselves are [3,3] matrices AND with irregular terms
generated by wells perforating (usually vertical) rows of
blocks. The solution method is Incomplete LU factorization
followed by iteration involving orthogonalization.
Now, here is an example where multiply-dimensioned arrays
CANNOT even be used for the matrix, because it would involve
storing a sparse matrix in full storage.
More precisely, even Fortran does not have a built-in data
structure for this type of matrix.
The matrix is stored as a linear array of Fixed Dimension
[3,3] matrices, of course with additional information such
as row or column indices which are packed separately.
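A sketch of that storage scheme (field names are mine):

#include <array>
#include <cstddef>
#include <vector>

struct Block33 { std::array<double, 9> a; };  // one [3,3] element, row-major

struct BlockSparse {
    std::vector<Block33>     blocks;  // only the nonzero blocks, linearized
    std::vector<std::size_t> row;     // block-row index of blocks[k]
    std::vector<std::size_t> col;     // block-column index of blocks[k]
};

// y += A*x, with x and y holding one 3-vector per block row/column.
void spmv(const BlockSparse& A, const std::vector<double>& x,
          std::vector<double>& y) {
    for (std::size_t k = 0; k < A.blocks.size(); ++k)
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                y[3 * A.row[k] + i] += A.blocks[k].a[3 * i + j]
                                     * x[3 * A.col[k] + j];
}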
Even the industrial, mostly Fortran software I have seen for
this type of problem generally avoids multiply-dimensioned
arrays of unknown dimension. This can of course be dismissed
as HACKING, but this is industrial software.
I have worked with an industrial simulation model for tar
sands exploitation written (by others) in C.
I have designed an industrial model for transient drying
of pipelines containing air and water/water vapour which
solves the compressible gas equations in a pipeline network
with heat conduction to the ground.
This company requested the model be written in C. It was
very difficult but the problems had nothing to do with the
particular programming language used.
I have also written decompositions for the Symmetric
Eigenvalue Problem: Schur QR and Schur Divide and Conquer,
and for the SVD with QR iteration, and am working on the
D&C, which is very hard. But not because of trivial technical
matters such as handling arrays.
This software is entirely portable, since the minimal BLAS-type
Assembler is also available in C, which is somewhat
less accurate and a lot slower.
I already have to link C and Assembler together which is
unavoidable but the last thing I want is yet another
language, or calls to some library which is not available
on some other platforms.
One can simply dismiss all of the above or question my
programming ability (or any ability) as an earlier poster
does, but several of the above are backed by industrial
evidence and by the management of the companies involved.
Jentje Goslinga
Jan Vorbrüggen wrote:
>> >> No understanding of how computers function which is
>> >> apparent from clever attempts at optimization such as:
>> >>
>> >> I1 = I+1
>> >>
>> >> Let's say the compiler already has "I" in a register
>> >> and now has to allocate a stack variable (in cache memory)
>> >> to hold an extra variable and move this integer from
>> >> the stack to save a single cycle a few times.
>> >
>> >
>> > Your knowledge of what compilers do nowadays with such code is astounding
>> > in its ignorance.
>>
>> That is simply not true; I have studied compiler-generated
>> assembler code for many years and am an Assembler programmer.
>> I have written a BLAS and could write a compiler.
>
>
> Really? The VAX Fortran compiler, ca. 1984, would have performed the
> obvious optimization on the above code, making it perfectly equivalent
> to substituting the addition inline. It even eliminated the memory for
> the dead variable, which then resulted in a modification to the debugger
> so it could tell you that storage for I1 had been eliminated. Imagine
> that, debugging of optimized code - that is almost totally unheard of
> nowadays, and it happened 20 years ago!
Maybe you should read my post before starting to discredit
my abilities:
I was not talking about code optimization but rather about
a programming style which includes attempts at optimization,
which makes programs even harder to read.
> Oh, and that compiler did some other surprising optimizations such as
> strength reduction in loops and rearrangement of basic blocks.
>
> I stand by my opinion - you have no idea what an old, let alone a modern,
> optimizing compiler does to your code.
I have examined probably over a hundred compiler-generated
small function codes, although nowadays I may not even bother,
and I AM a compiler myself.
Many compilers do an awesome job on some loops. Visual C, which
was not so good for some time, now writes almost perfect code
for some Linear Algebra functions such as the composition of
sequences of Givens Matrices into a Matrix.
> Jan
Jentje Goslinga
Scientific computing is about modelling complex physical
processes using imperfect sciences such as Thermodynamics
and ad-hoc discretizations and about devising stable and
accurate computational methods.
In the end, the problem is usually reduced to the solution
of a System of Linear Equations or an Eigenvalue Problem.
The subsequent technical problems involving the handling of
arrays of reals are trivial in comparison.
See also my other post of today.
Jentje Goslinga
<huge snip>
> Imagine that, debugging of optimized code
> - that is almost totally unheard of
> nowadays, and it happened 20 years ago!
Any compiler I know of allows you to compile debugging support into
the production version: Visual C and Watcom and probably
most others.
Amateurs use debuggers a lot, but for serious development
you should develop diagnostics code in parallel, with very
carefully crafted output. In my programs this constitutes
about 30-40% of the code.
I do not like ultimately leaving such code in, and try either to
encapsulate it into diagnostics functions, so that
I need to conditionally compile only the call, or to phase
it out altogether. It is a little bit less
pretty, but it is essential.
There may be a few dozen such diagnostics items in
a very complex program, allowing me to test every aspect of
the program if required.
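One common shape for this (the macro and names are mine, not Jentje's
code): the diagnostic lives in its own function, and only the call is
conditionally compiled, so the main path stays readable.

#include <cstdio>

#ifdef WITH_DIAG
#  define DIAG(call) call
#else
#  define DIAG(call) ((void)0)
#endif

static void dump_residual(const double* r, int n) {
    for (int i = 0; i < n; ++i) std::printf("r[%d] = %.17g\n", i, r[i]);
}

void solve_step(double* r, int n) {
    // ... the real work on r ...
    DIAG(dump_residual(r, n));   // compiled away unless -DWITH_DIAG
}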
<snip>
> I stand by my opinion - you have no idea what an old, let alone a modern,
> optimizing compiler does to your code.
> Jan
Jentje Goslinga
There have been several posts pointing out that what you describe
as "attempts at optimization" where in fact the direct results
of programmers meeting the requirements of the the Fortran version
of the day. The early versions of Fortran did not permit expressions
in various places. The most notable was in the limits of DO loops.
So what would now be
do i = 1, n + 1
end do
would have been
n1 = n + 1
do 10 i = 1, n1
10 continue
Your inability to understand the constraints that the early programmers
worked under, but still produced code that is so functional that it is
still in use, does not speak well for your scholarship. The noise this
generates severely detracts from your message, to the point that you seem
to have provided a defensive response to an overly strongly stated,
misguided assertion.
There is a secondary issue of whether this "misguided optimization"
is harmful or not. In the absence of caches, which obtains when memory
matches processor speed, or with compiler expression optimization your
complaint is misguided.
So we end up with the complaint that old code may be hard to read
because it did not use capabilities that would not be introduced
until after the code had been in productive use for a long time.
I am an "early programmer" and have programmed in Fortran 66,
I am afraid to say <g>. I agree with most of what you say and
may have forgotten some of the older Fortran issues!
I sincerely apologize if some of my statements have been too
strong. My mission is not to insult or deprecate but to cooperate
and reach consensus.
> There is a secondary issue of whether this "misguided optimization"
> is harmful or not. In the absence of caches, which obtains when memory
> matches processor speed, or with compiler expression optimization your
> complaint is misguided.
>
> So we end up with the complaint that old code may be hard to read
> because it did not use capabilities that would not be introduced
> until after the code had been in productive use for a long time.
Not quite. Most of my post was directed at a programming style
with incomplete or altogether lacking technical documentation
which is not the fault of the Fortran language.
<snip>
regards,
Jentje Goslinga
Please note that all of Sun's manuals are available in
electronic form from http://docs.sun.com.
Sincerely,
Bob Corbett
Sure, I know that - usually a compiler will not be able to disallow it 8-).
However, how well do gdb et al. _support_ this? Last I saw, it wasn't great
and it was usually only used when desperate.
Jan
Gosh, you're scraping it!, why not treat yourself to some uber,
metrosexual, alter comeuppance, 'Desperate Jan' voodoo effigies that you
can give to the unsuspecting in the yuletide spirit:
http://www.scottish-sculpture.com/Finished_Sculptures/desperate_dan.htm,
going once, going twice, ah!, going for free, even Lordy, Lord Richard of
Dryden couldn't muster a toss for pathetic Jan.
--
You're Welcome,
Gerry T.
______
"The most successful tyranny is not the one that uses force to assure
uniformity but the one that removes the awareness of other possibilities,
that makes it seem inconceivable that other ways are viable, that removes
the sense that there is an outside." -- Allan Bloom, in The Closing of the
American Mind.
Perhaps your indiscriminate attacks (at Sun and characterization of
coding practices reflecting lack of knowledge of historical context
among others) invite such responses???
Duane Bozarth wrote:
> Jentje Goslinga wrote:
>
> ...
>
>>Your bitter and sarcastic attack serves no useful purpose.
>
>
> Perhaps your indiscriminate attacks
I have not attacked anyone PERSONALLY but have been attacked
by quite a few people, usually ending with the statement that
"I know nothing" of some subject or another.
> (at Sun and characterization of
> coding practices reflecting lack of knowledge
Here we go again, I do not have to wait very long do I:
You have no idea about my knowledge. I didn't even write
some of the examples attributed to me by some posters.
I have designed dozens of projects in Fortran usually F77
from 1980 to 1990 and subsequently in several contracts
throughout the nineties and on top of that I have worked
on maybe half a million lines of Fortran by others.
I was paid very high wages to do this work, because I am
an expert at Fortran and Technical Programming.
I also have written hundreds of thousands of lines of C code
and a large C++ code.
> of historical context
> among others) invite such responses???
No! There is really No Excuse for attacking and deprecating
people such as happens by selected individuals in notably
this newsgroup.
Everyone concerned tries to make a contribution and maybe
some are misguided.
Some of the efforts people are discussing, such as my C
functions for Linear Algebra or the C++ interface of a
previous poster in this thread, are a lot of work and
should not be ridiculed. They can be examined, questioned
or ignored.
I have had less good experience with certain machines and
better with others, and likewise with OS's. It is generally
recognized that people have the right to say that on the
internet; everyone realizes that these may be very personal
opinions based on limited experience. Much like some people
have had a lemon of some make of car where others
never have any problems.
As for Sun, I have had problems with it, both as a user
and as the "system manager", which consumed a lot of my
time for 6 years, something I did not find with IBM.
I admit that most of this is a long time ago although I
used a Sun in a contract in 2000 for a few months.
Back to the original topic, the C conversion of Lapack
is an issue which is not going away, which is the reason
I am developing my system.
There are many technical applications in C nowadays which
may already have to be ported between several platforms.
Most developers do not want to have to deal with a
Fortran compiler on each one of those platforms but would
rather convert the required Lapack code to C.
If there is no BLAS, that's the platform user's problem.
Many users may require only one set of functions, such as
either the Linear Equations or one of the Decompositions
(Schur, SVD), and usually the double precision version.
I have my doubts about the usefulness of the single
precision versions but others may disagree.
regards,
Jentje Goslinga
> Back to the original topic, the C conversion of Lapack
> is an issue which is not going away, which is the reason
> I am developing my system.
And good luck to you! When you have finished your project, I'm sure
there are many reading this thread who would be keen to see the results.
However, if you expect wholesale adoption of your new C-based LAPACK,
then you are misunderstanding the principal strength of the original
LAPACK: trust. The library has been used for such a long time, and by so
many different people, that the chance of significant bugs remaining
undiscovered is pretty small. Hence, people place a great deal of trust
in it.
Even if your new library is n times more efficient than LAPACK, it won't
have the same trusted status. Regrettably, this will tend to penalize
your new library against rapid adoption.
cheers,
Rich
--
Dr Richard H D Townsend
Bartol Research Institute
University of Delaware
[ Delete VOID for valid email address ]
I also found the manual extremely useful; I also used to keep
a copy within quick reach. It continued to be useful to look up
things about IEEE floating points even after I ceased to do
numerical computations on Sun machines. Too bad that I couldn't
bring it with me when I moved.
Robert Corbett wrote:
> Please note that all of Sun's manuals are available in
> electronic form from http://docs.sun.com.
That's good news!
Regards,
Ryo
My newsreader dropped all the original posts to this thread, so this is probably blowing
hot air out my proverbial, but if there is a LanguageX version of the LAPACK package, what
is to stop folks from testing it against their trusted Fortran version? If there is an
opportunity for a significant gain in computational speed, people tend to (eventually)
overcome inertia, update codes, and perform tests (in the background or in parallel). I
guess it all comes down to the definition and timescale of "wholesale" or "rapid" adoption
:o) One person's rapid may correspond to another's slower-than-a-snail-in-a-straitjacket.
cheers,
paulv
--
Paul van Delst
CIMSS @ NOAA/NCEP/EMC
Rich Townsend wrote:
> Jentje Goslinga wrote:
>
>> Back to the original topic, the C conversion of Lapack
>> is an issue which is not going away, which is the reason
>> I am developing my system.
>
>
> And good luck to you!
Huh? A positive response?! Thanks!
> When you have finished your project, I'm sure
> there are many reading this thread who would be keen to see the results.
You can take that to the bank.
> However, if you expect wholesale adoption of your new C-based LAPACK,
Wait a minute now, I am not trying to produce a comprehensive
C Lapack; that would be an enormous undertaking. I will only do
the few decompositions mentioned, not the Linear Equations,
and nothing complex.
My code uses mostly the same algorithms as Lapack, unless
I think I have something slightly better in some cases, and
only as far as I could deduce them from the papers or decipher them.
The design is totally different, and I have not analyzed the
Lapack code except for some KEY DETAILS like the
implementation of the Gu-Eisenstat method and the RK-Li
secular solver.
> then you are misunderstanding the principal strength of the original
> LAPACK: trust. The library has been used for such a long time, and by so
> many different people, that the chance of significant bugs remaining
> undiscovered is pretty small. Hence, people place a great deal of trust
> in it.
>
> Even if your new library is n times more efficient than LAPACK, it won't
> have the same trusted status. Regrettably, this will tend to penalize
> your new library against rapid adoption.
Extensive testing software is built into it which tests the
most pathological matrices I could construct (from their
eigenvalues or singular values) and calls Lapack with the same
matrix and shows some norm of the error and the timing in both
cases.
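The construction is simple enough to sketch (a minimal version of my
own, using a single Householder reflector for Q; not Jentje's actual
test code):

#include <cstddef>
#include <vector>

// Build A = Q * diag(lambda) * Q^T with Q = I - 2*v*v^T/(v^T v), an
// exactly orthogonal reflector, so A's eigenvalues are the chosen
// lambdas (up to the rounding committed in forming A itself).
std::vector<double> from_eigenvalues(const std::vector<double>& lambda,
                                     const std::vector<double>& v) {
    const std::size_t n = lambda.size();   // v must also have length n
    double vtv = 0.0;
    for (double vi : v) vtv += vi * vi;
    std::vector<double> q(n * n), a(n * n, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            q[i * n + j] = (i == j ? 1.0 : 0.0) - 2.0 * v[i] * v[j] / vtv;
    for (std::size_t i = 0; i < n; ++i)        // A = Q diag(lambda) Q^T
        for (std::size_t j = 0; j < n; ++j)
            for (std::size_t k = 0; k < n; ++k)
                a[i * n + j] += q[i * n + k] * lambda[k] * q[j * n + k];
    return a;
}

Clustered, tiny, or wildly spread lambdas then give test matrices whose
exact spectrum is known in advance.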
Yes, I know: you can run a billion tests and someone will concoct a
new one which blows everything apart.
Lapack is quite fast and accurate so my code will not be n
times faster; I expect it to be somewhat faster and slightly
more accurate.
Speaking of precision, I do not do single precision, but am
thinking of including higher precision in addition to double;
that, however, requires more Assembler modules and reduces
portability. Needless to say, there are C stubs for all ASM
functions so you can use only C if you want just the
functionality and not top speed.
I am not really looking for fame or "acceptance", I am too
old to value that; I make my contribution to civilization
and if a single person tells me that I have done something
useful for them, that's enough.
> cheers,
>
> Rich
cheers,
Jentje Goslinga
> [...] the principal strength of the original
> LAPACK: trust. The library has been used for such a long time, and by so
> many different people, that the chance of significant bugs remaining
> undiscovered is pretty small. Hence, people place a great deal of trust
> in it.
That's one; another factor is that in the eigenvalue part -- which
involves highly non-obvious algorithms -- we have some of the biggest
experts on board. So you'll have solid implementations of the best
algorithms.
V.
--
email: lastname at cs utk edu
homepage: www cs utk edu tilde lastname
The speed/accuracy tests and equivalent Lapack calls
are built in. Lapack has some built-in tests too, but
only for small matrices, and they could be expanded upon.
> If there is an
> opportunity for a significant gain in computational speed, people tend
> to (eventually) overcome inertia, update codes, and perform tests (in
> the background or in parallel).
True, but expect no miracles: using a different
programming language does not produce them, although
the use of Assembler in combination with a restructuring
of the data sometimes does.
I would be satisfied if my code were not slower.
Using the same algorithm, computational speed is
determined by two things: the speed of the underlying
BLAS and the structuring of the data, neither of which
is Fortran- or C-specific.
The speed of the BLAS depends on the Assembler
programming skills of its authors.
In the D&C methods, the speed is asymptotically dominated
by two things: the matrix multiply, mainly for the largest
order primary "divide", and the application of sequences
of Householder matrices.
There is a lot which can be said about those, but they are
extremely sensitive to the structuring of the data.
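For concreteness, the unblocked building block looks like this (a
column-major sketch of my own, not LAPACK's code); applied one
reflector at a time it is all matrix-vector work, and the blocked
forms exist precisely to turn it into matrix-matrix work:

#include <cstddef>
#include <vector>

// Apply H = I - tau*v*v^T to the n x m matrix A (column-major):
// A <- A - v * (tau * v^T A).
void apply_householder(std::vector<double>& a, std::size_t n, std::size_t m,
                       const std::vector<double>& v, double tau) {
    std::vector<double> w(m, 0.0);           // w = tau * (v^T A)
    for (std::size_t j = 0; j < m; ++j)
        for (std::size_t i = 0; i < n; ++i)
            w[j] += tau * v[i] * a[i + j * n];
    for (std::size_t j = 0; j < m; ++j)      // A -= v * w^T
        for (std::size_t i = 0; i < n; ++i)
            a[i + j * n] -= v[i] * w[j];
}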
> I guess it all comes down to the
> definition and timescale of "wholesale" or "rapid" adoption :o) One
> person's rapid may correspond to another's
> slower-than-a-snail-in-a-straitjacket.
>
> cheers,
>
> paulv
regards,
Jentje Goslinga
>> (at Sun and characterization of
>> coding practices reflecting lack of knowledge
>Here we go again, I do not have to wait very long do I:
>You have no idea about my knowledge. I didn't even write
>some of the examples attributed to me by some posters.
>I have designed dozens of projects in Fortran usually F77
>from 1980 to 1990 and subsequently in several contracts
>throughout the nineties and on top of that I have worked
>on maybe half a million lines of Fortran by others.
>I was paid very high wages to do this work, because I am
>an expert at Fortran and Technical Programming.
>I also have written hundreds of thousands of lines of C code
>and a large C++ code.
Yes, here we go again.
Fortran 90 and 95 are qualitatively different from Fortran 77. Long
experience with Fortran 77 does NOT make you an expert in them. Long
experience in C alone would not make one a competent judge of C++.
Based on your obvious ignorance of Fortran 95, which you do not wish to
address, you are NOT qualified to judge its technical merits as a
language for numerical linear algebra or anything else. I am not
denying your knowledge of F77, C, and numerical analysis.
My first reply to you was a bit aggressive and did contain some (true)
pro-Fortran propaganda, inspired by the tone and content of your
message, but I did explain how some of the technical deficiencies you
cite in Lapack are addressed in Fortran 95, using allocatable or
automatic arrays and optional arguments.
I never heard of you (nor I think did you hear of me) before this
thread, and I do not wish you ill. But it raises my blood pressure when
people criticize Fortran based on defects fixed more than a decade ago.
It's a good thing there is a cardiologist in my family, because this
happens a lot :).