
Fortran is really very fast.


Liwen Zhang

Feb 20, 2012, 6:56:44 AM
I am new to Fortran. I used a short program to test Fortran's speed: it
takes my computer 9.8 seconds to execute the following program, while Java
takes 1200 seconds, Visual C++ takes 1000 seconds, Mathematica 8.0
takes 6000 seconds, and Matlab takes 4500 seconds. Fortran is really very
fast.

program hello
  implicit none
  real :: x, y, n, i, j, t1, t2
  call CPU_TIME(t1)
  n = 30000.0
  j = 1.0
  do while (j < n)
    i = 1.0
    do while (i < n)
      x = sin(45.0) * asin(0.5) * sqrt(5.000000) * atan(2.5555)
      i = i + 1.0
    end do
    j = j + 1
  end do
  call CPU_TIME(t2)
  print *, t2-t1
  call sleep(10)
end program hello

Nasser M. Abbasi

Feb 20, 2012, 7:15:47 AM
Why not post your Matlab and Mathematica code here?

It is easy to write the same algorithm in an inefficient
way in a different language if one does not take
advantage of that language's features as well as one
can. I am not saying you did that, but without
seeing the code, one has to take it on faith that you
wrote the Matlab and Mathematica versions efficiently.

For example, here is an example of Matlab being five
times faster than Fortran at multiplying two matrices:

http://www.mathworks.com/matlabcentral/newsreader/view_thread/23949

Looking forward to seeing your Matlab and Mathematica code.

--Nasser

Erik Toussaint

Feb 20, 2012, 8:03:53 AM
I'm surprised it takes any time to run at all (apart from the call to
sleep). Since the result of the calculation is not used in an output
statement, a (human) optimizer could cut out the entire calculation and
create a program that produces the same result (i.e. nothing), but runs
in 0.0 seconds.

Erik.

John Appleyard

Feb 20, 2012, 8:11:49 AM
Hi

I'm surprised it takes that long. A decent optimizer will recognize
that, because nothing is done with x, the program doesn't need to
calculate it (or anything else, except t2-t1)!

C:\tmp>ifort hello.f90 -fast
..
C:\tmp>hello
0.0000000E+00

C:\tmp>

Benchmarking can be a tricky business.

JA

Liwen Zhang

Feb 20, 2012, 9:02:34 AM
The following is my Matlab code. Since it takes too long to run for
n=30000, here I only use n=3000; the time for n=30000 should be 100 times
as long. Sorry, I do not know how to get a precise timing in Matlab, so
anyone testing the code will need a watch to see how long it takes.

n=3000;
j=1;
while j<n
    i=1;
    while i<n
        x=sin(45.0)*asin(0.5)*sqrt(5.0)*atan(2.5555);
        i=i+1;
    end
    j=j+1;
end

Liwen Zhang

Feb 20, 2012, 9:22:01 AM
The following is the code in Mathematica. It takes 68 seconds when
n=3000; when n=30000, the time should be 100 times as long as 68
seconds.

Time1 = AbsoluteTime[];
n = 3000;
Do[Do[x = Sin[45.0]*ArcSin[0.5]*Sqrt[5.0]*ArcTan[2.5555], {n}], {n}];
Time2 = AbsoluteTime[];
Print[Time2 - Time1];

Gordon Sande

Feb 20, 2012, 9:22:59 AM
On 2012-02-20 09:11:49 -0400, John Appleyard said:

> Hi
>
> I'm surprised it takes that long. A decent optimizer will recognize
> that, because nothing is done with x, the program doesn't need to
> calculate it (or anything else, except t2-t1)!

Even without any dead-code analysis, this code is sped up by weaker
optimizations. The various sin(45.0) terms are loop invariants, so they
may be moved outside the loop; the values may even be precalculated. x is
also a loop invariant and may be treated as a constant. These are pretty
standard optimizations that Fortran users have come to expect, and
published benchmarks tend to reinforce this. Even the couple of vendors
that put more effort into debugging do a decent job of the basic
optimizations. The Fortran standard (i.e. its specification) avoids
constructs which get in the way of optimization (to the point that they
can be error prone for someone who has not quite gotten the intent of the
design).

This is all rather obvious to existing Fortran users but may not be so apparent
to new users like the OP.

Richard Maine

Feb 20, 2012, 11:20:17 AM
Liwen Zhang <gears...@gmail.com> wrote:

> The following is the code in Mathematica, it takes 68 seconds when
> n=3000, when n=30000, the time should be 100 times as long as 68
> seconds.

Well, no, that isn't necessarily so at all. As John mentioned,
"Benchmarking can be a tricky business". Sometimes it can be very
tricky. With this one, it is easy to see how a compiler could optimize it,
making its behavior very different from what you might expect. Other cases
can be much more subtle. That's why standard advice for several decades
has been to do speed tests with actual applications instead of with
artificial benchmarks like this. If the compiler manages to do
impressive optimizations with your actual application, then that's a
great thing.

In addition to all the usual issues of possible optimization, even if
the [elided] code compiled without optimization at all, I would not
expect it to tell very much interesting. Naively compiled with no
optimizations at all - not even the evaluation of obvious compile-time
expressions, which barely even counts as optimization - I'd guess many
compilers might do that even with optimization turned off - the time for
this should be dominated by the evaluation of the trig functions. That
would be a function of the run-time library and have very little to do
with the language.

I'd say that you would be a lot better off, ideally, testing with
your real applications; or, if that is impractical because you are still
trying to decide how to develop the application, reading what has
been done on the subject. There is a pretty big gap between what you are
showing and what is likely to give meaningful performance measures - a
bigger gap than you are likely to be able to cross in a short time.

--
Richard Maine | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle | -- Mark Twain

glen herrmannsfeldt

Feb 21, 2012, 12:16:32 AM
Liwen Zhang <gears...@gmail.com> wrote:

> I am new to Fortran. I used a short program to test Fortran's speed: it
> takes my computer 9.8 seconds to execute the following program, while Java
> takes 1200 seconds, Visual C++ takes 1000 seconds, Mathematica 8.0
> takes 6000 seconds, and Matlab takes 4500 seconds. Fortran is really very
> fast.

Note what others have said about optimization.

In most real programs that spend most time in nested loops,
Java (with JIT on), C, C++, and Fortran will be reasonably
close in time.

Interpreters like Matlab and Mathematica can be much slower,
depending on how the program is written.

-- glen

Tobias Burnus

Feb 21, 2012, 4:25:35 AM
On 02/20/2012 05:20 PM, Richard Maine wrote:
> In addition to all the usual issues of possible optimization, even if
> the [elided] code compiled without optimization at all, I would not
> expect it to tell very much interesting. Naively compiled with no
> optimizations at all - not even the evaluation of obvious compile-time
> expressions, which barely even counts as optimization - I'd guess many
> compilers might do that even with optimization turned off - the time for
> this should be dominated by the evaluation of the trig functions. That
> would be a function of the run-time library and have very little to do
> with the language.

Actually, I think some compilers will evaluate the trigonometric
functions already at compile time. I think Fortran 2003 allows them in
initialization expressions, thus, the compiler should have the means to
evaluate them always at compile time, if possible.

In case of gfortran, the line
x = sin(45.0) * asin(0.5) * sqrt(5.000000) * atan(2.5555)
will be optimized to
x = 1.19329750537872314453125e+0;
early during compilation (still in the front-end part and with all
optimization levels).

With optimization, this line will be optimized away as "x" is not used.
(That should also work for compilers which do not optimize the
trigonometric expressions at compile time.)

Thus, with optimization, I expect that the only difference is whether
the loops are optimized away or not. As the loop variables are floating
point numbers, one needs to be careful: the loop may only be optimized
away if "n - 1.0 /= n". Otherwise, the compiler has to generate an
endless loop. Only about half of my compilers optimize the loop away.

Tobias

Christoph Naumann

Feb 21, 2012, 4:38:27 AM
30000/3000 = 10, not 100.

Liwen Zhang schrieb:

Wolfgang Kilian

Feb 21, 2012, 4:55:55 AM
On 02/21/2012 10:25 AM, Tobias Burnus wrote:
> On 02/20/2012 05:20 PM, Richard Maine wrote:
>> In addition to all the usual issues of possible optimization, even if
>> the [elided] code compiled without optimization at all, I would not
>> expect it to tell very much interesting. Naively compiled with no
>> optimizations at all - not even the evaluation of obvious compile-time
>> expressions, which barely even counts as optimization - I'd guess many
>> compilers might do that even with optimization turned off - the time for
>> this should be dominated by the evaluation of the trig functions. That
>> would be a function of the run-time library and have very little to do
>> with the language.
>
> Actually, I think some compilers will evaluate the trigonometric
> functions already at compile time. I think Fortran 2003 allows them in
> initialization expressions, thus, the compiler should have the means to
> evaluate them always at compile time, if possible.

In principle, "interpreted" languages may also do this, in particular if
there is some pre-compilation or just-in-time compilation involved.

I guess Mathematica will evaluate the expression every time, however.
One reason is that the definition of symbols like Sin[...] can be
modified at runtime anywhere in the execution flow. I don't know about
Matlab.

-- Wolfgang

--
E-mail: firstnameini...@domain.de
Domain: yahoo

John Appleyard

Feb 21, 2012, 8:35:50 AM
On 21/02/2012 9:25 AM, Tobias Burnus wrote:
> Thus, with optimization, I expect that the only difference is whether
> the loops are optimized away or not. As the loop variables are floating
> point numbers, one needs to be careful: the loop may only be optimized
> away if "n - 1.0 /= n". Otherwise, the compiler has to generate an
> endless loop. Only about half of my compilers optimize the loop away.

Interesting - but does the compiler really "have to" generate an endless
loop if we set n to 1E20? I would have thought it could legally perform
a miracle and reduce the calculation time from infinity to a few
microseconds! The only things it has to do concern t1 and t2.
--
JA

Richard Maine

Feb 21, 2012, 11:56:49 AM
Tobias Burnus <bur...@net-b.de> wrote:

> On 02/20/2012 05:20 PM, Richard Maine wrote:
> > In addition to all the usual issues of possible optimization, even if
> > the [elided] code compiled without optimization at all, I would not
> > expect it to tell very much interesting. Naively compiled with no
> > optimizations at all - not even the evaluation of obvious compile-time
> > expressions, which barely even counts as optimization - I'd guess many
> > compilers might do that even with optimization turned off - the time for
> > this should be dominated by the evaluation of the trig functions. That
> > would be a function of the run-time library and have very little to do
> > with the language.
>
> Actually, I think some compilers will evaluate the trigonometric
> functions already at compile time.

That's what I was trying to say, albeit with slightly complicated
wording.

user1

Feb 21, 2012, 5:04:45 PM

> On 2/20/2012 5:56 AM, Liwen Zhang wrote:
>> I am new to Fortran. I used a short program to test Fortran's speed: it
>> takes my computer 9.8 seconds to execute the following program, while Java
>> takes 1200 seconds, Visual C++ takes 1000 seconds, Mathematica 8.0
>> takes 6000 seconds, and Matlab takes 4500 seconds. Fortran is really very
>> fast.
>>


I'd sure like to see the Visual C++ code. Being a compiled language, I
am surprised it has a speed so much slower than Fortran, and more
comparable to an interpreted language.


dpb

Feb 21, 2012, 5:10:46 PM
Optimization options, undoubtedly????

Will Visual C++ find the deadcode/move invariant code out of loop w/
default switch settings "out of the box"? I'd venture if OP built it in
the IDE, it probably was a debug-build, not production.

--

user1

Feb 21, 2012, 5:39:05 PM
dpb wrote:
> On 2/21/2012 4:04 PM, user1 wrote:
>>
>>> On 2/20/2012 5:56 AM, Liwen Zhang wrote:
>>>> I am new to Fortran. I used a short program to test Fortran's speed, it
>>>> takes my computer 9.8 seconds to execute the following program, while Java
>>>> takes 1200 seconds, Visual C++ takes 1000 seconds, Mathematica 8.0
>>>> takes 6000 seconds, and Matlab takes 4500 seconds. Fortran is really very
>>>> fast.
>>>>
>>
>>
>> I'd sure like to see the Visual C++ code. Being a compiled language, I
>> am surprised it has a speed so much slower than Fortran, and more
>> comparable to an interpreted language.
>
> Optimization options, undoubtedly????
>
> Will Visual C++ find the deadcode/move invariant code out of loop w/
> default switch settings "out of the box"? I'd venture if OP built it in
> the IDE, it probably was a debug-build, not production.
>
> --
>


Who knows, ... not me. For me, "gfortran -O0" did it in about 4 seconds,
and I haven't bothered to look at any generated assembler to see whether
that invariant calculation is being moved out of the loop, but I would
doubt it is.




dpb

Feb 21, 2012, 5:50:01 PM
On 2/21/2012 4:39 PM, user1 wrote:
...

> Who knows, ... not me. For me, "gfortran -O0" did it in about 4 seconds,
> and I haven't bothered to look at any generated assembler to see whether
> that invariant calculation is being moved out of the loop, but I would
> doubt it is.
...

For all of the above (and more), I don't think it's worth
pursuing... certainly the OP's conclusion of some general hierarchy of
relative speed differences between various languages is grossly
inaccurate, given what he's done. Hopefully at least that message got
through...

--

glen herrmannsfeldt

Feb 21, 2012, 7:12:42 PM
user1 <us...@example.net> wrote:

(snip)
>>> I'd sure like to see the Visual C++ code. Being a compiled language, I
>>> am surprised it has a speed so much slower than Fortran, and more
>>> comparable to an interpreted language.

>> Optimization options, undoubtedly????

>> Will Visual C++ find the deadcode/move invariant code out of loop w/
>> default switch settings "out of the box"? I'd venture if OP built it in
>> the IDE, it probably was a debug-build, not production.

> Who knows, ... not me. For me, "gfortran -O0" did it in about 4 seconds,
> and I haven't bothered to look at any generated assembler to see whether
> that invariant calculation is being moved out of the loop, but I would
> doubt it is.

To me, and to most C compilers, it isn't dead code or moving things
out of the loop, but constant expression evaluation.

C compilers for some time allowed constant expressions in places
where Fortran didn't. Try:

REAL X(3+3)

in Fortran 66, or, I believe Fortran 77.

But yes, over the years Fortran compilers have done much better than some
other languages at common subexpression elimination, strength reduction,
and moving invariants out of loops.

C required constant expression evaluation before Fortran did.

All of which shows that this isn't a good benchmark.

-- glen

user1

Feb 22, 2012, 8:07:18 AM
On 2/21/2012 5:50 PM, dpb wrote:

>
> For all of the above (and more), I don't think it's worth
> pursuing... certainly the OP's conclusion of some general hierarchy of
> relative speed differences between various languages is grossly
> inaccurate, given what he's done. Hopefully at least that message got
> through...


You should have known that I wouldn't leave it alone, even if it is
meaningless ;-) I ran a variation of the OP's code through Visual C++. All
you need is a bunch of semicolons and curly brackets. Win32 console
project, Visual Studio 2010 Express, source stored in a main.c file (not
C++). I used the IDE and ran both debug and release builds, and there is a
huge difference between the two: the release version runs in 1.390 secs,
the debug version in 140.919 secs.

The gfortran code does it in 3.67 seconds (-O0) and 1.359 secs (-O3). As
nearly as I can tell, with gfortran, the invariant calculation is never
actually calculated at run time, not even with the -O0 optimization
switch. In fact, you get almost identical timings with it commented out.

-------------------------------------------------------------------------------------------


#include <stdio.h>
#include <math.h>
#include <time.h>

#define real double

int main()
{
    real x, y, n, i, j, t1, t2;

    n = 30000.0;
    j = 1.0;
    t1 = clock();
    while (j < n) {
        i = 1.0;
        while (i < n) {
            x = sin(45.0) * asin(0.5) * sqrt(5.000000) * atan(2.5555);
            i = i + 1.0;
        }
        j = j + 1;
    }
    t2 = clock();
    printf("Done in %f secs\n", (t2-t1)/CLOCKS_PER_SEC);
    getchar();
    return 0;
}

Tobias Burnus

Feb 22, 2012, 10:57:30 AM
On 02/22/2012 02:07 PM, user1 wrote:
> The gfortran code does it in 3.67 seconds (-O0) and 1.359 secs (-O3). As
> nearly as I can tell,with gfortran, the invariant calculation is never
> actually calculated at run time, not even with the -O0 optimization
> switch. In fact, you get almost identical timings with it commented out.

To see what GCC (gcc, g++, gfortran, ...) does, you can use
-fdump-tree-original and -fdump-tree-optimized, which create "dump"
files with a C-like syntax - one before and one after optimization.
(Note: the dumps are only C-like, not C, and they do not contain all of
the information; e.g., INTENT(IN) is honoured, but the corresponding
internal flag would be settable in an actual C program.)

The next step is to look at the assembler, but typically, the tree dumps
are much more readable.

Side topic: Regarding
double n, i, j;
n = 30000.0;
while (j < n) {
while (i < n) {
...
i = i + 1.0;
j = j + 1;

(or the equivalent Fortran version in the first post), I was asked by a
GCC middle end developer whether such constructs are common enough that
the compiler should optimize them.

Therefore: How common are floating-point loop index variables in
programs you know? I haven't encountered any so far in the codes I use,
but I won't rule out that I have simply missed them.

(I know code of the form "do ... while (abs(difference) > eps)", but I
think that differs from the optimization problem for the loop above.
Recall that one issue with loops like the one above is whether "n-1.0 ==
n" or not.)

Tobias

user1

Feb 22, 2012, 11:21:17 AM
Tobias Burnus wrote:

> Therefore: How common are floating-point loop index variables in
> programs you know? I haven't encountered any so far in the codes I use,
> but I won't rule out that I have simply missed them.

Not common at all. I was trying to duplicate the posted Fortran code,
which used floating-point loop counters. I haven't looked at gcc or
gfortran tree dumps either, and probably won't.

I've always found VC++ to be a decent compiler, and was puzzled by the
comment that VC++ was much slower than Fortran.



Ron Shepard

Feb 22, 2012, 1:05:37 PM
In article <4F45106A...@net-b.de>,
Tobias Burnus <bur...@net-b.de> wrote:

> Side topic: Regarding
> double n, i, j;
> n = 30000.0;
> while (j < n) {
> while (i < n) {
> ...
> i = i + 1.0;
> j = j + 1;
>
> (or the equivalent Fortran version in the first post), I was asked by a
> GCC middle end developer whether such constructs are common enough that
> the compiler should optimize them.
>
> Therefore: How common are floating-point loop index variables in
> programs you know? I haven't encountered any so far in the codes I use,
> but I won't rule out that I have simply missed them.

You should not spend any effort at all optimizing code like the
above in fortran. In fortran, one would use DO loops rather than DO
WHILE with increments of counters, and no real programmer would use
floating point variables in this way anyway.

Surely there are better ways for you guys to spend gfortran
development time. (I know, "Don't call me Shirley.")

$.02 -Ron Shepard

glen herrmannsfeldt

Feb 22, 2012, 3:03:50 PM
Ron Shepard <ron-s...@nospam.comcast.net> wrote:

(snip)
>> Side topic: Regarding
>> double n, i, j;
>> n = 30000.0;
>> while (j < n) {
>> while (i < n) {

(snip)
>> Therefore: How common are floating-point loop index variables in
>> programs you know? I haven't encountered any so far in the codes
>> I use, but I won't rule out that I have simply missed them.

Not very, but they do exist. There are a few cases where the
inexact loop count isn't so much of a problem. One that I have
thought of is axis labels on plots. Often you want fractional
increments and, if there is or isn't one last label it doesn't
matter so much.

> You should not spend any effort at all optimizing code like the
> above in fortran. In fortran, one would use DO loops rather than DO
> WHILE with increments of counters, and no real programmer would use
> floating point variables in this way anyway.

Many years ago I would sometimes translate BASIC programs
into Fortran. BASIC (at least at the time) only had floating
point variables, so the obvious translation was to REAL.

Some other common interpreted languages only have floating point.

-- glen

dpb

Feb 22, 2012, 3:24:02 PM
On 2/22/2012 2:03 PM, glen herrmannsfeldt wrote:
> Ron Shepard<ron-s...@nospam.comcast.net> wrote:
...

>>> Therefore: How common are floating-point loop index variables in
>>> programs you know? I haven't encountered any so far in the codes
>>> I use, but I won't rule out that I have simply missed them.
>
> Not very, but they do exist. There are a few cases where the
> inexact loop count isn't so much of a problem. One that I have
> thought of is axis labels on plots. Often you want fractional
> increments and, if there is or isn't one last label it doesn't
> matter so much.
...
> Many years ago I would sometimes translate BASIC programs
> into Fortran. BASIC (at least at the time) only had floating
> point variables, so the obvious translation was to REAL.
>
> Some other common interpreted languages only have floating point.
...

The basic (no pun intended :) ) Matlab data type is DOUBLE, and until the
more recent revisions most math operations weren't implemented
out-of-the-box for the other classes (SINGLE, various integers, etc.), so
loops were implemented in floating point, as were array indices,
etc., even if they were constrained to be integer values in many
instances. Loops can still be written with floating point, and the obvious
caveats of roundoff are still evident and arise in the cs-sm newsgroup
regularly.

Now, whether that's the reason OP chose to use reals in his example or
not, who knows?

--

Nasser M. Abbasi

Feb 22, 2012, 7:59:17 PM
On 2/22/2012 2:03 PM, glen herrmannsfeldt wrote:

> Not very, but they do exist. There are a few cases where the
> inexact loop count isn't so much of a problem. One that I have
> thought of is axis labels on plots. Often you want fractional
> increments and, if there is or isn't one last label it doesn't
> matter so much.

In Ada, a loop counter has to be of a discrete type; it can't be of a real
type. In addition, a loop counter is not allowed to be modified inside the
loop; the compiler will reject the code.

That is what a robust, safe language should be like. Too bad that
Ada does not have some of the nicer vector and matrix related
operations that Fortran has (vectored operations on indices and such),
or the many numerical libraries that exist for Fortran; otherwise
I think Ada would be the perfect language (for me) to use for number
crunching work (FEM, and such).

No language is perfect, I guess. Maybe some day someone will design
the perfect language for computational and numerical work.

--Nasser

Athanasios Migdalas

Feb 22, 2012, 8:50:46 PM
Nasser,

is MATMUL inherently unoptimized? It is beaten by orders of
magnitude by straight-loop Fortran implementations, even in row-wise
order! And this seems to happen with any compiler (gfortran, lahey, intel)
I've tested up to now. I think the example you mentioned should,
unfortunately, not be compared to MATMUL. Only when it runs
autoparallelized or with omp workshare (on a dual core, for instance) does
it have a chance of competing with the loop implementations of matrix
multiplication.


--
A.M.

glen herrmannsfeldt

Feb 23, 2012, 3:07:42 AM
dpb <no...@non.net> wrote:

(snip)

>>>> Therefore: How common are floating-point loop index variables in
>>>> programs you know? I haven't encountered any so far in the codes
>>>> I use, but I won't rule out that I have simply missed them.

(snip)
> The basic (no pun intended :) ) Matlab data type is DOUBLE, and until the
> more recent revisions most math operations weren't implemented
> out-of-the-box for the other classes (SINGLE, various integers, etc.), so
> loops were implemented in floating point, as were array indices,
> etc., even if they were constrained to be integer values in many
> instances.

The IBM OS/360 Fortran G and H compilers allowed REAL subscripts,
and most likely later compilers did, too. They didn't have many of
the extensions that others, like DEC, had, but they did have that one.

-- glen

dpb

Feb 23, 2012, 10:15:59 AM
Besides loops not running the desired number of counts, it's not uncommon
to see questions at cs-sm about non-integer indices into arrays.
Matlab generates a warning but rounds to the nearest integer and goes on
anyway (assuming the resultant index/indices are in bounds, as bounds
checking is always in effect). One can turn off the warning and get
silent operation, or have the warning trigger the debugger akin to a
fatal error.

--

Tobias Burnus

Feb 23, 2012, 4:01:58 PM
Nasser M. Abbasi wrote:
> In Ada, a loop counter has to be of a discrete type; it can't be of a real
> type. In addition, a loop counter is not allowed to be modified inside the
> loop; the compiler will reject the code.
>
> That is what a robust, safe language should be like. Too bad that
> Ada does not have some of the nicer vector and matrix related
> operations that Fortran has

If you ignore Fortran 77 and only look at Fortran 66 or Fortran
90/95/2003/2008, you also may not use nonintegers in a normal DO loop.
And in Fortran, too, modifying the loop iteration variable inside the loop
is not allowed. In the simple cases, the compiler will give a compile-time
error; in more complicated cases, some run-time checking flag might be
able to detect it.

However, I think Ada - like Fortran - allows one to use real loop
variables by writing the loop slightly differently. For the posting of the
thread starter, the following is valid Fortran (at least 90/95/2003/2008):

i = 1.0
do while (i < n)
  x = sin(45.0) * asin(0.5) * sqrt(5.000000) * atan(2.5555)
  i = i + 1.0
end do

For the compiler, there is no real difference between that loop and a
Fortran 77 loop which uses

do i = 1.0, n
  x = sin(45.0) * asin(0.5) * sqrt(5.000000) * atan(2.5555)
end do

Tobias

PS: For writing safe code, I think Ada's type declaration support
is rather nice, as it allows one to restrict the valid values. (I also saw
a proposal to add such a feature to Fortran. However, I am not sure
whether the effort to implement it in the compiler is in a good ratio
to the number of programs that would actually use it. Let's see which
new features Fortran 201x will add - the work on the successor of 2008
has not yet started.)

Thomas Koenig

Feb 23, 2012, 6:56:43 PM
dpb <no...@non.net> schrieb:

> Besides loops not running the desired number of counts,

Sure you could get burned with

do x=0., 1., 0.1

but (in the old days before I knew this was deleted from the
standards) I wrote

do x=0., 1.05, 0.1

which worked just fine.

dpb

Feb 23, 2012, 7:43:23 PM
For a specific definition of "worked" and "fine"... :)

--

Thomas Koenig

Feb 24, 2012, 2:15:19 PM
dpb <no...@non.net> schrieb:
This idiom reliably looped over the values 0., 0.1, ... with the
last value being 1.0.

No problem with roundoff error. Is there any problem that
I have missed?

Ron Shepard

Feb 25, 2012, 3:26:38 AM
In article <ji8nk7$vv5$1...@newsreader4.netcologne.de>,
Since the decimal 0.1 cannot be represented exactly in floating
point, of course there is a potential for problems. If the compiler
computes the values for "x" by incrementing a scalar, then that
roundoff error can accumulate. After 10 steps the next value might
be a little smaller than 1.0, it might be exactly 1.0, or it might
be a little bigger than 1.0. Further, the result might change
depending on compiler optimization options, or if your floating
point hardware rounding mode can be set, you could get different
results depending on that. If loop termination is determined from
this value, then you might get different numbers of iterations.

If the compiler computes an integer trip count before loop
execution, then the behavior will be a little different. There are
still several ways to compute the values of "x" in this case, so there
are still some uncertainties about the results.

There are several ways to write code like this without such
problems. One approach for integer "i" and real "x" is:

do i = 0, 10
x = i * 0.1
...

With this approach, the errors in the value of "x" are local to that
expression and do not accumulate. On the last iteration, the value
of "x" still might not be exactly 1.0, but you can clearly see why
in this case. I personally find this easier to understand than the
above floating point do loop versions. That is why I do not think
this feature should have ever been added to the language. It
introduces several potential problems with no apparent advantage
over the simpler integer-loop code.

$.02 -Ron Shepard

Thomas Koenig

Feb 25, 2012, 7:51:44 AM
Ron Shepard <ron-s...@NOSPAM.comcast.net> schrieb:
> In article <ji8nk7$vv5$1...@newsreader4.netcologne.de>,
> Thomas Koenig <tko...@netcologne.de> wrote:
>
>> dpb <no...@non.net> schrieb:
>> > On 2/23/2012 5:56 PM, Thomas Koenig wrote:
>> >> dpb<no...@non.net> schrieb:
>> >>
>> >>> Besides loops not running the desired number of counts,
>> >>
>> >> Sure you could get burned with
>> >>
>> >> do x=0., 1., 0.1
>> >>
>> >> but (in the old days before I knew this was deleted from the
>> >> standards) I wrote
>> >>
>> >> do x=0., 1.05, 0.1
>> >>
>> >> which worked just fine.
>>
>> > For a specific definition of "worked" and "fine"... :)
>>
>> This idiom reliably looped over the values 0., 0.1, ... with the
>> last value being 1.0.
>>
>> No problem with roundoff error. Is there any problem that
>> I have missed?
>
> Since the decimal 0.1 cannot be represented exactly in floating
> point, of course there is a potential for problems. If the compiler
> computes the values for "x" by incrementing a scalar, then that
> roundoff error can accumulate.

Theoretically, yes.

> After 10 steps the next value might
> be a little smaller than 1.0, it might be exactly 1.0, or it might
> be a little bigger than 1.0.

... which is why I chose 1.05 as the end value, so I don't think this
applies here.

Ron Shepard

Feb 25, 2012, 12:11:44 PM
In article <jialh0$aqi$1...@newsreader4.netcologne.de>,
Presumably you are focusing only on the issue of the number of
iterations of the loop. As I said in my original post, this may or
may not be relevant to the various problems that are associated with
the construct. In any case, the integer version of the do loop is
almost certainly clearer, easier to understand, and easier to
characterize than the floating point version, whether for unknown
programmers examining the code or for the original author examining
the code at later times. IMO, this was a bad feature to add to the
language (although it was only added temporarily), and it was a bad
feature for a programmer to use even when it was available.

$.02 -Ron Shepard

glen herrmannsfeldt

unread,
Feb 26, 2012, 3:10:46 AM2/26/12
to
Ron Shepard <ron-s...@nospam.comcast.net> wrote:

(snip, someone wrote)
>> No problem with roundoff error. Is there any problem that
>> I have missed?

> Since the decimal 0.1 cannot be represented exactly in floating
> point, of course there is a potential for problems.

Binary floating point, like floating point in most other bases, can't
represent 0.1 exactly, but decimal floating point can. Note that Fortran
specifically allows any base greater than one.


IBM now sells machines supporting binary, decimal, and hexadecimal
floating point, all in the same hardware. There might not yet be any
Fortran compilers
supporting decimal float, though.

> If the compiler
> computes the values for "x" by incrementing a scalar, then that
> roundoff error can accumulate. After 10 steps the next value might
> be a little smaller than 1.0, it might be exactly 1.0, or it might
> be a little bigger than 1.0. Further, the result might change
> depending on compiler optimization options, or if your floating
> point hardware rounding mode can be set, you could get different
> results depending on that. If loop termination is determined from
> this value, then you might get different numbers of iterations.

There are enough problems that don't depend on exact values.
Axis labels on plots are one example. Assuming they are rounded to
a reasonable number of digits, the labels will look fine. One might
not get the expected number of labels, but that is more of a cosmetic
problem. The plot is still readable.

> If the compiler computes an integer trip count before loop
> execution, then the behavior will be a little different. There are
> still several ways to compute the values "x" in this case, so there
> are still some uncertainties in what are the results.

If the end loop value is approximately halfway between two loop
values, then it should reliably stop as expected.

> There are several ways to write code like this without such
> problems. One approach for integer "i" and real "x" is:

> do i = 0, 10
> x = i * 0.1
> ...

> With this approach, the errors in the value of "x" are local to that
> expression and do not accumulate. On the last iteration, the value
> of "x" still might not be exactly 1.0, but you can clearly see why
> in this case. I personally find this easier to understand than the
> above floating point do loop versions. That is why I do not think
> this feature should have ever been added to the language. It
> introduces several potential problems with no apparent advantage
> over the simpler integer-loop code.

Yes it allows for problems. Designing a language that doesn't allow
you to do anything that could cause problems isn't easy.

Personally, I don't believe that one should even try. It is up to
the programmer to decide which features are useful and which aren't
for the particular problem at hand. I suppose I don't mind warnings
so much, but removing a feature because it can be misused seems
wrong to me.

-- glen

Ron Shepard

unread,
Feb 26, 2012, 12:28:31 PM2/26/12
to
In article <jicpe5$65s$2...@speranza.aioe.org>,
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> > If the compiler computes an integer trip count before loop
> > execution, then the behavior will be a little different. There are
> > still several ways to compute the values "x" in this case, so there
> > are still some uncertainties in what are the results.
>
> If the end loop value is approximately halfway between two loop
> values, then it should reliably stop as expected.

Consider this situation. An integer trip count is computed before
the loop, the initial value of "x" is set before the loop, and its
value is incremented each pass through the loop. The floating point
hardware is set to round down on the increment, and the floating
point increment happens to be smaller than the exact increment. In
this case, the value of "x" will always be smaller than the expected
value each pass through the loop. In such a case, it might appear
that there should have been one more pass through the loop, or that
the final pass through the loop did not succeed in reaching the
desired boundary value by one or more steps -- there are several
potential problems depending on the application.

However, if the floating point increments are set to round up in
this situation, then the final value of "x" will always be larger
than the mathematically correct value. With enough loop iterations,
it could even be large enough so that the last one or more values
that "x" has will be larger than the loop upper bound. In this
case, it might appear that there were "extra" loop iterations, or
that the final iterations overstepped the desired boundary value.
Again, there are several potential problems depending on the
application.

This is related to the way the trip count is computed for fortran do
loops. The trip count expression is defined in the fortran
standard, but the standard does not require that the do loop be
implemented that way. In C, this does not happen
because the loop structure itself defines how the value is
incremented and what test is done to terminate the loop. A C loop is
more like a DO WHILE loop in fortran, where the termination is
determined during loop execution rather than at the beginning.
The only way for the trip count to be well defined, so that it
agrees with the expected number of iterations in all cases in
fortran, is with integer do loop specifications.

$.02 -Ron Shepard