
"With A Little Help From My Friends"


Lynn McGuire

Mar 4, 2020, 4:39:19 PM
"With A Little Help From My Friends"

"I spent last week at my first Fortran Standards Committee meeting. It
was a pretty interesting experience. Everyone there was brilliant, and
interested in trying to do a good job improving the language. And yet,
it was still somehow very disfunctional."

Yup, Fortran is dysfunctional nowadays. And I doubt that is going to
change, the roots are just too old.

Lynn

Gary Scott

Mar 4, 2020, 5:56:39 PM
Don't know about the meeting, but Fortran has dramatically improved.
I'm fairly happy with it other than needing a well designed bit string
data type and perhaps decimal or maybe really-really-big-integer math
support.

Steve Lionel

Mar 4, 2020, 8:30:27 PM
I disagree with that premise and with the observation. In fact a lot has
changed over the past couple of years, with significantly increased
participation of the user community and a streamlined process that
should get new revisions published faster. We're almost done with the
technical content of Fortran 202X.

It's true that not everyone gets their way, but my experience is that we
have made excellent progress. I am also delighted to see so many new
(and younger) faces at the meetings.


--
Steve Lionel
ISO/IEC JTC1/SC22/WG5 (Fortran) Convenor
Retired Intel Fortran developer/support
Email: firstname at firstnamelastname dot com
Twitter: @DoctorFortran
LinkedIn: https://www.linkedin.com/in/stevelionel
Blog: http://intel.com/software/DrFortran
WG5: https://wg5-fortran.org

robin....@gmail.com

Mar 4, 2020, 9:14:05 PM
On Thursday, March 5, 2020 at 9:56:39 AM UTC+11, Gary Scott wrote:

> Don't know about the meeting, but Fortran has dramatically improved.
> I'm fairly happy with it other than needing a well designed bit string
> data type and perhaps decimal or maybe really-really-big-integer math
> support.

Bit strings and decimal data type have been available in PL/I since 1966.

IBM's PL/I currently offers decimal data to 31 digits
(both integer and fixed-point non-integer)

and 64-bit integers.

Decimal float has been available in the language since 1966 also,
though not every compiler provided decimal hardware.

With IEEE decimal floating-point available, IBM's compilers
use that hardware.

Gary Scott

Mar 4, 2020, 9:29:11 PM
:) I could have predicted this... but yes, I used PL/I on our mainframes
about three decades ago.

Lynn McGuire

Mar 4, 2020, 10:05:34 PM
Fortran has improved at the expense of backwards compatibility. My
750,000 line software requires zero initialization and autosave to run.
Works great with the old Unix F77 compiler and the Open Watcom F77
compiler. Does not work with any of the F90+ compilers that I have
tried. We are going to try GFortran again this year using the Simply
Fortran IDE now that they support multiple targets.
http://simplyfortran.com/

Sigh, here come the flames.

Lynn

steve kargl

Mar 4, 2020, 10:36:07 PM
Lynn McGuire wrote:

> On 3/4/2020 4:56 PM, Gary Scott wrote:
>> On 3/4/2020 3:39 PM, Lynn McGuire wrote:
>>> "With A Little Help From My Friends"
>>>
>>> "I spent last week at my first Fortran Standards Committee meeting. It
>>> was a pretty interesting experience. Everyone there was brilliant, and
>>> interested in trying to do a good job improving the language. And yet,
>>> it was still somehow very disfunctional."
>>>
>>> Yup, Fortran is dysfunctional nowadays.  And I doubt that is going to
>>> change, the roots are just too old.
>>>
>>> Lynn
>> Don't know about the meeting, but Fortran has dramatically improved. I'm
>> fairly happy with it other than needing a well designed bit string data
>> type and perhaps decimal or maybe really-really-big-integer math support.
>
> Fortran has improved at the expense of backwards compatibility. My
> 750,000 line software requires zero initialization and autosave to run.

This suggests that your code does not conform to the Fortran 77 standard.
Perhaps reviewing Fortran 77, section 2.11, might help you gain an
understanding of your problem.

> Works great with the old Unix F77 compiler and the Open Watcom F77
> compiler.

Of course; it depends on processor-dependent behavior.

> Does not work with any of the F90+ compilers that I have
> tried. We are going to try GFortran again this year using the Simply
> Fortran IDE now that they support multiple targets.
> http://simplyfortran.com/
>
> Sigh, here come the flames.

No flames. Just the reality. You have been posting laments about
your inability to fix your code here for at least a decade. If you
are keen on getting your code to run to exploit modern hardware,
it may behoove you to hire a consultant.

--
steve



Lynn McGuire

Mar 5, 2020, 12:07:02 AM
Sorry, but our software went commercial back in 1969 on the UCC
(University Computing Center) time sharing service on their 64K word
Univac 1108. Fortran IV (66) on a good day, with 6 bit bytes (no lower
case) and 36 bit words. Since then we have ported the software to 12?
13? 14? platforms. I started working with the software in 1975.
Never a problem with auto initialization or auto save in all that time
until we tried porting to the F90+ compilers.

Lynn

steve kargl

Mar 5, 2020, 12:51:39 AM
Not sure what you're sorry about. It does not matter what you did in
1969 or in the 12 or 13 ports you've done. If the code has always
required auto initialization and auto save, then it has never conformed
with any Fortran standard, including Fortran 66.

Yes, I know, your software is a commercial product. This is precisely
why I suggested you hire a consultant who is capable of fixing your
nonconforming Fortran code. Reading your complaints that no
F90+ compiler can do anything useful with your code is getting tiresome.

--
steve

Louis Krupp

Mar 5, 2020, 1:26:58 AM
I'll second the consultant suggestion. Your list of requirements could
include:

1. Familiarity with modern Fortran and its antecedents.
2. Ability to modify code while resisting the urge to rewrite stuff
without a really good reason.
3. Tolerance for pain.

Louis

spectrum

Mar 5, 2020, 6:39:12 AM
On Thursday, March 5, 2020 at 12:05:34 PM UTC+9, Lynn McGuire wrote:

> Fortran has improved at the expense of backwards compatibility. My
> 750,000 line software requires zero initialization and autosave to run.
> Works great with the old Unix F77 compiler and the Open Watcom F77
> compiler. Does not work with any of the F90+ compilers that I have
> tried. We are going to try GFortran again this year (...snip...)

As for zero initialization and autosave, aren't options like "-finit-local-zero" and
"-fno-automatic" useful for that purpose?
I guess similar options are available for other compilers (but I have not checked myself...)


From the man page of gfortran-9 ("man gfortran-9" on the terminal of MacOS10.11):

-finit-local-zero
-finit-derived
-finit-integer=n
-finit-real=<zero|inf|-inf|nan|snan>
-finit-logical=<true|false>
-finit-character=n
The -finit-local-zero option instructs the compiler to initialize
local "INTEGER", "REAL", and "COMPLEX" variables to zero,
"LOGICAL" variables to false, and "CHARACTER" variables to a
string of null bytes. Finer-grained initialization options are
provided by the -finit-integer=n,
-finit-real=<zero|inf|-inf|nan|snan> (which also initializes the
real and imaginary parts of local "COMPLEX" variables),
-finit-logical=<true|false>, and -finit-character=n (where n is
an ASCII character value) options.
...

-fno-automatic
Treat each program unit (except those marked as RECURSIVE) as if
the "SAVE" statement were specified for every local variable and
array referenced in it. Does not affect common blocks. (Some
Fortran compilers provide this option under the name -static or
-save.) The default, which is -fautomatic, uses the stack for
local variables smaller than the value given by
-fmax-stack-var-size. Use the option -frecursive to use no
static memory.

robin....@gmail.com

Mar 5, 2020, 8:26:05 AM
On Thursday, March 5, 2020 at 2:05:34 PM UTC+11, Lynn McGuire wrote:
> On 3/4/2020 4:56 PM, Gary Scott wrote:
> > On 3/4/2020 3:39 PM, Lynn McGuire wrote:
> >> "With A Little Help From My Friends"
> >>
> >> "I spent last week at my first Fortran Standards Committee meeting. It
> >> was a pretty interesting experience. Everyone there was brilliant, and
> >> interested in trying to do a good job improving the language. And yet,
> >> it was still somehow very disfunctional."
> >>
> >> Yup, Fortran is dysfunctional nowadays.  And I doubt that is going to
> >> change, the roots are just too old.
> >>
> >> Lynn
> > Don't know about the meeting, but Fortran has dramatically improved. I'm
> > fairly happy with it other than needing a well designed bit string data
> > type and perhaps decimal or maybe really-really-big-integer math support.
>
> Fortran has improved at the expense of backwards compatibility. My
> 750,000 line software requires zero initialization and autosave to run.

Zero initialization was never standard Fortran;
it was a particular manufacturer's extension.

> Works great with the old Unix F77 compiler and the Open Watcom F77
> compiler.

Those are 30+ year-old compilers and are long out-of-date.

> Does not work with any of the F90+ compilers that I have
> tried.

And probably never will.

ga...@u.washington.edu

Mar 5, 2020, 9:16:13 AM
On Thursday, March 5, 2020 at 5:26:05 AM UTC-8, robin...@gmail.com wrote:

(snip)

> Zero initialization was never standard Fortran ;
> it was a particular manufacturer's extension.

Or an accident.

For many years, you got whatever was leftover in memory, either
before the program was loaded, or was filled in by the linkage
editor. (For OS/360, it is a combination of the two.)

At some point, it was decided that was a security risk, and
memory had to be cleared. Whatever the linkage editor filled
in was yours, and not someone else's, so that was fine.

I do remember a system in the late OS/360 days that initialized
with X'81', both in the linkage editor and before program fetch.
That worked pretty well for Fortran (static data).

C guarantees static data is initialized to zero, but automatic
(stack) data gets whatever is there. This requirement leaks over
into other systems using, for example, the same linker.

Years ago (again, OS/360 days) I had a program that I suspected
was using either uninitialized data or data out of array bounds,
and I had WATFIV. WATFIV was one of the few compilers at the time
to check array bounds (not optional), and also check for uninitialized
data. Some runs with that, and some careful selection of input data
got that pretty well cleaned up.

What some people are saying to Lynn is that it is about time
to fix the program. If you set floating point data to SNaN, it
should be somewhat fast to find. Setting integers to X'81818181'
tends to cause some problems near the mistake, such that you
can find them. In addition, you might find other bugs that
were not previously known.
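The poisoning technique gah describes can be sketched in a few lines. This is a purely illustrative Python sketch of the idea, not anyone's actual code; note that Python only has quiet NaNs, whereas a true SNaN (as with gfortran's -finit-real=snan) would additionally trap at first use rather than merely propagate.

```python
import math

# Sketch of the "poison uninitialized storage" debugging technique:
# fill storage with NaN so any computation that reads an uninitialized
# slot yields NaN, making the bug visible near the point of use.
def poisoned(n):
    """Return n slots of 'uninitialized' storage, poisoned with NaN."""
    return [float("nan")] * n

work = poisoned(4)
work[0] = 2.0                 # properly initialized
work[1] = 3.0
ok = work[0] + work[1]        # both slots initialized: a normal result
bad = ok + work[2]            # work[2] was never set: NaN propagates

print(ok, math.isnan(bad))
```

The same idea applies to integers: a distinctive bit pattern such as X'81818181' does not trap, but it produces values absurd enough to fail quickly and visibly near the mistake.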

mecej4

Mar 5, 2020, 10:46:46 AM
On 3/4/2020 3:39 PM, Lynn McGuire wrote:
> Yup, Fortran is dysfunctional nowadays.  And I doubt that is going to
> change, the roots are just too old.

I can see a mountain a dozen miles away. Ten years ago, I asked it to
come to me, but it just stays there.

It's a dysfunctional mountain. And I doubt that is going to change, the
roots are just too old.

I hear that people go to the mountain, climb up and come back happier.
They must not be very smart, those people. A pity.

-- mecej4

Dick Hendrickson

Mar 5, 2020, 11:10:16 AM
On 3/5/20 8:16 AM, ga...@u.washington.edu wrote:
> On Thursday, March 5, 2020 at 5:26:05 AM UTC-8, robin...@gmail.com wrote:
>
> (snip)
>
>> Zero initialization was never standard Fortran ;
>> it was a particular manufacturer's extension.
>
> Or an accident.
>
> For many years, you got whatever was leftover in memory, either
> before the program was loaded, or was filled in by the linkage
> editor. (For OS/360, it is a combination of the two.)
>
> At some point, it was decided that was a security risk, and
> memory had to be cleared. Whatever the linkage editor filled
> in was yours, and not someone else's, so that was fine.
>
One operating system, on a machine with 10 characters per word, sprayed
memory with

10HURAHORSES*

Dick Hendrickson

Ron Shepard

Mar 5, 2020, 12:08:17 PM
On 3/4/20 11:06 PM, Lynn McGuire wrote:
> Sorry but our software went commercial back in 1969 on the UCC
> (University Computing Center) time sharing service on their 64K word
> Univac 1108.  Fortran IV (66) on a good day with 6 bit bytes (no lower
> case) and 36 bit words.  Since then we have ported the software to 12 ?
> 13 ? 14 ? platforms.  I started working with the software in 1975. Never
> a problem with auto initialization or auto save in all that time until
> we tried porting to the F90+ compilers.


Nonetheless, your compatibility problems are because of your code, not
because of the fortran language evolution over the last 50 years. I (and
many others here) have told you step by step procedures how to fix your
code, yet you instead look for compiler options that allow you to
compile your nonconforming code to get it to work. I can understand why
you might not want to invest effort, but you cannot deflect the blame
away from your nonstandard code and onto the fortran language. Your code
was nonconforming in 1969, in 1976, in 1980, and at every point in time
since it existed. On the other hand, if your code had conformed to the
f77 standard, it would likely still run today with no changes, or, at
most, minor modifications. I know because I also wrote and use code from
that era that ran on Univac 1108 hardware and on dozens of other
hardware and software combinations since then.

$.02 -Ron Shepard

Ron Shepard

Mar 5, 2020, 12:20:14 PM
On 3/4/20 7:30 PM, Steve Lionel wrote:
> I disagree with that premise and with the observation. In fact a lot has
> changed over the past couple of years, with significantly increased
> participation of the user community and a streamlined process that
> should get new revisions published faster. We're almost done with the
> technical content of Fortran 202X.

Is there any chance that the final standards, the official versions,
will be made available for free downloads? The fact that they are behind
paywalls has been an odd, frustrating, feature of the standards process
for the last three decades.

Do you know how much income is actually generated for the committee by
the paywall?

$.02 -Ron Shepard

Brad Richardson

Mar 5, 2020, 2:31:45 PM
Hi Lynn,

Glad to see you're reading my blog, but a link back would be greatly appreciated (:P).

https://everythingfunctional.wordpress.com/

Regards,
Brad

Steve Lionel

Mar 5, 2020, 2:45:25 PM
On 3/5/2020 12:20 PM, Ron Shepard wrote:
> Is there any chance that the final standards, the official versions,
> will be made available for free downloads? The fact that they are behind
> paywalls has been an odd, frustrating, feature of the standards process
> for the last three decades.

It's ISO that has the copyright on the standard and they set the rules,
which includes requiring removal of online copies. The committee has
investigated operating under IEEE, which is less of a pain in this way,
but it's not our decision to make, as INCITS, which manages all US
language standards for ANSI, would have to do this en masse.

However, there's no reason you should be frustrated at not being able to
freely download the official ISO standard, if what you want is to read
what the standard says about the language. Keep reading.

> Do you know how much income is actually generated for the committee by
> the paywall

Yes - zero. I don't know how much ISO gets from sales of the standard,
but my understanding is that if you do buy it, you get a "print on
demand" copy in a shoddy binding. I don't know of any committee member
that has an official copy. Instead, we all reference something we call
the "Fortran 2018 Interpretation Document" which, unlike the official
standard, has line numbers.

FWIW, the only "income" PL22.3 (J3) gets is from meeting fees assessed
on attendees; this is used to pay for expenses during meetings. There is
no outside funding source. WG5 (the ISO committee) gets no income at
all. Worse, each principal member of J3 has to pay INCITS yearly for the
privilege of belonging. The rate is based on organization revenue, with
a minimum of $1300/yr (currently) for "Less than $1M" - this includes
individual members such as myself. Most organizations pay $2600/yr or more.

On a completely unrelated topic, you might want to visit the J3 site
(https://j3-fortran.org), select Documents > By Year > 2018, and
download 18-007r1. Any similarity to an official document is entirely
coincidental.

P.S. The WG5 web site (https://wg5-fortran.org/) has links to copies of
the text of earlier Fortran standards back to F66.

Lynn McGuire

Mar 5, 2020, 3:23:45 PM
Argh! I plead incompetence!

Sorry about that. I ALWAYS add the URL.

Lynn

Lynn McGuire

Mar 5, 2020, 3:24:21 PM
On 3/4/2020 3:39 PM, Lynn McGuire wrote:
Somehow, I incompetently left off the URL of the blog entry. Here it is.


https://everythingfunctional.wordpress.com/2020/03/04/with-a-little-help-from-my-friends/

Lynn

Lynn McGuire

Mar 5, 2020, 3:29:32 PM
On 3/4/2020 7:30 PM, Steve Lionel wrote:
> On 3/4/2020 4:39 PM, Lynn McGuire wrote:
>> "With A Little Help From My Friends"
>>
>> "I spent last week at my first Fortran Standards Committee meeting. It
>> was a pretty interesting experience. Everyone there was brilliant, and
>> interested in trying to do a good job improving the language. And yet,
>> it was still somehow very disfunctional."
>>
>> Yup, Fortran is dysfunctional nowadays.  And I doubt that is going to
>> change, the roots are just too old.
>
> I disagree with that premise and with the observation. In fact a lot has
> changed over the past couple of years, with significantly increased
> participation of the user community and a streamlined process that
> should get new revisions published faster. We're almost done with the
> technical content of Fortran 202X.
>
> It's true that not everyone gets their way, but my experience is that we
> have made excellent progress. I am also delighted to see so many new
> (and younger) faces at the meetings.

The number one dysfunctional item in Fortran is variable typing and
declaration. As of the 1977 spec, much less the 1990 spec, all Fortran
variables should have been explicitly declared and typed. The implicit
rule is OK for small programs, but not at all for anything more than,
say, a thousand lines of code.

You would not believe the pushback that I got in my shop when I decided
that we were going to get rid of implicit typing. I had one PhD
Chemical Engineer complain for ten years about having to declare and
type his variables.

Lynn

Steve Lionel

Mar 5, 2020, 5:41:29 PM
On 3/5/2020 3:29 PM, Lynn McGuire wrote:
> The number one dysfunctional item in Fortran is variable typing and
> declaration.  At the 1977 specs, much less the 1990 specs, all Fortran
> variables should be explicitly declared and typed.  The implicit rule is
> ok for small programs.  Not at all for anything more than say, a
> thousand lines of code.

Implicit typing is bad, to be sure. That's why the language added
IMPLICIT NONE (and enhanced that with the ability to do away with
implicit external attribute for procedure calls in F2018). Most (all?)
compilers have a command line option to globally disable implicit
typing. If you want to enforce that as a coding style, you can.

Like it or not, implicit typing has been in the language forever, and
uncounted thousands of programs rely on it. The fastest way to get
people to stop using a language is to change it so that their programs
break, with high effort required to adapt. Is that what you want? Do you
think that's what compiler vendors want - to have their users angry that
a new compiler version broke their previously conforming code?

I expect that in future revisions we will be adding more control over
implicit behaviors (implicit SAVE on initialization is a candidate), but
will do it in an "opt-in" fashion.

ga...@u.washington.edu

Mar 5, 2020, 6:08:05 PM
On Thursday, March 5, 2020 at 2:41:29 PM UTC-8, Steve Lionel wrote:

(snip)

> Implicit typing is bad, to be sure. That's why the language added
> IMPLICIT NONE (and enhanced that with the ability to do away with
> implicit external attribute for procedure calls in F2018). Most (all?)
> compilers have a command line option to globally disable implicit
> typing. If you want to enforce that as a coding style, you can.

It should be fairly easy to write a program that will go through
a program and put in the IMPLICIT NONE at the right place. I believe
this can be done in one pass.

I suspect that there are some tricky cases needed to get it right,
but it shouldn't be all that hard. One could even write a Makefile
that would pass each input file through the conversion, and then to
the compiler. Once such a program was written, we could forget
about this problem.

There should also be programs to replace some deleted or obsolescent
features with the appropriate replacement feature.

It should not be hard, though I suspect it takes two passes, to convert
ASSIGNed GOTO into computed GOTO.

I believe one pass is enough to convert computed GOTO
into SELECT/CASE with a GOTO in each CASE. Maybe ugly, but
it should work. Combine with above.

All these should only need to be done once.
(Well, possibly updated with new standard versions.)
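A minimal sketch of such a one-pass filter follows (Python here, purely for illustration; the function name, the regex, and the handling are all my own assumptions, not an existing tool). It recognizes only simple fixed-form unit headers, and deliberately ignores the tricky cases gah mentions: continuation lines, INTERFACE blocks, existing IMPLICIT statements, and main programs that lack a PROGRAM statement.

```python
import re

# Hypothetical pattern for a program-unit opening statement in
# fixed-form Fortran (PROGRAM, MODULE, BLOCK DATA, SUBROUTINE,
# and FUNCTION with an optional type prefix).
UNIT_RE = re.compile(
    r"^\s*(program|module|block\s*data|(recursive\s+)?subroutine|"
    r"((integer|real|logical|complex|character|double\s+precision)\s+)?function)\b",
    re.IGNORECASE,
)

def add_implicit_none(source: str) -> str:
    """One pass over the source: emit each line, and insert an
    IMPLICIT NONE line right after each unit-opening statement."""
    out = []
    for line in source.splitlines():
        out.append(line)
        # Skip fixed-form comment lines (C, c, or * in column 1).
        if line[:1] in ("C", "c", "*"):
            continue
        if UNIT_RE.match(line):
            out.append("      IMPLICIT NONE")
    return "\n".join(out) + "\n"

SAMPLE = (
    "      SUBROUTINE CALC(X)\n"
    "      REAL X\n"
    "      X = X + 1.0\n"
    "      END\n"
)

print(add_implicit_none(SAMPLE))
```

As written, the whole conversion is a single pass, which matches the claim above; making it robust against continuation lines and INTERFACE blocks is where the real effort would go.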

Gary Scott

Mar 5, 2020, 6:12:22 PM
Heck, I'd almost consider volunteering to do it for him just to get this
issue straightened out once and for all...it's been decades...

Steve Lionel

Mar 5, 2020, 6:35:12 PM
On 3/5/2020 2:31 PM, Brad Richardson wrote:

> https://everythingfunctional.wordpress.com/

Hi Brad - interesting post. I recognize that this was your first meeting
so some things might have been a bit confusing. I did want to correct
one item in your post, where you said "The committee is comprised mainly
of representatives from compiler vendors/writers, with a few
representatives from some large “users,” like NASA and some national labs."

This is not the case. Vendors are a decided minority on the US
committee, though recently AMD has joined. Of current Fortran vendors,
only Cray(HPE), IBM, Intel, NAG and Nvidia/PGI are represented. There
are eleven principal members from the user community, four from vendors;
only principals get a vote at the J3 level. (Malcolm Cohen from NAG is
not a principal member - as NAG no longer pays for his membership in J3,
he is my alternate!) We had some 20 or so attendees at the February
meeting, seven from vendors (NVidia sent two.)

When we consider the larger international committee (WG5), the fraction
of vendors is even smaller as only one other (Fujitsu) is represented -
the rest are users. It is WG5 that selects the features/changes that go
into revisions.

I look forward to working with you on defining the future of Fortran.

Lynn McGuire

Mar 5, 2020, 7:11:31 PM
On 3/5/2020 4:41 PM, Steve Lionel wrote:
> On 3/5/2020 3:29 PM, Lynn McGuire wrote:
>> The number one dysfunctional item in Fortran is variable typing and
>> declaration.  At the 1977 specs, much less the 1990 specs, all Fortran
>> variables should be explicitly declared and typed.  The implicit rule
>> is ok for small programs.  Not at all for anything more than say, a
>> thousand lines of code.
>
> Implicit typing is bad, to be sure. That's why the language added
> IMPLICIT NONE (and enhanced that with the ability to do away with
> implicit external attribute for procedure calls in F2018). Most (all?)
> compilers have a command line option to globally disable implicit
> typing. If you want to enforce that as a coding style, you can.
>
> Like it or not, implicit typing has been in the language forever, and
> uncounted thousands of programs rely on it. The fastest way to get
> people to stop using a language is to change it so that their programs
> break, with high effort required to adapt. Is that what you want? Do you
> think that's what compiler vendors want - to have their users angry that
> a new compiler version broke their previously conforming code?
>
> I expect that in future revisions we will be adding more control over
> implicit behaviors (implicit SAVE on initialization is a candidate), but
> will do it in an "opt-in" fashion.

We added a central include to all 6,000 subroutine files. The first
line of code in the include is IMPLICIT NONE.

We actually got the smaller program (only 250,000 lines of code) to work
without auto-save and auto-init. The bigger program, 600,000 lines of
code, is still being worked on intermittently when not putting out
fires. Lots of fires lately.

Lynn

robin....@gmail.com

Mar 5, 2020, 8:41:34 PM
On Friday, March 6, 2020 at 1:16:13 AM UTC+11, g....@u.washington.edu wrote:
> On Thursday, March 5, 2020 at 5:26:05 AM UTC-8, r.......@gmail.com wrote:
>
> (snip)
>
> > Zero initialization was never standard Fortran ;
> > it was a particular manufacturer's extension.
>
> Or an accident.

More likely a non-zero value.

robin....@gmail.com

Mar 5, 2020, 8:44:40 PM
On Friday, March 6, 2020 at 1:16:13 AM UTC+11, ga...@u.washington.edu wrote:
> On Thursday, March 5, 2020 at 5:26:05 AM UTC-8, robin...@gmail.com wrote:
>
> (snip)
>
> > Zero initialization was never standard Fortran ;
> > it was a particular manufacturer's extension.
>
> Or an accident.
>
> For many years, you got whatever was leftover in memory, either
> before the program was loaded, or was filled in by the linkage
> editor. (For OS/360, it is a combination of the two.)
>
> At some point, it was decided that was a security risk, and
> memory had to be cleared. Whatever the linkage editor filled
> in was yours, and not someone else's, so that was fine.
>
> I do remember a system in the late OS/360 days that initialized
> with X'81', both in the linkage editor and before program fetch.
> That worked pretty well for Fortran (static data).
>
> C guarantees static data is initialized to zero, but automatic
> (stack) data gets whatever is there. This requirement leaks over
> into other systems using, for example, the same linker.
>
> Years ago (again, OS/360 days) I had a program that I suspected
> was using either uninitialized data or data out of array bounds,
> and I had WATFIV. WATFIV was one of the few [FORTRAN] compilers at the time
> to check array bounds (not optional),

By the time WATFIV came along, PL/I had been checking for array bounds
violations for about five years.

> and also check for uninitialized
> data. Some runs with that, and some careful selection of input data
> got that pretty well cleaned up.
>
> What some people are saying to Lynn is that it is about time
> to fix the program. If you set floating point data to SNaN, it
> should be somewhat fast to find. Setting integers to X'81818181'
> tends to cause some problems near the mistake, such that you
> can find them. In addition, you might find other bugs that
> were not previously known.

Indeed.

robin....@gmail.com

Mar 5, 2020, 8:50:05 PM
On Friday, March 6, 2020 at 3:10:16 AM UTC+11, Dick Hendrickson wrote:
> On 3/5/20 8:16 AM, g.....@u.washington.edu wrote:
> > On Thursday, March 5, 2020 at 5:26:05 AM UTC-8, robin...@gmail.com wrote:
> >
> > (snip)
> >
> >> Zero initialization was never standard Fortran ;
> >> it was a particular manufacturer's extension.
> >
> > Or an accident.
> >
> > For many years, you got whatever was leftover in memory, either
> > before the program was loaded, or was filled in by the linkage
> > editor. (For OS/360, it is a combination of the two.)
> >
> > At some point, it was decided that was a security risk, and
> > memory had to be cleared. Whatever the linkage editor filled
> > in was yours, and not someone else's, so that was fine.
> >
> One operating system, on a machine with 10 characters per word, sprayed
> memory with
>
> 10HURAHORSES*

ha ha.
At least you knew that you were on the right course.

Some FORTRAN compilers on that same system computed the
address of the subscripted variable. If it fell withing the
array, it was considered OK, even if individual subscripts
of a multi-dimensional array were out of bounds.

robin....@gmail.com

Mar 5, 2020, 8:57:11 PM
On Friday, March 6, 2020 at 10:08:05 AM UTC+11, ga...@u.washington.edu wrote:
> On Thursday, March 5, 2020 at 2:41:29 PM UTC-8, Steve Lionel wrote:
>
> (snip)
>
> > Implicit typing is bad, to be sure. That's why the language added
> > IMPLICIT NONE (and enhanced that with the ability to do away with
> > implicit external attribute for procedure calls in F2018). Most (all?)
> > compilers have a command line option to globally disable implicit
> > typing. If you want to enforce that as a coding style, you can.
>
> It should be fairly easy to write a program that will go through
> a program and put in the IMPLICIT NONE at the right place. I believe
> this can be done in one pass.

But it won't add the implicitly typed variables to type declaration statements.

ga...@u.washington.edu

Mar 5, 2020, 9:01:08 PM
On Thursday, March 5, 2020 at 5:41:34 PM UTC-8, robin...@gmail.com wrote:
> On Friday, March 6, 2020 at 1:16:13 AM UTC+11, g....@u.washington.edu wrote:

(snip someone wrote)

> > > Zero initialization was never standard Fortran ;
> > > it was a particular manufacturer's extension.

(then I wrote)
> > Or an accident.

> More likely a non-zero value.

In the olden days, that was usual.

But then came security concerns, and so it is usual to clear
all memory to zeros before loading programs into it.

Many virtual memory systems now have one page filled with zeros,
which is marked read only and mapped to all pages in the user
address space. When a page is written to, the system allocates
an unused page for it, clears it (security again) and maps it
into the appropriate place. So, static storage is usually zero.

Also, since C requires static data to be zero, it is convenient.


robin....@gmail.com

Mar 5, 2020, 9:05:32 PM
On Friday, March 6, 2020 at 9:41:29 AM UTC+11, Steve Lionel wrote:

> Implicit typing is bad, to be sure.

Only because almost all FORTRAN compilers did not produce
a list of variables and their types.
Because they didn't, the programmer could not see any mis-spelled
variable names (especially O [capital o] typed instead of 0 [zero]).
As well, mis-typed variable names could creep through unannounced.

> That's why the language added
> IMPLICIT NONE (and enhanced that with the ability to do away with
> implicit external attribute for procedure calls in F2018). Most (all?)
> compilers have a command line option to globally disable implicit
> typing. If you want to enforce that as a coding style, you can.
>
> Like it or not, implicit typing has been in the language forever, and
> uncounted thousands of programs rely on it. The fastest way to get
> people to stop using a language is to change it so that their programs
> break, with high effort required to adapt.

It is not impossible for a tool to add any undeclared
variable names to type specification statements automatically.
However, doing so still requires a visual check for mis-spelled
variable names.
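The hazard being described here, a misspelled name silently becoming a new variable, has a close analogue in any language where assignment creates names implicitly. A minimal Python sketch (the function names `total_buggy` and `total_fixed` are hypothetical, chosen just for this illustration):

```python
# The implicit-typing hazard in miniature: assigning to a misspelled name
# silently creates a brand-new variable instead of updating the intended
# one, much as implicit typing silently declares a misspelled Fortran name.
def total_buggy(values):
    total = 0
    for v in values:
        totl = total + v   # typo: "totl" is a new variable, no diagnostic
    return total           # still 0 -- the typo went unannounced

def total_fixed(values):
    total = 0
    for v in values:
        total = total + v
    return total
```

With IMPLICIT NONE (or a compiler-produced cross-reference listing of variables and types), the Fortran version of this bug is caught at compile time rather than surviving to run time.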

Ron Shepard

Mar 6, 2020, 2:53:05 AM
On 3/5/20 1:45 PM, Steve Lionel wrote:

Thanks for the info. I have known other committee members over the
years, and I was aware that the individuals are volunteers and receive
no payment for their services, and also that their supporting
organizations usually have to foot the bill for travel expenses. I was
not aware that you also must pay INCITS. All that makes it even more
perplexing that your work is not distributed freely by default.

>> Do you know how much income is actually generated for the committee by
>> the paywall
>
> Yes - zero. I don't know how much ISO gets from sales of the standard,
> but my understanding is that if you do buy it, you get a "print on
> demand" copy in a shoddy binding. I don't know of any committee member
> that has an official copy. Instead, we all reference something we call
> the "Fortran 2018 Interpretation Document" which, unlike the official
> standard, has line numbers.

Is this Interpretation Document with the line numbers available for
download? And if so, is it essentially indistinguishable from the final
ISO text?

I do have a copy of 18-007r1, but when I reference it I always wonder if
the section I'm reading is identical to the official document.

Thanks again for the info.

$.02 -Ron Shepard

Steve Lionel

Mar 6, 2020, 8:55:00 AM
On 3/6/2020 2:53 AM, Ron Shepard wrote:
> Is this Interpretation Document with the line numbers available for
> download? And if so, is it essentially indistinguishable from the final
> ISO text?
>
> I do have a copy of 18-007r1, but when I reference it I always wonder if
> the section I'm reading is identical to the official document.

Let's put it this way - this is the document the committee uses when
referencing the standard for edits, interpretation requests and changes.
18-007r1 is what we use.

Clive Page

Mar 6, 2020, 10:06:06 AM
On 05/03/2020 17:08, Ron Shepard wrote:
> Nonetheless, your compatibility problems are because of your code, not because of the fortran language evolution over the last 50 years. I (and many others here) have told you step by step procedures how to fix your code, yet you instead look for compiler options that allow you to compile your nonconforming code to get it to work. I can understand why you might not want to invest effort, but you cannot deflect the blame away from your nonstandard code and onto the fortran language. Your code was nonconforming in 1969, in 1976, in 1980, and at every point in time since it existed. On the other hand, if your code had conformed to the f77 standard, it would likely still run today with no changes, or, at most, minor modifications. I know because I also wrote and use code from that era that ran on Univac 1108 hardware and on dozens of other hardware and software combinations since then.

Well that's true but I have somewhat more sympathy with Lynn. There were two problems concerning standard-conformance for programmers in the pre-Fortran90 era:

1. The Fortran77 language really didn't have everything that programmers needed to do the job in hand. That's why so much has been added since. Without using extensions you couldn't do: dynamically allocated memory, read/write stream files, read command-line arguments, issue a prompt to the user's terminal allowing the response to be given on the same line, declare variables with a precision suitable for a given number of digits, choose an unused I/O unit, (and lots more). In addition the shortage of memory led to the use of COMMON blocks and lots of complicated overlay schemes. These weren't necessarily unportable, but it was easy to break the rules of the standard by accident.

2. Most programmers relied upon the programming manuals that came with their computer, which typically described a version of Fortran including lots of vendor extensions without differentiating them adequately from features that were Standard Fortran. This was often deliberate: the vendor wanted you to use as many of their own extensions as possible, so that later, when you found your program would not run on anything else, you would buy another computer of the same brand.

When you take these two together you find lots of programmers deliberately used non-standard features, and a lot more used them accidentally. We don't know which predominates here.

But most of these problems should have gone away after compilers for Fortran90 (and later) came along, and programmers should by now have sorted their code out by converting non-standard stuff to the Standard equivalent. It can be a lot of work, but 25 years is a lot of time.


--
Clive Page

dpb

Mar 6, 2020, 10:36:06 AM
On 3/5/2020 5:35 PM, Steve Lionel wrote:
> On 3/5/2020 2:31 PM, Brad Richardson wrote:
>
>> https://everythingfunctional.wordpress.com/
>
> Hi Brad - interesting post. I recognize that this was your first meeting
> so some things might have been a bit confusing. I did want to correct
> one item in your post, where you said "The committee is comprised mainly
> of representatives from compiler vendors/writers, with a few
> representatives from some large “users,” like NASA and some national labs."
>
> This is not the case. Vendors are a decided minority on the US
> committee, though recently AMD has joined. Of current Fortran vendors,
> only Cray(HPE), IBM, Intel, NAG and Nvidia/PGI are represented. There
> are eleven principal members from the user community, four from vendors;
> only principals get a vote at the J3 level. (Malcolm Cohen from NAG is
> not a principal member - as NAG no longer pays for his membership in J3,
> he is my alternate!) We had some 20 or so attendees at the February
> meeting, seven from vendors (NVidia sent two.)
>
> When we consider the larger international committee (WG5), the fraction
> of vendors is even smaller as only one other (Fujitsu) is represented -
> the rest are users. It is WG5 that selects the features/changes that go
> into revisions.
...

Just how many active Fortran vendors are there left now, Steve?
Having left the mainframe world 30+ years ago, I'm out of touch with
anything but the desktop and choices there are pretty limited afaik.
But even there I gave up active consulting almost 20 yr ago now and had
migrated almost exclusively to MATLAB for essentially everything there
other than the legacy application support work owing to the integration
of everything I needed in the one package; even given the high initial
cost it was time-saving enough to be well worth the investment (for the
kind of work I was doing; large-scale compute-bound applications
wouldn't have been so much so, agreed).

--

Phillip Helbig (undress to reply)

Mar 6, 2020, 10:50:31 AM
> Just how many active Fortran vendors are there left now, Steve?

Interesting question. In the old days, essentially all manufacturers
had their own compiler, and many had their own chips: DEC, HP, SUN, SGI,
IBM, Cray, Convex, etc.

Now that VMS on x86 is almost consumer ready, I wonder what sort of
Fortran they will have there. I remember at least F95, and that is
enough for 99% of what I need, but perhaps a version less than a quarter
of a century old would be possible.

Steve Lionel

Mar 6, 2020, 11:32:12 AM
On 3/6/2020 10:36 AM, dpb wrote:
> Just how many active Fortran vendors are there left now, Steve?

More than are on the committee. In addition to those on J3 (AMD, Cray,
IBM, Intel, NAG, Nvidia/PGI) I can think offhand of gfortran,
Oracle(Sun) and HPE (for HP-UX). All three used to have representation
on J3 or WG5 but don't now. (Bill Long from Cray, I suppose, also
represents HPE now but I don't think he is much in touch with that
product team.)

VSI (VMS Software Inc.) now maintains what had been DEC/Compaq Fortran
and is in the process of porting it to their x86 VMS - I don't know
where they are at bringing it up to the current standard (it was F95
with some F03 last I looked.)

Fujitsu Japan has a compiler but it is used primarily in Japan; they
have a representative on WG5. There are other compilers out there, but I
don't think they're what I'd call "active".

There is a "flang" (formerly "f18") project to create a modern Fortran
compiler using a new front-end and LLVM for optimization and code
generation, but it is still a work-in-progress. (An older attempt using
the PGI front-end is dead, as far as I know.) Multiple current vendors
are moving to use LLVM, though, so this will be an interesting space to
watch.

If I have left out anyone, I apologize.

FortranFan

Mar 6, 2020, 5:02:34 PM
On Friday, March 6, 2020 at 10:36:06 AM UTC-5, dpb wrote:

> ..
> Just how many active Fortran vendors are there left now, Steve? ..


Readers can refer to this webpage at the site managed by Fortranplus authors Ian Chivers and Jane Sleightholme for some additional details on compiler support toward the current Fortran standard:

https://www.fortranplus.co.uk/app/download/30202489/fortran_2003_2008_2018_compiler_support.pdf

robin....@gmail.com

Mar 7, 2020, 2:25:13 AM
On Saturday, March 7, 2020 at 2:06:06 AM UTC+11, Clive Page wrote:
> On 05/03/2020 17:08, Ron Shepard wrote:
> > Nonetheless, your compatibility problems are because of your code, not because of the fortran language evolution over the last 50 years. I (and many others here) have told you step by step procedures how to fix your code, yet you instead look for compiler options that allow you to compile your nonconforming code to get it to work. I can understand why you might not want to invest effort, but you cannot deflect the blame away from your nonstandard code and onto the fortran language. Your code was nonconforming in 1969, in 1976, in 1980, and at every point in time since it existed. On the other hand, if your code had conformed to the f77 standard, it would likely still run today with no changes, or, at most, minor modifications. I know because I also wrote and use code from that era that ran on Univac 1108 hardware and on dozens of other hardware and software combinations since then.
>
> Well that's true but I have somewhat more sympathy with Lynn. There were two problems concerning standard-conformance for programmers in the pre-Fortran90 era:
>
> 1. The Fortran77 language really didn't have everything that programmers needed to do the job in hand.

Well, they thought that it didn't.

> That's why so much has been added since. Without using extensions you couldn't do: dynamically allocated memory,

But they could have used PL/I, which not only had dynamic memory
allocation but many other features [compared to FORTRAN]
to make programming easier.


> read/write stream files,

what?

> read command-line arguments, issue a prompt to the user's terminal allowing the response to be given on the same line, declare variables with a precision suitable for a given number of digits, choose an unused I/O unit, (and lots more). In addition the shortage of memory led to the use of COMMON blocks and lots of complicated overlay schemes.

That didn't happen with either ALGOL or PL/I.

ALGOL ran on systems with as little as 384 words of high-speed store.
What year was that? 1963.

Dick Hendrickson

Mar 7, 2020, 11:56:00 AM
Sure,
      DIMENSION A(10,10)
      DO 10 I = 1,100
   10 A(I,1) = 0.0
Is much faster than 2 nested loops and just as easy to read once you
catch on to the trick. ;)

Dick Hendrickson
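The trick works because the standard fixes column-major, contiguous storage for the array, so running the first subscript from 1 to 100 walks the same memory as the two nested loops. A small Python model of the subscript-to-storage mapping (the helper names `offset`, `single_loop_cells`, and `nested_loop_cells` are made up for this sketch):

```python
# Model Fortran column-major storage for A(10,10) as a flat buffer of 100
# cells and compare which cells each loop form touches, in which order.
N = 10

def offset(i, j, ld=N):
    """Column-major mapping: 1-based (i, j) lives at flat index (i-1) + (j-1)*ld."""
    return (i - 1) + (j - 1) * ld

def single_loop_cells():
    # DO 10 I = 1,100 ... A(I,1): the first subscript runs past its declared
    # bound and walks linearly through the whole buffer.
    return [offset(i, 1) for i in range(1, N * N + 1)]

def nested_loop_cells():
    # DO 10 J = 1,10 / DO 10 I = 1,10 ... A(I,J): the conforming nested form.
    return [offset(i, j) for j in range(1, N + 1) for i in range(1, N + 1)]
```

Both functions produce the flat indices 0 through 99 in the same order, which is exactly why the one-loop version "works" even though A(I,1) with I > 10 violates the declared bounds.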

robin....@gmail.com

Mar 7, 2020, 7:21:13 PM
On Sunday, March 8, 2020 at 3:56:00 AM UTC+11, Dick Hendrickson wrote:
But non-portable, of course.
And subscript bounds checking would not pick that up on that machine.

But I'm not sure that it would be significantly faster,
because the only thing saved was a check on one bound
(instead of both bounds).

Steve Lionel

Mar 8, 2020, 12:04:17 PM
On 3/7/2020 7:21 PM, robin....@gmail.com wrote:
>> Sure,
>> DIMENSION A(10,10)
>> DO 10 I = 1,100
>> 10 A(I,1) = 0.0
>> Is much faster than 2 nested loops and just as easy to read once you
>> catch on to the trick. ;)
> But non-portable, of course.
> And subscript bounds checking would not pick that up on that machine.
>
> But I'm not sure that it would be significantly faster,
> because the only thing saved was a check on one bound
> (instead of both bounds).

If you're concerned about performance, I'd be willing to bet that any
optimizing compiler you'd use would figure out what you are doing, even
with nested loops, and generate optimal code, vectorized or using a
"fast fill" method.

This is another case where you should just write what you mean and let
the compiler figure it out.

Ron Shepard

Mar 8, 2020, 1:08:29 PM
On 3/8/20 11:04 AM, Steve Lionel wrote:
> On 3/7/2020 7:21 PM, robin....@gmail.com wrote:
>>> Sure,
>>>          DIMENSION A(10,10)
>>>          DO 10 I = 1,100
>>>    10    A(I,1) = 0.0
>>> Is much faster than 2 nested loops and just as easy to read once you
>>> catch on to the trick.  ;)
>> But non-portable, of course.
>> And subscript bounds checking would not pick that up on that machine.

In my memory, this was fairly portable. The language standard defined
the memory layout, and it defined how subscripts mapped onto that
memory, and that is all consistent with the above "trick". The only
nonportable aspect was whether the compiler detected the bounds
violation, but that could usually be turned off, and the practical
result of ignoring that warning is portable code, nonstandard but portable.

Of course nowadays that is not portable when the arrays are declared
with assumed shape or when the dummy argument is associated with a
strided actual array, but those were not options before f90.

>> But I'm not sure that it would be significantly faster,
>> because the only thing saved was a check on one bound
>> (instead of both bounds).

I'm pretty sure that the machines on which this code originated did not
check the bounds in a separate machine cycle, the check was built into
the increment-test-branch instruction. So it was "free" as far as clock
cycles.

But what it did achieve was to use all 64 registers efficiently and to
access the multiple memory banks optimally, something that was more
difficult with two nested loops. There were some particular array
dimensions, for which the two-loop code was especially bad, resulting in
memory bank conflicts and slowdowns of 2x or 4x. The linpack performance
benchmark could be improved on many machines by changing the leading
dimension from 300 to 301, and then just ignoring that extra element;
that was a memory bank conflict issue.

This loop is dominated just by memory accesses, but more complicated
loops with actual arithmetic instructions also had to contend with
chaining issues on the Cray, and those could also be handled easier with
a long single loop than with nested loops. These were timing issues
related to mixing vector register accesses with memory accesses.

Vector computers in this era did not have cache, so all the
optimizations were centered around getting memory contents into
registers and then back to memory. When memory hierarchies (NUMA) began
to appear in supercomputers in the late 1980s, this eliminated some of
these memory bank conflict issues while introducing cache
coherence/thrashing issues.

> If you're concerned about performance, I'd be willing to bet that any
> optimizing compiler you'd use would figure out what you are doing, even
> with nested loops, and generate optimal code, vectorized or using a
> "fast fill" method.

This code probably originated on a Cray, and what you say would have
been true in the late 1980s, but not in the early 1980s. In the early
1980s the Cray compiler had relatively poor optimization, and it relied
instead on the user incorporating many vendor-specific intrinsics to
achieve the best performance. In the case of Cray, the goal would have
been to use all of the 64 vector registers on each pass, and that would
have been easier to do with one long vector than with a sequence of short
vectors.

The use of vendor-specific intrinsics was the popular trend at that
time, not only to guarantee the best performance on whatever hardware
was available but also to lock in the programmers and the users into
that vendor's combination of hardware and software. From the vendor's
perspective, it was a win-win situation.

$.02 -Ron Shepard

ga...@u.washington.edu

Mar 8, 2020, 4:30:29 PM
On Sunday, March 8, 2020 at 9:04:17 AM UTC-7, Steve Lionel wrote:

(snip)

> >> DIMENSION A(10,10)
> >> DO 10 I = 1,100
> >> 10 A(I,1) = 0.0
> >> Is much faster than 2 nested loops and just as easy to read once you
> >> catch on to the trick. ;)
> > But non-portable, of course.
> > And subscript bounds checking would not pick that up on that machine.

You could EQUIVALENCE to a 1D array, in which case it should
be completely portable.

> > But I'm not sure that it would be significantly faster,
> > because the only thing saved was a check on one bound
> > (instead of both bounds).

> If you're concerned about performance, I'd be willing to bet that any
> optimizing compiler you'd use would figure out what you are doing, even
> with nested loops, and generate optimal code, vectorized or using a
> "fast fill" method.

Compilers for vector machines might have special-cased many loops like this one.

In the early years of Fortran 90 and later compilers, with array
expressions, they often did a good job on simpler array expressions
like this one (that is, better than DO loops), and worse than DO
loops on more complicated expressions.

Many processors have a special way to initialize large memory
areas to a constant value. When you get even slightly more complicated:

      DIMENSION A(10,10)
      DO 10 I = 1,100
   10 A(I,1) = A(I,1) + 1

It is less obvious what compilers might do on different machines.

> This is another case where you should just write what you mean and let
> the compiler figure it out.

In the Fortran 66 days, it was common to dimension a dummy array
for a matrix (2D array) (1), and later on (*), and do all the
subscript calculations in the program.

Many programs that do this allow for a matrix (static) allocated
larger than actually used. That is, the array elements in actual
use are not contiguous. That complicates using one loop.
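The manual subscripting style described above can be sketched with the same flat-index arithmetic; when the leading dimension is larger than the number of rows actually used, the used elements leave gaps in the buffer. A hedged Python model (the constants and the helpers `idx` and `used_cells` are invented for this illustration):

```python
# F66-style manual subscripting: a 2-D matrix stored in a 1-D array with
# leading dimension LD, of which only the top-left M x N_USED block is used.
LD = 5        # allocated rows (leading dimension)
M = 3         # rows actually used
N_USED = 4    # columns actually used

def idx(i, j, ld=LD):
    """0-based flat index of 1-based (i, j) in column-major order."""
    return (i - 1) + (j - 1) * ld

def used_cells():
    # Column by column, the cells the program actually touches.
    return [idx(i, j) for j in range(1, N_USED + 1) for i in range(1, M + 1)]
```

Each column contributes M consecutive cells, but successive columns start LD apart, so whenever LD > M the used block is not contiguous and a single 1-to-M*N_USED loop over the buffer would hit padding cells.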


JCampbell

Mar 8, 2020, 10:47:53 PM
On Thursday, March 5, 2020 at 2:05:34 PM UTC+11, Lynn McGuire wrote:
> On 3/4/2020 4:56 PM, Gary Scott wrote:
> > On 3/4/2020 3:39 PM, Lynn McGuire wrote:
> >> "With A Little Help From My Friends"
> >>
> >> "I spent last week at my first Fortran Standards Committee meeting. It
> >> was a pretty interesting experience. Everyone there was brilliant, and
> >> interested in trying to do a good job improving the language. And yet,
> >> it was still somehow very disfunctional."
> >>
> >> Yup, Fortran is dysfunctional nowadays.  And I doubt that is going to
> >> change, the roots are just too old.
> >>
> >> Lynn
> > Don't know about the meeting, but Fortran has dramatically improved. I'm
> > fairly happy with it other than needing a well designed bit string data
> > type and perhaps decimal or maybe really-really-big-integer math support.
>
> Fortran has improved at the expense of backwards compatibility. My
> 750,000 line software requires zero initialization and autosave to run.
> Works great with the old Unix F77 compiler and the Open Watcom F77
> compiler. Does not work with any of the F90+ compilers that I have
> tried. We are going to try GFortran again this year using the Simply
> Fortran IDE now that they support multiple targets.
> http://simplyfortran.com/
>
> Sigh, here come the flames.
>
> Lynn

Lynn,

It is not flames, but I also learnt FORTRAN in the 1970s. It was called FORTRAN IV.

Back then, use of un-initialised variables was a coding error. "zero initialization" was an extension on some compilers, but fortunately for me, not on the first few compilers I used. I have no sympathy for anyone who assumes "zero initialization" as it looks just wrong.

As for autosave, I thought default dynamic allocation changed with F77. This was a good thing, especially for memory limited programming.
So you had only a few years of programming prior to 1978 to develop these bad programming practices. Why should someone with a PhD not be able to make the change, after it was already in the Fortran standard ?

I have used many Fortran compilers, including some extensions that I insist on (we all make choices), but a commercial code that requires zero initialization and autosave to run looks like a bad call that has been perpetuated for too long.
At least you appear to have adopted "implicit none", which I find an essential code development aid, but is rejected by others in your camp of "bad" Fortran users. ( Hope the flames are not too hot )

JCampbell

Mar 8, 2020, 11:03:33 PM
Yes, I agree, this was very common in the '60s and '70s, especially among engineers who wrote code.
More often, we used a "F77" wrapper for many 2-d+ array calculations and performed the equivalent vector calculation, with a significant speed improvement. Some subscript contortions became very cryptic.
Strangely, with modern optimising compilers like ifort, these wrappers can now run slower than the array syntax, although it is often not worth the effort to revise the old codes.

robin....@gmail.com

Mar 9, 2020, 2:10:10 AM
On Monday, March 9, 2020 at 4:08:29 AM UTC+11, Ron Shepard wrote:
> On 3/8/20 11:04 AM, Steve Lionel wrote:
I do not know and do not care when that type of bodgie code originated,
however, it was done in the days of the IBM S/360.
That machine had only 16 general purpose registers.
The CDC 7xx series had 8 data registers,
along with corresponding address registers.

> and to
> access the multiple memory banks optimally, something that was more
> difficult with two nested loops.

Not if the array was accessed by columns, because that's
how the elements are stored.

Steve Lionel

Mar 9, 2020, 11:39:02 AM
I didn't write any of the text attributed to me here.

On 3/8/2020 1:08 PM, Ron Shepard wrote:
> On 3/8/20 11:04 AM, Steve Lionel wrote:
>> On 3/7/2020 7:21 PM, robin....@gmail.com wrote:
>>>> Sure,
>>>>          DIMENSION A(10,10)
>>>>          DO 10 I = 1,100
>>>>    10    A(I,1) = 0.0
>>>> Is much faster than 2 nested loops and just as easy to read once you
>>>> catch on to the trick.  ;)
>>> But non-portable, of course.
>>> And subscript bounds checking would not pick that up on that machine.


Dick Hendrickson

Mar 9, 2020, 11:39:17 AM
On 3/8/20 11:04 AM, Steve Lionel wrote:
> On 3/7/2020 7:21 PM, robin....@gmail.com wrote:
>>> Sure,
>>>          DIMENSION A(10,10)
>>>          DO 10 I = 1,100
>>>    10    A(I,1) = 0.0
>>> Is much faster than 2 nested loops and just as easy to read once you
>>> catch on to the trick.  ;)
>> But non-portable, of course.
>> And subscript bounds checking would not pick that up on that machine.
>>
>> But I'm not sure that it would be significantly faster,
>> because the only thing saved was a check on one bound
>> (instead of both bounds).
>
> If you're concerned about performance, I'd be willing to bet that any
> optimizing compiler you'd use would figure out what you are doing, even
> with nested loops, and generate optimal code, vectorized or using a
> "fast fill" method.
>
> This is another case where you should just write what you mean and let
> the compiler figure it out.
>

The example is actually older than any of you guys have guessed. It was
a common coding style on the CDC 1604 in the mid 60s. Many compilers
were essentially one-pass and did statement at a time optimization.
Something like
      DIMENSION A(10,10)
      DO 10 J = 1,10
      DO 10 I = 1,10
   10 A(I,J) = 0.0
would, almost for sure, multiply J by 10 at least 10 times and, worst
case, 100 times. Programmers did what they needed to do to get speed.

Steve is right that you shouldn't do this now.

Dick Hendrickson

Brad Richardson

Mar 9, 2020, 1:29:50 PM
On Thursday, March 5, 2020 at 3:35:12 PM UTC-8, Steve Lionel wrote:
> On 3/5/2020 2:31 PM, Brad Richardson wrote:
>
> > https://everythingfunctional.wordpress.com/
>
> Hi Brad - interesting post. I recognize that this was your first meeting
> so some things might have been a bit confusing. I did want to correct
> one item in your post, where you said "The committee is comprised mainly
> of representatives from compiler vendors/writers, with a few
> representatives from some large “users,” like NASA and some national labs."
>
> This is not the case. Vendors are a decided minority on the US
> committee, though recently AMD has joined. Of current Fortran vendors,
> only Cray(HPE), IBM, Intel, NAG and Nvidia/PGI are represented. There
> are eleven principal members from the user community, four from vendors;
> only principals get a vote at the J3 level. (Malcolm Cohen from NAG is
> not a principal member - as NAG no longer pays for his membership in J3,
> he is my alternate!) We had some 20 or so attendees at the February
> meeting, seven from vendors (NVidia sent two.)
>
> When we consider the larger international committee (WG5), the fraction
> of vendors is even smaller as only one other (Fujitsu) is represented -
> the rest are users. It is WG5 that selects the features/changes that go
> into revisions.
>
> I look forward to working with you on defining the future of Fortran.
>
> --
> Steve Lionel
> ISO/IEC JTC1/SC22/WG5 (Fortran) Convenor
> Retired Intel Fortran developer/support
> Email: firstname at firstnamelastname dot com
> Twitter: @DoctorFortran
> LinkedIn: https://www.linkedin.com/in/stevelionel
> Blog: http://intel.com/software/DrFortran
> WG5: https://wg5-fortran.org

Hi Steve,

Thanks for the clarifications. Obviously I'm still learning about how all this works and who all the players are.

Regards,
Brad

Lynn McGuire

Mar 9, 2020, 3:52:43 PM
We moved to a Prime 450 in 1978. It auto-initialized by default. So
did the Prime 750 we bought in 1980 or so. So did the VAX running VMS
we bought in 1985. So did the F77 compilers on the Apollo Domain
workstations we bought in 1990, and the F77 compilers on the RS/6000
and SunOS computers we bought in 1992.

I do not remember what the NDP/386 F77 compiler on the PC did when we
started using it in 1986. We moved to the Watcom compiler in 1994 ???
which did require explicitly turning on autosave.

Lynn

Lynn McGuire

Mar 9, 2020, 3:55:38 PM
On 3/8/2020 9:47 PM, JCampbell wrote:
And we ALL moved to IMPLICIT NONE simultaneously. I fought that battle
and won.

BTW, do not discount inertia in programming styles. Especially when one
does not understand the depth of the problem.

Lynn

JCampbell

Mar 10, 2020, 3:20:54 AM
Lynn

I used all those compilers, but no longer have the original manuals.
They all provided significant extensions to the F77 standard.
/save and /zero compile options were commonly available for most F77 compilers
to be compatible with typical FTN IV compilers.

My first compiler was Watfor on an IBM 7040, which I am (fairly) sure did not
have /zero as a default. I have never assumed /zero when developing code.
I was taught this was an error.

I used Prime FTN from 1975 to 1992. While it did default to /save, I do recall /zero
was not the default. There was always the problem of the .exe exploding in size if
/zero was selected.

Pre F77, there was little consideration of the Fortran standard, with most Fortran
compilers identified by the hardware manufacturer.

For porting, you needed to identify the development hardware in the program documentation.
For me it was Cyber or 32-bit mini only. Most universities had both.
ICL and IBM were just too different. Best to assume those English computers did not exist.

A big change at F77 was the change to dynamic memory allocation.
While providing improved memory management, it broke many codes that saved values in
subroutine local variables. It was a lot of work to clean out these "errors" else
use the easy option of /save.
It is described in Section 17. "ASSOCIATION AND DEFINITION", and more specifically
in 17.3 "Events That Cause Entities to Become Undefined" then
(6) The execution of a RETURN ...
The other big change at the F77 standard was the use of the term "processor dependent".
For porting to IBM, you quickly learnt that "processor dependent" was a nightmare.

When MSDOS arrived, I also used Lahey and Salford F77 compilers, both of which provide
/save and /zero as documented extensions. These were more F77 standard conforming compilers.
Salford actually states "The use of the /SAVE and /ZEROISE options will often
make the program “work”, but efforts should be made to correct the program source
by explicitly giving values to the undefined variables."

Now we have multi-threading, where /save should not be used and /zero should not be
assumed.

Using /save and /zero looks to be a bad call for 700,000 lines of code.
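The Section 17 behaviour described above (a local becomes undefined after RETURN unless saved) has a rough analogue in most languages with automatic versus persistent storage. A minimal Python analogy, not actual Fortran semantics (the counter functions and the function-attribute trick are invented for this sketch):

```python
# Rough analogy to F77 semantics: without SAVE, a local variable becomes
# undefined after RETURN; with SAVE it persists between calls. Here the
# "automatic" counter is a plain local re-created on every entry, while the
# "saved" counter is kept on the function object so it survives the return.
def counter_automatic():
    n = 0                  # fresh on every call, like an unsaved F77 local
    n += 1
    return n

def counter_saved():
    # persists across calls, like a variable named in a SAVE statement
    counter_saved.n = getattr(counter_saved, "n", 0) + 1
    return counter_saved.n
```

Code that accidentally depended on the "saved" behaviour, without saying SAVE, is exactly the kind that broke when compilers started placing locals on a stack, and that the /save option papers over.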

robin....@gmail.com

Mar 10, 2020, 3:36:21 AM
On Tuesday, March 10, 2020 at 6:20:54 PM UTC+11, JCampbell wrote:

> I used all those compilers, but no longer have the original manuals.
> They all provided significant extensions to the F77 standard.
> /save and /zero compile options were commonly available for most F77 compilers
> to be compatible with typical FTN IV compilers.
>
> My first compilers was Watfor on an IBM 7040, which I am (fairly) sure did not
> have /zero as a default. I have never assumed /zero when developing code.
> I was taught this was an error.
>
> I used Prime FTN from 1975 to 1992. While it did default to /save, I do recall /zero
> was not the default. There was always the problem of the .exe exploding in size if
> /zero was selected.
>
> Pre F77, there was little consideration of the Fortran standard, with most Fortran
> compilers identified by the hardware manufacturer.
>
> For porting, you needed to identify the development hardware in the program documentation.
> For me it was Cyber or 32-bit mini only. Most universities had both.
> ICL and IBM were just too different. Best to assume those English computers did not exist.
>
> A big change at F77 was the change to dynamic memory allocation.

F77 did not have dynamic memory allocation.
That did not come until F90.

JCampbell

Mar 10, 2020, 3:59:29 AM
On Tuesday, March 10, 2020 at 6:36:21 PM UTC+11, robin...@gmail.com wrote:
> On Tuesday, March 10, 2020 at 6:20:54 PM UTC+11, JCampbell wrote:
>
> F77 did not have dynamic memory allocation.
> That did not come until F90.
>
Not really what I was stating. Yes, F77 did not have ALLOCATE or automatic arrays, but it did have local variables and local arrays, which were dynamically allocated onto the stack on entry and removed from the stack on return, as per Section 17 of the standard.
This was a significant change in F77 in comparison to most pre-F77 Fortran compilers. This is when most Fortran users had to deal with Section 17.3 "Events That Cause Entities to Become Undefined" which broke a lot of previously developed codes.
/save was an easy, but non-conforming fix, but then failed to provide the benefits that a dynamic stack could realise, especially for reducing the maximum memory usage that was then always an issue.

FortranFan

Mar 10, 2020, 10:16:13 AM
On Tuesday, March 10, 2020 at 3:59:29 AM UTC-4, JCampbell wrote:

> ..
> Not really what I was stating. Yes F77 did not have ALLOCATE or automatic arrays, but it did have local variables and local arrays, which were dynamically allocated onto the stack on entry and removed from the stack on return, as per Section 17 of the standard.
> This was a significant change in F77 in comparison to most pre-F77 Fortran compilers. This is when most Fortran users had to deal with Section 17.3 "Events That Cause Entities to Become Undefined" which broke a lot of previously developed codes.
> /save was an easy, though non-conforming, fix, but it failed to provide the benefits that a dynamic stack could realise, especially for reducing the maximum memory usage that was then always an issue.


@JCampbell's terminology appears highly imprecise and it can misinform readers.

There was no "dynamic allocation" in FORTRAN 77 if one takes FORTRAN 77 to mean ANSI X3.9-1978 publication.

Nor did this ANSI X3.9-1978 edition allow "local variables and local arrays" to be 'dynamic' in any way e.g., local arrays in subprograms all had to be fixed size, meaning each array declarator had to be a constant declarator.

ANSI X3.9-1978 FORTRAN 77 allowed dummy array arguments to have either constant declarators, or be adjustable, or have assumed size. But that is not 'dynamic allocation'.

Ron Shepard

Mar 10, 2020, 1:23:24 PM
On 3/10/20 9:16 AM, FortranFan wrote:
> On Tuesday, March 10, 2020 at 3:59:29 AM UTC-4, JCampbell wrote:
>
>> ..
>> Not really what I was stating. Yes F77 did not have ALLOCATE or automatic arrays, but it did have local variables and local arrays, which were dynamically allocated onto the stack on entry and removed from the stack on return, as per Section 17 of the standard.
>> This was a significant change in F77 in comparison to most pre-F77 Fortran compilers. This is when most Fortran users had to deal with Section 17.3 "Events That Cause Entities to Become Undefined" which broke a lot of previously developed codes.
>> /save was an easy, though non-conforming, fix, but it failed to provide the benefits that a dynamic stack could realise, especially for reducing the maximum memory usage that was then always an issue.
>
>
> @JCampbell's terminology appears highly imprecise and it can misinform readers.
>
> There was no "dynamic allocation" in FORTRAN 77 if one takes FORTRAN 77 to mean ANSI X3.9-1978 publication.

The difference between f77 and previous was that stack allocation of
local variables was specifically allowed. The semantics of those
entities was defined in the standard, and their values became undefined
when they went out of scope unless the new feature, SAVE, made them
otherwise.

f77 did not require dynamic allocation of local variables, but it
allowed it. Some compilers continued their previous approach of static
allocation of all variables. That was also allowed by the standard. The
limitations to only local arrays with constant sizes was consistent with
either stack or static storage of local variables.

I did not use machines with stack storage in the 1970s, but I did use
machines with overlay linkers. The same issues come up there regarding
variables going in and out of scope, and you had to be careful with the
overlays to be consistent with fortran semantics.

$.02 -Ron Shepard

Lynn McGuire

Mar 10, 2020, 3:06:00 PM
When we ported from the Univac 1108 to the CDC 6600, we did not have
many problems. When we ported from the Univac 1108 to the IBM MVS, we
had a disaster on our hands. All of the 6H123456 became 4H1234, 2H56.
That was our big disaster.
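
A hypothetical sketch of that porting change (variable names are illustrative): a 36-bit Univac word holds six characters, so one Hollerith constant sufficed, while a 32-bit IBM INTEGER holds only four, so the constant had to be split across two words:

```fortran
C     ON THE UNIVAC 1108 ONE WORD COULD HOLD:  DATA IWORD /6H123456/
C     ON 32-BIT IBM HARDWARE THE SAME TEXT NEEDED TWO WORDS:
      INTEGER IW(2)
      DATA IW /4H1234, 2H56/
      WRITE (6, 10) IW
   10 FORMAT (1X, A4, A2)
      END
```

(Hollerith constants are an extension in modern compilers, but most still accept this for legacy code.)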

I am not sure when we became dependent on /save and /zero. It was
definitely not by intent.

Lynn

Lynn McGuire

Mar 10, 2020, 3:08:31 PM
On 3/10/2020 2:20 AM, JCampbell wrote:
By the way, we have been using dynamic memory allocation since 1976 or so.
"DYNOSOR: a set of subroutines for dynamic memory organization in
Fortran programs"
https://dl.acm.org/doi/10.1145/954654.954661

Lynn

JCampbell

Mar 10, 2020, 8:02:03 PM
@FortranFan: Not sure how imprecise I was. In the quoted post I did not use the term "dynamic allocation", but instead referred to "dynamically allocated onto the stack". I also acknowledged that "F77 did not have ALLOCATE or automatic arrays" so I doubt that anyone who read the complete sentence would be as confused as you claim.

We had to wait until F90 for the introduction of ALLOCATE. This omission from F77 was a big disappointment to many Fortran users at that time. The inclusion of automatic arrays in F90 was a very pleasing addition. (who voted against ALLOCATE in the 1978 standard ? perhaps a big processor manufacturer! "processor dependent" was a much used term in the F77 document)

The introduction of ALLOCATE meant that the many libraries, like DYNOSOR that Lynn has referred to, became less used. Unfortunately, the capability of "give me a big array that I will resize later" was not in ALLOCATE, which meant these libraries that allocated space on a big array continued to be used.
Lynn, did DYNOSOR use blank COMMON? I am not sure if extendable blank COMMON is still part of modern Fortran and associated linkers, now that stack and extendable heap are used.

robin....@gmail.com

Mar 10, 2020, 8:43:17 PM
On Wednesday, March 11, 2020 at 11:02:03 AM UTC+11, JCampbell wrote:
> On Wednesday, March 11, 2020 at 1:16:13 AM UTC+11, FortranFan wrote:
> > On Tuesday, March 10, 2020 at 3:59:29 AM UTC-4, JCampbell wrote:
> >
> > > ..
> > > Not really what I was stating. Yes F77 did not have ALLOCATE or automatic arrays, but it did have local variables and local arrays, which were dynamically allocated onto the stack on entry and removed from the stack on return, as per Section 17 of the standard.
> > > This was a significant change in F77 in comparison to most pre-F77 Fortran compilers. This is when most Fortran users had to deal with Section 17.3 "Events That Cause Entities to Become Undefined" which broke a lot of previously developed codes.
> > > /save was an easy, though non-conforming, fix, but it failed to provide the benefits that a dynamic stack could realise, especially for reducing the maximum memory usage that was then always an issue.
> >
> >
> > @JCampbell's terminology appears highly imprecise and it can misinform readers.
> >
> > There was no "dynamic allocation" in FORTRAN 77 if one takes FORTRAN 77 to mean ANSI X3.9-1978 publication.
> >
> > Nor did this ANSI X3.9-1978 edition allow "local variables and local arrays" to be 'dynamic' in any way e.g., local arrays in subprograms all had to be fixed size, meaning each array declarator had to be a constant declarator.
> >
> > ANSI X3.9-1978 FORTRAN 77 allowed dummy array arguments to have either constant declarators, or be adjustable, or have assumed size. But that is not 'dynamic allocation'.
>
> @FortranFan: Not sure how imprecise I was. In the quoted post I did not use the term "dynamic allocation",

You used the term "dynamic memory allocation" as in :-
"A big change at F77 was the change to dynamic memory allocation."

ga...@u.washington.edu

Mar 10, 2020, 8:52:27 PM
On Tuesday, March 10, 2020 at 5:02:03 PM UTC-7, JCampbell wrote:

(snip)

> Lynn, did DYNOSOR use blank COMMON ? I am not sure if extendable
> blank COMMON is still part of modern Fortran and associated linkers,
> now that stack and extendable heap is used.

I am not sure what you mean by extendable, but the rule is that
all declarations of named COMMON must be the same size. Blank common
can be different.

The result is that you can declare a blank common block one place
link it into a library or whereever, then enlarge it by declaring
a larger one in another routine, compiled later.

Early Fortran systems would allocate memory for COMMON from the top
down, with blank COMMON on the bottom. It seems that is easier if all
the named blocks are the same size.

The OS/360 linker will link them of different size, but if named COMMON
is in BLOCK DATA, that must be the largest one. Some people using the
trick above wanted to put the size inside.
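
A hypothetical sketch of the blank-COMMON trick described above (routine names are made up): named COMMON must match in size everywhere, but blank COMMON need not, and the largest declaration seen wins at link time:

```fortran
C     A LIBRARY ROUTINE DECLARES A SMALL BLANK COMMON BLOCK.
      SUBROUTINE LIBSUB(X)
      REAL WORK, X
      COMMON // WORK(100)
      WORK(1) = X
      END

C     A PROGRAM COMPILED LATER MAY LEGALLY DECLARE A LARGER ONE,
C     ENLARGING THE BLOCK WITHOUT RECOMPILING THE LIBRARY.
      PROGRAM BIGGER
      REAL WORK
      COMMON // WORK(100000)
      CALL LIBSUB(1.5)
      WRITE (6, 10) WORK(1)
   10 FORMAT (1X, F4.1)
      END
```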

ga...@u.washington.edu

Mar 10, 2020, 8:59:42 PM
On Tuesday, March 10, 2020 at 12:06:00 PM UTC-7, Lynn McGuire wrote:

(snip)

> When we ported from the Univac 1108 to the CDC 6600, we did not have
> many problems. When we ported from the Univac 1108 to the IBM MVS, we
> had a disaster on our hands. All of the 6H123456 became 4H1234, 2H56.
> That was our big disaster.

I remember using REAL*8 in Fortran 66 days to hold larger Hollerith
constants. (Except using '' instead of 8H). Some systems will normalize
floating point data at surprising times (such as assignment), but S/360
doesn't do that.

You could also use COMPLEX*8, or COMPLEX*16 for bigger ones, and
with H Extended, COMPLEX*32.

I haven't thought about A format and COMPLEX data. Do you need a
separate format descriptor for the real and imaginary parts?

I think at the time, I was using it for a run-time format, so
I could assign parts of the array with different format items.

Lynn McGuire

Mar 11, 2020, 3:50:59 PM
Yes, it used a big array in a common block. Worked well and met our needs.

Lynn

tho...@antispam.ham

May 3, 2020, 9:01:46 PM
> 2. Most programmers relied upon the programming manuals that came with
> their computer which typically described a version of Fortran including
> lots of vendor extensions without differentiating them adequately from
> features that were Standard Fortran.

I can attest to that. Back in 1974, the physics degree at the college
I attended required a course in Fortran programming, so I took the Fortran
course and learned Fortran. Or so I thought. Turns out they taught the
language as accepted by the compiler without differentiating between that
and the standard. Not too big a deal, as I didn't do a lot of programming
in college.

That changed in grad school. The machine was a PDP 11/40 running RSX11D,
and I relied on the Fortran manual for that computer. Wrote a lot of
code and took that with me to my first job as a postdoc. Their machine
was a Perkin Elmer 3220 running BSD UNIX. Lots of my code simply didn't
work.

I eventually broke down and bought a copy of the standard (F77 by then)
and discovered all those things I had learned were vendor extensions or
processor dependent choices. The one that bit me the most frequently
was reading a non-empty file and getting the end-of-file condition on
the first read. Turns out the BSD UNIX compiler they had by default
put the file pointer at the end of the file upon opening, so I had to
add REWIND statements after every OPEN statement. That was after ten
years of experience with Fortran, and not a single source of information
mentioned that the file position was processor dependent upon opening.
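
The defensive fix being described looks like this (unit number and file name are illustrative): since the initial file position proved to be processor dependent, a REWIND right after the OPEN guarantees reading starts at the first record:

```fortran
      PROGRAM RDFILE
      INTEGER IVAL, IOS, N
      OPEN (UNIT=10, FILE='data.txt', STATUS='OLD')
C     WITHOUT THIS REWIND, SOME F77 PROCESSORS POSITIONED THE FILE
C     AT THE END, SO THE FIRST READ HIT END-OF-FILE IMMEDIATELY.
      REWIND 10
      N = 0
   20 READ (10, *, IOSTAT=IOS) IVAL
      IF (IOS .EQ. 0) THEN
         N = N + 1
         GO TO 20
      END IF
      CLOSE (UNIT=10)
      WRITE (6, 30) N
   30 FORMAT (1X, I6, ' VALUES READ')
      END
```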

How many Fortran courses actually taught the standard, as opposed to
whatever the compiler at hand would accept?

ga...@u.washington.edu

May 4, 2020, 12:03:29 AM
On Sunday, May 3, 2020 at 6:01:46 PM UTC-7, tho...@antispam.ham wrote:
> > 2. Most programmers relied upon the programming manuals that came with
> > their computer which typically described a version of Fortran including
> > lots of vendor extensions without differentiating them adequately from
> > features that were Standard Fortran.

> I can attest to that. Back in 1974, the physics degree where I attended
> college required a course in Fortran programming, so I took the Fortran
> course and learned Fortran. Or so I thought. Turns out they taught the
> language as accepted by the compiler without differentiating between that
> and the standard. Not too big a deal, as I didn't do a lot of programming
> in college.

IBM manuals put gray shading over extensions. In subsequent versions
of manuals, they indicate changes from the previous, including fixing
errors in the shading. They consider getting it right important.

DEC uses blue ink for extensions. If scanned on a black and white
scanner, that part gets lost. The IBM gray shading usually works,
though sometimes it is so dark that you can't read what's underneath.

For some years after IBM had Fortran IV, some competitors named
their compilers Fortran V. These might ignore the standard, and
might not describe what extensions they use.

Clive Page

May 4, 2020, 6:42:36 AM
On 04/05/2020 02:01, tho...@antispam.ham wrote:

> How many Fortran courses actually taught the standard, as opposed to
> whatever the compiler at hand would accept?

Very few, I should think. The problem was that Fortran 66 was so lacking in features. Recently, because I've been drafting a contribution for a historical account of a project that I worked on long ago, I've been browsing some code dating back to 1973. We had versions for a PDP-8 (12 bits/word) and Cyber-72 (60 bits/word). We tried hard to generate code that worked on both but it was pretty difficult.

The vendors' manuals sometimes had a passing reference to the ANSI standard (now known as Fortran 66) but it was in practice impossible to avoid using vendor extensions to do real work.

In particular there was no CHARACTER data type so my code made use of Hollerith data stored in real variables and also had many ENCODE and DECODE statements. There was also no OPEN statement so files either had to be opened with a vendor's library function (e.g. IOPEN in the PDP-8 Fortran-II compiler) or assigned to a unit in the job control language before the program started (for the PDP-8 Fortran-IV compiler). In the face of difficulties like these the official standards were pretty irrelevant to the ordinary programmer.

I'm sure that when the Fortran 77 Standard came along I did know about it and it was considerably more useful to the ordinary programmer once compilers started to support it, or most of it.


--
Clive Page

ga...@u.washington.edu

May 4, 2020, 8:04:21 AM
On Monday, May 4, 2020 at 3:42:36 AM UTC-7, Clive Page wrote:
> On 04/05/2020 02:01, tho...@antispam.ham wrote:

> > How many Fortran courses actually taught the standard, as opposed to
> > whatever the compiler at hand would accept?

> Very few, I should think. The problem was that Fortran 66 was so
> lacking in features. Recently, because I've been drafting a
> contribution for a historical account of a project that I worked
> on long ago, I've been browsing some code dating back to 1973.
> We had versions for a PDP-8 (12 bits/word) and Cyber-72
> (60 bits/word). We tried hard to generate code that worked on
> both but it was pretty difficult.

I have known programs that tried to be very close to the standard.

Character data has to be processed with A1 format, so one character
per array element. As well as I know it, the only extension that
it uses is the ability to compare values read in with A1 format,
and decide if they are equal or not, and also the ability to assign
them between array elements. Both of those the standard doesn't require.

The program is MORTRAN2, which is a preprocessor that converts MORTRAN
programs into Fortran. It then makes standard Fortran programs easier
to write.

I suspect the most common extension to the 66 standard is generalizing
the form of array subscript expressions. The standard only allows
for a few simple forms of expressions.
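
A sketch of the Fortran 66 style being described: one character per array element via A1 format. Comparing such values with .EQ./.NE. was exactly the common extension the 66 standard did not actually guarantee:

```fortran
C     COUNT THE NONBLANK CHARACTERS ON AN INPUT CARD.
      INTEGER LINE(72), BLANK, N, I
      DATA BLANK /1H /
      READ (5, 10) LINE
   10 FORMAT (72A1)
      N = 0
      DO 20 I = 1, 72
         IF (LINE(I) .NE. BLANK) N = N + 1
   20 CONTINUE
      WRITE (6, 30) N
   30 FORMAT (1X, I3, ' NONBLANK CHARACTERS')
      END
```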

robin....@gmail.com

May 4, 2020, 9:18:59 AM
On Monday, May 4, 2020 at 10:04:21 PM UTC+10, g.....@u.washington.edu wrote:
> On Monday, May 4, 2020 at 3:42:36 AM UTC-7, Clive Page wrote:
> > On 04/05/2020 02:01, t......@antispam.ham wrote:
>
> > > How many Fortran courses actually taught the standard, as opposed to
> > > whatever the compiler at hand would accept?
>
> > Very few, I should think. The problem was that Fortran 66 was so
> > lacking in features. Recently, because I've been drafting a
> > contribution for a historical account of a project that I worked
> > on long ago, I've been browsing some code dating back to 1973.
> > We had versions for a PDP-8 (12 bits/word) and Cyber-72
> > (60 bits/word). We tried hard to generate code that worked on
> > both but it was pretty difficult.
>
> I have known programs that tried to be very close to the standard.
>
> Character data has to be processed with A1 format, so one character
> per array element.

It was necessary to use an INTEGER array or variable.

> As well as I know it, the only extension that
> it uses is the ability to compare values read in with A1 format,
> and decide if they are equal or not, and also the ability to assign
> them between array elements. Both of those the standard doesn't require.
>
> The program is MORTRAN2, which is a preprocessor that converts MORTRAN
> programs into Fortran. It then makes standard Fortran programs easier
> to write.
>
> I suspect the most common extension to the 66 standard is generalizing
> the form of array subscript expressions. The standard only allows
> for a few simple forms of expressions.

It was always possible to assign an integer variable to any kind
of expression, and to use that variable as the subscript.

It didn't need extensions for that.

robin....@gmail.com

May 4, 2020, 9:36:56 AM
On Monday, May 4, 2020 at 8:42:36 PM UTC+10, Clive Page wrote:
> On 04/05/2020 02:01, t.....@antispam.ham wrote:
>
> > How many Fortran courses actually taught the standard, as opposed to
> > whatever the compiler at hand would accept?
>
> Very few, I should think. The problem was that Fortran 66 was so lacking in features.

It was? Users of the day seemed to get along well with it.

However, PL/I was then available which had a lot more features,
including character-handling and bit-handling, error control,
and three kinds of storage allocation.

> Recently, because I've been drafting a contribution for a historical account of a project that I worked on long ago, I've been browsing some code dating back to 1973. We had versions for a PDP-8 (12 bits/word) and Cyber-72 (60 bits/word). We tried hard to generate code that worked on both but it was pretty difficult.
>
> The vendors' manuals sometimes had a passing reference to the ANSI standard (now known as Fortran 66) but it was in practice impossible to avoid using vendor extensions to do real work.
>
> In particular there was no CHARACTER data type

As already stated, PL/I had character data type.

Ron Shepard

May 4, 2020, 1:51:39 PM
On 5/3/20 8:01 PM, tho...@antispam.ham wrote:
> How many Fortran courses actually taught the standard, as opposed to
> whatever the compiler at hand would accept?

I had the same first experiences with fortran. Especially with pre-f77
fortran, the standard language was simply too restrictive and too
primitive to do many things that were necessary. Yes, you could write
some small simple programs entirely within the standard, but anything
beyond that required nonstandard intrinsic functions and nonstandard
syntax. Pre-f77 fortran did not allow query of command line options, it
did not support any kind of character or character strings, you could
not query for the date or time, there were no bit operators, no namelist
i/o, no implicit none, no double precision complex, and on and on. Even
after f77, which included character variables, many of these other
things persisted. All of those standard libraries that everyone used
back then (EISPACK, LINPACK, and later LAPACK) that used Z* as the
convention for COMPLEX*16 did so with compiler extensions.

$.02 -Ron Shepard

ga...@u.washington.edu

May 4, 2020, 2:51:21 PM
On Monday, May 4, 2020 at 6:18:59 AM UTC-7, robin...@gmail.com wrote:

(snip, I wrote)

> > I have known programs that tried to be very close to the standard.

> > Character data has to be processed with A1 format, so one character
> > per array element.

> It was necessary to use an INTEGER array or variable.

There is no requirement on that in the standard.

But note that, as above, the standard allows reading into, and
writing data back out, but not doing anything else with it.

For one, some systems will normalize floating point data on
assignment, which is not good if it actually characters. For others,
where the hardware integer is smaller than the actual INTEGER
variable, they only copy the hardware value on assignment.

(But as noted before, many such systems default to a smaller size.)

> > As well as I know it, the only extension that
> > it uses is the ability to compare values read in with A1 format,
> > and decide if they are equal or not, and also the ability to assign
> > them between array elements. Both of those the standard doesn't require.

> > The program is MORTRAN2, which is a preprocessor that converts MORTRAN
> > programs into Fortran. It then makes standard Fortran programs easier
> > to write.

The Mortran2 processor is about 1600 lines, avoiding extensions
as simple as more complicated subscript expressions.

> > I suspect the most common extension to the 66 standard is generalizing
> > the form of array subscript expressions. The standard only allows
> > for a few simple forms of expressions.

> It was always possible to assign an integer variable to any kind
> of expression, and to use that variable as the subscript.

> It didn't need extensions for that.

I suspect many don't even know about the limitation in
the standard, though.

A less common extension is allowing expressions for DO statements,
where many temporary variables are used to hold those values.

And my always favorite Fortran 77 addition is allowing expressions
in the I/O list of WRITE statements, again avoiding many
temporary variables.

JCampbell

May 4, 2020, 9:30:47 PM
In the 70's I was using multiple hardware, each with their own FORTRAN manual.
We were all very aware of portability issues, especially for file I/O, memory management and many more. I did not even consider the concept of Standard Fortran until a stable F77 became available, which was probably about 1982.
If you used a data file, then you very clearly knew each Fortran was different. Most hardware/compilers provided the basic functionality required although in very different ways. Attaching files to Fortran unit numbers is a very good example. Early on I did have my own random access file library for each computer I used, based on CDC Fortran, where you basically managed a memory address, a file address and a record size.

I do find it difficult to believe that portability problems were not discussed in a first course in Fortran at University/College. Most departments had access to multiple hardware. I remember the "urban myth" of how someone spent the department's semester budget on the university mainframe after convincing their supervisor to use the new whiz-bang iterative solver! It did happen. The myth was that budgets were enforced.

Standard conforming Fortran programs were probably not a viable consideration until 90's. F77 was too processor dependent.

robin....@gmail.com

May 5, 2020, 1:09:12 AM
On Tuesday, May 5, 2020 at 3:51:39 AM UTC+10, Ron Shepard wrote:
> On 5/3/20 8:01 PM, t......@antispam.ham wrote:
> > How many Fortran courses actually taught the standard, as opposed to
> > whatever the compiler at hand would accept?
>
> I had the same first experiences with fortran. Especially with pre-f77
> fortran, the standard language was simply too restrictive and too
> primitive to do many things that were necessary.

Nonsense.

> Yes, you could write
> some small simple programs entirely within the standard, but anything
> beyond that required nonstandard intrinsic functions and nonstandard
> syntax.

Rubbish.

> Pre-f77 fortran did not allow query of command line options,

True, but who needed it?

> it did not support any kind of character or character strings,

Character strings were provided via Hollerith constants, otherwise
how could you write out headings etc?

Portable character handling was possible pre-FORTRAN 77.

> you could not query for the date or time,

True. But the operating system provided printed evidence of that.

> there were no bit operators,

You could write your own.

> no namelist i/o,

You can use ordinary READ and WRITE statements for that.

> no implicit none, no double precision complex,

You can do double precision complex operations using DOUBLE PRECISION (real).

> and on and on. Even
> after f77, which included character variables, many of these other
> things persisted. All of those standard libraries that everyone used
> back then (EISPACK, LINPACK, and later LAPACK)

IBM, at least, provided the large library: "Scientific
Subroutine Package", from 1966 and probably earlier for
pre-S/360 machines.
The ACM also published many scientific subroutines.
Numerical algorithms were published in book form:
Wilkinson & Reinsch, "Handbook for Automatic Computation: Linear
Algebra", vol. II, 1971.

> that used Z* as the
> convention for complex*16 did so with compiler extensions.

Again, you can do double precision complex operations
using DOUBLE PRECISION (real).
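
A sketch of that workaround (the routine name is made up): a double precision complex multiply carried out with pairs of DOUBLE PRECISION reals, using (a+bi)(c+di) = (ac-bd) + (ad+bc)i:

```fortran
C     CR + CI*I = (AR + AI*I) * (BR + BI*I)
      SUBROUTINE ZMUL(AR, AI, BR, BI, CR, CI)
      DOUBLE PRECISION AR, AI, BR, BI, CR, CI
      CR = AR*BR - AI*BI
      CI = AR*BI + AI*BR
      END
```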

robin....@gmail.com

May 5, 2020, 1:21:10 AM
On Tuesday, May 5, 2020 at 4:51:21 AM UTC+10, ga...@u.washington.edu wrote:
> On Monday, May 4, 2020 at 6:18:59 AM UTC-7, robin...@gmail.com wrote:
>
> (snip, I wrote)
>
> > > I have known programs that tried to be very close to the standard.
>
> > > Character data has to be processed with A1 format, so one character
> > > per array element.
>
> > It was necessary to use an INTEGER array or variable.
>
> There is no requirement on that in the standard.

Hollerith constants were available in the 1966 standard.
Values could be stored in a variable of any type, including
INTEGER, LOGICAL, and REAL.

However, the most convenient type was INTEGER.

> But note that, as above, the standard allows reading into, and
> writing data back out, but not doing anything else with it.

Hollerith constants could be used in DATA statements and
CALL statements.

> For one, some systems will normalize floating point data on
> assignment, which is not good if it actually characters.

Naturally, which is why I said that INTEGER was best.

> For others,
> where the hardware integer is smaller than the actual INTEGER
> variable, they only copy the hardware value on assignment.
>
> (But as noted before, many such systems default to a smaller size.)
>
> > > As well as I know it, the only extension that
> > > it uses is the ability to compare values read in with A1 format,
> > > and decide if they are equal or not, and also the ability to assign
> > > them between array elements. Both of those the standard doesn't require.
>
> > > The program is MORTRAN2, which is a preprocessor that converts MORTRAN
> > > programs into Fortran. It then makes standard Fortran programs easier
> > > to write.
>
> The Mortran2 processor is about 1600 lines, avoiding extensions
> as simple as more complicated subscript expressions.
>
> > > I suspect the most common extension to the 66 standard is generalizing
> > > the form of array subscript expressions. The standard only allows
> > > for a few simple forms of expressions.
>
> > It was always possible to assign an integer variable to any kind
> > of expression, and to use that variable as the subscript.
>
> > It didn't need extensions for that.
>
> I suspect many don't even know about the limitation in
> the standard, though.
>
> A less common extension is allowing expressions for DO statements,
> where many temporary variables are used to hold those values.

Again, a variable could be used for the components of a DO statement.
Only up to THREE variables were required for the DO statement,
and typically only TWO. No big deal.

> And my always favorite Fortran 77 addition is allowing expressions
> in the I/O list of WRITE statements, again avoiding many
> temporary variables.

Use a variable.

dpb

May 5, 2020, 10:00:50 AM
On 5/4/2020 8:30 PM, JCampbell wrote:
> In the 70's I was using multiple hardware, each with their own FORTRAN manual.
> We were all very aware of portability issues, especially for file I/O, memory management and many more. I did not even consider the concept of Standard Fortran until a stable F77 became available, which was probably about 1982.
> If you used a data file, then you very clearly knew each Fortran was different. Most hardware/compilers provided the basic functionality required although in very different ways. Attaching files to Fortran unit numbers is a very good example. Early on I did have my own random access file library for each computer I used, based on CDC Fortran, where you basically managed a memory address, a file address and a record size.
>
> I do find it difficult to believe that portability problems were not discussed in a first course in Fortran at University/College. Most departments had access to multiple hardware. ...

If the course were in a computer science or programming-purpose-designed
course, perhaps. Although I'd venture that even there it would not get
much emphasis: every place I was, even if there were more than one
machine, the one used for the particular course was the only one
considered of significance.

More likely would be that the only introduction to FORTRAN for virtually
all other engineering curricula would be similar to my experience in
that all the "formal" training we had was about 2 weeks of lecture time
on the rudiments of the language including writing code on coding forms
to be graded because actual computer time was too dear to waste on
learners so only one final "project" was actually even submitted to the
compiler.

After that, you were on your own with a TA who had little more
experience than we did as the resource beyond the vendor manual and
McCracken.

--

Ian D Chivers

May 5, 2020, 11:08:14 AM
I worked in the Computer Centre at Imperial College from
1978 to 1986.

Two of us were given the job of moving the teaching from Fortran 66
to Fortran 77, and also doing a technology shift from punch cards
and line printers to timesharing systems and terminals.

We stuck to standard Fortran. Users had access to a variety of systems
including

CDC - The service at Imperial was CDC based
Cray - available at the University of London Computer Centre
aka ULCC
IBM - available in the Computing Department
Amdahl - available at ULCC
ICL - available at Queen Mary College
Harris - Chelsea College

Not sticking to standard Fortran would have
created a lot of problems.

The Fortran 77 book is available as a pdf
on our Fortranplus site.

Ian Chivers

Ron Shepard

May 5, 2020, 2:23:40 PM
On 5/4/20 1:51 PM, ga...@u.washington.edu wrote:
>> It was necessary to use an INTEGER array or variable.
> There is no requirement on that in the standard.
>
> But note that, as above, the standard allows reading into, and
> writing data back out, but not doing anything else with it. >
> For one, some systems will normalize floating point data on
> assignment, which is not good if it actually characters.

I sometimes used logical variables, and arrays of logical variables to
store characters. The general idea was that compilers generally just
copied bits for logical variable assignments, while for floating point
and integers, sometimes other things happened.

> For others,
> where the hardware integer is smaller than the actual INTEGER
> variable, they only copy the hardware value on assignment.

Although it does not apply in this situation involving copying bits, the
Cray integer format was like that too for some operations. All of the
classic Crays (-1, -2, X-MP, Y-MP) had 64-bit words, they were not byte
addressable. The addressing capabilities started at 24-bits in the early
models and later 32-bits. Some integer operations would use the short
address units rather than the full 64-bit arithmetic units. Also, on
some models, some integer operations (multiplication, I think) would use
the floating point 48-bit mantissa functional units, effectively storing
48-bit integers in the 64-bit word. However, there were full 64-bit
logical and integer operations available too, so it was never clear to
me when the integers might be truncated and when they wouldn't.

$.02 -Ron Shepard

Ron Shepard

May 5, 2020, 2:31:58 PM
On 5/4/20 8:30 PM, JCampbell wrote:
> I do find it difficult to believe that portability problems were not discussed in a first course in Fortran at University/College.

In my case, our school had one computer on campus. In the daytime, it
did business and accounting stuff for the administration, and after
hours it switched over to do coursework assignments for the programming
and numerical methods classes. Thus the only portability issues ever
encountered were when the textbook material didn't quite match up with
the hardware/software we had available.

$.02 -Ron Shepard

Brad Richardson

unread,
May 5, 2020, 2:44:32 PM5/5/20
to
The responses over the last few days have been a very interesting history lesson. I'd like to ask a question to those of you with that historical perspective.

Given that many of the issues and workarounds you guys have described have been solved, many of them 3 decades or more ago, why do we still have so much code around today that appears to have been written in complex ways to avoid those problems?

Is it just codes that have been handed down and had only the bare minimum maintenance for moving to newer platforms? Or was there a significant portion of the Fortran community that just kept programming kind of "by superstition", not really learning the newer features and not straying far from the examples already present or the guidance they got from more senior colleagues?

ga...@u.washington.edu

unread,
May 5, 2020, 2:55:08 PM5/5/20
to
On Tuesday, May 5, 2020 at 11:23:40 AM UTC-7, Ron Shepard wrote:

(snip, I wrote, regarding the need for INTEGER with A format)

> > There is no requirement on that in the standard.

> > But note that, as above, the standard allows reading into, and
> > writing data back out, but not doing anything else with it.
> >
> > For one, some systems will normalize floating point data on
> > assignment, which is not good if it is actually characters.

> I sometimes used logical variables, and arrays of logical variables to
> store characters. The general idea was that compilers generally just
> copied bits for logical variable assignments, while for floating point
> and integers, sometimes other things happened.

For the OS/360 compilers, the only one byte type is LOGICAL*1,
which is convenient for copying data in and out, but you can't
use it with relational operators. (Some compilers have an extension
that allows freely mixing of integer and logical values.)

INTEGER*2 isn't so bad, though.

Otherwise, with enough tricks with EQUIVALENCE you can compare
values from LOGICAL*1, but that is usually more work than it is
worth.

> > For others,
> > where the hardware integer is smaller than the actual INTEGER
> > variable, they only copy the hardware value on assignment.

> Although it does not apply in this situation involving copying bits, the
> Cray integer format was like that too for some operations. All of the
> classic Crays (-1, -2, X-MP, Y-MP) had 64-bit words, they were not byte
> addressable. The addressing capabilities started at 24-bits in the early
> models and later 32-bits. Some integer operations would use the short
> address units rather than the full 64-bit arithmetic units. Also, on
> some models, some integer operations (multiplication, I think) would use
> the floating point 48-bit mantissa functional units, effectively storing
> 48-bit integers in the 64-bit word. However, there were full 64-bit
> logical and integer operations available too, so it was never clear to
> me when the integers might be truncated and when they wouldn't.

Burroughs machines have a floating point normalization system that
prefers a zero exponent, if no bits are lost. That allows using the
same operations for floating point and integer arithmetic, which is
what Burroughs did.

The CDC 60 bit machines didn't do that completely, maybe only for
multiply. I believe you can add/subtract 60 bit integers, but
only multiply 48 bit integers.

I am less sure about Cray, but considering that he designed the
CDC machines, I suspect something similar.

dpb

unread,
May 5, 2020, 3:29:38 PM5/5/20
to
I would say from my experience about 50:50 of each.

And another 50 from the professors who never changed what they taught.

And one could still find a lot of that today outside of specific
computer science classes, I'd think, although since it's now about 15
years since I retired from active consulting I've lost track and not
been in much contact with the newer hires as to what they've been
given. By now many of those former profs I knew will have retired
themselves; whether the new instructors in engineering intro classes
are any better in this regard I'd not now know.

With what I see in the MATLAB forum that reflects homework questions,
and the level of instruction there in using it well and avoiding many
similar styles of coding, I'd not be terribly optimistic things will
have changed that much for Fortran, either.

--

Ron Shepard

unread,
May 5, 2020, 4:10:25 PM5/5/20
to
On 5/5/20 12:09 AM, robin....@gmail.com wrote:
> On Tuesday, May 5, 2020 at 3:51:39 AM UTC+10, Ron Shepard wrote:
>> On 5/3/20 8:01 PM, t......@antispam.ham wrote:
>>> How many Fortran courses actually taught the standard, as opposed to
>>> whatever the compiler at hand would accept?
>>
>> I had the same first experiences with fortran. Especially with pre-f77
>> fortran, the standard language was simply too restrictive and too
>> primitive to do many things that were necessary.
>
> Nonsense.

Where is your evidence?

>
>> Yes, you could write
>> some small simple programs entirely within the standard, but anything
>> beyond that required nonstandard intrinsic functions and nonstandard
>> syntax.
>
> Rubbish.

Evidence?

>
>> Pre-f77 fortran did not allow query of command line options,
>
> True, but who needed it?

Enough people so that it was a common vendor-specific extension.

>
>> it did not support any kind of character or character strings,
>
> Character strings were provided via Hollerith constants, otherwise
> how could you write out headings etc?

Mostly that was done with hollerith strings in format statements, not
with variables initialized with hollerith data statements.

>
> Portable character handling was possible pre-FORTRAN 77.

Hollerith strings made it possible, but it was not portable, much less
efficient. Even simple things, such as the number of characters in the
character set, or the number of those characters that could be stored in
each integer, or the order of the characters within each integer was
different from machine to machine. And if you actually did operations on
the characters, such as comparisons for ranges or incrementing to the
next character in a sequence, or converting between upper and lower
case, then the underlying character set caused portability issues. Not
all of that was due to limitations in the fortran standard, but it all
added to the difficulty of writing portable programs in pre-f77 fortran.
f77 made a big difference in all of that; it actually became practical to
write portable character-processing code in f77.
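For readers who never saw the idiom, the pre-f77 style under discussion looked roughly like this (a reconstruction of the style, not code from any particular program; how the character sits within the integer word is processor-dependent, which is exactly the portability hazard described above):

```fortran
C     PRE-F77 IDIOM: READ A CARD ONE CHARACTER PER INTEGER WITH A1
C     FORMAT, THEN COMPARE AGAINST A HOLLERITH-INITIALIZED CONSTANT.
C     FINDS THE FIRST NONBLANK COLUMN ON THE CARD.
      INTEGER LINE(80), BLANK
      DATA BLANK /1H /
      READ (5, 100) LINE
  100 FORMAT (80A1)
      DO 10 I = 1, 80
         IF (LINE(I) .NE. BLANK) GO TO 20
   10 CONTINUE
   20 CONTINUE
```

The .NE. comparison works on any one machine because both operands carry the character in the same bit positions, but nothing in the standard guaranteed it.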

>> you could not query for the date or time,
>
> True. But the operating system provided printed evidence of that.

The need for this was sufficient for every vendor to supply their own
nonportable extension. One often needed to do more with the date and
time than just print it, for example, the ability to time a section of
code in order to assess efficiency or to choose between different
algorithms for different data sets.

>> there were no bit operators,
>
> You could write your own.

In portable fortran? If not, then your statement is irrelevant.

Again, every vendor provided their own nonportable extensions for bit
operators. Fortran itself would not provide this until f90.

>
>> no namelist i/o,
>
> You can use ordinary READ and WRITE statements for that.

Again, every vendor provided their own nonportable extension for
namelist i/o. This would become standard in f90.

>
>> no implicit none, no double precision complex,
>
> You can do double precision complex operations using DOUBLE PRECISION (real).

Yet again, every vendor provided double precision complex as an extension.

>
>> and on and on. Even
>> after f77, which included character variables, many of these other
>> things persisted. All of those standard libraries that everyone used
>> back then (eispak, linpack, and later lapack)
>
> IBM, at least, provided the large library: "Scientific
> Subroutine Package", from 1966 and probably earlier for
> pre-S/360 machines.

A vendor-specific library does not provide portability to other machines.

> The ACM also published many scientific subroutines.
> Numerical algorithms were published in book form:
> Reinsch & Wilkinson, "Handbook for Automatic Computation: Linear
> Algebra", vol. II, 1971.
>
>> that used Z* as the
>> convention for complex*16 did so with compiler extensions.
>
> Again, you can do double precision complex operations
> using DOUBLE PRECISION (real).

Yet, that is not how those libraries were written. They all used the
common vendor extensions to provide double precision complex. An
exception to this general trend was that many fft routines were written
with just real arithmetic; that is because in that particular case the
real and imaginary operations naturally separate.

$.02 -Ron Shepard


Brad Richardson

unread,
May 5, 2020, 4:25:02 PM5/5/20
to
In my experience, most scientists and engineers (Fortran's primary user base) aren't taught more than a basic intro course in programming. And Fortran's use for that intro course is dwindling. And once most move on from undergraduate level stuff, they don't bother to learn much more about Fortran, or software development or design in general, than is required to complete their immediate task of making a minor modification to an existing code base, or writing a small-scale/scope research project not meant for general use.

Thus I have found that even recently written Fortran codes are written by people who don't really have much training in the more advanced features of Fortran, let alone higher-level concepts in software design.

I think the reason many Fortran users haven't adopted modern software development techniques is because most Fortran users aren't software developers, and don't see themselves as software developers.

Clive Page

unread,
May 5, 2020, 5:43:00 PM5/5/20
to
When I learned Fortran it was also at a time when there was just one computer in the entire university, and that didn't initially have a Fortran compiler at all. Portability doesn't start to become important until you have another computer to port something to.


--
Clive Page

Lynn McGuire

unread,
May 5, 2020, 6:58:24 PM5/5/20
to
I have a degree in Mechanical Engineering from Texas A&M University in
1982. I tested out of the Fortran courses in 1978 (4 credit hours
IIRC). I took CS 204 - IBM 370 Assembly Language to help me understand
what was going on underneath better. I started writing engineering
software in 1975 in Fortran IV and am still doing it. I have written
software in Fortran, Basic, Pascal, C, and C++. And a little Java and a
lot of HTML for grins.

Lynn

JCampbell

unread,
May 5, 2020, 10:35:00 PM5/5/20
to
One of the problems with adopting newer Fortran is that compilers have/had bugs and they are typically in new features. It is easier to use old Fortran approaches and move on to using the results, than keep checking the status of newer features.
This is evident with the lack of reliable F08 and F18 features, but it was also a significant problem with F77, which I did not adopt generally until at least 1985. The F77-generated code was slower than the FTN IV code, so we kept the developed FTN version and coding approach that worked with all its system libraries, where most porting issues were localised.
With the significant improvements in F90/95, most of my production code is now at that level, but most newer F03+ adoption has been limited to the extended intrinsic library.
There may be some great new features since F95, but I don't solve problems that need them and my solution approaches don't consider them.
Most of my code development in the last 10 years has been for OpenMP and 64-bit, while F03, F08 and F18 have moved in other directions to new types of computing solutions. I am not aware of what they are achieving.

JCampbell

unread,
May 5, 2020, 11:27:08 PM5/5/20
to
As an engineer, I am a solutions developer.
I don't focus on "higher level concepts in software design", but more on generating an accurate and reliable solution.
I don't see the evidence for the gains from "modern software development techniques" (MSDT), although that could be a very broad brush.
What are MSDT? Do I use them already? I've certainly spent a lot of time designing data structures.
Perhaps I mistake MSDT for OOP, where I can't identify the gains claimed; then I go back to what I know works. It's more a religion than a science, as the evidence is not readily available.
I was taught a hierarchical approach, built on a collection of simple routines, not the apparent complexity of the OOP examples that are reported.
OOP and OpenMP are beyond my understanding.

Ron Shepard

unread,
May 6, 2020, 1:38:07 AM5/6/20
to
On 5/5/20 1:55 PM, ga...@u.washington.edu wrote:
>> I sometimes used logical variables, and arrays of logical variables to
>> store characters. The general idea was that compilers generally just
>> copied bits for logical variable assignments, while for floating point
>> and integers, sometimes other things happened.
> For the OS/360 compilers, the only one byte type is LOGICAL*1,
> which is convenient for copying data in and out, but you can't
> use it with relational operators. (Some compilers have an extension
> that allows freely mixing of integer and logical values.)
>
> INTEGER*2 isn't so bad, though.
>
> Otherwise, with enough tricks with EQUIVALENCE you can compare
> values from LOGICAL*1, but that is usually more work than it is
> worth.

I started doing this on 36-bit word addressable machines, which didn't
really support logical*1. When comparing characters, they had to be
masked out and extracted from the logical storage unit anyway, so there
was no extra overhead compared to storing them in integers or reals, but
without worrying about renormalizations or negative zeros vanishing.

One of those machines supported 6-bit characters, six per word, and the
other machine supported 7-bit ASCII characters, five per word with an
unused bit "wasted" in each word. Later, that code was ported to all
kinds of other machines, both byte addressable and word addressable with
various size words.

Having a standard bit string data type in fortran sure would have
simplified things regarding portability.

$.02 -Ron Shepard

Ron Shepard

unread,
May 6, 2020, 2:24:54 AM5/6/20
to
I've seen this too, and I think there are several reasons.

I know of several production codes in my field of quantum chemistry
that are even today largely written in f77 style fortran. One reason is
that funding in my field is based on applications, almost never on
software development, so there are disincentives to rewrite existing
codes. No one is going to get a degree by rewriting an existing code in
a new language, and no funding agency is going to fund that instead of
some other project that promises new capability. Code development in my
field has always been done as a byproduct. Other scientific fields are
probably similar, and in these days of ever tightening budgets for
research agencies, that's probably not going to change.

Another reason, that I have discussed here in c.l.f in the past, is the
lack of some important features in f90+ regarding memory management. On
most modern supercomputers, there is only so much physical memory, with
no paging to external storage, so if your program requests too much
memory your job fails. With f77 conventions, where the programmer
handles all of the heap and stack allocations, information about
previous and remaining memory allocation is always available. With f90+,
it is not possible to query the system to determine those allocations. I
think this is a needless limitation of f90, that information is
available at run time and it simply should be made available to the
programmer.

If you look at large software development projects, such as LAPACK, you
also still see lots of f77 style code. Unlike my field, these projects
are developed by software engineers who understand everything there is
to know about writing software, and their funding is directed towards
producing that software directly, not just as a byproduct. Yet, even
after 30 years, they are still written and maintained in f77 style
language rather than modern fortran. One possible reason for this is
that writing that kind of software is difficult. There are lots of
little subtle tricks that are incorporated in the existing codebase, and
a rewrite, in modern fortran or any other language for that matter,
would almost certainly reintroduce bugs that were long ago corrected in
the existing code. Seemingly simple things like computing accurately the
roots of a quadratic equation, or computing a quantity like
sqrt(x**2+y**2) without unnecessary overflow or underflow, can be
tricky. So there is inertia to avoid modifying the existing code, and to
instead extend the existing code to add new capabilities.
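As an illustration of that last point, the naive formula overflows for large x even when the result is representable; the scaling trick (the approach behind LAPACK's DLAPY2, sketched here from the general idea rather than the actual source) avoids it:

```fortran
! Sketch of the scaling trick that avoids overflow/underflow in
! sqrt(x**2 + y**2): divide by the larger magnitude first, so the
! squared quantity never exceeds 1.  The real DLAPY2 is more careful.
function pythag(x, y) result(r)
  implicit none
  real(kind(1.0d0)), intent(in) :: x, y
  real(kind(1.0d0)) :: r, a, b, t
  a = abs(x)
  b = abs(y)
  if (a < b) then          ! ensure a >= b
     t = a
     a = b
     b = t
  end if
  if (a == 0.0d0) then
     r = 0.0d0             ! both arguments zero
  else
     t = b / a             ! t <= 1, so t*t cannot overflow
     r = a * sqrt(1.0d0 + t*t)
  end if
end function pythag
```

Getting edge cases like this right everywhere is a large part of the inertia in a mature library.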

$.02 -Ron Shepard

robin....@gmail.com

unread,
May 6, 2020, 2:38:25 AM5/6/20
to
On Wednesday, May 6, 2020 at 4:23:40 AM UTC+10, Ron Shepard wrote:
The CDC 7600 and Cyber 72 series were like that.
60-bit words could hold 60-bit integers.
The one multiply instruction did either floating-point multiply
or integer multiply.
If there were no significant bits in the upper 12 bits,
it did an integer multiply. (by "significant" I mean
that a negative value whose upper bits were ones did not
count as significant).
Otherwise it did a floating-point multiply.

Too bad if the integer(s) was/were oversize. You got a garbage
result without warning.

robin....@gmail.com

unread,
May 6, 2020, 4:26:57 AM5/6/20
to
On Wednesday, May 6, 2020 at 6:10:25 AM UTC+10, Ron Shepard wrote:
> On 5/5/20 12:09 AM, r......@gmail.com wrote:
> > On Tuesday, May 5, 2020 at 3:51:39 AM UTC+10, Ron Shepard wrote:
> >> On 5/3/20 8:01 PM, t......@antispam.ham wrote:
> >>> How many Fortran courses actually taught the standard, as opposed to
> >>> whatever the compiler at hand would accept?
> >>
> >> I had the same first experiences with fortran. Especially with pre-f77
> >> fortran, the standard language was simply too restrictive and too
> >> primitive to do many things that were necessary.
> >
> > Nonsense.
>
> Where is your evidence?

1. The IBM SSP for FORTRAN, a large library.
2. I was consultant at the computer center.

> >> Yes, you could write
> >> some small simple programs entirely within the standard, but anything
> >> beyond that required nonstandard intrinsic functions and nonstandard
> >> syntax.
> >
> > Rubbish.
>
> Evidence?

See above.
In any case, your claim is just not credible,
even in retrospect.

> >> Pre-f77 fortran did not allow query of command line options,
> >
> > True, but who needed it?
>
> Enough people so that it was a common vendor-specific extension.

> >> it did not support any kind of character or character strings,
> >
> > Character strings were provided via Hollerith constants, otherwise
> > how could you write out headings etc?
>
> Mostly that was done with hollerith strings in format statements, not
> with variables initialized with hollerith data statements.

Headings could be read in from cards and printed.

> > Portable character handling was possible pre-FORTRAN 77.
>
> Hollerith strings made it possible, but it was not portable,

Indeed it was. I ported a compiler from one machine to a
completely different machine with a completely different
character set (and completely different internal representation)
without a single change.

> much less
> efficient. Even simple things, such as the number of characters in the
> character set, or the number of those characters that could be stored in
> each integer,

It was always guaranteed that at least ONE character could be stored in
an integer variable.

> or the order of the characters within each integer was
> different from machine to machine.

Certainly. But that didn't stop programs from being portable.

> And if you actually did operations on
> the characters, such as comparisons for ranges or incrementing to the
> next character in a sequence, or converting between upper and lower
> case, then the underlying character set caused portability issues.

In pre-F77 days, upper case was principally used.

> Not
> all of that was due to limitations in the fortran standard, but it all
> added to the difficulty of writing portable programs in pre-f77 fortran.

Certainly possible.

> f77 made a big difference in all of that, it actually was practical to
> write portable character processing code in f77.

and pre-FORTRAN 77.

> >> you could not query for the date or time,
> >
> > True. But the operating system provided printed evidence of that.
>
> The need for this was sufficient for every vendor to supply their own
> nonportable extension. One often needed to do more with the date and
> time than just print it, for example, the ability to time a section of
> code in order to assess efficiency or to choose between different
> algorithms for different data sets.
>
> >> there were no bit operators,
> >
> > You could write your own.
>
> In portable fortran?

In a portable way, capable of running on any compiler.
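For what it's worth, the kind of "write your own" bit operation being claimed here can be expressed in integer arithmetic alone; a sketch in pre-F90 style, with an invented name, valid for non-negative values:

```fortran
C     "WRITE YOUR OWN" BIT OPERATIONS IN PORTABLE PRE-F90 FORTRAN,
C     USING ONLY INTEGER ARITHMETIC.  IBITK RETURNS BIT K (COUNTING
C     FROM 0) OF A NON-NEGATIVE INTEGER N.  AND/OR/XOR CAN BE BUILT
C     BIT BY BIT THE SAME WAY, AT SOME COST IN SPEED.
      INTEGER FUNCTION IBITK(N, K)
      INTEGER N, K
      IBITK = MOD(N / 2**K, 2)
      RETURN
      END
```

Portable, yes; competitive with a vendor's single-instruction intrinsics, no — which is why every vendor shipped extensions anyway.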

> If not, then your statement is irrelevant.
>
> Again, every vendor provided their own nonportable extensions for bit
> operators. Fortran itself would not provide this until f90.

> >> no namelist i/o,
> >
> > You can use ordinary READ and WRITE statements for that.
>
> Again, every vendor provided their own nonportable extension for
> namelist i/o. This would become standard in f90.

As I said, ordinary READ and WRITE could do that, without
extensions.

> >> no implicit none, no double precision complex,
> >
> > You can do double precision complex operations using DOUBLE PRECISION (real).
>
> Yet again, every vendor provided double precision complex as an extension.

So?
Complex double precision arithmetic can be done in FORTRAN
using DOUBLE PRECISION variables.
And COMPLEX arithmetic was being done in ALGOL in the early 1960s
using REAL arithmetic.
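The workaround described here amounts to carrying the real and imaginary parts as separate DOUBLE PRECISION variables and expanding the arithmetic by hand; a minimal sketch (names invented):

```fortran
C     DOUBLE PRECISION COMPLEX MULTIPLY (AR + AI*i)*(BR + BI*i) DONE
C     WITH PAIRS OF DOUBLE PRECISION REALS, AS STANDARD FORTRAN 66/77
C     ALLOWED, INSTEAD OF THE COMPLEX*16 VENDOR EXTENSION.
      SUBROUTINE ZMUL(AR, AI, BR, BI, CR, CI)
      DOUBLE PRECISION AR, AI, BR, BI, CR, CI
      CR = AR*BR - AI*BI
      CI = AR*BI + AI*BR
      RETURN
      END
```

It works, but every complex expression has to be expanded by hand this way, which is the convenience the extension provided.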

> >> and on and on. Even
> >> after f77, which included character variables, many of these other
> >> things persisted. All of those standard libraries that everyone used
> >> back then (eispak, linpack, and later lapack)
> >
> > IBM, at least, provided the large library: "Scientific
> > Subroutine Package", from 1966 and probably earlier for
> > pre-S/360 machines.
>
> A vendor-specific library does not provide portability to other machines.

That's a rather glib statement.
It isn't necessarily true, and it is demonstrably false.

> > The ACM also published many scientific subroutines.
> > Numerical algorithms were published in book form:
> > Reinsch & Wilkinson, "Handbook for Automatic Computation: Linear
> > Algebra", vol. II, 1971.
> >
> >> that used Z* as the
> >> convention for complex*16 did so with compiler extensions.
> >
> > Again, you can do double precision complex operations
> > using DOUBLE PRECISION (real).
>
> Yet, that is not how those libraries were written.

That's irrelevant. It's trivial to make the changes to use
DOUBLE PRECISION if one wants portable code.

robin....@gmail.com

unread,
May 6, 2020, 4:40:30 AM5/6/20
to
On Wednesday, May 6, 2020 at 3:38:07 PM UTC+10, Ron Shepard wrote:
As I have pointed out before, BIT strings have been available
in PL/I since 1966.
As well as that, so too have character strings.
As well as that, the equivalent of DOUBLE PRECISION COMPLEX
has been available in that language.
And many other features that people regularly complain about
not being present in FORTRAN right through to 1990.

ga...@u.washington.edu

unread,
May 6, 2020, 7:28:00 AM5/6/20
to
On Tuesday, May 5, 2020 at 11:38:25 PM UTC-7, robin...@gmail.com wrote:

(snip)

> The CDC 7600 and Cyber 72 series were like that.
> 60-bit integers could hold 60-bit integers.
> The one multiply instruction did either floating-point multiply
> or integer multiply.
> If there were no significant bits in the upper 12 bits,
> it did an integer multiply. (by "significant" I mean
> that a negative value whose upper bits were ones did not
> count as significant).
> Otherwise it did a floating-point multiply.

No.

http://classweb.ece.umd.edu/enee350-1.Sum2008/Notes/fltngpt.pdf

The multiply unit always does floating point multiply, using
the CDC floating point format.

First, unlike many floating point formats, the binary point is
to the right of the significand. That is, it is an integer.

Next, instead of using a biased exponent, they use a ones complement
exponent, such that bits are zero for an exponent of zero.

Then, and not so obvious, normalization prefers an exponent of zero.
Most systems normalize such that there are no high-order zero bits.
CDC, for values that don't lose low-order bits, shifts for an exponent
of zero.

And finally, for negative values, ones complement the whole word.

The result of all this is that integer values that fit in 48 bits
(49 including the sign) have the same representation as the ones
complement integer value. This also means no need for special
code or instructions to convert between integer and floating point.


> Too bad if the integer(s) was/were oversize. You got a garbage
> result without warning.

Note that the Fortran standard conveniently leaves the results
undefined in the case of overflow. Some programs expect two's
complement wrap, but CDC won't do that.



Brad Richardson

unread,
May 6, 2020, 10:50:46 AM5/6/20
to
One of the most concrete examples of MSDT I can point to are the SOLID[1] principles. Yes, they were developed using OOP style, but they are still applicable to plain imperative and functional styles. I also like the recommendation to make impossible states unrepresentable.

The most egregious violations I usually see in Fortran codes are of the interface segregation principle. I see many procedures that do completely different things based on the value of an integer argument. Even many intrinsic procedures violate this in a small way with optional arguments (e.g. `index(c1, c2, back = .true.)`).

Another I frequently see: desiring to be able to switch between solution techniques for a given numerical algorithm, someone adds an integer argument and embeds the different algorithms inside an if block in the procedure. Once that's done, neither of the algorithms nor the original procedure is usable anywhere else. It violates the Single Responsibility, Open-Closed, and Interface Segregation principles. It also makes impossible states representable: even though there are only 2, maybe 3, choices of algorithm, they used an integer argument. What happens if it's 4, or -1? It's supposed to be an impossible state, but since it's representable, the procedure has to deal with that possibility to be robust.
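One alternative to the integer selector, sketched here with invented names, is to pass the algorithm itself as a procedure argument (an F2003 feature), so that an out-of-range choice simply cannot be expressed:

```fortran
module solver_demo
  implicit none
  abstract interface
     function step_fn(x) result(y)   ! shape every algorithm must match
        real, intent(in) :: x
        real :: y
     end function step_fn
  end interface
contains
  ! The driver no longer branches on a flag; whatever procedure the
  ! caller supplies *is* the algorithm, so no invalid selector exists.
  function advance(x, step) result(y)
     real, intent(in) :: x
     procedure(step_fn) :: step
     real :: y
     y = step(x)
  end function advance

  function euler_step(x) result(y)   ! one concrete algorithm
     real, intent(in) :: x
     real :: y
     y = x + 0.1*x                   ! placeholder update rule
  end function euler_step
end module solver_demo
```

A call then reads `y = advance(x, euler_step)`; adding a new algorithm means writing a new conforming function, not editing an if block.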

That's not to say these are gospel that must be followed in every case. However, I've seen plenty of people refuse to write smaller, more focused procedures because (without having measured at all) they claim it will make their code run slower, in codes that take only a few seconds to run on a cheap laptop, and in sections of the code that are clearly not the bottleneck.

I think many of the people on these forums don't do stuff like that, but they are a biased sample.

[1] - https://en.wikipedia.org/wiki/SOLID

Richard Weed

unread,
May 6, 2020, 10:58:41 AM5/6/20
to
On Wednesday, March 4, 2020 at 3:39:19 PM UTC-6, Lynn McGuire wrote:
> "With A Little Help From My Friends"
>
> "I spent last week at my first Fortran Standards Committee meeting. It
> was a pretty interesting experience. Everyone there was brilliant, and
> interested in trying to do a good job improving the language. And yet,
> it was still somehow very disfunctional."
>
> Yup, Fortran is dysfunctional nowadays. And I doubt that is going to
> change, the roots are just too old.
>
> Lynn

Interesting discussion. For those of you who are interested in or encounter an
unfamiliar compiler extension from what I call the Jurassic Fortran era (pre
f77) I suggest you try to find a copy of what was my Fortran programming
textbook circa 1970, Fredric Stuart's "Fortran Programming", Wiley and Sons.
Stuart's book has 25 tables that compare features of 79 different compilers and
152 different computer models of that era. I still find it a valuable reference
when I'm looking at really old code, just to remind me of how things like
computed and assigned GO TO were supposed to work and what really arcane features
like SENSE SWITCH did.

As to the discussion about engineers vs "software developers", as an old
aerospace engineer I side with JCampbell. Engineers are by nature problem
solvers. Software is just a tool to solve the real problems we are paid to
solve. The reason most scientific programming was done by engineers (along with
math majors, who in my experience make the best programmers, and physics majors,
who are some of the worst) is that you don't want anyone developing a tool who has
no understanding of the physics/math/material science etc. of the problem the
tool is supposed to solve. That's like handing a toddler a box of matches and a
can of gasoline and hoping the kid doesn't burn your house down.

Also, like JCampbell, I've learned from painful experience that "modern software
development techniques" are sometimes (many times) not the best approach for
developing engineering tools. OOP is a good example. I've wanted to embrace OOP
but I've come to realise that there is more hype than substance to it. There are
a few useful concepts, but as pointed out by the work of Decyk, Norton, and
Szymanski at RPI and JPL, along with Ed Akin's "Object Oriented Programming via
Fortran 90", an object-based approach (as opposed to full-blown OOP) is better
for a wide range of problems, because it forces you to think about what I
consider the two most useful concepts to come out of OO: you program to an
interface and not an implementation, and you favor aggregation and composition
if you really, really need something like inheritance.

After a lot of wasted time trying to be fully OO, I've reverted to writing
everything that does the bulk of the real computational work as good old
procedural Fortran, and I limit the OO to what I call "command and control"
functions like initialization, managing memory, error handling etc. Even then
I'm basically following an object-based approach, where I'm using derived types
to package data together in place of COMMON blocks, and as a way of presenting a
simplified interface to potential users while hiding the procedural code that
actually does most of the work if I want. A side benefit is that I have a
library of (more) easily verified procedural code to fall back on if the OOP
causes a compiler to gag (which unfortunately is still the case with most
compilers that claim to support F03/F08 OOP).

Finally, I feel the same way about people who refer to themselves as "software
developers" as a guy I once worked with, who was one of the pioneers of the
Finite Element Method, a member of the National Academy of Engineering, and
someone who spent his entire career researching how you develop tools for
solving real-world problems, felt about people who called themselves
"Computational Scientists". His opinion was that they needed a better name to
call themselves, because for the most part very few of them did either
computations or science.

Brad Richardson

unread,
May 6, 2020, 11:07:53 AM5/6/20
to
And so despite the fact that many of these codes contain solutions which could be reused in other applications, they don't get reused because they weren't written in a reusable way.

> software engineers who understand everything there is
> to know about writing software

These are the kind of statements that really get me. No one can possibly know everything there is to know about writing software. It's still a developing field. So unless you're actively studying all the advancements being made in programming language design, new techniques, category theory, and many other related fields, you don't even know everything that has been discovered so far about writing software.

And yet, this is a sentiment that I often find present in Fortran developers. They think that since they've been doing it for 30 years, they must be experts at it, and there can't possibly be anything they don't know or any better way to do things. Improvements can't be made if everyone believes there aren't any improvements that could be made.

Brad Richardson

unread,
May 6, 2020, 11:30:40 AM5/6/20
to
So are software developers. They just happen to have training in solving different kinds of problems.

> you don't want anyone developing a tool that has
> no understanding of the physics/math/material science etc. of the problem the
> tool is suppose to solve.

True, but that doesn't mean their input isn't valuable. I want a scientist/engineer to make sure the math and physics are right, but I want the software developer to tell me how to break the solution into pieces and put them together in a way that will be maintainable and possibly reusable.

> I've wanted to embrace OOP but I've
> come to realise that there is more hype than substance to it.

A lot of software developers are coming to that realization as well. The OOP practiced by most Java or C++ developers completely misses the mark about what OOP was supposed to be as articulated by Alan Kay, the original person who coined the term. Many are recommending and using a more functional, "object based" approach. I'd say you're probably doing it right the way you describe it.