
Fortran still going strong on TIOBE index


Rudi Gaelzer

Jul 7, 2021, 11:28:14 AM
Last April it was reported that Fortran had returned to the top 20 languages, according to the TIOBE index.
Now the July report shows that Fortran has climbed to 14th position: https://www.tiobe.com/tiobe-index/
and it is still going up; possibly to 13th or 12th in the next report.
I think this is auspicious news.
What do you think are the causes of this renewed rise in interest in Fortran?

Lynn McGuire

Jul 7, 2021, 4:02:34 PM
Lots of very old programs out there are written in F66 or F77 and their
mainframes are going away. Time to port to Windows, Linux, or Unix !

I wonder how many of the programmers even know Fortran and understand
nuances such as automatic zero initialization ?

Lynn


Jos Bergervoet

Jul 7, 2021, 5:36:03 PM
People start to realize there aren't many languages that have:

1) A language evolution based on steady, backward-compatible changes.
2) An unambiguous language definition, maintained by a standardization
committee.
3) A high enough age to be reasonably complete despite the very
slow progress guaranteed by points 1) and 2).
4) A large code base and libraries for most things you need.
5) A compiler included in the public domain gcc compiler suite.
6) An excellent reputation for execution speed.
7) Parallel programming as an integrated part of the language.

It is understandable that this took some time to sink in, since some of
these things are a bit paradoxical. Especially the fact that Fortran
is as advanced as it is precisely because it is as old as it is.

But I think the list shows that the TIOBE ranking is well-deserved.

--
Jos

FortranFan

Jul 7, 2021, 8:22:03 PM
On Wednesday, July 7, 2021 at 5:36:03 PM UTC-4, Jos Bergervoet wrote:

> ..
> People start to realize there aren't many languages that have:
>
> 1) A language evolution based on steady, backward-compatible changes.
> 2) An unambiguous language definition, maintained by a standardization
> committee.
> 3) A high enough age to be reasonably complete despite the very
> slow progress guaranteed by points 1) and 2).
> 4) A large code base and libraries for most things you need.
> 5) A compiler included in the public domain gcc compiler suite.
> 6) An excellent reputation for execution speed.
> 7) Parallel programming as an integrated part of the language.
>
> It is understandable that this took some time to sink in, since some of
> these things are a bit paradoxical. Especially the fact that Fortran
> is as advanced as it is precisely because it is as old as it is.
>
> But I think the list shows that the TIOBE ranking is well-deserved.
> ..

Nice way to rationalize *whatever* might be going on that simply cannot be understood exactly but which makes Fortran move up the ranking!

This graph at the same ranking site might also be revealing, suggesting whatever is going on might be fleeting:
https://www.tiobe.com/tiobe-index/fortran/


JCampbell

Jul 7, 2021, 11:12:05 PM
On Thursday, July 8, 2021 at 6:02:34 AM UTC+10, Lynn McGuire wrote:
>
> I wonder how many of the programmers even know Fortran and understand
> nuances such as automatic zero initialization ?
>
> Lynn
I wonder how many Fortran programmers know where in the standard "automatic zero initialization" is discussed ?

It is my impression that this is not guaranteed by the standard, but then I must be a Fortran user who doesn't know.

Robin Vowels

Jul 8, 2021, 12:14:08 AM
On Thursday, July 8, 2021 at 6:02:34 AM UTC+10, Lynn McGuire wrote:
> On 7/7/2021 10:28 AM, Rudi Gaelzer wrote:
> > Last April it was reported that Fortran had returned to the top 20 languages, according to the TIOBE index.
> > Now the July report shows that Fortran has climbed to 14th position: https://www.tiobe.com/tiobe-index/
> > and it is still going up; possibly to 13th or 12th in the next report.
> > I think this is auspicious news.
> > What do you think are the causes of this renewed rise in interest in Fortran?
.
> Lots of very old programs out there are written in F66 or F77
.
and FORTRAN IV
.
>and their
> mainframes are going away. Time to port to Windows, Linux, or Unix !
>
> I wonder how many of the programmers even know Fortran and understand
> nuances such as automatic zero initialization ?
.
There is no automatic zero initialization.

Robin Vowels

Jul 8, 2021, 12:25:20 AM
On Thursday, July 8, 2021 at 7:36:03 AM UTC+10, Jos Bergervoet wrote:
> On 21/07/07 5:28 PM, Rudi Gaelzer wrote:
> > Last April it was reported that Fortran had returned to the top 20 languages, according to the TIOBE index.
> > Now the July report shows that Fortran has climbed to 14th position: https://www.tiobe.com/tiobe-index/
> > and it is still going up; possibly to 13th or 12th in the next report.
> > I think this is auspicious news.
> > What do you think are the causes of this renewed rise in interest in Fortran?
> People start to realize there aren't many languages that have:
>
> 1) A language evolution based on steady, backward-compatible changes.
.
A number of old features are no longer standard and have been deleted
or are due to be deleted.
A number of old "features" that continue to be used, were never standard.
.
> 2) An unambiguous language definition, maintained by a standardization
> committee.
.
Ambiguous definitions arise in the case of certain COMPLEX constructs
that are error-prone.
.
The KIND specification is ambiguous, and can catch anyone.
.
> 3) A high enough age to be reasonably complete despite the very
> slow progress guaranteed by points 1) and 2).
> 4) A large code base and libraries for most things you need.
> 5) A compiler included in the public domain gcc compiler suite.
> 6) An excellent reputation for execution speed.
> 7) Parallel programming as an integrated part of the language.
>
> It is understandable that this took some time to sink in, since some of
> these things are a bit paradoxical. Especially the fact that Fortran
> is as advanced as it is precisely because it is as old as it is.
>
> But I think the list shows that the TIOBE ranking is well-deserved.
.
> People start to realize there aren't many languages that have:
.
One such language is PL/I, which was introduced as an improved
FORTRAN -- and a vast improvement it was! (and still is).

dpb

Jul 8, 2021, 10:20:06 AM
On 7/7/2021 10:28 AM, Rudi Gaelzer wrote:
Given the manner in which those rankings are computed, I don't believe
there's any way to prove any correlation to any real application of Fortran.

At the level of those numbers, wild fluctuations are possible and mostly
what it "measures" is just random noise.

--

Thomas Koenig

Jul 8, 2021, 10:20:13 AM
Jos Bergervoet <jos.ber...@xs4all.nl> schrieb:
> On 21/07/07 5:28 PM, Rudi Gaelzer wrote:
>> Last April it was reported that Fortran had returned to the top 20 languages, according to the TIOBE index.
>> Now the July report shows that Fortran has climbed to 14th position: https://www.tiobe.com/tiobe-index/
>> and it is still going up; possibly to 13th or 12th in the next report.
>> I think this is auspicious news.
>> What do you think are the causes of this renewed rise in interest in Fortran?
>
> People start to realize there aren't many languages that have:
>
> 1) A language evolution based on steady, backward-compatible changes.

And that is a _huge_ advantage.

Compare this to the C++ approach, where a serious discussion paper
co-authored by two WG21 members advocates dropping backwards and
forwards compatibility:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p2137r0.html

This paper should be enough to give anybody the creeps.

Jos Bergervoet

Jul 8, 2021, 1:38:04 PM
On 21/07/08 6:25 AM, Robin Vowels wrote:
> On Thursday, July 8, 2021 at 7:36:03 AM UTC+10, Jos Bergervoet wrote:
>> On 21/07/07 5:28 PM, Rudi Gaelzer wrote:
>>> Last April it was reported that Fortran had returned to the top 20 languages, according to the TIOBE index.
>>> Now the July report shows that Fortran has climbed to 14th position: https://www.tiobe.com/tiobe-index/
>>> and it is still going up; possibly to 13th or 12th in the next report.
>>> I think this is auspicious news.
>>> What do you think are the causes of this renewed rise in interest in Fortran?
>> People start to realize there aren't many languages that have:
>>
>> 1) A language evolution based on steady, backward-compatible changes.
> .
> A number of old features are no longer standard and have been deleted
> or are due to be deleted.
> A number of old "features" that continue to be used, were never standard.

In the first decades of its life, FORTRAN fooled around just like
those young languages we see around us now! But with age comes
responsibility, and that's probably what we're seeing here.

> .
>> 2) An unambiguous language definition, maintained by a standardization
>> committee.
> .
> Ambiguous definitions arise in the case of certain COMPLEX constructs
> that are error-prone.
> .
> The KIND specification is ambiguous, and can catch anyone.

If any ambiguities still pop up then at least an 'interpretation' of
the rules can be given by the standardization body to resolve the
issue.

> .
>> 3) A high enough age to be reasonably complete despite the very
>> slow progress guaranteed by points 1) and 2).
>> 4) A large code base and libraries for most things you need.
>> 5) A compiler included in the public domain gcc compiler suite.
>> 6) An excellent reputation for execution speed.
>> 7) Parallel programming as an integrated part of the language.
>>
>> It is understandable that this took some time to sink in, since some of
>> these things are a bit paradoxical. Especially the fact that Fortran
>> is as advanced as it is precisely because it is as old as it is.
>>
>> But I think the list shows that the TIOBE ranking is well-deserved.
> .
>> People start to realize there aren't many languages that have:
> .
> One such language is PL/I, which was introduced as an improved
> FORTRAN -- and a vast improvement it was! (and still is).

But PL/I doesn't have points 4), 5) and 7), to name a few.

--
Jos

Lynn McGuire

Jul 8, 2021, 4:45:01 PM
Fortran IV is Fortran 66.

All Fortran code developed on Univac 1108s or IBM mainframes or using
the Unix F77 compiler had automatic zero initialization as a feature.
Most Fortran programmers never realized that local variables should be
initialized until it was too late.

Lynn



Lynn McGuire

Jul 8, 2021, 4:52:48 PM
No, automatic zero initialization for local variables was never a part
of the standard. It came into place as the mainframes had multiple
users and would zero-initialize memory pages before use by a program
to keep programs from accessing what a previous user had left there.
Modern cpus put random values, not zeros, into memory pages before usage
now.

Unfortunately, programmers started to assume that local variables did
not require initialization and wrote software accordingly. So Fortran
software dating to the 1960s, 1970s, and 1980s will probably have
problems with local variable initialization. YMMV (your mileage may
vary !).

Lynn

Lynn McGuire

Jul 8, 2021, 7:45:36 PM
Probably as good as anything else for indicating interest / usage. I
suspect a lot of old Fortran 66 /77 software is getting ported to
Windows, Linux, and Unix servers now.

Lynn

gah4

Jul 8, 2021, 9:21:52 PM
On Thursday, July 8, 2021 at 1:45:01 PM UTC-7, Lynn McGuire wrote:

(snip)

> All Fortran code developed on Univac 1108s or IBM mainframes or using
> the Unix F77 compiler had automatic zero initialization as a feature.
> Most Fortran programmers never realized that local variables should be
> initialized until it was too late.

IBM mainframes didn't (and don't) do automatic zero initialization, but
it can be done. One installation I knew (OS/360 days) initialized to X'81'.

It is complicated, though. Such data isn't initialized at all by the compiler,
but left as 'holes' in the object program. Each record (card) of the object
program has a start address and length. None cover uninitialized data.

The linkage editor then takes object programs and combines them.
It likes to write out larger records, and so has to fill in smaller such
blocks. Older ones left whatever was in its own memory space.
Later ones would initialize that space.

But larger ones are not written to the load module; as with the
object program, they are left out. In this case, it is whatever is in memory
before load module fetch. Part of that is the initiator that opens
data sets, and otherwise gets things ready, and then reads in the
actual load module records. So this also has to be changed to actually
initialize Fortran variables. As noted, it can be done.

That is all before MVS. OS/360 uses only real storage addresses.
OS/VS1 and OS/VS2 use a single virtual address space with all
programs in it. By the time of MVS, it is likely that security
requirements disallowed one program's data from being seen by
others, but not from itself.
But note as above, there are two places that have to be changed.
Even if the OS zeros all pages, it still reads in the load module which
might have already initialized such blocks with whatever is left
over in the linkage editor memory.

Fortran E, G, H, and VS Fortran all used only static allocation.
As well as I know, there is no IBM supplied Fortran 90 or
later compiler for descendants of OS/360.

gah4

Jul 8, 2021, 9:31:37 PM
On Thursday, July 8, 2021 at 1:52:48 PM UTC-7, Lynn McGuire wrote:

(snip)

> No, automatic zero initialization for local variables was never a part
> of the standard. It came into place as the mainframes had multiple
> users and would zero-initialize memory pages before use by a program
> to keep programs from accessing what a previous user had left there.
> Modern cpus put random values, not zeros, into memory pages before usage
> now.

C requires static data, unless otherwise initialized, to be zero.
Dynamic data (auto variables or malloc() allocated) is not necessarily zero.

Last I knew, Linux uses a single zero initialized block, and maps that read only
into any allocated space. Then, when the program writes to it, a new block is
allocated, zeroed, and mapped into virtual address space.

> Unfortunately, programmers started to assume that local variables did
> not require initialization and wrote software accordingly. So Fortran
> software dating to the 1960s, 1970s, and 1980s will probably have
> problems with local variable initialization. YMMV (your mileage may
> vary !).

Fortran compilers based on C compilers naturally tend to follow the zero
of static space rule. As well as I know, they won't zero dynamically allocated
memory. Since Fortran now allows for either static or automatic variables,
other than those with the SAVE attribute, one can't be sure.

The usual Unix-like systems don't need to write large blocks of zeroes into
the object file or executable file. Non-zero data, they do. Initialize a (very)
large Fortran array with all elements to the same non-zero value, they will
all be written into the object program and executable file.

integer :: X(100000000) = 1
print *, x(1000000)
end


will give you a very large file.

Robin Vowels

Jul 8, 2021, 9:59:50 PM
The ambiguities cited (COMPLEX, kind values) cannot be "resolved"
by an "interpretation". Those are deficiencies in the design.
30 years have now elapsed since these design flaws were introduced
and still these flaws have not been fixed.
.
> >> 3) A high enough age to be reasonably complete despite the very
> >> slow progress guaranteed by points 1) and 2).
> >> 4) A large code base and libraries for most things you need.
> >> 5) A compiler included in the public domain gcc compiler suite.
> >> 6) An excellent reputation for execution speed.
> >> 7) Parallel programming as an integrated part of the language.
> >>
> >> It is understandable that this took some time to sink in, since some of
> >> these things are a bit paradoxical. Especially the fact that Fortran
> >> is as advanced as it is precisely because it is as old as it is.
> >>
> >> But I think the list shows that the TIOBE ranking is well-deserved.
> > .
> >> People start to realize there aren't many languages that have:
> > .
> > One such language is PL/I, which was introduced as an improved
> > FORTRAN -- and a vast improvement it was! (and still is).
.
> But PL/I doesn't have points 4), 5) and 7), to name a few.
.
What? You think that PL/I does not have a large code base and libraries
for most things that you need?
What? You think that PL/I does not have a public domain compiler?
What? You think that PL/I has no parallel programming? PL/I has
had this since PL/I-F in 1966.
What else do you think that PL/I does not have?
Well, it does not have ambiguous definitions.

Robin Vowels

Jul 8, 2021, 10:06:20 PM
On Friday, July 9, 2021 at 6:45:01 AM UTC+10, Lynn McGuire wrote:
> On 7/7/2021 11:14 PM, Robin Vowels wrote:
> > On Thursday, July 8, 2021 at 6:02:34 AM UTC+10, Lynn McGuire wrote:
> >> On 7/7/2021 10:28 AM, Rudi Gaelzer wrote:
> >>> Last April it was reported that Fortran had returned to the top 20 languages, according to the TIOBE index.
> >>> Now the July report shows that Fortran has climbed to 14th position: https://www.tiobe.com/tiobe-index/
> >>> and it is still going up; possibly to 13th or 12th in the next report.
> >>> I think this is auspicious news.
> >>> What do you think are the causes of this renewed rise in interest in Fortran?
> > .
> >> Lots of very old programs out there are written in F66 or F77
> > .
> > and FORTRAN IV
> > .
> >> and their
> >> mainframes are going away. Time to port to Windows, Linux, or Unix !
> >>
> >> I wonder how many of the programmers even know Fortran and understand
> >> nuances such as automatic zero initialization ?
> > .
> > There is no automatic zero initialization.
> Fortran IV is Fortran 66.
.
FORTRAN IV precedes FORTRAN 66.
.
> All Fortran code developed on Univac 1108s or IBM mainframes
.
Not on IBM 360 it didn't.
.
> or using
> the Unix F77 compiler had automatic zero initialization as a feature.
> Most Fortran programmers never realized that local variables should be
> initialized until it was too late.
.
That may be true (never realized...), but that last sentence contradicts what
you wrote in the sentence before that.

Ron Shepard

Jul 9, 2021, 2:26:11 AM
On 7/8/21 3:52 PM, Lynn McGuire wrote:
> Unfortunately, programmers started to assume that local variables did
> not require initialization and wrote software accordingly.

I don't think this is a fair characterization of the fortran programmers
I knew in the 1970s. Most of us knew that variables needed to be
initialized, especially those of us who used overlay linkers. However,
there were few tools, either static or runtime analyzers, that could
locate such problems, so we wrote code that violated the standard by
mistake, despite our efforts, not by intent.

BTW, the univac 1108 I used in the 1970s did have an overlay linker, so
the programmer had to know about initialization of variables, either
with runtime assignments or with data and block data at compile time, to
make that work.

By the 1980s, tools did become available to check for uninitialized
variables. These included compiler options and also separate analysis
tools like ftnchek.

The previous comment about IBM virtual memory reminded me of something
odd. IBM thought that virtual memory was about dividing up the physical
memory into smaller address spaces (e.g. for CMS time sharing among
multiple users), while most other vendors used the term to describe
using external disk space in order to run programs that were larger than
the physical memory. From that perspective, those two were almost the
opposite meaning.

$.02 -Ron Shepard

gah4

Jul 9, 2021, 4:04:01 AM
On Thursday, July 8, 2021 at 11:26:11 PM UTC-7, Ron Shepard wrote:

(snip)
> The previous comment about IBM virtual memory reminded me of something
> odd. IBM thought that virtual memory was about dividing up the physical
> memory into smaller address spaces (e.g. for CMS time sharing among
> multiple users), while most other vendors used the term to describe
> using external disk space in order to run programs that were larger than
> the physical memory. From that perspective, those two were almost the
> opposite meaning.

I am not so sure what IBM thought about it. The early OS/VS1 and OS/VS2
systems worked like OS/360 MFT and MVT, respectively. The big problem
with MVT was that after not so long, memory would be fragmented. To solve
that, people (that I knew) always requested 300K. (That was the maximum
for most job classes.) So, one advantage of OS/VS2 is that only virtual
storage was fragmented, not real storage. I believe I remember running 8MB
of VS on a 370/168 with 3MB real storage.

For OS/360 and OS/VSx and later, there is TSO. For real mode OS/360, TSO was
complicated by the inability to move a program once it started, as programs
could keep addresses (pointers). So, virtual storage was a big win for TSO.

CMS goes with VM, and yes also allows for time-shared use.

But yes, the larger systems ran more than one program (task) at a time,
which allowed another program to run while one was doing I/O, or otherwise.

Mostly this was done on other larger systems, such as DEC's PDP-10 and VAX
systems. It is the total address space of all programs that is bigger than
physical memory.

But okay, Unix systems in the late 1980's, such as Suns, were often enough
used by only one person. And for those, virtual memory might have been
used to allow larger programs, than physical memory.

But otherwise, it was mostly not until virtual storage for 80286 based
personal computers. I had OS/2 v1.0 running on an AT clone in 1990, when
most were running MS-DOS with 640K real memory. I think I had 5MB
on my first OS/2 system. (There was also Xenix, and some others.)


Themos Tsikas

Jul 9, 2021, 5:53:07 AM
On Thursday, 8 July 2021 at 01:22:03 UTC+1, FortranFan wrote:

> Nice way to rationalize *whatever* might be going on that simply cannot be understood exactly but which makes Fortran move up the ranking!
>
> This graph at the same ranking site might also be revealing, suggesting whatever is going on might be fleeting:
> https://www.tiobe.com/tiobe-index/fortran/

One thing younger languages don't have is a forum where 50 year old practices are endlessly debated.

Themos Tsikas, NAG Ltd

Jos Bergervoet

Jul 9, 2021, 6:42:04 AM
Less so than Fortran.

> What? You think that PL/I does not have a public domain compiler?

The claim was about the public-domain gcc compiler suite. You can
choose between C, C++ and Fortran.

> What? You think that PL/I has no parallel programming? PL/I has
> had this since PL/I-F in 1966.

How are co-arrays defined and how are the images synchronized? Is that
in the official PL/I language standard?

> What else do you think that PL/I does not have?
> Well, it does not have ambiguous definitions.

I'm quite sure PL/I is a good language and was better than Fortran at
the time of introduction. And so was Algol68. But my list contains
things that aren't necessarily determined by those facts. Even if
you think that Fortran as a language is not the very best, you can
still agree that it is quite good and also has a few circumstantial
advantages, as mentioned in the list.

--
Jos

Rudi Gaelzer

Jul 9, 2021, 8:04:13 AM
I'm somewhat delighted by the discussion sparked by my original post.
One aspect I originally wanted to spark a debate about concerns the different rankings of popularity/usage of programming languages that one can find on the net.
It was pointed out that the PYPL index (https://pypl.github.io/PYPL.html) would be a more trustworthy indicator of the current popularity of a given language. And indeed, in the PYPL index, Fortran is not listed among the top 20.
In fact, the TIOBE and PYPL indices show quite disparate results, with languages poorly ranked in the former appearing near the top of the latter (Rust and Julia are examples).
IMVHO (very humble opinion), what happens with the PYPL index is that they rank the languages according to the number of searches for the language's tutorial. That seems to favor those languages that have "standard" tutorials that can be easily found, such as:
https://docs.python.org/3/tutorial/
https://en.cppreference.com/w/
https://julialang.org/learning/tutorials/
https://www.rust-lang.org/learn
just to name a few.
Some of those tutorials were built by the maintainers of the languages' "official" websites, usually created by the community of developers/users. It seems to me that Fortran is lacking in this respect. I know of no comprehensive Fortran tutorial, apart from some third-party initiatives (such as tutorialspoint and the Fortran Wiki).
IMVVHO, the task of creating and maintaining such a service should fall on the shoulders of the ISO/WG5 committee, or it should delegate the task to a group of maintainers and then provide validation for it.

Lynn McGuire

Jul 9, 2021, 7:20:49 PM
We only ported to the IBM 370 and newer systems. I have no idea what
the IBM 360 system was like.

Lynn


gah4

Jul 9, 2021, 10:07:20 PM
On Friday, July 9, 2021 at 3:42:04 AM UTC-7, Jos Bergervoet wrote:

(snip, someone wrote)

> > What? You think that PL/I has no parallel programming? PL/I has
> > had this since PL/I-F in 1966.

> How are co-arrays defined and how are the images synchronized? Is that
> in the official PL/I language standard?

PL/I has multitasking. Not quite the same as multithreading.

The usual use would be to start a subtask, and then some time later WAIT
(that is the statement) for it to finish. There are EVENT variables, which
correspond to OS/360 ECBs (event control blocks).

I believe this is usually described as coarse grain multitasking, as
opposed to the fine grain of multithreading.

Maybe more obvious, it is also used for asynchronous I/O, which I believe
Fortran now has.

Among other things, PL/I compilers generate routines that are not only
reentrant for recursion, but also reentrant across multiple tasks running
at the same time. This complicates I/O, where different
tasks have different I/O streams from the same code.


gah4

Jul 9, 2021, 10:13:49 PM
On Friday, July 9, 2021 at 4:20:49 PM UTC-7, Lynn McGuire wrote:

(snip, I wrote)
> > Fortran E, G, H, and VS Fortran all used only static allocation.
> > As well as I know, there is no IBM supplied Fortran 90 or
> > later compiler for descendants of OS/360.

> We only ported to the IBM 370 and newer systems. I have no idea what
> the IBM 360 system was like.

The compilers were the same, though they were under continual evolution.

As noted, though, it is mostly not the compiler, but the linkage editor and
program fetch (loading programs into memory to execute them), and those
may have changed along the way.

One that I remember, I suspect from OS/360 days, was executing the linkage
editor after a compilation failed. It then tried to read in the object program
from a data set that was never written. It instead read whatever was in those
disk blocks from the previous use. That was before people worried as much
about data security. I suspect that in the early days of OS/360, programs could
get data from other users left over in core. At some point, I suspect that had
to change. But that doesn't mean you don't get your own data left in core.

Robin Vowels

Jul 10, 2021, 12:11:01 AM
.
Like the /370, only slower.

Lynn McGuire

Jul 10, 2021, 2:00:27 AM
We had considerable problems porting to the IBM 370. It was our first
machine with 8-bit characters. Before that, we developed on the Univac 1108
(36 bit) and ported to the CDC 7600 (60 bit), both machines with 6-bit
characters. Since the Univac 1108 supported six characters per word, we had
assumed that would work everywhere. We had to rewrite all of our statements
storing 6HXXXXXX into an integer to store 4HXXXX and 2HXX in two integers.
Quite painful.

Lynn

gah4

Jul 10, 2021, 4:29:41 AM
On Friday, July 9, 2021 at 11:00:27 PM UTC-7, Lynn McGuire wrote:

(snip)

> We had considerable problems porting to the IBM 370. It was our first
> machine with 8-bit characters. Before that, we developed on the Univac 1108
> (36 bit) and ported to the CDC 7600 (60 bit), both machines with 6-bit
> characters. Since the Univac 1108 supported six characters per word, we had
> assumed that would work everywhere. We had to rewrite all of our statements
> storing 6HXXXXXX into an integer to store 4HXXXX and 2HXX in two integers.
> Quite painful.

Last week I was working with Spice 2g6, the last of the Fortran Spice programs.

I believe it was most often run on VAX 40 or so years ago.
It uses double precision for most of its data (except COMPLEX), and the double
precision includes character data of up to 8 characters. (All names are only
significant to the first 8 characters.)

That is a little tricky, as not all machines accurately compare floating point
data with characters in them. In most places, it just compares them, but
in some it uses a subroutine to do the comparison. The subroutine in
the version I have just compares them, but it could be replaced with a
fancier one.

There is also a subroutine to copy characters from one to another, which
(usually) has double precision arguments, but the dummy variables
are arrays of bytes. It would have used LOGICAL*1 for the IBM/370 version,
and maybe BYTE for the VAX version, but seems to work with INTEGER(1)
on gfortran. It might be that I am the first to compile it in 28 years.

I believe that the IBM/370 can reliably compare character data in REAL*8
variables. Also, S/370 doesn't normalize on assignment, which would
destroy character data. Spice has many 8H constants. For S/370 and VAX,
you could even use COMPLEX*16 for 16 characters.

One thing I haven't figured out yet: gfortran allows initializing double precision
variables with 8H constants, but not with apostrophe-delimited constants.

Robin Vowels

Jul 10, 2021, 6:44:45 AM
.
CHARACTER variables have been available since FORTRAN 77,
and in PL/I since 1966.
.
> Also, S/370 doesn't normalize on assignment, which would
> destroy character data. Spice has many 8H constants. For S/370 and VAX,
> you could even use COMPLEX*16 for 16 characters.
>
> One that I haven't figure out yet, gfortran allows initializing double precision
> variables with 8H constants, but not apostrophe delimited constants.
.
Is this something important, given that FORTRAN has had
CHARACTER variables for 40+ years?

Jos Bergervoet

Jul 10, 2021, 7:58:03 AM
On 21/07/10 4:07 AM, gah4 wrote:
> On Friday, July 9, 2021 at 3:42:04 AM UTC-7, Jos Bergervoet wrote:
>
> (snip, someone wrote)
>
>>> What? You think that PL/I has no parallel programming? PL/I has
>>> had this since PL/I-F in 1966.
>
>> How are co-arrays defined and how are the images synchronized? Is that
>> in the official PL/I language standard?
>
> PL/I has multitasking. Not quite the same as multithreading.
>
> The usual use would be to start a subtask, and then some time later WAIT
> (that is the statement) for it to finish. There are EVENT variables, which
> correspond to OS/360 ECBs (event control blocks).

That was also possible in Algol68 in those times. The "parallel clause"
would start the tasks:
par( do_this, do_that, ... );
and then there was the "semaphore" type to make tasks aware of each
other's state and to wait if needed.

>
> I believe this is usually described as coarse grain multitasking, as
> opposed to the fine grain of multithreading.
>
> Maybe more obvious, it is also used for asynchronous I/O, which I believe
> Fortran now has.
>
> Among others, PL/I compilers generate not only reentrant routines
> that can be used for recursion, but also reentrant for multiple tasks
> at the same time. Among others, it complicates I/O, where different
> tasks have different I/O streams from the same code.

I/O is just one thing (and probably not dominant in the time
consumption for complicated numerical problems).

But how is the shared memory being handled? There you have the
same problem. If the user has to do that "by hand" for all shared
variables (using the semaphores and similar constructs) then it
is not a solution that is built into the language, as I meant it.
Only the tools to create a solution by hand are then built into
the language.
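In Fortran, by contrast, the sharing and synchronization are part of the language itself. A minimal coarray sketch (runs single-image with, e.g., gfortran -fcoarray=single, or across real images with a coarray runtime):

```fortran
program images
  implicit none
  integer :: x[*]            ! a coarray: one copy of x per image
  x = this_image()           ! each image writes its own copy
  sync all                   ! image synchronization built into the language
  if (this_image() == 1) then
     ! image 1 reads another image's copy directly, no hand-rolled
     ! semaphores or message passing needed
     print *, 'copy held by the last image:', x[num_images()]
  end if
end program images
```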

--
Jos

gah4

Jul 10, 2021, 3:35:36 PM
On Saturday, July 10, 2021 at 3:44:45 AM UTC-7, Robin Vowels wrote:

(snip, I wrote)
> > One that I haven't figure out yet, gfortran allows initializing double precision
> > variables with 8H constants, but not apostrophe delimited constants.

> Is this something important?, since for 40+ years, FORTRAN has had
> CHARACTER variables.

And some programs are older than that.

Spice2 is 1975, following Spice1 earlier.

As far as I know, it descends from ECAP, originally written
in Fortran II.

The ECAP that I have came from an IBM 1130, which uses a subset
version of Fortran IV: no logical variables or logical IF, and only
five-character variable names. It seems that they took all the six-character
names (from an earlier compiler), removed the middle two letters, and replaced
them with Z. There are some WRITE statements that give the name of the
variable they are writing out. I believe it goes back to about 1963.
For ECAP, all language keywords are recognized by only the first two
characters, and most programs abbreviate them.

Lynn McGuire

Jul 10, 2021, 5:03:42 PM
There was no double precision or character data in 1975 when we did our
first port to the IBM 370.

Lynn

gah4

Jul 10, 2021, 7:32:37 PM
On Saturday, July 10, 2021 at 2:03:42 PM UTC-7, Lynn McGuire wrote:

(snip)

> There was no double precision or character data in 1975 when we did our
> first port to the IBM 370.

CHARACTER didn't come until Fortran 77, or VS Fortran for the 370.

But double precision, including its use for A8 format and Hollerith constants,
goes back to sometime in the Fortran II days, and was definitely in all the 360
and 370 compilers. Many programs written for single precision on 36-bit
predecessors needed double precision for S/360.

A small number of systems would normalize, or otherwise change the value
of floating point variables, but S/370 is fine. COMPLEX works, too.

If you want to play with the bits, shift and mask them, then it is harder.
As above, Spice uses a subroutine to do all the moving of characters,
which could be in assembly if it can't be done in Fortran. But on S/370
it can be done with LOGICAL*1 and EQUIVALENCE.
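A sketch of that LOGICAL*1 / EQUIVALENCE byte-access trick, here with INTEGER(1) as in the gfortran build mentioned earlier. Mixed-type EQUIVALENCE with non-default kinds is a widely supported extension rather than standard Fortran, so treat this as illustrative:

```fortran
program spice_bytes
  implicit none
  double precision :: d
  integer(kind=1) :: b(8)
  ! Share storage between the REAL*8 word and an 8-byte array
  ! (common legacy extension; gfortran accepts it by default).
  equivalence (d, b)
  ! Fill the word with eight characters via a bit copy.
  d = transfer('ABCDEFGH', d)
  ! Each byte is now individually addressable, in storage order,
  ! so b(1) holds the code for 'A' regardless of endianness.
  if (b(1) /= int(iachar('A'), 1)) error stop 'byte access failed'
  print *, b
end program spice_bytes
```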

Spice does all the comparisons in double precision. For individual
characters, it fills a variable with blanks (from an H constant), replaces
the leftmost characters (with a subroutine), and then compares to
an H constant. Doing the move in one subroutine localizes the
machine dependence to one place.


Lynn McGuire

Jul 10, 2021, 8:18:44 PM
If so, I am not sure that the Univac 1108 and CDC 7600 supported double
precision. Of course, we moved to a Prime 450 (32 bit single precision)
in 1978 for our development. But, we supported our customers on the
Univac 1108 until 1982 or 1983. And we were up against the linker limit
on the Univac 1108 so we would not have had the room to convert from
single to double. We did convert to double precision around 2000, which
solved a lot of the precision problems we had, but brought a few of its own.

Lynn


Robin Vowels

Jul 11, 2021, 12:03:35 AM
.
DOUBLE PRECISION has been available on IBM mainframes since
at least the IBM 360, and was therefore available on the IBM 370.

JCampbell

Jul 11, 2021, 12:08:10 AM
I am puzzled why you have held onto storing character data in non-character variables or arrays for so long. Is this a personal challenge?
In the early 80s I converted a number of pre-F77 Fortran codes to both CHARACTER and the generic intrinsics.
Plan out the changes by defining new data structures, and they become fairly easy to implement and test.
Now, with F90's ALLOCATE and derived types, this task is never difficult, and the sooner these data structures were clearly described, the easier further development became.
The most annoying part of this kind of update was the lack of documentation of the compiler and hardware architecture used for the initial development.
Many programs were moved between CDC and VAX/Pr1me, and then later onto Apollo/Sun and PC. Now, for me, on Ryzen, it has been quite a journey.

Back to the OP: my focus in recent years has been OpenMP, which provides significant gains. Fortran does this very well, though outside the standard. Does this feature in the TIOBE ranking?
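A minimal OpenMP sketch of the kind of gain meant here: the directive sentinel is an ordinary comment to a non-OpenMP compiler, so the same source works either way (compile with, e.g., gfortran -fopenmp to parallelize):

```fortran
program omp_demo
  implicit none
  integer :: i
  double precision :: s
  s = 0.0d0
  ! With OpenMP enabled, iterations are split across threads and the
  ! reduction clause makes the shared-sum update safe; without it,
  ! the directives are comments and the loop runs serially.
  !$omp parallel do reduction(+:s)
  do i = 1, 1000
     s = s + dble(i)
  end do
  !$omp end parallel do
  if (nint(s) /= 500500) error stop 'reduction failed'
  print *, s
end program omp_demo
```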

Thomas Koenig

Jul 11, 2021, 6:21:16 AM
gah4 <ga...@u.washington.edu> schrieb:
> On Saturday, July 10, 2021 at 2:03:42 PM UTC-7, Lynn McGuire wrote:
>
> (snip)
>
>> There was no double precision or character data in 1975 when we did our
>> first port to the IBM 370.
>
> CHARACTER didn't come until Fortran 77, or VS Fortran for the 370.
>
> But double precision, including its use for A8 format and Hollerith constants
> goes back to sometime in the Fortran II days, and definitely in all the 360
> and 370 compilers. Many programs written for single precision on 36 but
> predecessors, needed double precision for S/360.

... as a result of a rather bad decision on floating point format.

Henry S. Warren Junior had a section on that decision in "Hacker's
Delight", and it wasn't positive.

Gary Scott

Jul 11, 2021, 9:18:48 AM
On 7/10/2021 11:08 PM, JCampbell wrote:
> On Sunday, July 11, 2021 at 5:35:36 AM UTC+10, gah4 wrote:
>> On Saturday, July 10, 2021 at 3:44:45 AM UTC-7, Robin Vowels wrote:
>>
>> (snip, I wrote)
>>>> One that I haven't figure out yet, gfortran allows initializing double precision
>>>> variables with 8H constants, but not apostrophe delimited constants.
>>> Is this something important?, since for 40+ years, FORTRAN has had
>>> CHARACTER variables.
>> And some programs are older than that.
>>
>> Spice2 is 1975, following Spice1 earlier.
>>
>> As well as I know, descendants of ECAP originally written
>> in Fortran II.
>>
>> The ECAP that I have came from an IBM 1130, which uses a subset
>> version of Fortran IV. No logical variables or logical IF. Only five character
>> variable names. It seems that they took all the six character names
>> (from an earlier compiler), removed the middle two letters and replaced
>> them with Z. There are some WRITE statement that give the name of the
>> variable that they are writing out. I believe it goes back to about 1963.
>> For ECAP, all language keywords are recognized by only the first two
>> characters, and most programs abbreviate them.
> I am puzzled why you have held onto storing character data in non-character variables or arrays for so long. Is this a personal challenge?

In my case, character data was sometimes stored in integers. The main
reason for continuing to use them is that some of the operating
system services were written with their arguments storing character
data in integers... so, not much choice.

Ron Shepard

Jul 11, 2021, 1:00:43 PM
On 7/10/21 7:18 PM, Lynn McGuire wrote:
> If so, I am not sure that the Univac 1108 and CDC 7600 supported double
> precision.

The univac 1108 did have double precision. Since it was a 36-bit word
machine, double precision meant 72-bits. During the time you are talking
about, 1975 to 1980, I also used the univac 1108 and the Decsystem-20.
Both had 36-bit words and 72-bit double precision, but their formats
were different. For characters, the univac used six 6-bit characters per
word, while the Dec used five 7-bit ascii characters per word (with one
bit left over, which was often used for something on those
memory-limited machines). The machines I used had 65k and 128k words of
memory, so we were always pushing against memory limits, packing groups
of small integers into the words, using any insignificant bits in
floating point words for various purposes (small offsets, logical flags,
etc.), reusing common block memory for unrelated purposes, and so on.
Despite all that, it was possible to write code that compiled and ran
correctly on both machines. I mostly ported code from IBM and CDC
machines at that time, not to them, but I did run on a few CDC 6400 and
6600 machines at that time too. When you are in that situation, you find
ways to write portable code, code that did not always conform to
the standard. Using REAL*8 declarations rather than REAL/DOUBLE
PRECISION was one of those tricks: portable but nonstandard.

Speaking of the univac machine, it had an odd way to set and mask bits.
In addition to the usual shift/and/or functions, there were two compiler
functions, fld() and sfld() if I remember correctly, for reading and
setting bits. The odd thing was that they could appear on either the
left or the right side of a statement. I remember seeing the MIL-STD
function mvbits() right after the f77 standard was adopted, and I
thought it was similar to those old univac functions I had used years
earlier, although it was only allowed on the right side of expressions.
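The packing tricks described above map directly onto the MIL-STD-1753 bit intrinsics that eventually entered the standard. A small sketch (the 12-bit field widths are my own choice, echoing the 36-bit-word era):

```fortran
program bitpack
  implicit none
  integer :: word, lo, hi
  word = 0
  ! Pack two 12-bit fields into one default integer with
  ! MVBITS(from, frompos, len, to, topos), standard since Fortran 90.
  call mvbits(1234, 0, 12, word, 0)
  call mvbits(567, 0, 12, word, 12)
  ! Unpack with IBITS(word, pos, len).
  lo = ibits(word, 0, 12)
  hi = ibits(word, 12, 12)
  if (lo /= 1234 .or. hi /= 567) error stop 'unpack failed'
  print *, lo, hi
end program bitpack
```

Unlike the old univac fld()/sfld() compiler functions, MVBITS works only as a subroutine call, not on the left side of an assignment.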

Of course, we programmers were all looking forward to a fortran revision
in 1980 or so that would bring all of that MIL-STD stuff into the
standard. In fact, it would take almost 15 years for that to happen,
which is one reason why fortran lost so much ground to lesser
languages during the 1980s, and why fortran's ranking in the TIOBE
index is being discussed even now, 40 years later.

$.02 -Ron Shepard

gah4

Jul 11, 2021, 3:38:15 PM
On Saturday, July 10, 2021 at 5:18:44 PM UTC-7, Lynn McGuire wrote:

(snip)

> If so, I am not sure that the Univac 1108 and CDC 7600 supported double
> precision. Of course, we moved to a Prime 450 (32 bit single precision)
> in 1978 for our development.

The standard required it, so everyone had it. But sometimes in software,
and much slower than single precision hardware.

That was especially true for CDC, where 60 bits was more like other's
double precision.

DEC had two forms for the PDP-10, one in software for the KA-10,
and a different one in hardware for later models.

But in the case of character data, it is just a place to put bits.
Though in the case of A format values, integer or real (or anything
else), the standard doesn't say much even about assigning them.
Not all bits are guaranteed to copy in an assignment, or to compare
in a comparison. But usually they do.

Lynn McGuire

Jul 11, 2021, 4:51:44 PM
We could not use it anyway. We were at the linker limit on the Univac
1108 and playing all kinds of tricks to keep from hitting it.

Lynn

dpb

Jul 11, 2021, 5:56:35 PM
On 7/11/2021 2:38 PM, gah4 wrote:
> On Saturday, July 10, 2021 at 5:18:44 PM UTC-7, Lynn McGuire wrote:
>
> (snip)
>
>> If so, I am not sure that the Univac 1108 and CDC 7600 supported double
>> precision. Of course, we moved to a Prime 450 (32 bit single precision)
>> in 1978 for our development.
>
> The standard required it, so everyone had it. But sometimes in software,
> and much slower than single precision hardware.
>
> That was especially true for CDC, where 60 bits was more like other's
> double precision.
>
> DEC had two forms for the PDP-10, one in software for the KA-10,
> and a different one in hardware for later models.
>
...

CDC 60-bit floating-point was 1 sign bit and an 11-bit exponent
including bias, with a 48-bit bit-normalized mantissa.

The CDC 6000/7000 instruction set included double precision instructions
for addition, subtraction, and multiplication. They operated on 60-bit
quantities, just as single precision numbers.

Double precision numbers were just two single precision numbers
back-to-back, with the second exponent being essentially redundant. It
was a waste of 12 bits, but you still got 96 bits of precision.

--

gah4

Jul 11, 2021, 10:05:51 PM
On Sunday, July 11, 2021 at 2:56:35 PM UTC-7, dpb wrote:

(snip)
> CDC 60-bit floating-point was 1 sign bit and an 11-bit exponent
> including bias, with a 48-bit bit-normalized mantissa.

I thought CDC was one where if the value is an integer in the appropriate
range, it stores it unnormalized, and with zero exponent, such that it has
the integer value. Saves the need for separate integer instructions.

> The CDC 6000/7000 instruction set included double precision instructions
> for addition, subtraction, and multiplication. They operated on 60-bit
> quantities, just as single precision numbers.

> Double precision numbers were just two single precision numbers
> back-to-back, with the second exponent being essentially redundant. It
> was a waste of 12 bits, but you still got 96 bits of precision.

Yes, that is what IBM S/370 does with extended (quad) precision.
It makes it easier to do in software.

Robin Vowels

Jul 11, 2021, 10:47:09 PM
On Monday, July 12, 2021 at 12:05:51 PM UTC+10, gah4 wrote:
> On Sunday, July 11, 2021 at 2:56:35 PM UTC-7, dpb wrote:
>
> (snip)
> > CDC 60-bit floating-point was 1 sign bit and an 11-bit exponent
> > including bias, with a 48-bit bit-normalized mantissa.
> I thought CDC was one where if the value is an integer in the appropriate
> range, it stores it unnormalized, and with zero exponent, such that it has
> the integer value. Saves the need for separate integer instructions.
.
60-bit integers are 60-bit integers.
All 60 bits of an integer were used for storing an integer.
Where the machine was deficient was in the integer multiply instruction.
If the floating-point multiply instruction saw the upper 12 bits
of both operands as all zeros or all ones, it did an integer multiplication;
otherwise it did a floating-point multiplication.
So, if an integer drifted into the high 12 bits, "Integer multiply"
gave garbage.

gah4

Jul 12, 2021, 12:02:42 AM
On Sunday, July 11, 2021 at 7:47:09 PM UTC-7, Robin Vowels wrote:

(snip)
> 60-bit integers are 60-bit integers.
> All 60 bits of an integer were used for storing an integer.
> Where the machine was deficient was in an integer multiply instruction.
> If the floating-point multiply instruction saw that the upper 12 bits
> as being zero or all ones, it did an integer multiplication,
> otherwise it did a floating-point multiplication.
> So, if an integer drifted into the high 12 bits, "Integer multiply"
> gave garbage.

I believe that isn't the right description for what it does.

But yes, there are 60 bit integer add/subtract.

When it normalizes numbers, it prefers an exponent of zero, and the binary
point to the right of the LSB. It doesn't "check" for an integer, but it happens
naturally with the floating point format. And if a floating point multiply gives
an integer result that fits in the 48 bits, it will give the right integer result.

And if integer multiply is too big, then it gives the proper floating point result.

Robin Vowels

Jul 13, 2021, 1:37:03 PM
On Monday, July 12, 2021 at 2:02:42 PM UTC+10, gah4 wrote:
> On Sunday, July 11, 2021 at 7:47:09 PM UTC-7, Robin Vowels wrote:
>
> (snip)
> > 60-bit integers are 60-bit integers.
> > All 60 bits of an integer were used for storing an integer.
> > Where the machine was deficient was in an integer multiply instruction.
> > If the floating-point multiply instruction saw that the upper 12 bits
> > as being zero or all ones, it did an integer multiplication,
> > otherwise it did a floating-point multiplication.
> > So, if an integer drifted into the high 12 bits, "Integer multiply"
> > gave garbage..
.
> I believe that isn't the right description for what it does.
>
> But yes, there are 60 bit integer add/subtract.
>
> When it normalizes numbers, it prefers an exponent of zero, and the binary
> point to the right of the LSB. It doesn't "check" for an integer, but it happens
> naturally with the floating point format. And if a floating point multiply gives
> an integer result that fits in the 48 bits, it will give the right integer result.
>
> And if integer multiply is too big, then it gives the proper floating point result.
.
That's irrelevant.
If you wrote and executed an integer multiply instruction, and the result happened to
exceed 48 bits, the result was garbage (in terms of an expected integer result).

Robin Vowels

Jul 13, 2021, 1:49:59 PM
On Monday, July 12, 2021 at 2:02:42 PM UTC+10, gah4 wrote:
> On Sunday, July 11, 2021 at 7:47:09 PM UTC-7, Robin Vowels wrote:
>
> (snip)
> > 60-bit integers are 60-bit integers.
> > All 60 bits of an integer were used for storing an integer.
> > Where the machine was deficient was in an integer multiply instruction.
> > If the floating-point multiply instruction saw that the upper 12 bits
> > as being zero or all ones, it did an integer multiplication,
> > otherwise it did a floating-point multiplication.
> > So, if an integer drifted into the high 12 bits, "Integer multiply"
> > gave garbage.
.
> I believe that isn't the right description for what it does.
>
> But yes, there are 60 bit integer add/subtract.
>
> When it normalizes numbers, it prefers an exponent of zero, and the binary
> point to the right of the LSB. It doesn't "check" for an integer,
.
The manual explicitly states that the Integer Multiply instruction (IM)
checks for zero (+ or -, i.e. all zeros or all ones) in the upper 12 bits
of both 60-bit operands. It then performs an integer multiply. The instruction
op code is the same as that of the floating-point multiply instruction, and the
operation is performed by the floating-point hardware.
.

gah4

Jul 13, 2021, 7:37:03 PM
On Tuesday, July 13, 2021 at 10:49:59 AM UTC-7, Robin Vowels wrote:

(snip)
> The manual explicitly states that the Integer Multiply instruction (IM)
> checks for zero (+ or -, i.e. all zeors or all ones) in the upper 12 bits
> of both 60-bit operands. It then performs an integer multiply. The instruction
> op code is the same as the floating-point multiply instruction, and the
> operation is performed by the floating-point hardware.

It is well described in Blaauw & Brooks, "Computer Architecture: Concepts and Evolution",
and is worse than either your explanation or mine.



JCampbell

Jul 13, 2021, 10:42:41 PM
Thankfully we got the IEEE 754 Standard for Floating Point (and its commentary's references to byte sizes such as REAL*8).