
Fortran Code on GitHub and books with Fortran code


Beliavsky

Jul 8, 2021, 8:16:52 AM
I have created a GitHub repo https://github.com/Beliavsky/Fortran-code-on-GitHub/blob/main/README.md that lists 400+ Fortran codes by category, either by task, such as numerical integration, or domain, such as astrophysics. I will continue to update it. Please make suggestions for additions by creating an issue at the repo.

Another repo https://github.com/Beliavsky/Fortran-related-books lists books with Fortran code, other than textbooks.

A third repo https://github.com/Beliavsky/Burkardt-Fortran-90 categorizes the many Fortran codes of John Burkardt at https://people.sc.fsu.edu/~jburkardt/f_src/f_src.html

FortranFan

Jul 8, 2021, 12:30:47 PM
Kudos on the great effort; such initiatives are of tremendous help in boosting the "ecosystem" around Fortran. Thank you.

Arjen Markus

Jul 8, 2021, 1:57:54 PM
It certainly is an impressive collection! Sifting through this alone requires serious reading :).

Regards,

Arjen

dpb

Jul 8, 2021, 8:19:47 PM
On 7/8/2021 7:16 AM, Beliavsky wrote:
> I have created a GitHub repo https://github.com/Beliavsky/Fortran-code-on-GitHub/blob/main/README.md that lists 400+ Fortran codes by category, either by task, such as numerical integration, or domain, such as astrophysics. I will continue to update it. Please make suggestions for additions by creating an issue at the repo.
>

Not sure of the purpose other than a list, but RSICC, the Radiation Safety Information Computational Center at ORNL, is the repository of almost all the shielding and other reactor codes in use or past use in the non-defense-specific nuclear power field.

Most of those were at least originally written in FORTRAN for Philco, or then CDC, and to a lesser degree IBM machines. Every vendor had their own proprietary versions they developed for their specific variations from the publicly available versions...

There's easily another 50, maybe 100 there, and perhaps a lot more than that -- most that are there I have no idea what they are or what they were written in, but certainly a large number were FORTRAN originally. Having now been out of the field 25 years or so, I don't know what is currently being used by the vendors; even the vendor I worked for was writing new codes to replace the public-based ones, with more features and better solution techniques, long before I left there -- I'm sure all the other vendors were, too. Of course, those never were released or provided to RSICC -- only publicly-developed stuff from the national labs like Bettis or ORNL got to that open platform.

Thomas Koenig

Jul 9, 2021, 2:19:24 AM
dpb <no...@none.net> schrieb:
> On 7/8/2021 7:16 AM, Beliavsky wrote:
>> I have created a GitHub repo https://github.com/Beliavsky/Fortran-code-on-GitHub/blob/main/README.md that lists 400+ Fortran codes by category, either by task, such as numerical integration, or domain, such as astrophysics. I will continue to update it. Please make suggestions for additions by creating an issue at the repo.
>>
>
> Not sure the purpose other than a list but RSICC Radiation Safety
> Information Computer Center at ORNL is the repository of almost all the
> shielding and other reactor codes in use or past use in the
> non-defense-specific nuclear power field.

Seems to be somewhat restricted... when I click on any link below

https://www.ornl.gov/project/radiation-safety-information-computational-center-rsicc

I get "You are not authorized to access this page."

dpb

Jul 9, 2021, 7:53:04 AM
Oh. The actual RSICC portal is at

<https://rsicc.ornl.gov/Default.aspx>

--



Rudi Gaelzer

Jul 9, 2021, 8:14:48 AM
Kudos for the hard work!
It goes along with the point I stressed in my post here about the necessity of increasing Fortran's visibility: https://groups.google.com/g/comp.lang.fortran/c/ldON2kwWM-4/m/waDyX9k2AAAJ

Rudi Gaelzer

Jul 9, 2021, 8:20:11 AM
On Thursday, July 8, 2021 at 9:16:52 AM UTC-3, Beliavsky wrote:
A suggestion for your "interoperability" section: the Forpy package (for Fortran-Python interop), developed by Elias Rabel:
https://github.com/ylikx/forpy

Beliavsky

Jul 9, 2021, 8:38:10 AM
On Friday, July 9, 2021 at 8:20:11 AM UTC-4, rgae...@gmail.com wrote:

> A suggestion for your "interoperability" section: the Forpy package (for Fortran-Python interop), developed by Elias Rabel:
> https://github.com/ylikx/forpy

Thanks -- that was in the general purpose section, and I just moved it to interoperability.

Thomas Koenig

Jul 9, 2021, 8:48:05 AM
dpb <no...@none.net> schrieb:
Interesting, thanks!

I don't work in nuclear engineering myself, but I know somebody
who does, and I will forward this.

dpb

Jul 9, 2021, 9:47:43 AM
It is a gold mine if one is in the area, indeed...

In professional life, while still in the vendor fold before migrating to an independent consulting role, I was on the reactor physics side; another group did the work that required all the shielding calculations, so I'm not all that familiar with those -- and it's been so long now that I don't know what the current codes still in use for core design and fuel cycle analyses, etc. are; that was what I did. The big tool then was still PDQ-07/HARMONY (again, a proprietary version of it) and the internally-developed code used to generate few-group nuclear cross sections for input to those from the detailed multi-group ENDF tables. That code was also FORTRAN + (originally Philco) assembly; it was totally rewritten (my first major introduction to writing code) when moved to the CDC machines.

There's another whole arena of reactor transient analyses for LOCA (loss-of-coolant accident) events that was the holy grail of the safety analysis section. RELAP was the tool then; we ported it to the PC environment in the days of the 386 by using a multiprocessor coprocessor board running two 68332s with a FORTRAN compiler from a now-defunct outfit whose name now escapes me. We used this as the simulator in conjunction with the real Foxboro IA control system hardware in doing the first NRC-accepted conversion from a fully analog to a hybrid analog-digital control system on a US PWR, replacing the feedwater control subsystem that had become obsolete and for which repair parts were difficult to procure.

Eventually, the NRC left the dark ages and allowed modernized control
systems overall, but it was another 5 years or so after that before the
first transition in an operating US plant occurred.

Geezer tales aside, if one wants to add to the list of codes that were
written in FORTRAN in all or in major part, there's a humongous list,
the majority of which I would think were/are.

What's happened in the last 20 years in the transition to the desktop/workstation I really don't know; I migrated totally away from the large-code environment to embedded systems and then instrumentation and controls for the fossil utilities and other things by then...

I did do a few other pieces for ORNL/NASA on some thermodynamics simulations of heat exchangers and the like, modifying them to account for redesigns for the shuttle, and some other experimental work in the 90s; those also were derived from original ORNL work in FORTRAN. Not surprisingly, I didn't find them in the RSICC archives; I don't know where they might be, but I know that ORNL group also had a "veritable plethora" of both production-type as well as research-level FORTRAN codes for thermal analyses totally unrelated to anything nuclear.

Then, of course, there is the highly classified defense-related side of things that is its own private world. It was all FORTRAN originally, too. I never did code development on that side; only peripheral facilities work associated with the operations end of things.

--



Beliavsky

Jul 9, 2021, 9:57:15 AM
On Friday, July 9, 2021 at 7:53:04 AM UTC-4, dpb wrote:

> Oh. The actual RSICC portal is at
>
> <https://rsicc.ornl.gov/Default.aspx>
>
> --
Thanks, I added RSICC to the Fortran Wiki software repositories page http://fortranwiki.org/fortran/show/Software+repositories . There are probably many scientific and engineering disciplines with their own repositories of codes, some of which are in Fortran. If you know of a repository not listed at the Fortran Wiki, please add it there or post here.

Gary Scott

Jul 9, 2021, 10:47:39 AM
I'm eagerly anticipating some of the newer mini and micro/modular
systems...and of course, fusion...before I die...:)

dpb

Jul 9, 2021, 12:08:06 PM
On 7/9/2021 9:47 AM, Gary Scott wrote:
...

big snip -- dpb

> I'm eagerly anticipating some of the newer mini and micro/modular
> systems...and of course, fusion...before I die...:)

It will still be "right around the corner, only 50 years away..." :)

Had forgotten about the stint supporting Princeton Plasma Physics Lab years ago... three of us bought a surplussed PDP-11/45 with RSX and two DEC compilers from them to be able to have our own development environment and not have to pay the exorbitant CPU rates we were charged through the IT group in the consulting firm through which we were working at the time... it comprised 4 6-ft 19" cabinet racks plus the standalone top-loading removable-platter hard disk assembly...

It was about half a VAX 11/780; far above the XT of the time.

--


dpb

Jul 9, 2021, 1:54:48 PM
On 7/9/2021 8:47 AM, dpb wrote:
...

> There's another whole arena of reactor transient analyses for LOCA (loss
> of coolant analysis) that was the holy grail of the safety analysis
> section.  RELAP was the tool then; we ported it to the PC environment in
> the days of the x386 by using a multiprocessor coprocessor board running
> two 68332s with a FORTRAN compiler by a now defunct outfit whose name
> now escapes me. ...

I can't let that episode pass without at least one war story -- those
old enough to remember the CDC recall it was a 60 bit word as opposed to
32/64 and all that entails.

In addition, the CDC compilers decoded only the lower 18 bits of integers for array address indexing; the wise guys who wrote RELAP thus used those same variables for other purposes simultaneously, storing other control and logic variables in the upper 32 bits.

Owing to this, masking/shifting with intrinsics inline ended up being a major performance bottleneck; eventually the compiler vendor made a similar patch to the compiler for us.
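For readers who never met the CDC trick, the packing can be sketched in modern Fortran. Everything here is invented for illustration (a 64-bit integer stands in for the 60-bit CDC word); it only shows the masking/shifting that had to happen on every index use:

```fortran
! Sketch only: names invented; 64-bit integer in place of the 60-bit CDC word.
program packed_index
  implicit none
  integer, parameter :: i8 = selected_int_kind(15)
  integer(i8), parameter :: index_mask = ishft(1_i8, 18) - 1_i8  ! low 18 bits set
  integer(i8) :: word, idx, flags

  ! Pack: array index 12345 in the low 18 bits, control value 7 above them.
  word = ior(int(12345, i8), ishft(int(7, i8), 18))

  ! Unpack with masking/shifting intrinsics.
  idx   = iand(word, index_mask)  ! the part the CDC indexing hardware decoded
  flags = ishft(word, -18)        ! the "extra" data stashed in the high bits

  print *, idx, flags  ! 12345 and 7
end program packed_index
```

The performance point in the post follows directly: on the CDC, the hardware did the `iand` for free during indexing, but a compiler that didn't know that had to emit an explicit mask in every hot loop.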

--dpb

gah4

Jul 9, 2021, 9:56:37 PM
On Friday, July 9, 2021 at 10:54:48 AM UTC-7, dpb wrote:

(snip)

> I can't let that episode pass without at least one war story -- those
> old enough to remember the CDC recall it was a 60 bit word as opposed to
> 32/64 and all that entails.

> In addition, the CDC compilers decoded only the lower 18 bits of
> integers for array address indexing wise; the wise guys who wrote RELAP
> thus used those same variables for other purposes simultaneously by
> storing other control and logic variables in the upper 32 bits.

Is this a software operation, or what the hardware did?

> Owing to this, masking/shifting with intrinsics in line ended up being a
> major performance bottleneck; eventually the compiler vendor made a
> similar patch to the compiler for us.

IBM S/360 uses 24 bit addresses, and OS/360 uses the high byte often for
other uses, especially in system control blocks. For S/370, LCM and STCM
were added to allow load/store of some of the bytes of a register
(as selected by mask bits), with the primary use being addresses.

This, then, complicated the change to 31 bit addressing. Hardware still
knows how to do operations ignoring the high byte. Also, many control
blocks have to be below 16M, and the system knows how to make that
work.
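The S/360 convention gah4 describes can be sketched the same way (values invented; modern Fortran standing in for assembler): a 24-bit address in the low bits of a 32-bit word, with a tag byte above it that the 24-bit addressing hardware simply ignored.

```fortran
! Sketch only: invented values; 32-bit integer word as on S/360.
program high_byte_tag
  implicit none
  integer, parameter :: i4 = selected_int_kind(9)
  integer(i4) :: word, address, tag

  ! A 24-bit address with an unrelated tag byte (here 64) stored above it.
  word = ior(int(z'00123456', i4), ishft(64_i4, 24))

  address = iand(word, int(z'00FFFFFF', i4))  ! what 24-bit addressing decoded
  tag     = ishft(word, -24)                  ! the byte OS/360 hid up top

  print *, address, tag  ! 1193046 and 64
end program high_byte_tag
```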

Not so many years later, the Apple Macintosh, using the 68000 processor, again with 24-bit addressing, again used the high byte for other uses. Not quite as much as IBM, but it still took some time to get older programs using it out. Not quite as deep into the OS, though, as memory costs were lower than in S/360 days.

I don't know of any OS/360 Fortran programs that used high bits of indexing
that would be ignored by the hardware, but it should have been possible.

I suspect that there are more stories out there. Thanks for that one, though.

Thomas Koenig

Jul 11, 2021, 5:03:26 AM
gah4 <ga...@u.washington.edu> schrieb:

>
> IBM S/360 uses 24 bit addresses, and OS/360 uses the high byte often for
> other uses, especially in system control blocks. For S/370, LCM and STCM
> were added to allow load/store of some of the bytes of a register
> (as selected by mask bits), with the primary use being addresses.

I guess that was Gene Amdahl's revenge. He wanted a 24-bit machine
from the start and got overruled.

> This, then, complicated the change to 31 bit addressing. Hardware still
> knows how to do operations ignoring the high byte. Also, many control
> blocks have to be below 16M, and the system knows how to make that
> work.
>
> Not so many years later, the Apple Macintosh, using the 68000 processor
> again with 24 bit addressing, again used the high bytes for other uses.

Failure to learn from previous experience is a characteristic of
computer design, it seems.

Robin Vowels

Jul 11, 2021, 11:22:30 PM
On Saturday, July 10, 2021 at 11:56:37 AM UTC+10, gah4 wrote:
> On Friday, July 9, 2021 at 10:54:48 AM UTC-7, dpb wrote:
>
> (snip)
> > I can't let that episode pass without at least one war story -- those
> > old enough to remember the CDC recall it was a 60 bit word as opposed to
> > 32/64 and all that entails.
>
> > In addition, the CDC compilers decoded only the lower 18 bits of
> > integers for array address indexing wise; the wise guys who wrote RELAP
> > thus used those same variables for other purposes simultaneously by
> > storing other control and logic variables in the upper 32 bits.
> Is this a software operation, or what the hardware did?
> > Owing to this, masking/shifting with intrinsics in line ended up being a
> > major performance bottleneck; eventually the compiler vendor made a
> > similar patch to the compiler for us.
.
> IBM S/360 uses 24 bit addresses, and OS/360 uses the high byte often for
> other uses, especially in system control blocks. For S/370, LCM and STCM
> were added to allow load/store of some of the bytes of a register
> (as selected by mask bits), with the primary use being addresses.
.
The S/370 instructions for loading and storing individual bytes
are ICM and STCM.
These supplemented the S/360 instructions IC and STC that
loaded and stored a single byte at the least-significant end of the register.
.
ICM/STCM could be used to extract/store the exponent of a floating-point
number.
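As an illustration of the byte layout Robin refers to: on S/360 hexadecimal floats, the top byte holds the sign bit plus an excess-64 base-16 exponent (the "characteristic"), so an ICM-style grab of that byte yields the exponent almost directly. The sketch below merely mimics that byte extraction on an integer copy of the bit pattern; the pattern shown is the standard S/360 encoding of 1.0.

```fortran
! Sketch only: mimics the ICM-style byte grab on an integer holding
! the S/360 hexadecimal-float bit pattern for 1.0 (z'41100000').
program hexfloat_exponent
  implicit none
  integer, parameter :: i4 = selected_int_kind(9)
  integer(i4) :: pattern, characteristic, exponent16

  pattern = int(z'41100000', i4)  ! sign 0, characteristic z'41', fraction z'100000'

  characteristic = iand(ishft(pattern, -24), 127)  ! top byte minus the sign bit
  exponent16     = characteristic - 64             ! remove the excess-64 bias

  print *, characteristic, exponent16  ! 65 and 1: value = 16**1 * (1/16) = 1.0
end program hexfloat_exponent
```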

dpb

Jul 12, 2021, 10:19:00 AM
On 7/9/2021 8:56 PM, gah4 wrote:
> On Friday, July 9, 2021 at 10:54:48 AM UTC-7, dpb wrote:
>
> (snip)
>
>> I can't let that episode pass without at least one war story -- those
>> old enough to remember the CDC recall it was a 60 bit word as opposed to
>> 32/64 and all that entails.
>
>> In addition, the CDC compilers decoded only the lower 18 bits of
>> integers for array address indexing wise; the wise guys who wrote RELAP
>> thus used those same variables for other purposes simultaneously by
>> storing other control and logic variables in the upper 32 bits.
>
> Is this a software operation, or what the hardware did?

...
I am not positive -- but I think it was just inherent in the compiler that a variable used as part of an indexing expression was silently masked, whereas if fetched and stored, the full integer value was retrieved (as it was either way in the compiler running the coprocessor code, of course -- the cause of the problems).

The INEL guys had packed program control data in the upper end -- since the code that used those values was only needed in input/output processing, the overhead of the explicit bit-munging to get them wasn't significant in overall run time.

Not so, of course, when the same variables were then used as the
indexing variables inside the bowels of the numerical iterations for the
solution during the transient simulation...

When we were able to get the compiler vendor to make the change in their compiler to also do that in generating the array-addressing code instructions, its performance also soared in comparison to using the explicit intrinsics that it didn't inline initially.

I still cannot recall the name of that little outfit out of California,
though...they didn't make it when the PC soon evolved to the 286/386
machines and the compute power matched/exceeded the coprocessor
solutions at far less expense.

--

gah4

Jul 12, 2021, 7:18:18 PM
On Monday, July 12, 2021 at 7:19:00 AM UTC-7, dpb wrote:
> On 7/9/2021 8:56 PM, gah4 wrote:
> > On Friday, July 9, 2021 at 10:54:48 AM UTC-7, dpb wrote:

(snip)

> >> In addition, the CDC compilers decoded only the lower 18 bits of
> >> integers for array address indexing wise; the wise guys who wrote RELAP
> >> thus used those same variables for other purposes simultaneously by
> >> storing other control and logic variables in the upper 32 bits.

> > Is this a software operation, or what the hardware did?

> I am not positive -- but I think it was just inherent in the compiler
> that the variable as part of an indexing expression was silently masked
> whereas if fetched and stored the full integer value was retrieved (as
> it was either way in the compiler running the coprocessor code, of
> course, the cause of the problems).

OK, it seems that addresses and address arithmetic are done in 18-bit address registers:

https://en.wikipedia.org/wiki/CDC_6600#Central_Processor_(CP)
(see the figure on the right)

So yes, indexing will be done in 18 bits, ignoring the rest.


> The INEL guys had had packed program control data in the upper end --
> since the code that used those was only needed on input/output
> processing the overhead of the explicit bit-munging to get those values
> wasn't significant in overall run time.

> Not so, of course, when the same variables were then used as the
> indexing variables inside the bowels of the numerical iterations for the
> solution during the transient simulation...

> When we were able to get the compiler vendor to make the change in their
> compiler to also use do that in generating the array addressing code
> instructions, its performance also soared in comparison to using the
> explicit intrinsics that it didn't inline initially.

> I still cannot recall the name of that little outfit out of California,
> though...they didn't make it when the PC soon evolved to the 286/386
> machines and the compute power matched/exceeded the coprocessor
> solutions at far less expense.

I remember FPS, Floating Point Systems, which made coprocessors
that could be used with VAX and some minicomputers of the time.
It seems that FPS was in Oregon, though.

More recently, I have known C programmers to use the low bits of pointers on byte-addressed machines. That then failed when the code moved to word-addressed Cray machines.

Robin Vowels

Jul 13, 2021, 8:54:43 AM
On Saturday, July 10, 2021 at 3:54:48 AM UTC+10, dpb wrote:
> On 7/9/2021 8:47 AM, dpb wrote:
> ...
> > There's another whole arena of reactor transient analyses for LOCA (loss
> > of coolant analysis) that was the holy grail of the safety analysis
> > section. RELAP was the tool then; we ported it to the PC environment in
> > the days of the x386 by using a multiprocessor coprocessor board running
> > two 68332s with a FORTRAN compiler by a now defunct outfit whose name
> > now escapes me. ...
>
> I can't let that episode pass without at least one war story -- those
> old enough to remember the CDC recall it was a 60 bit word as opposed to
> 32/64 and all that entails.
>
> In addition, the CDC compilers decoded only the lower 18 bits of
> integers for array address indexing wise;
.
The memory address did not necessarily refer to an array.
Scalar variables were accessed the same way as array values.
.
That was inherent in the hardware. SA instructions specified an 18-bit
immediate address; when the SA instruction was executed, the relevant
Address register was loaded, and the corresponding 60-bit register
was loaded or stored (depending on the register number).
An Address register could also be loaded from one of the 60-bit registers.
.

dpb

Jul 13, 2021, 11:34:48 AM
...

Yeah, but it wasn't them. Been too long ago; the coprocessor plugin board wasn't from the compiler vendor; IIRC there was a supplied utility that downloaded the code generated by the compiler of choice, as long as it generated compatible object files.

I knew/remembered the 18-bit addressing registers; I just didn't recall whether the hardware itself only decoded the lower bits automagically, or whether the compiler had to ensure the value was the right length first... hence the uncertainty. I was fairly confident it was hardware, but not enough so to state it as being so...

--

Ron Shepard

Jul 13, 2021, 12:40:05 PM
On 7/12/21 6:18 PM, gah4 wrote:
>> I still cannot recall the name of that little outfit out of California,
>> though...they didn't make it when the PC soon evolved to the 286/386
>> machines and the compute power matched/exceeded the coprocessor
>> solutions at far less expense.
> I remember FPS, Floating Point Systems, which made coprocessors
> that could be used with VAX and some minicomputers of the time.
> It seems that FPS was in Oregon, though.

I used FPS attached processors all through the 1980s. It was not PCs that doomed them, but rather the rise of the cheap, Unix-based RISC machines, which I also used during that time. Especially in the early 80s, they were very cost effective in delivering floating point operations. You programmed them in assembler and Fortran, both via cross compilers that ran on a VAX front-end machine.

https://doi.org/10.1002/qua.560240865

$.02 -Ron Shepard

Harold Stevens

Jul 13, 2021, 2:59:29 PM
In <CljHI.22896$VU3....@fx46.iad> Ron Shepard:

[Snip...]

> I used FPS attached processors all through the 1980s

[Snip...]

> You programmed them in assembler and fortran, both of which
> were cross compilers than ran on a VAX front-end machine.

They were very useful as math coprocessors for our VAX machines
in radar systems simulations during the early 80's.

Also agree about RISC/Unix having more to do with the demise of
FPS, than PC's (Wintel) of the era.

--
Regards, Weird (Harold Stevens) * IMPORTANT EMAIL INFO FOLLOWS *
Pardon any bogus email addresses (wookie) in place for spambots.
Really, it's (wyrd) at att, dotted with net. * DO NOT SPAM IT. *
I toss GoogleGroup (http://twovoyagers.com/improve-usenet.org/).

dpb

Jul 13, 2021, 3:50:14 PM
Yeah, but those were a whole different class than the one/ones we were using, which were just plug-ins into a PC -- the ones we're talking about here.

There was a VAX in the office, but the price to use it was exorbitant, and we had to have something we could take to the Foxboro factory floor in Foxboro, MA, to tie into the customer's actual control system to do the simulations during checkout/factory acceptance/NRC licensing approval/demonstration. Not something we could do with the corporate 11/780! Even if we'd had the budget, which wasn't even close. :)

--

dpb

Jul 13, 2021, 7:28:50 PM
On 7/13/2021 2:50 PM, dpb wrote:
> On 7/13/2021 11:40 AM, Ron Shepard wrote:
>> On 7/12/21 6:18 PM, gah4 wrote:
...

>> I used FPS attached processors all through the 1980s. It was not PCs
>> that doomed them, but rather the rise of the cheap, unix based, RISC
>> machines, which I also used during that time. Especially in the early
>> 80s, they were very cost effective in delivering floating point
>> operations. You programmed them in assembler and fortran, both of
>> which were cross compilers than ran on a VAX front-end machine.
>>
>> https://doi.org/10.1002/qua.560240865
>>
>> $.02 -Ron Shepard
>
> Yeah, but those were a whole different class than the one/ones we were
> using that were just plug-ins into a PC that are talking about here.
...

I've gargled some, but found no references to anything similar to these -- the last ones I recall were based on the i960 RISC processor, as they had passed Motorola by... on reflection, I don't know what became of that system; I suppose at some point it was just pitched as obsolete, or maybe it was delivered with the system -- I just can't recall for sure.

About then is when my consulting clientele base began to change markedly, and I kinda' left that chapter behind and moved onto other areas with far fewer regulatory hassles.

--dpb

gah4

Jul 13, 2021, 8:02:19 PM
On Tuesday, July 13, 2021 at 4:28:50 PM UTC-7, dpb wrote:


(snip on coprocessors of various kinds)
> I've gargled some, but found no references to anything similar to these
> -- the last ones I recall were based on the i960 RISC processor as they
> had passed Motorola by...on reflection I don't know what became of that
> system; I suppose at some point it was just pitched as obsolete or maybe
> it was delivered with the system, I just can't recall for sure.

> About then is when my consulting clientele base began to change markedly
> and kinda' left that chapter behind and onto other areas with far fewer
> regulatory hassles.

It does seem that there were a variety of machines about that time.

Ones based on the i960, and possibly plugging into some form
of IBM PC sound familiar. I believe also ones based on the
National 32032.

The one I remember most is the Masscomp MC-500 which is,
I believe, 68010 based, with (included) floating point accelerator,
and ran some form of Unix. The Fortran compiler would generate code
for the special floating point hardware. As well as I remember, the
C compiler didn't do that.

Not so much later, 68020/68881 systems came along, which might not have
been as fast as the special accelerators, but not so bad for the price.

I also remember a Sun FPA for VME based Sun3 (68020) systems,
which it seems is Weitek 1164/1165 based. Others might also have
been based on those chips.






gah4

Jul 13, 2021, 8:03:55 PM
On Thursday, July 8, 2021 at 5:19:47 PM UTC-7, dpb wrote:

(snip)

> Not sure the purpose other than a list but RSICC Radiation Safety
> Information Computer Center at ORNL is the repository of almost all the
> shielding and other reactor codes in use or past use in the
> non-defense-specific nuclear power field.

I tried looking there, but didn't see any code at all. Maybe I looked in the
wrong place, though. Is there a link to any Fortran programs there?

Jeff Ryman

Jul 13, 2021, 11:10:26 PM
The majority of the codes available from RSICC are written in Fortran, although there is a scattering of codes in other languages. The MCNP Monte Carlo radiation transport code and the SCALE code system (which also includes the AMPX code system for nuclear cross-section processing) comprise over 90% of the code requests from RSICC in recent years.

MCNP, from Los Alamos National Laboratory (https://mcnp.lanl.gov/), is written in Fortran. The SCALE code system from Oak Ridge National Laboratory (https://www.ornl.gov/scale) is written in a mixture of Fortran and C++. My understanding is that it will be converted completely to C++ as new revisions are released.

When I worked at ORNL (over 20 years ago now) SCALE was almost completely written in Fortran (with a little IBM assembler back in mainframe days). In recent years the group maintaining SCALE hired a few computer science majors to help with the updating and maintenance of the code system. I suspect (but do not know for sure) that the computer science folks have pushed for the language conversion, since Fortran seems not to be popular among computer science types.

Ron Shepard

Jul 14, 2021, 1:43:58 AM
On 7/13/21 6:28 PM, dpb wrote:
> On 7/13/2021 2:50 PM, dpb wrote:
>> On 7/13/2021 11:40 AM, Ron Shepard wrote:
>>> On 7/12/21 6:18 PM, gah4 wrote:
> ...
>
>>> I used FPS attached processors all through the 1980s. It was not PCs
>>> that doomed them, but rather the rise of the cheap, unix based, RISC
>>> machines, which I also used during that time. Especially in the early
>>> 80s, they were very cost effective in delivering floating point
>>> operations. You programmed them in assembler and fortran, both of
>>> which were cross compilers than ran on a VAX front-end machine.
>>>
>>> https://doi.org/10.1002/qua.560240865
>>>
>>> $.02 -Ron Shepard
>>
>> Yeah, but those were a whole different class than the one/ones we were
>> using that were just plug-ins into a PC that are talking about here.
> ...
>
> I've gargled some, but found no references to anything similar to these
> -- the last ones I recall were based on the i960 RISC processor as they
> had passed Motorola by...on reflection I don't know what became of that
> system; I suppose at some point it was just pitched as obsolete or maybe
> it was delivered with the system, I just can't recall for sure.

I used an Alliant fx2800 parallel machine which was also based on the
Intel i960 RISC cpu. This was in the late 1980s and early 1990s. The
fortran compiler supported both shared-memory and distributed-memory
programming models, and it incorporated many f90 features (e.g.
allocatable arrays) even before the final f90 approval. That was a nice
machine to use, nice programming environment, and nice performance. I
thought the i960 cpu had a lot of potential. It was possible to build
anything from PC class machines to mid-level parallel machines (ours had
16 cpus), to massive supercomputers (which at that time would have been
about 1000 cpus). That would have been an ideal situation for program
development. However, for some reason the chip was not successful in the
market, while the lower-performing 286/386/486 etc. line did survive. It
would take another decade, the early 2000s, before these cpus caught up
with the performance of the i960. Without further development of the
i960 line, and probably other contributing reasons, Alliant closed its
doors in 1992.

$.02 -Ron Shepard

dpb

Jul 14, 2021, 9:55:53 AM
It is a repository through which you can then request code from RSICC;
not a direct link to the code itself.

As Jeff Ryman's note indicates, virtually all of it was originally FORTRAN; as I indicated above, much is now quite dated, and the vendors have moved on for the kinds of reactor calculations done for reactor design and licensing analyses, however.

How many of those are still Fortran I have no idea; likely a lot are now C++, at least the frontends, although the core numerical-only sections may still be the original code, unless the approaches have been modified entirely, as with the transition from explicit diffusion neutron transport to nodal approximations or the like.

--



dpb

Jul 14, 2021, 10:41:00 AM
Indeed, and unfortunately for "real" computing, the open PC platform and price drove the mass market; when IBM picked the Intel '86, the others were doomed.

The above would, indeed, have been a marvelous development system; the coprocessor board we had for this specific project was "only" the M68020/68881; IIRC there wasn't yet a cross-compiler for the i860 at the time we needed it for our project.

It was an F77 compiler, but that was all that was needed to port the RELAP code developed with FTN/FTN4. I don't recall any real hassles in the conversion (other than having to go to DP, of course*) beyond the issue with the 60-bit integer and high-order-bit storage as outlined before. We took the expedient of simply using the native 32-bit integer elsewhere as being sufficient to hold all actual values used otherwise, which turned out to be adequate for our purposes.

(*) Which leads to another war story -- we had a summer student at the
time and gave him the task of making the global substitution (before
SELECTED_REAL_KIND, of course), and he made the rookie mistake of
submitting a TECO batch command to do the global substitution without
first making a backup copy of the source code files. The job ran
overnight on the DEC 10 before they finally killed it, and it destroyed
most of the work that had been done before.
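In modern Fortran that kind of error-prone global REAL-to-DOUBLE
PRECISION edit is unnecessary: the working precision can be declared
once as a kind parameter and referenced everywhere. A minimal sketch
(the module and parameter names here are illustrative, not from the
RELAP port):

```fortran
! Define the working precision in one place; changing wp here
! changes the precision of every entity declared with real(wp).
module kinds
  implicit none
  ! request at least 12 decimal digits of precision
  integer, parameter :: wp = selected_real_kind(p=12)
end module kinds

program demo
  use kinds
  implicit none
  real(wp) :: x
  x = 1.0_wp / 3.0_wp          ! literal carries the same kind
  print *, precision(x)        ! at least 12 decimal digits
end program demo
```

With this scheme a precision change is a one-line edit to the module,
with no mass substitution over the source tree.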

--

JCampbell

unread,
Jul 14, 2021, 11:42:21 PM7/14/21
to
You gave the summer student the master and no copy?
Ron's experience with an Alliant FX/2800 parallel machine is very interesting. If only I had known!
In the early 90's price was a big consideration, and my experience was that Apollo/Sparc workstations were too expensive for individual use, so when the Vax / Pr1me multi-user systems shut down most of us used individual PCs with 32-bit Lahey / Salford Fortran. (Many private companies struggled in the early 90's.)
After the Vax / Pr1me experience, IBM and other large systems were so unfriendly that we didn't complain.

Ron, how reliable was the Alliant Fortran compiler that supported both shared-memory and distributed-memory programming models? Suspicion about its reliability would have made it hard to get funding when workstations were seen as the more expensive way forward.
Looking back, they were incredibly slow and the memory bandwidth would have been a challenge for shared-memory.

The low cost of pc's in the 90's caused the demise of many other hardware alternatives that could have been.

Ron Shepard

unread,
Jul 17, 2021, 4:20:36 PM7/17/21
to
On 7/14/21 10:42 PM, JCampbell wrote:
> On Thursday, July 15, 2021 at 12:41:00 AM UTC+10, dpb wrote:
[...]
> Ron's experience with an Alliant FX/2800 parallel machine is very interesting. If only I had known!
> In the early 90's price was a big consideration, and my experience was that Apollo/Sparc workstations were too expensive for individual use, so when the Vax / Pr1me multi-user systems shut down most of us used individual PCs with 32-bit Lahey / Salford Fortran. (Many private companies struggled in the early 90's.)

It was interesting to me at the time which companies survived and which
didn't. It sometimes seemed to have little to do with the quality of
their hardware or software. There were all kinds of architectures at
that time, from very long instruction word (VLIW) machines (I include
the FPS-164 in that class, although it had only 64-bit words) to
large-scale SIMD machines such as the Connection Machine. I experimented
with a good fraction of those machines, all of them programmed with
fortran compilers (f77 plus extensions).

> After the Vax / Pr1me experience, IBM and other large systems were so unfriendly, we didn't complain.

IBM started selling RISC machines in the late 80s, based on their RS6000
CPUs and on their AIX unix operating system. A few years later, 1993 I
think, they partnered with Apple and Motorola in the design and
manufacture of PowerPC cpus. By the mid-90s, they were selling
unix-based parallel machines. We had a 64-cpu IBM SP-1 machine that
overlapped by a few months our Alliant machine, which was at the end of
its life cycle in 1995.

> Ron, how reliable was the Alliant Fortran compiler that supported both shared-memory and distributed-memory programming models? Suspicion about its reliability would have made it hard to get funding when workstations were seen as the more expensive way forward.

I also experimented with several unix-based RISC machines. Sun
workstations were reliable, but did not perform very well for our
applications (various quantum chemistry codes). We also had
Ardent/Titan/Stardent workstations (the company kept changing its name).
These were cost effective, but only scaled up to 4 cpus, if I remember
correctly. Those were made by Kubota of Japan, the same company that
made fork lifts and tractors! I also had a unix DEC workstation based on
their ALPHA cpu, which we bought a year or so before DEC closed its
doors. This was a common problem at that time, companies were bought and
sold like a Monopoly game, and a good fraction of the cutting edge, high
performance machines that were available at the time were caught up in
that buying and selling market. ETA, Kendall Square, Connection Machine,
and on and on. It still amazes me how far DEC fell as a company, partly
because of Ken Olsen and his poor vision, partly because of general
economics of the time, reduced government spending, and so on.

I used the ETA machine that was sited at Tallahassee. It ran in a liquid
nitrogen flow bath. There was more plumbing hardware in that machine
room than there was computer hardware. I think at the time that was the
most cost effective computer (dollars per MFLOP), but the company, a
subsidiary of CDC, shut down in 1990.

> Looking back, they were incredibly slow and the memory bandwidth would have been a challenge for shared-memory.

Yes, it was tricky to get maximum performance out of the Alliant FX2800
hardware. There were two caches, a local cache for each CPU, and a
shared cache used by all of the CPUs. This was in addition to shared
main memory and the swap space that was on disk. To get maximum
performance (which I think was about 40 MFLOPS per cpu, 640 MFLOPS
total), you had to use each of those levels of memory in an optimal way.

This is not unlike getting max performance out of current hardware.
There are multiple levels of memory and cache, and to get max
performance you need to get data into the GPU subsystem, reuse it as
much as possible, and extract the results back out through the memory
hierarchy.
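That cache-reuse idea is usually expressed in Fortran as loop blocking:
work on sub-blocks small enough to stay resident in cache, so each
element is reused many times before it is evicted. A minimal sketch of
a blocked matrix multiply (the sizes n and nb are hypothetical tuning
parameters, not anything specific to the Alliant):

```fortran
! Blocked (tiled) matrix multiply: each (nb x nb) tile of a, b, and c
! is reused from cache while it is hot, instead of streaming the full
! n x n arrays through memory for every inner product.
program blocked_matmul
  implicit none
  integer, parameter :: n = 256, nb = 32   ! nb chosen to fit in cache
  real :: a(n,n), b(n,n), c(n,n)
  integer :: i, j, k, ii, jj, kk

  call random_number(a)
  call random_number(b)
  c = 0.0

  do jj = 1, n, nb
     do kk = 1, n, nb
        do ii = 1, n, nb
           ! multiply one tile; innermost loop runs down a column
           ! (stride 1) for unit-stride memory access
           do j = jj, min(jj+nb-1, n)
              do k = kk, min(kk+nb-1, n)
                 do i = ii, min(ii+nb-1, n)
                    c(i,j) = c(i,j) + a(i,k) * b(k,j)
                 end do
              end do
           end do
        end do
     end do
  end do

  ! difference from the intrinsic MATMUL: small, roundoff only
  print *, maxval(abs(c - matmul(a, b)))
end program blocked_matmul
```

The blocked loops perform exactly the same arithmetic as the naive
triple loop; only the traversal order changes, which is what lets each
memory level be used "in an optimal way."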

> The low cost of pc's in the 90's caused the demise of many other hardware alternatives that could have been.

At the time of the Alliant, the typical PC performance was about 1
MFLOPS. The Kubota machines I mentioned above were about 10 MFLOPS. A
single i860 was capable of 40 MFLOPS -- I never understood why it did
not replace the x86 CPUs. PC performance improved in the 1990s, but they
really never factored into any of our hardware decisions until the late
1990s and early 2000s, when linux was available and you could build
rack-mounted parallel machines based on Xeon CPUs with SSE and fast ECC
memory. There were other application areas where PCs were useful, with
smaller memory, smaller disk, lower CPU performance requirements. But
for us, PCs were never in the picture until the 21st century, and then
not really as PCs but as rack-mount units running linux.

I expect my experiences were not entirely unique during those times, but
there was so much hardware available that someone else could easily have
used completely different hardware. For example, I
never used any SGI workstations. I exchanged code with people who did,
but I didn't use one myself. I did use SGI machines after they bought
CRAY. In fact, my first programming experience with coarray fortran was
on a SGI-era CRAY computer. I also never used Fujitsu supercomputers,
although, again, I exchanged code with those who did.

$.02 -Ron Shepard

