On 7/14/21 10:42 PM, JCampbell wrote:
> On Thursday, July 15, 2021 at 12:41:00 AM UTC+10, dpb wrote:
[...]
> Ron's experience with an Alliant fx2800 parallel machine is very interesting. If only I knew!
> In the early 90's, price was a big consideration, and my experience was that Apollo/Sparc workstations were too expensive for individual use, so most of us used individual PCs with 32-bit Lahey / Salford Fortran when the Vax / Pr1me multi-user systems shut down. (Many private companies struggled in the early 90's.)
It was interesting to me at the time which companies survived and
which didn't. It sometimes seemed to have little to do with the
quality of their hardware or software. There were all kinds of
architectures at that time, from very long instruction word (VLIW)
machines (I include the FPS-164 in that class, although its
instruction words were only 64 bits) to large-scale SIMD machines
such as the Connection Machine. I experimented with a good fraction
of those machines, all of them programmed with fortran compilers (f77
plus extensions).
> After the Vax / Pr1me experience, IBM and other large systems were so unfriendly, we didn't complain.
IBM started selling RISC machines in the late 80s: the RS/6000 line,
built on their POWER CPUs and their AIX unix operating system. A few
years later they partnered with Apple and Motorola (the AIM alliance,
formed in 1991) in the design and manufacture of PowerPC cpus, which
shipped around 1993. By the mid-90s, they were selling unix-based
parallel machines. We had a 64-cpu IBM SP-1 machine that overlapped
our Alliant machine by a few months; the Alliant was at the end of
its life cycle in 1995.
> Ron, how reliable was the Alliant Fortran compiler that supported both shared-memory and distributed-memory programming models? Any suspicion about its reliability would have made it hard to get funding when workstations were seen as the more expensive way forward.
I also experimented with several unix-based RISC machines. Sun
workstations were reliable, but did not perform very well for our
applications (various quantum chemistry codes). We also had
Ardent/Stardent Titan workstations (the company kept changing its
name). These were cost effective, but only scaled up to 4 cpus, if I
remember correctly. They were made by Kubota of Japan, the same
company that made forklifts and tractors! I also had a unix DEC
workstation based on their Alpha cpu, which we bought a year or so
before DEC closed its doors. This was a common problem at that time:
companies were bought and sold like properties in a Monopoly game,
and a good fraction of the cutting-edge, high-performance machines
then available were caught up in that buying and selling. ETA,
Kendall Square, Thinking Machines (of Connection Machine fame), and
on and on. It still amazes me how far DEC fell as a company, partly
because of Ken Olsen and his poor vision, partly because of the
general economics of the time, reduced government spending, and so
on.
I used the ETA machine that was sited in Tallahassee. It ran in a
liquid nitrogen flow bath; there was more plumbing hardware in that
machine room than computer hardware. I think at the time it was the
most cost-effective computer (in dollars per MFLOP), but the company,
a subsidiary of CDC, was shut down in 1989.
> Looking back, they were incredibly slow and the memory bandwidth would have been a challenge for shared-memory.
Yes, it was tricky to get maximum performance out of the Alliant
FX/2800 hardware. There were two levels of cache, a local cache for
each CPU and a shared cache used by all of the CPUs, in addition to
shared main memory and the swap space on disk. To get maximum
performance (which I think was about 40 MFLOPS per cpu, 640 MFLOPS
total), you had to use each of those levels of memory in an optimal
way. This is not unlike getting maximum performance out of current
hardware: there are multiple levels of memory and cache, and you need
to move data into the GPU subsystem, reuse it as much as possible,
and then move the results back out through the memory hierarchy.
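The data-reuse idea itself hasn't changed. Just as an illustration
(nothing Alliant-specific here, and the tile size nb is a placeholder
you would tune to a particular cache level), a cache-blocked matrix
multiply in modern Fortran looks something like this:

   ! Illustrative cache-blocked matrix multiply: c = c + a*b.
   ! nb is chosen so that three nb x nb tiles fit in the target
   ! cache level; 64 is just a placeholder value.
   subroutine blocked_matmul(a, b, c, n)
      implicit none
      integer, intent(in) :: n
      real, intent(in)    :: a(n,n), b(n,n)
      real, intent(inout) :: c(n,n)
      integer, parameter  :: nb = 64
      integer :: ii, jj, kk, i, j, k
      do jj = 1, n, nb
         do kk = 1, n, nb
            do ii = 1, n, nb
               ! work on one tile at a time so it stays resident
               do j = jj, min(jj+nb-1, n)
                  do k = kk, min(kk+nb-1, n)
                     do i = ii, min(ii+nb-1, n)  ! stride-1 inner loop
                        c(i,j) = c(i,j) + a(i,k)*b(k,j)
                     end do
                  end do
               end do
            end do
         end do
      end do
   end subroutine blocked_matmul

The same tiling pattern, applied once per level, is how you exploit a
multi-level hierarchy, whether it was the FX/2800's two caches or a
modern CPU-plus-GPU system.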
> The low cost of pc's in the 90's caused the demise of many other hardware alternatives that could have been.
At the time of the Alliant, typical PC performance was about 1
MFLOPS. The Kubota machines I mentioned above were about 10 MFLOPS. A
single i860 (the same cpu used in the FX/2800) was capable of 40
MFLOPS -- I never understood why it did not replace the x86 CPUs. PC
performance improved in the 1990s, but PCs really never factored into
any of our hardware decisions until the late 1990s and early 2000s,
when linux was available and you could build rack-mounted parallel
machines based on Xeon CPUs with SSE and fast ECC memory. There were
other application areas where PCs were useful, with smaller memory,
smaller disk, and lower CPU performance requirements. But for us, PCs
were never in the picture until the 21st century, and then not really
as PCs but as rack-mount units running linux.
I expect my experiences were not unique for those times, but there
was so much hardware available that someone else could easily have
used a completely different set of machines. For example, I never
used any SGI workstations. I exchanged code with people who did, but
I didn't use one myself. I did use SGI machines after they bought
CRAY; in fact, my first programming experience with coarray fortran
was on an SGI-era CRAY computer. I also never used Fujitsu
supercomputers, although, again, I exchanged code with those who did.
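For anyone who never ran into it, the flavor of the coarray model is
something like this minimal sketch (standard Fortran 2008 syntax, not
the original CRAY dialect I used back then):

   program hello_images
      implicit none
      integer :: me, np
      integer :: x[*]      ! a scalar coarray: one copy per image
      me = this_image()    ! index of this image, 1..num_images()
      np = num_images()    ! how many images are running the program
      x = me*me            ! each image writes its own copy
      sync all             ! barrier: make all the writes visible
      if (me == 1) print *, 'image 1 sees x on image', np, '=', x[np]
   end program hello_images

(gfortran will build this with -fcoarray=single for a quick one-image
test, or against a coarray library for real parallel runs.)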
$.02 -Ron Shepard