
A Computer That Never Was: the IBM 7095


Quadibloc

unread,
Nov 21, 2017, 5:55:39 PM11/21/17
to
Passed by Bitsavers, noted there were some interesting new items there.

One of them was some internal IBM documents referring to a successor to the IBM
7094.

This would have had an incompatible instruction set, but would be able to switch
into a compatibility mode. In its native mode, addresses would be 18 bits long
instead of 15 bits - and instead of seven (or three) index registers, it would
have sixteen index registers (or maybe 15).

Also, the document I looked at noted that the 7095 would be able to connect to
NPL peripherals: that is, the "new product line", which later became known as
the System/360.

I think that making successors to the 1401 would actually have been a good
business decision on the part of IBM, but I don't see any compelling rationale
to consider making an exception for something like the 7095. It is incompatible
anyway, and in addition scientific computer users are a relatively small
market.

John Savard

hanc...@bbs.cpcn.com

unread,
Nov 21, 2017, 8:15:54 PM11/21/17
to
On Tuesday, November 21, 2017 at 5:55:39 PM UTC-5, Quadibloc wrote:
> Passed by Bitsavers, noted there were some interesting new items there.

Did you mean "posted" by bitsavers?
I think the 709x series was used by large businesses as well as
scientific users. For example SABRE used two for the reservation
system. (I know very little about the 709x user community). I
suspect as they got more applications, they needed more horsepower.
I think there were some elementary on-line applications, I believe
the 709x had some interrupt capability.

As to the 1401 successor, I respectfully must disagree. The key
issue was that S/360 had hardware emulation allowing easy and faster
execution of 1401 programs on the user's new S/360. Given that, there
was no point to have a faster 1401 in competition with S/360. Further,
for 1401 users who did want more speed, there were both the 1410
and 1460.

(I'd love to know if anyone is still running 1401 emulation or
when the last emulation was shut down. I know one user still using
it in the late 1990s (and couldn't wait to get rid of it.))




Peter Flass

unread,
Nov 21, 2017, 8:25:35 PM11/21/17
to
Small maybe, but prestigious.

--
Pete

hanc...@bbs.cpcn.com

unread,
Nov 21, 2017, 8:42:28 PM11/21/17
to
On Tuesday, November 21, 2017 at 8:25:35 PM UTC-5, Peter Flass wrote:

> > I think that making successors to the 1401 would actually have been a good
> > business decision on the part of IBM, but I don't see any compelling rationale
> > to consider making an exception for something like the 7095. It is incompatible
> > anyway, and in addition scientific computer users are a relatively small
> > market.
> >
>
> Small maybe, but prestigious.

I'll throw out a couple of items for discussion--I'm curious as
to what people think...

Both Watsons Sr. and Jr. pursued the scientific market and pure
research for prestige. While there were clearly some benefits to
the company, I wonder how much there actually was. For instance,
Watson Jr's decision to compete with Control Data with computers
he didn't have and skills his people didn't have turned out to
bite him on the butt rather badly. Even if Watson did manage to
build those machines, would having them in the product line really have
attracted more business customers? Business customers want to get
out the payroll predictably week after week. They don't care what's
under the hood, rather, what's coming out of the printer.

On the flip side, embracing research and Columbia Univ in 1940, IMHO,
benefited the general product line. IBM ended up with the 604
calculator which sold 5,600 units, not bad at all. IBM learned a
lot from doing the SSEC, which contributed to its mainframe development.

IBM learned a lot and had paying customers for its 701 computer, its
first. IMHO, the lessons from that were applied to the 702 (the first
business machine) and contributed toward developing its people in
electronics and computer support, critical skills.

Quadibloc

unread,
Nov 22, 2017, 1:49:19 PM11/22/17
to
On Tuesday, November 21, 2017 at 6:15:54 PM UTC-7, hanc...@bbs.cpcn.com wrote:
> On Tuesday, November 21, 2017 at 5:55:39 PM UTC-5, Quadibloc wrote:

> > Passed by Bitsavers, noted there were some interesting new items there.

> Did you mean "posted" by bitsavers?

I meant that I casually visited their site.

John Savard

Jon Elson

unread,
Nov 23, 2017, 11:09:44 PM11/23/17
to
Ugh! The 709x were behemoths. Was this going to be implemented in a newer
technology, or the same SMS of the 7090-7094? The 7094 was about 12
cabinets, each bigger than a 1401 (mostly taller), interconnected with 100
conductor coax cables with 200-pin connectors.

The 360/30 ** WAS ** a 1401 in sheep's clothing, i.e. a character machine (8
bit memory, 8 bit data paths). A lot of early "360 users" ran them
exclusively in 14xx emulation mode.

I think IBM was trying to crush the use of the old machine series to reduce
their programming support effort. They were supporting FOUR major product
lines before the 360, and they thought that effort was eating them alive.
(14xx, 707x for business, and 1620 and 709x for scientific.) My THOUGHT
comment refers to the effort of getting OS/360 variants running being so much
bigger than they expected that it dwarfed the earlier programming support.
On the other hand, they delivered SO MUCH MORE software with the 360 series,
where the program products on the earlier machines were really pretty
limited. But, this was made possible because it was all for ONE
architecture.

Jon

Quadibloc

unread,
Nov 24, 2017, 7:24:19 AM11/24/17
to
On Thursday, November 23, 2017 at 9:09:44 PM UTC-7, Jon Elson wrote:
> They were supporting FOUR major product
> lines before the 360, and they thought that effort was eating them alive.

It's odd that they thought that, considering that they were more than four times
as big as their nearest competitor.

John Savard

jmfbahciv

unread,
Nov 24, 2017, 10:29:30 AM11/24/17
to
that wouldn't have mattered. Software development, support and maintenance
of each would cost more as time went on. DEC had a similar conclusion
and decided to concentrate on one _hardware_ product line in the early 80s.
Don't forget that both companies were hardware, not software, companies.

/BAH

John Levine

unread,
Nov 24, 2017, 10:31:29 AM11/24/17
to
In article <Wtydnb8Bq4kfBorH...@giganews.com>,
Jon Elson <el...@pico-systems.com> wrote:
>I think IBM was trying to crush the use of the old machine series to reduce
>their programming support effort. They were supporting FOUR major product
>lines before the 360, and they thought that effort was eating them alive.
>(14xx, 707x for business, and 1620 and 709x for scientific.) My THOUGHT
>comment refers to the effort of getting OS/360 variants running being so much
>bigger than they expected that it dwarfed the earlier programming support.
>On the other hand, they delivered SO MUCH MORE software with the 360 series,
>where the program products on the earlier machines were really pretty
>limited. But, this was made possible because it was all for ONE
>architecture.

That is all true, and worked well for commercial programming. It
worked less well for scientific programming due to the botched
floating point design of the 360 which wasn't fixed until decades
later when they added IEEE floating point.

R's,
John

J. Clarke

unread,
Nov 24, 2017, 3:08:14 PM11/24/17
to
On Fri, 24 Nov 2017 15:31:28 -0000 (UTC), John Levine <jo...@iecc.com>
wrote:
Pity they don't make it available for VS Fortran. Bloody annoying
that they refuse to update their mainframe Fortran.
>
>R's,
>John

John Levine

unread,
Nov 24, 2017, 3:43:18 PM11/24/17
to
In article <nvug1dtl57v25duef...@4ax.com>,
J. Clarke <jclarke...@gmail.com> wrote:
>> [ moving everything to the 360 ]
>>That is all true, and worked well for commercial programming. It
>>worked less well for scientific programming due to the botched
>>floating point design of the 360 which wasn't fixed until decades
>>later when they added IEEE floating point.
>
>Pity they don't make it available for VS Fortran. Bloody annoying
>that they refuse to update their mainframe Fortran.

There's always Linux.

R's,
John

J. Clarke

unread,
Nov 24, 2017, 4:32:45 PM11/24/17
to
On Fri, 24 Nov 2017 20:43:17 -0000 (UTC), John Levine <jo...@iecc.com>
wrote:
Good luck linking gfortran to CICS.

John Levine

unread,
Nov 25, 2017, 11:02:48 AM11/25/17
to
In article <8t3h1dt0f2ravr1q8...@4ax.com>,
J. Clarke <jclarke...@gmail.com> wrote:
>>>>later when they added IEEE floating point.
>>>
>>>Pity they don't make it available for VS Fortran. Bloody annoying
>>>that they refuse to update their mainframe Fortran.
>>
>>There's always Linux.
>
>Good luck linking gfortran to CICS.

CICS has a POSIX interface. That could be a fun way to waste a few
months.



J. Clarke

unread,
Nov 25, 2017, 11:26:26 AM11/25/17
to
On Sat, 25 Nov 2017 16:02:46 -0000 (UTC), John Levine <jo...@iecc.com>
wrote:
What does that have to do with Linux? The interfaces there would all
be through the Z/OS Unix component (Z/OS is a certified Unix you
know--it doesn't need Linux running on top of it) and as far as I can
find out there is no Z/OS port of gfortran.

Ahem A Rivet's Shot

unread,
Nov 25, 2017, 12:11:05 PM11/25/17
to
On 24 Nov 2017 15:28:47 GMT
jmfbahciv <See....@aol.com> wrote:

> that wouldn't have mattered. Software development, support and
> maintenance of each would cost more as time went on. DEC had a similar
> conclusion and decided to concentrate on one _hardware_ product line in
> the early 80s. Don't forget that both companies were hardware, not
> software, companies.

I always thought of IBM as primarily a services company, they
leased you hardware and software so that they could sell you consultancy
services deciding what you want, installation services to set things up,
maintenance contracts to keep it going as well as sundry other items and be
the first port of call whenever you wanted something else or a bigger faster
system.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

Anne & Lynn Wheeler

unread,
Nov 25, 2017, 12:11:05 PM11/25/17
to
Jon Elson <el...@pico-systems.com> writes:
> I think IBM was trying to crush the use of the old machine series to reduce
> their programming support effort. They were supporting FOUR major product
> lines before the 360, and they thought that effort was eating them alive.
> (14xx, 707x for business, and 1620 and 709x for scientific.) My THOUGHT
> comment refers to the effort of getting OS/360 variants running being so much
> bigger than they expected that it dwarfed the earlier programming support.
> On the other hand, they delivered SO MUCH MORE software with the 360 series,
> where the program products on the earlier machines were really pretty
> limited. But, this was made possible because it was all for ONE
> architecture.

from old post
http://www.garlic.com/~lynn/94.html#44

I ran across this description at the time of the government anti-trust
suit (in the early '70s) ... I never ran across any validation/repeat,
so I don't know if it is real:

An "expert witness" representing one of the companies (that left the
business) testified regarding the state-of-the-art in the late
50s. Supposedly in the late 50s ALL of the computer companies realized
the SINGLE MOST IMPORTANT criteria to be successful in the computer
business was to have compatibility across the whole computer line,
from entry level to the largest machine. The witness went on to
observe that only one company was successful in meeting this goal, all
the other companies failed. The remaining companies failed to
adequately deal with multiple plant operations (each responsible for a
particular model in the product line) that locally optimized the
hardware architecture/implementation for the specific model.

... snip ...

aka computer market issue more than software development cost.

account of end of ACS360 ... that executives were afraid that it would
advance the state-of-the-art too fast resulting in losing control of
the market ... shortly after, Amdahl leaves and starts his own clone
compatible company
https://people.cs.clemson.edu/~mark/acs_end.html

Early 70s, Amdahl gives talk in large MIT auditorium ... some students
make an issue of him becoming agent of far east companies (owned half
the company and did a lot of the manufacturing). He is also asked what
justification he used to get investment money for his new company. He
made some reference that even if IBM was to completely walk away from
360 ... customers had invested large billions in 360 software, that it
would keep him in business until the end of the century.

This was early in the FS period ... which was going to completely
replace 360/370 and was completely different ... account of FS
http://www.jfsowa.com/computer/memo125.htm

later Amdahl claims he had no knowledge or awareness of FS ... but his
comments at MIT sure seems to have overtones of FS reference. Note that
during the FS period, internal politics were shutting down 360/370
efforts ... and the claim is that the lack of 360/370 products during
the FS period gave clone processor makers a market foothold.

some past posts
http://www.garlic.com/~lynn/submain.html#futuresys

total trivia: in the 80s, IBM TSS/370 group got contract with AT&T to do
stripped down TSS/370 kernel (SSUP) with unix layered on top ... UNIX
heavily leveraging low-level TSS/370 for hardware & device support. At
the same time Amdahl had UNIX port running in VM370 virtual machine
(GOLD/UTS) and IBM had UNIX work-alike, UCLA LOCUS as AIX/370 ... also
running in VM370 virtual machine. At least in the AIX/370 case,
mainframe hardware field support said that they wouldn't provide support
if software didn't have full EREP support. The issue was that the cost
to retrofit full mainframe EREP support to UNIX was several times the
cost of straight-forward UNIX port to 370 (resulting in running under
VM370 providing full EREP).

Somewhat looking at the TSS/UNIX effort, for some time in the mid-80s
there was a project to do low-level mainframe kernel that provided EREP,
device support, device error recovery, etc ... which would be common for
IBM's four mainframe operating systems, MVS, VM370, VS1, DOS/VS (as
development cost savings justification) ... at one point having
something like 500 people ... but never getting much past writing
specifications.

other post referencing to testimony about need for compatible
product line:
http://www.garlic.com/~lynn/96.html#20 1401 series emulation still running?
http://www.garlic.com/~lynn/99.html#231 Why couldn't others compete against IBM?
http://www.garlic.com/~lynn/2000e.html#10 Is Al Gore The Father of the Internet?^
http://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
http://www.garlic.com/~lynn/2001j.html#33 Big black helicopters
http://www.garlic.com/~lynn/2001j.html#38 Big black helicopters
http://www.garlic.com/~lynn/2001j.html#39 Big black helicopters
http://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
http://www.garlic.com/~lynn/2002c.html#0 Did Intel Bite Off More Than It Can Chew?
http://www.garlic.com/~lynn/2003.html#71 Card Columns
http://www.garlic.com/~lynn/2003o.html#43 Computer folklore - forecasting Sputnik's orbit with
http://www.garlic.com/~lynn/2005k.html#0 IBM/Watson autobiography--thoughts on?
http://www.garlic.com/~lynn/2005k.html#4 IBM/Watson autobiography--thoughts on?
http://www.garlic.com/~lynn/2006q.html#60 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2007f.html#77 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007g.html#42 1960s: IBM mgmt mistrust of SLT for ICs?
http://www.garlic.com/~lynn/2007m.html#34 IBM 8000 ???
http://www.garlic.com/~lynn/2007p.html#8 what does xp do when system is copying
http://www.garlic.com/~lynn/2007t.html#63 Remembering the CDC 6600
http://www.garlic.com/~lynn/2010.html#45 360 programs on a z/10
http://www.garlic.com/~lynn/2010b.html#14 360 programs on a z/10
http://www.garlic.com/~lynn/2010k.html#21 Snow White and the Seven Dwarfs
http://www.garlic.com/~lynn/2011b.html#57 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011j.html#69 Who was the Greatest IBM President and CEO of the last century?
http://www.garlic.com/~lynn/2011l.html#12 Selectric Typewriter--50th Anniversary
http://www.garlic.com/~lynn/2012e.html#105 Burroughs B5000, B5500, B6500 videos
http://www.garlic.com/~lynn/2012l.html#27 PDP-10 system calls, was 1132 printer history
http://www.garlic.com/~lynn/2013i.html#73 Future of COBOL based on RDz policies was Re: RDz or RDzEnterprise developers
http://www.garlic.com/~lynn/2014e.html#50 The mainframe turns 50, or, why the IBM System/360 launch was the dawn of enterprise IT
http://www.garlic.com/~lynn/2016d.html#66 PL/I advertising
http://www.garlic.com/~lynn/2017f.html#40 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017h.html#50 System/360--detailed engineering description (AFIPS 1964)

posts referencing TSS/SSUP/UNIX
http://www.garlic.com/~lynn/2007m.html#69 Operating systems are old and busted
http://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
http://www.garlic.com/~lynn/2010h.html#61 (slightly OT - Linux) Did IBM bet on the wrong OS?
http://www.garlic.com/~lynn/2010i.html#44 someone smarter than Dave Cutler
http://www.garlic.com/~lynn/2010l.html#2 TSS (Transaction Security System)
http://www.garlic.com/~lynn/2010o.html#0 Hashing for DISTINCT or GROUP BY in SQL
http://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons
http://www.garlic.com/~lynn/2011.html#96 History of copy on write
http://www.garlic.com/~lynn/2011f.html#85 SV: USS vs USS
http://www.garlic.com/~lynn/2012.html#67 Has anyone successfully migrated off mainframes?
http://www.garlic.com/~lynn/2012f.html#28 which one came first
http://www.garlic.com/~lynn/2012o.html#34 Regarding Time Sharing
http://www.garlic.com/~lynn/2013n.html#24 Aging Sysprogs = Aging Farmers
http://www.garlic.com/~lynn/2013n.html#92 'Free Unix!': The world-changing proclamation made30yearsagotoday
http://www.garlic.com/~lynn/2014f.html#74 Is end of mainframe near ?
http://www.garlic.com/~lynn/2014j.html#17 The SDS 92, its place in history?
http://www.garlic.com/~lynn/2017.html#20 {wtf} Tymshare SuperBasic Source Code
http://www.garlic.com/~lynn/2017d.html#76 Mainframe operating systems?
http://www.garlic.com/~lynn/2017d.html#80 Mainframe operating systems?
http://www.garlic.com/~lynn/2017d.html#82 Mainframe operating systems?
http://www.garlic.com/~lynn/2017g.html#102 SEX

--
virtualization experience starting Jan1968, online at home since Mar1970

Ahem A Rivet's Shot

unread,
Nov 25, 2017, 1:11:05 PM11/25/17
to
Step 1: Port CICS to Linux.
Step 2: Validate.

See you next century.

hanc...@bbs.cpcn.com

unread,
Nov 25, 2017, 3:29:56 PM11/25/17
to
The IBM S/360 history states one of the motivations for a unified
product line was to reduce separate programming efforts for the
various product lines. In addition to the four mentioned, there
was also the 650. The 14xx line was split between the base
1401 and the advanced 1410/7010, so different utilities would
be required.

In addition to programming, there was also the peripheral product
line, each of which had to be customized for a particular hardware group.

The flip side of this policy--argued passionately by some in IBM,
was that while S/360 was under development, IBM was supposedly
losing to competitors (the Honeywell 200 was a notable product
stealing 1401 customers). The 7094 and likely 7095 were known
as "temporizing" products to hold the customer base until S/360
was ready. (See the IBM S/360 history for full details on this,
excellent lessons on business in there.)

After a lot of internal debate, IBM finally made the decision to
go 100% behind S/360. One manager, Haanstra, even built a 1401-S
out of SLT chips. I don't believe his effort was appreciated since
it ran counter to the single-line policy.

As mentioned, having efficient hardware emulation was a critical
key to a successful unified product line.

hanc...@bbs.cpcn.com

unread,
Nov 25, 2017, 3:36:41 PM11/25/17
to
Yes, that makes for a very interesting question.

Watson Jr touched on that in his memoir (Father, Son & Co.). IMHO,
it seems that he almost saw it as his and IBM's birthright
to control all computer sales and any advance by the competition was
not to be tolerated. IMHO, it appears that IBM needlessly panicked
about the Honeywell stealing 1401 sales--perhaps 200 machines out
of 10,000, IIRC. IBM went nuts about Control Data's super computers,
and their reaction was an anti-trust violation costing them dearly
down the road. (Ironically, Control Data still didn't last too long
and Cray spun off.)

Rightly or wrongly, one thing that frightened Watson Jr was
that certain other competitors, like RCA and GE, had well
established electronics units, while he perceived IBM as being
weak in that. RCA and GE were also well established industrial
companies. Yet both failed in the marketplace.

I remember a lot of site managers of _non_ IBM shops did not
like IBM as a company, and went out of their way to acquire
non-IBM machines.



hanc...@bbs.cpcn.com

unread,
Nov 25, 2017, 3:38:12 PM11/25/17
to
On Friday, November 24, 2017 at 10:31:29 AM UTC-5, John Levine wrote:

> That is all true, and worked well for commercial programming. It
> worked less well for scientific programming due to the botched
> floating point design of the 360 which wasn't fixed until decades
> later when they added IEEE floating point.

What did customers do when S/360 replaced the 709x, if they didn't
like the floating point issue? Did they program their way around
it, or use a competing machine? How many customers are we talking
about?




hanc...@bbs.cpcn.com

unread,
Nov 25, 2017, 3:42:19 PM11/25/17
to
On Friday, November 24, 2017 at 3:08:14 PM UTC-5, J. Clarke wrote:

> Pity they don't make it available for VS Fortran. Bloody annoying
> that they refuse to update their mainframe Fortran.

I'm not sure, but it does appear that they don't bother to upgrade
the mainframe (Z series) Fortran. If you want to use the latest
Fortran, you would have to use a different model or different brand.

But how many people are still using Fortran? Our site used to have
a lot of it (some of it doing commercial applications), but most of
it has been converted to other languages, very little left. A lot
of the engineering work once done in Fortran is now done on CAD/CAM
machines. One engineer told me simple spreadsheets can do a lot of
the stuff. Ironically, they added some math functions (e.g. SIN)
to COBOL, though I can't imagine using COBOL for an engineering job.

I would guess scientific work that once used DEC mini-computers
in the lab now uses PCs.


hanc...@bbs.cpcn.com

unread,
Nov 25, 2017, 3:47:12 PM11/25/17
to
On Saturday, November 25, 2017 at 12:11:05 PM UTC-5, Ahem A Rivet's Shot wrote:

> I always thought of IBM as primarily a services company, they
> leased you hardware and software so that they could sell you consultancy
> services deciding what you want, installation services to set things up,
> maintenance contracts to keep it going as well as sundry other items and be
> the first port of call whenever you wanted something else or a bigger faster
> system.

Even though historically IBM was a huge manufacturing company with
massive factories, their thrust was always _service_. Going back to
Hollerith, they didn't just sell hardware, but helped the customer
use it. Their "salesmen" were rigorously trained application support
people.

To this day, it amazes me that RCA, GE, Honeywell, etc.--all large
industrial companies used to serving other industrial customers--
never understood this and never could provide the kind of customer
support IBM did.

For instance, if RCA sold a television studio to someone, did they
send out instructors to teach maintenance (or provide maintenance),
and to teach how to use the cameras, control boards, transmitters,
etc? I would think this would've been critical in the early days
of TV when newcomers were establishing studios or radio stations
were upgrading.

If Honeywell sold a process control system to a refinery, did it
likewise send instructors on how to run it?


Quadibloc

unread,
Nov 25, 2017, 3:56:44 PM11/25/17
to
On Saturday, November 25, 2017 at 1:42:19 PM UTC-7, hanc...@bbs.cpcn.com wrote:

> But how many people are still using Fortran?

Fortran, not C or Pascal or Python, is still what is primarily used by scientists
to solve large numerical problems on supercomputers.

John Savard

hanc...@bbs.cpcn.com

unread,
Nov 25, 2017, 3:57:22 PM11/25/17
to
On Saturday, November 25, 2017 at 12:11:05 PM UTC-5, Anne & Lynn Wheeler wrote:

> An "expert witness" representing one of the companies (that left the
> business) testified regarding the state-of-the-art in the late
> 50s. Supposedly in the late 50s ALL of the computer companies realized
> the SINGLE MOST IMPORTANT criteria to be successful in the computer
> business was to have compatibility across the whole computer line,
> from entry level to the largest machine. The witness went on to
> observe that only one company was successful in meeting this goal, all
> the other companies failed. The remaining companies failed to
> adequately deal with multiple plant operations (each responsible for a
> particular model in the product line) that locally optimized the
> hardware architecture/implementation for the specific model.

In the late 1950s, compatibility was not so easy to achieve.
Hardware circuits were very expensive in those days, to the extent
that any logic feature had to be carefully analyzed for cost/benefit.

A small, cheaper computer saved critical money by having simpler
circuits, including fewer circuits for addressing. Thus, a 1401
could only address a small amount of memory. Adding address space
for large memory could waste a lot of circuitry. Even S/360,
with its base+displacement approach, used more circuitry than a
simple machine and represented a compromise, but one deemed worth it.
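
A rough sketch (illustrative only, not from the thread) of the
base+displacement trade-off being described: an instruction carries just a
4-bit base register number and a 12-bit displacement, and the hardware adds
the two to form a 24-bit effective address, so 16 bits of addressing
information in the instruction can reach a 16MB address space (index
registers and the register-0-means-no-base convention are left out here).

# Hypothetical illustration of S/360-style base+displacement addressing.
# A 12-bit displacement (0..4095) is added to a 24-bit base register value,
# so an instruction needs only 4 bits (register number) + 12 bits
# (displacement) to reach anywhere in a 16 MB address space.

def effective_address(regs, base_reg, displacement):
    assert 0 <= displacement < 4096           # 12-bit displacement field
    base = regs[base_reg] & 0xFFFFFF          # only low 24 bits take part
    return (base + displacement) & 0xFFFFFF   # wrap to 24-bit address space

regs = [0] * 16
regs[12] = 0x012000                           # program's base register
print(hex(effective_address(regs, 12, 0x123)))  # -> 0x12123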

Indeed, S/360 at first wasn't truly compatible. Machines were
offered with a basic instruction set, optional commercial set
and optional scientific set. I think in later years they
just made machines (or customers bought) the universal instruction
set. (Remember, PCs originally had a floating point chip as an option).


... snip ...

Also in the early 1960s was pressure from academics who wanted
certain features, like timesharing, which were expensive to
implement in hardware and software. One source said early
time sharing efforts failed once a larger load was put on the
system, and it took developers (I think MTSS) a lot longer
than anticipated.


Quadibloc

unread,
Nov 25, 2017, 3:58:05 PM11/25/17
to
On Saturday, November 25, 2017 at 1:38:12 PM UTC-7, hanc...@bbs.cpcn.com wrote:

> What did customers do when S/360 replaced the 709x, if they didn't
> like the floating point issue?

They used double precision for everything, whereas on the 709x, single precision
was adequate for many purposes.
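
A small illustration (assumptions: S/360 short hex float keeps 6 hex fraction
digits and truncates; 709x single precision keeps a 27-bit binary fraction) of
why single precision felt so much worse on the 360: hexadecimal normalization
can leave up to three leading zero bits in the fraction, so a short hex float
may carry as few as 21 significant bits versus the 7094's 27.

import math

def quantize_hex_short(x):
    """Truncate x to a 6-hex-digit fraction normalized so 1/16 <= f < 1
    (exponent is a power of 16), roughly S/360 short hex float."""
    if x == 0:
        return 0.0
    f, e = abs(x), 0
    while f >= 1.0:
        f /= 16.0
        e += 1
    while f < 1.0 / 16.0:
        f *= 16.0
        e -= 1
    f = math.floor(f * 16**6) / 16**6         # keep 6 hex digits, truncating
    return math.copysign(f * 16.0**e, x)

def quantize_bin(x, bits):
    """Truncate x to a binary fraction of `bits` bits, 1/2 <= f < 1."""
    if x == 0:
        return 0.0
    m, e = math.frexp(abs(x))                 # x = m * 2**e, 0.5 <= m < 1
    m = math.floor(m * 2**bits) / 2**bits
    return math.copysign(m * 2.0**e, x)

x = 0.1
for name, q in [("S/360 short hex (6 hex digits)", quantize_hex_short(x)),
                ("709x single (27-bit fraction) ", quantize_bin(x, 27))]:
    print(name, "relative error %.2e" % (abs(q - x) / x))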

John Savard

J. Clarke

unread,
Nov 25, 2017, 4:29:52 PM11/25/17
to
On Sat, 25 Nov 2017 12:42:18 -0800 (PST), hanc...@bbs.cpcn.com wrote:

>On Friday, November 24, 2017 at 3:08:14 PM UTC-5, J. Clarke wrote:
>
>> Pity they don't make it available for VS Fortran. Bloody annoying
>> that they refuse to update their mainframe Fortran.
>
>I'm not sure, but it does appear that they don't bother to upgrade
>the mainframe (Z series) Fortran. If you want to use the latest
>Fortran, you would have to use a different model or different brand.
>
>But how many people are still using Fortran?

Doesn't matter how many. What matters is that the code is handling
mission-critical tasks and has to be maintained. We are busily
porting to C but there's a _lot_ of code and the output has to match
_exactly_. If it doesn't then we have to keep beating on it until it
does.

Anne & Lynn Wheeler

unread,
Nov 26, 2017, 4:02:22 AM11/26/17
to

hancock4 writes:
> Also in the early 1960s was pressure from academics who wanted
> certain features, like timesharing, which were expensive to
> implement in hardware and software. One source said early
> time sharing efforts failed once a larger load was put on the
> system, and it took developers (I think MTSS) a lot longer
> then anticipated.

re:
http://www.garlic.com/~lynn/2017j.html#68 The true story behind Thanksgiving is a bloody struggle that decimated the population and ended with a head on a stick

Some number of the 7094/CTSS people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
went to project mac on the 5th floor for multics
https://en.wikipedia.org/wiki/Multics
http://multicians.org/history.html
others went to the science center on the 4th flr ... some past posts
http://www.garlic.com/~lynn/subtopic.html#545tech
and did virtual machine cp40 for 360/50 modified with virtual memory
http://www.garlic.com/~lynn/cp40seas1982.txt
upgraded to cp67 when 360/67 became available. TSS/360 was supposed to
be the "real" system for 360/67 ... but most of the machines were eventually
running cp67.
https://en.wikipedia.org/wiki/CP/CMS
http://multicians.org/thvv/360-67.html
Melinda Varian's history
http://www.leeandmelindavarian.com/Melinda/neuvm.pdf

from above, Les Comeau has written:

Since the early time-sharing experiments used base and limit registers
for relocation, they had to roll in and roll out entire programs when
switching users....Virtual memory, with its paging technique, was
expected to reduce significantly the time spent waiting for an
exchange of user programs.

What was most significant was that the commitment to virtual memory
was backed with no successful experience. A system of that period that
had implemented virtual memory was the Ferranti Atlas computer, and
that was known not to be working well. What was frightening is that
nobody who was setting this virtual memory direction at IBM knew why
Atlas didn't work.35

... snip ...

this is a post discussing the reference to "not to be working well",
noting that ATLAS used paging for large virtual memory ... but not
multiprogramming (multiple concurrent address spaces)
http://www.garlic.com/~lynn/2011e.html#2 Multiple Virtual Memory

The virtual memory mods. to the 360/40 also used associative memory (each
real page tagged with its virtual memory address) but also contained a 4-bit
address space/process-id (supporting multiple concurrent processes)

other posts mentioning the "not to be working well"
http://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
http://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
http://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003b.html#1 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2005o.html#4 Robert Creasy, RIP
http://www.garlic.com/~lynn/2006i.html#30 virtual memory
http://www.garlic.com/~lynn/2007e.html#1 Designing database tables for performance?
http://www.garlic.com/~lynn/2007r.html#51 Translation of IBM Basic Assembler to C?
http://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
http://www.garlic.com/~lynn/2007t.html#54 new 40+ yr old, disruptive technology
http://www.garlic.com/~lynn/2007u.html#77 IBM Floating-point myths
http://www.garlic.com/~lynn/2011d.html#81 Multiple Virtual Memory
http://www.garlic.com/~lynn/2012l.html#37 S/360 architecture, was PDP-10 system calls
http://www.garlic.com/~lynn/2015c.html#47 The Stack Depth

Initial CP67 installed Jan1968 didn't have page thashing control ... and
very heavy weight dispatch multiprogramming control ... easily took 10%
of processor with just 30-35 users. Next release had simple fixed page
thrashing control (number of concurrent tasks based on amount of real
storage) from MIT Lincoln Labs along with a simplified dispatch control
... but still quite a bit of overhead. Also, I/O was FIFO queued and one
request at a time ... which tended to hit a brick wall for paging
(especially when page thrashing control didn't work because workload was
different than Lincoln Labs).

I rewrote the page replacement algorithm to be much more efficient and added
dynamically monitored page thrashing control. I rewrote
dispatching&scheduling to be dynamic with a default policy of resource "fair
share". I also implemented ordered seek queuing, which degraded much more
gracefully as load increased, and multiple page request chaining
for both disk and fixed head devices. Multiple page request chaining for the
2301 paging drum increased peak throughput from 80/sec to 270/sec.
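
A toy sketch (made-up structure and names, not actual CP/67 code) of the
general reference-bit replacement scan being described: the hardware sets a
reference bit whenever a page is touched, and the replacement scan clears
bits as it sweeps, reusing the first frame whose page hasn't been touched
since the hand last passed it.

class Frame:
    def __init__(self):
        self.page = None        # resident virtual page, or None if free
        self.referenced = False # set on every touch, cleared by the sweep

def touch(frames, page_table, page):
    """Simulate one memory reference, faulting the page in if needed."""
    if page in page_table:                  # hit: hardware sets the ref bit
        frames[page_table[page]].referenced = True
        return
    victim = select_victim(frames)          # miss: pick a frame to reuse
    if frames[victim].page is not None:
        del page_table[frames[victim].page]
    frames[victim].page = page
    frames[victim].referenced = True
    page_table[page] = victim

clock_hand = 0
def select_victim(frames):
    """Sweep the hand, clearing ref bits, until an unreferenced frame
    (not touched since the last time the hand went by) turns up."""
    global clock_hand
    while True:
        idx = clock_hand
        clock_hand = (clock_hand + 1) % len(frames)
        f = frames[idx]
        if f.page is None or not f.referenced:
            return idx
        f.referenced = False                # give it one more revolution

frames, page_table = [Frame() for _ in range(3)], {}
for p in [1, 2, 3, 1, 4, 5]:
    touch(frames, page_table, p)
print(sorted(page_table))                   # -> [3, 4, 5]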

some past posts
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock

I also significantly reduced critical pathlengths, part of old SHARE
presentation ... regarding CP67 pathlengths rewrites for OS/360 MFT14
running in virtual machine.
http://www.garlic.com/~lynn/94.html#18

Original OS/360 benchmark under CP/67 increased elapsed time from 322
sec to 787 secs (465 CP67 CPU time). The early pathlength rewrite reduced
that to 435 secs (113 CP67 CPU time).

SHARE presentation also references having significantly optimized OS/360
throughput by careful STAGE2 sysgen, optimizing placement of files and
PDS members for avg. seek access ... as well as multi-track PDS
directory search time.

Anne & Lynn Wheeler

unread,
Nov 26, 2017, 10:24:46 AM11/26/17
to
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> re:
> http://www.garlic.com/~lynn/2017j.html#68 The true story behind
> Thanksgiving is a bloody struggle that decimated the population and ended with a head on a stick

re:
http://www.garlic.com/~lynn/2017j.html#71 A Computer That Never Was: the IBM 7095

copied wrong reference, should be the previous
http://www.garlic.com/~lynn/2017j.html#67 A Computer That Never Was: the IBM 7095

Anne & Lynn Wheeler

unread,
Nov 26, 2017, 12:00:38 PM11/26/17
to
On 2017-11-25, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
> Step 1: Port CICS to Linux.
> Step 2: Validate.
>
> See you next century.

re:
http://www.garlic.com/~lynn/2017j.html#66 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#71 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#72 A Computer That Never Was: the IBM 7095

IBM lost much of the education market when the gov. litigation ended the
steep discounts that it had been giving universities. The litigation also
resulted in the 23Jun1969 "unbundling" announcement (charging for software,
maintenance, SE services, etc.) some posts
http://www.garlic.com/~lynn/submain.html#unbundle

It tried to come back in the 80s with creation of ACIS (academic unit)
that started with $300M for "grants" to educational institutions. Saw
$25M going to MIT Athena (joint with DEC also giving $25M) and $50M for
CMU unit. The CMU work includes MACH (unix work-alike), CAMELOT
transaction processing, the Andrew file system, etc.

In the unix wars ... trying to offset the SUN/AT&T unix lashup
https://en.wikipedia.org/wiki/Open_Software_Foundation

includes trying to create something independent from SUN/AT&T ... OSF
pulls together Athena, Andrew and LOCUS (ucla work-alike) pieces.

IBM also tries to offset AT&T TUXEDO
https://en.wikipedia.org/wiki/Tuxedo_(software)

with its own UNIX transaction processing ... IBM funding spinoff of
(CMUs) CAMELOT/ENCINA (and AFS) as independent business unit ... and then
purchasing it outright ... for "unix-based CICS"
https://en.wikipedia.org/wiki/Transarc

additional reference
https://en.wikipedia.org/wiki/Encina_(software)
http://www.cs.cmu.edu/~abh/summary.html
https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=OC&subtype=NA&htmlfid=897/ENUS5620-AYV&appname=totalstorage

part of the issue was CICS was highly optimized for the 60s OS/360
environment ... that was extremely lightweight ... but didn't scale well
(or adapt easily to other environments). Around the turn of the
century, I was in a datacenter ... that was running something like 130
(separate) "instances" of CICS on large mainframe complex ... aka
single CICS instance not scaling to support resources available. part
of the issue was the instruction sequences were extremely highly
optimized for internal single-thread operation (and couldn't utilize
multiple processors). Finally in 2004, CICS gets some "multiprocessor
exploitation"

https://en.wikipedia.org/wiki/CICS#Early_evolution

some more cics reference ... gone 404, but lives on at wayback machine

The Evolution of CICS: CICS and Multiprocessor Exploitation (2004)
http://web.archive.org/web/20041023110006/http://www.yelavich.com/history/ev200402.htm
and
http://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
and
http://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm

trivia: univ library got ONR grant to do online catalog, part of the
money went to getting a 2321 datacell. Effort was also selected as
beta-test for original cics program product ... and I got tasked to
support/debug activity. some past CICS (&/or BDAM) posts
http://www.garlic.com/~lynn/submain.html#cics

Bill Findlay

unread,
Nov 26, 2017, 4:47:57 PM11/26/17
to
That is because it DID work.
Enough with this bullshit.

--
Bill Findlay

John Levine

unread,
Nov 26, 2017, 7:48:17 PM11/26/17
to
In article <f80r4a...@mid.individual.net>,
Bill Findlay <findl...@blueyonder.co.uk > wrote:
>> that was known not to be working well. What was frightening is that
>> nobody who was setting this virtual memory direction at IBM knew why
>> Atlas didn't work.
>
>That is because it DID work.
>Enough with this bullshit.

The paging worked but the performance was pretty bad. We didn't
understand paging algorithms like working set and LRU until the mid
1960s.

R's,
John

Anne & Lynn Wheeler

unread,
Nov 26, 2017, 8:27:40 PM11/26/17
to
John Levine <jo...@iecc.com> writes:
> The paging worked but the performance was pretty bad. We didn't
> understand paging algorithms like working set and LRU until the mid
> 1960s.

re:
http://www.garlic.com/~lynn/2017j.html#66 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#71 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#72 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#76 A Computer That Never Was: the IBM 7095

about the time I was doing "global LRU" (using page ref bits set &
periodic reset) at the univ. ... there were papers published in ACM and
academic journals on "local LRU" (original CP67 didn't use reference
bits, replacement 1st for pages belonging to inactive tasks, and if
there was none, basically became FIFO).

At SIGOPS (Asilomar, 14-16Dec81), Jim Gray asked me if I could help
co-worker at Tandem get his Stanford PHD which involved global LRU. I
had worked with Jim at SJR and he knew I had done a lot of work on
global LRU and had an apples-to-apples comparison between local and global
LRU. Some of the "local LRU" forces were heavily lobbying Stanford to
block awarding PHD involving "global LRU".

Unfortunately, I had been blamed for online computer conferencing on
the internal network (larger than arpanet/internet from just about the
beginning until sometime mid-80s) in the late 70s and early
80s. Folklore is that when the corporate executive committee were told
about online computer conferencing (and the internal network), 5of6
wanted to fire me.

When I went to send information ... management said that I wasn't
allowed to (even tho none of the information involved anything after
joining IBM). I've commented that I hoped that it was done as punishment
for online computer conferencing ... rather than their taking part in the
global/local LRU academic dispute. Finally was allowed to send this
http://www.garlic.com/~lynn/2006w.html#email821019
in this old post
http://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?

posts mentioning working set, page replacement, and paging I/O
algorithms
http://www.garlic.com/~lynn/subtopic.html#wsclock

Note that the Atlas documentation that I've referenced said that paging
was supported with an associative value giving the virtual address for each
real address but w/o any process or address space id ... which implied
that it provided a single process at a time with virtual memory larger
than real storage. As a result working set page thrashing controls
wouldn't come into play ... just page replacement algorithm.

also note traditional straight LRU algorithms tend to degenerate to
FIFO under heavy load &/or pathological conditions. In the early 70s, I
did a sleight-of-hand coding trick that resulted in it degenerating to
RANDOM rather than FIFO (which would outperform traditional LRU).
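
A quick simulation (invented code, not Lynn's) of the pathological case being
alluded to: cyclically scanning one more page than there are frames. Strict
LRU then misses on every single reference, exactly as FIFO would, while
random replacement keeps some pages around long enough to get hits.

import random
from collections import OrderedDict

def lru_misses(refs, frames):
    cache, misses = OrderedDict(), 0
    for p in refs:
        if p in cache:
            cache.move_to_end(p)
        else:
            misses += 1
            if len(cache) >= frames:
                cache.popitem(last=False)     # evict least recently used
            cache[p] = True
    return misses

def random_misses(refs, frames, seed=1):
    rng, cache, misses = random.Random(seed), set(), 0
    for p in refs:
        if p not in cache:
            misses += 1
            if len(cache) >= frames:
                cache.discard(rng.choice(sorted(cache)))  # evict at random
            cache.add(p)
    return misses

refs = list(range(5)) * 200           # cyclic scan of 5 pages, 4 frames
print("LRU misses:   ", lru_misses(refs, 4))      # misses every reference
print("random misses:", random_misses(refs, 4))   # noticeably fewer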

posts referencing the 19Oct1982 communication
http://www.garlic.com/~lynn/2007c.html#47 SVCs
http://www.garlic.com/~lynn/2007c.html#56 SVCs
http://www.garlic.com/~lynn/2007f.html#18 What to do with extra storage on new z9
http://www.garlic.com/~lynn/2008c.html#65 No Glory for the PDP-15
http://www.garlic.com/~lynn/2008e.html#16 Kernels
http://www.garlic.com/~lynn/2008f.html#3 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
http://www.garlic.com/~lynn/2008h.html#70 New test attempt
http://www.garlic.com/~lynn/2008h.html#79 Microsoft versus Digital Equipment Corporation
http://www.garlic.com/~lynn/2008j.html#6 What is "timesharing" (Re: OS X Finder windows vs terminal window weirdness)
http://www.garlic.com/~lynn/2008k.html#32 squirrels
http://www.garlic.com/~lynn/2008m.html#7 Future architectures
http://www.garlic.com/~lynn/2010f.html#85 16:32 far pointers in OpenWatcom C/C++
http://www.garlic.com/~lynn/2010g.html#0 16:32 far pointers in OpenWatcom C/C++
http://www.garlic.com/~lynn/2010g.html#44 16:32 far pointers in OpenWatcom C/C++
http://www.garlic.com/~lynn/2010i.html#18 How to analyze a volume's access by dataset
http://www.garlic.com/~lynn/2010l.html#23 OS idling
http://www.garlic.com/~lynn/2010m.html#5 Memory v. Storage: What's in a Name?
http://www.garlic.com/~lynn/2010n.html#41 Central vs. expanded storage
http://www.garlic.com/~lynn/2011.html#70 Speed of Old Hard Disks
http://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)
http://www.garlic.com/~lynn/2011c.html#88 Hillgang -- VM Performance
http://www.garlic.com/~lynn/2011d.html#82 Multiple Virtual Memory
http://www.garlic.com/~lynn/2011f.html#73 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
http://www.garlic.com/~lynn/2011p.html#53 Odd variant on clock replacement algorithm
http://www.garlic.com/~lynn/2012c.html#17 5 Byte Device Addresses?
http://www.garlic.com/~lynn/2012g.html#21 Closure in Disappearance of Computer Scientist
http://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
http://www.garlic.com/~lynn/2012l.html#37 S/360 architecture, was PDP-10 system calls
http://www.garlic.com/~lynn/2012m.html#18 interactive, dispatching, etc
http://www.garlic.com/~lynn/2013c.html#17 I do not understand S0C6 on CDSG
http://www.garlic.com/~lynn/2013c.html#49 What Makes an Architecture Bizarre?
http://www.garlic.com/~lynn/2013d.html#7 What Makes an Architecture Bizarre?
http://www.garlic.com/~lynn/2013f.html#42 True LRU With 8-Way Associativity Is Implementable
http://www.garlic.com/~lynn/2013i.html#30 By Any Other Name
http://www.garlic.com/~lynn/2013k.html#70 What Makes a Tax System Bizarre?
http://www.garlic.com/~lynn/2014e.html#14 23Jun1969 Unbundling Announcement
http://www.garlic.com/~lynn/2014g.html#97 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
http://www.garlic.com/~lynn/2014i.html#98 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
http://www.garlic.com/~lynn/2014l.html#22 Do we really need 64-bit addresses or is 48-bit enough?
http://www.garlic.com/~lynn/2014m.html#138 How hyper threading works? (Intel)
http://www.garlic.com/~lynn/2015c.html#39 Virtual Memory Management
http://www.garlic.com/~lynn/2015g.html#90 IBM Embraces Virtual Memory -- Finally
http://www.garlic.com/~lynn/2016e.html#2 S/360 stacks, was self-modifying code, Is it a lost cause?
http://www.garlic.com/~lynn/2016g.html#40 Floating point registers or general purpose registers
http://www.garlic.com/~lynn/2017d.html#52 Some IBM Research RJ reports
http://www.garlic.com/~lynn/2017d.html#66 Paging subsystems in the era of bigass memory

Bill Findlay

unread,
Nov 26, 2017, 10:22:30 PM11/26/17
to
Anne & Lynn Wheeler <ly...@garlic.com> wrote:
None of this blather supports your claim that Atlas did not work.
Unlike you, I had the privilege of using it, and know that it did.
Retract, please.

--
Bill Findlay

Anne & Lynn Wheeler

unread,
Nov 27, 2017, 12:31:14 AM11/27/17
to

re:
http://www.garlic.com/~lynn/2017j.html#66 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#71 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#72 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#76 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#78 thrashing, was Re: A Computer That Never Was: the IBM 7095

comment about Atlas not working well was by Les Comeau mid-60s, possibly
1964 or 1965 (I have looked for possible reasons for the comment). Not
long after, he left the science center in cambridge and went to gburg and
for a time "owned" one of the Future System sections (where my wife
reported to him, this was before we had even met).

In addition to the previous descriptions I found for associative/content
addressable mapping of virtual to real w/o any address space/process
identifier, this reference doesn't mention any page changed or
reference bits ... implying that it wouldn't be able to implement any
sort of LRU algorithm.
http://www.chilton-computing.org.uk/acl/technology/atlas/p002.htm

One-level Store Concept

Although the main-store of the machine is a combination of both drum
and core stores, to the programmer it may be regarded as a one-level
store, i.e. the core and drum stores have been unified and any
location specified by the programmer can be in either. This
unification is achieved by a set of registers in the V-store known as
the page-address registers. These registers contain a list of all the
blocks currently in the core store. When a particular store access is
required, the page address registers are scanned extremely rapidly in
parallel by special hardware as illustrated in Figure 8.

... snip ...

the hardware modifications adding virtual memory to 360/40 had
associative lookup for each real page to see if it matched that virtual
address ... it also had reference & changed bits and a four bit process
id ... i.e. when task was dispatched the process id was also loaded
... and in addition to matching the virtual address, it also had to
match the process id.
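
A little model (illustrative only, invented field names) of the lookup just
described: every real 4K frame carries a tag of user/process id plus virtual
page, translation succeeds only when both match the current user, and the
use/changed bits are maintained as a side effect.

class FrameTag:
    def __init__(self):
        self.user = None        # 4-bit process/user id
        self.vpage = None       # virtual page number
        self.used = False       # "reference" bit, set by hardware on use
        self.changed = False    # "changed" bit, set on stores

def translate(tags, current_user, vaddr, is_store=False):
    vpage, offset = vaddr >> 12, vaddr & 0xFFF
    # real hardware compared all 64 entries in parallel; a loop stands in
    for frame_no, t in enumerate(tags):
        if t.user == current_user and t.vpage == vpage:
            t.used = True
            if is_store:
                t.changed = True
            return (frame_no << 12) | offset
    raise LookupError("page fault")   # not resident: CP would bring it in

tags = [FrameTag() for _ in range(64)]   # 64 frames = 256K of real memory
tags[5].user, tags[5].vpage = 3, 0x10    # user 3's page 0x10 lives in frame 5
print(hex(translate(tags, current_user=3, vaddr=0x10ABC)))   # -> 0x5abc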

from reference
http://www.garlic.com/~lynn/cp40seas1982.txt

The 64 words were designed to give us a relocate mechanism for each 4k
bytes of our 256K bytes memory. Relocation was achieved by loading a
user number in the search argument register of the associative array,
turning on relocate mode and presenting a CPU address. The match with
user number and address would result in a word selected in the
associative array. The position of the word 0-64 would yield the high
order 6 bits of a memory address. Because of a rather loose cycle time
this was accomplished on the 360/40 with no degradation of the overall
memory cycle. In addition to the translate function, the associative
array was used to record the hardware use and changed status and our
software noted transient and locked conditions relative to a
particular block of 4K bytes in the memory.

... snip ...

other trivia ... in addition to the issues of supporting multiple
concurrent tasks in memory (each page tagged with a process/user id as in
cp40) the one level store concept was also a major performance problem for
both tss/360 as well as the aborted/failed Future System effort, some
posts
http://www.garlic.com/~lynn/submain.html#futuresys

I've periodically mentioned that when I did the CMS page mapped
filesystem implementation ... I took into account what I had learned not
to do from tss/360. some past posts
http://www.garlic.com/~lynn/submain.html#mmap

The CP40 article also discusses that the 360/40 associative (content
addressable) store doesn't scale up:

Since it appears logically that memory mapping, a la S/360/40, is
superior to program mapping then why isn't it prevalent today? The
answer is cost: the original array cost 35 times what a conventional
memory cell did at that time and since then it seems that associative
logic still is roughly 8 to 10 times what conventional logic cost. There
has been little work done with IBM on associative technology and
therefore there is little likelihood that it will ever become price
competitive in our hardware. what we should now look at is absolute cost
and what associative logic can give us in additional function.

... snip ...

360/40 was 256kbytes with 64 4kbyte pages. 360/67 could have 1-2mbytes
... up to 512 4kbyte pages. It had an 8-entry associative array that cached
the most recently used virtual page addresses from segment/page tables
in real storage (rather than each real page tagged with virtual
address). 370/168 went to a 128-entry table look-aside buffer "cache"
that was four-way associative and five-bit indexed (bits from
virtual page address). If the virtual address wasn't found in one of the
four indexed entries, it would replace one of the four with real
addresses loaded from segment/page tables.
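
A rough model (invented names, with a crude FIFO stand-in for the actual
replacement choice) of the 128-entry, four-way, five-bit-indexed organization
described above: low-order bits of the virtual page number select one of 32
sets, the four entries in that set are compared, and on a miss one entry is
refilled from the in-storage segment/page tables.

SETS, WAYS, PAGE_SHIFT = 32, 4, 12

class TLB:
    def __init__(self):
        # each set holds up to WAYS (virtual_page, real_frame) pairs;
        # list order doubles as a crude replacement order
        self.sets = [[] for _ in range(SETS)]

    def translate(self, vaddr, page_tables):
        vpage, offset = vaddr >> PAGE_SHIFT, vaddr & ((1 << PAGE_SHIFT) - 1)
        s = self.sets[vpage % SETS]            # 5-bit index from page number
        for vp, frame in s:                    # compare the four ways
            if vp == vpage:
                return (frame << PAGE_SHIFT) | offset
        frame = page_tables[vpage]             # miss: walk the page tables
        if len(s) >= WAYS:
            s.pop(0)                           # replace one of the four
        s.append((vpage, frame))
        return (frame << PAGE_SHIFT) | offset

tlb = TLB()
page_tables = {0x123: 0x0A}                    # virtual page -> real frame
print(hex(tlb.translate(0x123456, page_tables)))  # miss, then -> 0xa456
print(hex(tlb.translate(0x123789, page_tables)))  # hit in the TLB -> 0xa789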

jmfbahciv

unread,
Nov 27, 2017, 9:20:32 AM11/27/17
to
COBOL was used for the Patriot missiles. COBOL knew how to do
decimal arithmetic.

>
> I would guess scientific work that once used DEC mini-computers
> in the lab now uses PCs.

Usually, the minis were used to collect raw data which was
down-loaded to a mainframe for calculations, analysis, etc.

/BAH

William Pechter

unread,
Nov 27, 2017, 12:33:18 PM11/27/17
to
In article <PM00055EF...@aca418c5.ipt.aol.com>,
jmfbahciv <See....@aol.com> wrote:
>hanc...@bbs.cpcn.com wrote:
>
>Usually, the minis were used to collect raw data which was
>down-loaded to a mainframe for calculations, analysis, etc.
>
>/BAH

By the time the 11/780 was out and the Patriot Missile was being deployed,
the analysis and calculations were often done on 32-bit Vaxes since they
were more capable than the PDPs.

The Raytheon militarized Vaxes allowed them to field the same Vax used
back home. The Motorola 68k also gave them a significant jump
in embedded CPUs.

Bill

hanc...@bbs.cpcn.com

unread,
Nov 27, 2017, 1:50:17 PM11/27/17
to
Ok, but AFAIK, the Fortran the code was written in is still
available and running (it was on our machine when I last checked
and modified an old program).

Bill Findlay

unread,
Nov 27, 2017, 3:33:24 PM11/27/17
to
Anne & Lynn Wheeler <ly...@garlic.com> wrote:
>
> re:
> http://www.garlic.com/~lynn/2017j.html#66 A Computer That Never Was: the IBM 7095
> http://www.garlic.com/~lynn/2017j.html#71 A Computer That Never Was: the IBM 7095
> http://www.garlic.com/~lynn/2017j.html#72 A Computer That Never Was: the IBM 7095
> http://www.garlic.com/~lynn/2017j.html#76 A Computer That Never Was: the IBM 7095
> http://www.garlic.com/~lynn/2017j.html#78 thrashing, was Re: A Computer
> That Never Was: the IBM 7095
>
> comment about Atlas not working well was by Les Comeau mid-60s, possibly
> 1964 or 1965 (I have looked for possible reasons for the comment).

But repeated by you ad nauseam.
Stop blaming others.
Give evidence or retract.

--
Bill Findlay

J. Clarke

unread,
Nov 27, 2017, 9:45:13 PM11/27/17
to
Yes, that Fortran is available but it is still at an extended Fortran
77 standard, which in the modern world makes it a pain in the butt to
work with, especially considering that the rest of the world has gone
IEEE which means that it's another pain in the butt to get
calculations performed on the mainframe to exactly match calculations
performed on an IEEE system.

People who say "well the ancient whatever is still available" don't
have to live with it on a day to day basis.

Anne & Lynn Wheeler

unread,
Nov 27, 2017, 9:50:22 PM11/27/17
to

jmfbahciv <See....@aol.com> writes:
> Usually, the minis were used to collect raw data which was
> down-loaded to a mainframe for calculations, analysis, etc.

SLAC & CERN did bit-slice 370 ... 1st 168Es that ran problem state 370
sufficient to execute fortran programs (at 370/168 throughput, 3mips)
... doing initial data reduction along line ... then upgraded to 3081E

slac/cern refs
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3069.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3680.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3753.pdf

in this (ibm-main mailing list) recent post
http://www.garlic.com/~lynn/2017d.html#78 Mainframe operating systems?

trivia: this was about the time of xt/370 ... card with 68k programmed to
emulate 370 sufficient to run highly modified version of vm/370 ...
running at 100kips. A couple years later IBM Germany had small chipset
that implemented full 370 running at 168 speed (3mips). A german
mainframe clone maker came into possession of a copy of the detailed
specification (for the "ROMAN" chipset). They had partnered with Amdahl
and when somebody from Amdahl saw it, they took possession of it and sent
it to me (being illegal for it to be out of IBM's possession).

a few posts mentioning ROMAN
http://www.garlic.com/~lynn/2011e.html#16 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
http://www.garlic.com/~lynn/2012e.html#46 A bit of IBM System 360 nostalgia
http://www.garlic.com/~lynn/2012l.html#72 zEC12, and previous generations, "why?" type question - GPU computing
http://www.garlic.com/~lynn/2012l.html#77 zEC12, and previous generations, "why?" type question - GPU computing
http://www.garlic.com/~lynn/2014m.html#172 Slushware

other past post mentioning 168Es
http://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
http://www.garlic.com/~lynn/2003n.html#8 The IBM 5100 and John Titor
http://www.garlic.com/~lynn/2012l.html#72 zEC12, and previous generations, "why?" type question - GPU computing
http://www.garlic.com/~lynn/2013l.html#27 World's worst programming environment?
http://www.garlic.com/~lynn/2014.html#85 the suckage of MS-DOS, was Re: 'Free Unix!
http://www.garlic.com/~lynn/2015.html#69 Remembrance of things past
http://www.garlic.com/~lynn/2015.html#79 Ancient computers in use today
http://www.garlic.com/~lynn/2015.html#87 a bit of hope? What was old is new again
http://www.garlic.com/~lynn/2015b.html#28 The joy of simplicity?
http://www.garlic.com/~lynn/2015c.html#52 The Stack Depth
http://www.garlic.com/~lynn/2016b.html#78 Microcode
http://www.garlic.com/~lynn/2016e.html#24 Is it a lost cause?
http://www.garlic.com/~lynn/2017c.html#10 SC/MP (1977 microprocessor) architecture

Anne & Lynn Wheeler

unread,
Nov 28, 2017, 2:10:55 AM11/28/17
to

jmfbahciv <See....@aol.com> writes:
> Usually, the minis were used to collect raw data which was
> down-loaded to a mainframe for calculations, analysis, etc.

jmfbahciv

unread,
Nov 28, 2017, 8:05:54 AM11/28/17
to
I wonder if they used the VAX's FORTRAN libraries. When I was told about
them using COBOL, the VAXes hadn't been a glint in Jud's eye yet.

/BAH

Bill Findlay

unread,
Nov 28, 2017, 8:36:21 AM11/28/17
to
Huge <Hu...@nowhere.much.invalid> wrote:
> B-o-o-o-o-o-o-o-o-ring.
>
> You can go and argue with each other in my killfile.

I quake in terror.

If a group devoted to computer history is not now interested in
historical facts, I see no reason for its continued existence.

--
Bill Findlay

Peter Flass

unread,
Nov 28, 2017, 2:07:17 PM11/28/17
to
The 68K - a great improvement over its successors.

--
Pete

Peter Flass

unread,
Nov 28, 2017, 2:07:18 PM11/28/17
to
That's surprising, considering that PL/I has had IEEE FP support for
years.

>
> People who say "well the ancient whatever is still available" don't
> have to live with it on a day to day basis.
>



--
Pete

Charles Richmond

unread,
Nov 28, 2017, 6:00:37 PM11/28/17
to
On 11/26/2017 4:32 AM, Huge wrote:
> On 2017-11-24, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>> On 24 Nov 2017 15:28:47 GMT
>> jmfbahciv <See....@aol.com> wrote:
>>
>>> that wouldn't have mattered. Software development, support and
>>> maintenance of each would cost more as time went on. DEC had a similar
>>> conclusion and decided to concentrate on one _hardware_ product line in
>>> the early 80s. Don't forget that both companies were hardware, not
>>> software, companies.
>>
>> I always thought of IBM as primarily a services company, they
>> leased you hardware and software so that they could sell you consultancy
>> services deciding what you want, installation services to set things up,
>> maintenance contracts to keep it going as well as sundry other items and be
>> the first port of call whenever you wanted something else or a bigger faster
>> system.
>
> +1
>
>

At big companies like General Dynamics, IBM had "resident engineers" who
kept an ear to the ground and knew when additional computing power or
storage space would be needed. The engineer would have a proposal
written up and on the desk of the decision-making person... before the
call ever went out to request proposals for new equipment.

--
numerist at aquaporin4 dot com

J. Clarke

unread,
Nov 28, 2017, 10:28:29 PM11/28/17
to
I don't know why IBM is so down on Fortran on the mainframe--they have
an updated Fortran but only on the Power architecture.

Quadibloc

unread,
Nov 29, 2017, 10:53:01 AM11/29/17
to
On Tuesday, November 28, 2017 at 8:28:29 PM UTC-7, J. Clarke wrote:

> I don't know why IBM is so down on Fortran on the mainframe--they have
> an updated Fortran but only on the Power architecture.

Since they sell the mainframes at high prices for the amount of CPU power they
provide, basically to database users who need high reliability and security, they
feel that few people would buy a mainframe to run Fortran on it. This is not
unreasonable, although it shows an abandonment of the old idea of a 360-degree
computer.

John Savard

Quadibloc

unread,
Nov 29, 2017, 11:34:19 AM11/29/17
to
On Monday, November 27, 2017 at 1:33:24 PM UTC-7, Bill Findlay wrote:

> Give evidence or retract.

We do have a post by John Levine in this thread:

"The paging worked but the performance was pretty bad. We didn't
understand paging algorithms like working set and LRU until the mid
1960s."

In the case of the Atlas, its single-level store (virtual memory) system
embraced three physical forms of storage: core storage, drum storage, and
random-access tape storage. While core and drum storage were jointly referred to
as "central storage", blocks of memory had to be transferred from the drum to
the core before use.

Even had the algorithms on the Atlas worked very well, using data from tape would have been much slower than getting it from disk; on the other hand, whether they were bad or good, a drum memory, like a head-per-track disk, would be relatively fast, with only a modest performance gap between it and core memory. Even with poor algorithms, the system would be usable.

In the case of IBM's virtual memory, tape was not included, but while there was
an optional drum peripheral which it could include, mostly systems would have
both core and disk.

So here good performance would be expected - something obviously slow like tape
wasn't involved - but it wouldn't be inevitable, disks being significantly
slower than drums.

I remember from contemporary articles in Datamation that IBM's virtual memory
was seen as... a way to sell more core memory. It let users make use of programs
with big memory requirements, getting them hooked, but since the performance
would not be acceptable without adequate core memory, they would end up buying
it. So it's clear enough that IBM's early VM had its problems.

John Savard

Anne & Lynn Wheeler

unread,
Nov 29, 2017, 12:25:26 PM11/29/17
to
Quadibloc <jsa...@ecn.ab.ca> writes:
> I remember from contemporary articles in Datamation that IBM's virtual memory
> was seen as... a way to sell more core memory. It let users make use of programs
> with big memory requirements, getting them hooked, but since the performance
> would not be acceptable without adequate core memory, they would end up buying
> it. So it's clear enough that IBM's early VM had its problems.

360/67 typically had 768k-1m real storage, cp/67 something like 100-160
pageable pages (after real store requirements). 2301 & 2303 fixed head
drums were similar with 4mbyte of storage ... except 2301 read/wrote
four heads in parallel for 1.2mbyte/sec data transfer.

I've mentioned that I rewrote the CP/67 page replacement algorithm and the
working set scheduling algorithm (for thrashing control), did lots of
critical-path performance work, added multiple chained page transfers for
both the 2301 fixed head drum (increased 2301 throughput from 80pages/sec to
270pages/sec) and 2314 disk ... and implemented ordered seek queueing for the
2314 (improving 2314 disk i/o throughput both for file i/o and page i/o).
http://www.garlic.com/~lynn/2017j.html#71 A Computer That Never Was: the IBM 7095
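
(For anyone who hasn't run into ordered seek queueing: the idea is to service
queued requests in cylinder order, sweeping the arm in one direction and then
back, instead of first-come-first-served. A toy sketch in Python -- purely
illustrative, nothing to do with the actual CP/67 code, and the request
numbers are made up -- showing the reduction in arm travel:)

    # FIFO vs ordered (elevator/SCAN) servicing of queued disk requests.
    # Seek cost is modeled crudely as cylinders of arm travel.

    def arm_travel(start, cylinders):
        pos, total = start, 0
        for cyl in cylinders:
            total += abs(cyl - pos)
            pos = cyl
        return total

    def scan_order(start, cylinders):
        # sweep upward from the current arm position, then sweep back down
        up = sorted(c for c in cylinders if c >= start)
        down = sorted((c for c in cylinders if c < start), reverse=True)
        return up + down

    queued = [183, 37, 122, 14, 124, 65, 67]   # pending cylinder numbers
    arm = 53
    print(arm_travel(arm, queued))                    # 640 cylinders, FIFO
    print(arm_travel(arm, scan_order(arm, queued)))   # 299 cylinders, ordered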

the "official" operating system for 360/67 was TSS/360 ... which also
implemented one level store. TSS/360 never quite made it to production
quality ... in part because of its performance. Early on, when CP67 came to
the university ... the IBM SE supporting TSS/360 and I had to share
dedicated machine time on the weekend (I was responsible for os/360
support as well as playing with cp67). Early in the time-frame that CP67 was
installed (before I got to rewrite a lot of stuff), we did a simulated
fortran edit, compile, link & go script. It turns out that CP67 (before
most of my performance improvements) supported 30-35 users with better
throughput and interactive response than TSS/360 did with four users.

Two big TSS/360 performance issues were 1) bloated fixed kernel size,
leaving fewer pageable pages for running applications and 2) single level
store that did synchronous 4k block transfers, one at a time (while the
application was blocked). This is compared to normal file i/o that can
do contiguous allocation with multiple block transfers and multiple
buffered read-ahead and write-behind overlapped with execution
... significantly improving cpu use, file i/o throughput, and
application throughput. This is also one of the problems with the Future
System effort, which also specified a TSS/360-like single level store; those
performance issues contributed to its demise.
http://www.garlic.com/~lynn/submain.html#futuresys
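
To make the contrast concrete, here is a minimal double-buffered read-ahead
sketch (my own illustration in Python with made-up names -- not TSS/360 or
CMS code): one block is being processed while the next is already being
fetched, so transfer time overlaps the compute instead of adding to it.

    import threading, queue

    BLOCK = 4096

    def reader(f, q):
        # producer: keeps staging blocks ahead of the consumer;
        # the bounded queue acts as the pool of read-ahead buffers
        while True:
            buf = f.read(BLOCK)
            q.put(buf)
            if not buf:
                break

    def process_file(path, handle_block):
        q = queue.Queue(maxsize=4)          # four buffers of read-ahead
        with open(path, "rb") as f:
            t = threading.Thread(target=reader, args=(f, q))
            t.start()
            while True:
                buf = q.get()               # usually already staged
                if not buf:
                    break
                handle_block(buf)           # compute overlaps the next read
            t.join()

A purely synchronous loop (read a block, then process it, then read the next)
instead pays the full transfer time plus the compute time for every block.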

As I mentioned previously, when I did CMS paged-mapped filesystem, I had
learned what not to do from TSS/360 single-level store ... aka being
able to support filesystem buffering non-blocking multiple page
transfers with some other sleight-of-hand coding tricks. While CMS normal
filesystem I/O (similar to os/360 model) significantly outperformed
TSS/360 single level store paradigm ... I could get three times
filesystem throughput for moderately I/O bound workload (compared
to standard cms filesystem) ... past posts
http://www.garlic.com/~lynn/submain.html#mmap

a problem that I battled, tho, was that TSS/360 did address the (OS/360)
problem of executables having (virtual) address-specific binding. CMS
used a lot of OS/360 code that relied on the os/360 relocatable adcon
convention that was updated at load time ... which gave me constant
headaches in a real page-mapped paradigm ... past posts
http://www.garlic.com/~lynn/submain.html#adcon

other posts in this thread
http://www.garlic.com/~lynn/2017j.html#66 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#72 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#76 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#78 thrashing, was Re: A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#79 thrashing, was Re: A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#81 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#82 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#83 Ferranti Atlas paging
http://www.garlic.com/~lynn/2017j.html#84 VS/Repack
http://www.garlic.com/~lynn/2017j.html#85 Ferranti Atlas paging
http://www.garlic.com/~lynn/2017j.html#87 Ferranti Atlas paging

John Levine

unread,
Nov 29, 2017, 12:38:08 PM11/29/17
to
In article <c67eab66-86c0-41dc...@googlegroups.com>,
Quadibloc <jsa...@ecn.ab.ca> wrote:
>> I don't know why IBM is so down on Fortran on the mainframe--they have
>> an updated Fortran but only on the Power architecture.
>
>Since they sell the mainframes at high prices for the amount of CPU power they
>provide, basically to database users who need high reliability and security, they
>feel that few people would buy a mainframe to run Fortran on it. This is not
>unreasonable, although it shows an abandonment of the old idea of a 360-degree
>computer.

IBM tried to make scientific mainframe computing work for a long time.
I have a copy of the manual for the ESA/390 vector facility from the
early 1990s, and the zSeries still has a similar vector facility, most
recently updated to do vector decimal arithmetic. There is a full set
of IEEE float vector instructions, too.

But it's been clear for quite a while that it's a lot more cost effective
to build computational systems out of ordinary CPU chips without all of the
expensive overhead of a mainframe.

Anne & Lynn Wheeler

unread,
Nov 29, 2017, 12:53:03 PM11/29/17
to
Quadibloc <jsa...@ecn.ab.ca> writes:
> In the case of IBM's virtual memory, tape was not included, but while there was
> an optional drum peripheral which it could include, mostly systems would have
> both core and disk.
>
> So here good performance would be expected - something obviously slow like tape
> wasn't involved - but it wouldn't be inevitable, disks being significantly
> slower than drums.

re:
http://www.garlic.com/~lynn/2017j.html#90 thrashing, was Re: A Computer That Never Was: the IBM 7095

other trivia: IBM did do the 3850 mass storage system,
which had virtual 3330s and real 3330s and staged (paged) a 3330
cylinder to/from tape.
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_3850.html
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_3850b.html

The MSS transferred data from a cartridge to the 3330 DASD drive when
requested, in a process called "staging." Once staged, the data were
treated the same as any other data resident on a 3330 disk
file. Conversely, when additional 3330 space was needed for incoming
data, a cylinder of 3330 data was automatically selected for "destaging"
back to its data cartridge. Each cartridge was capable of storing 202
cylinders in the 3330 format.

... snip ...

even more trivia: In the late 70s, I had done cmsback for internal systems
... that nightly backed up/archived new/changed data to tape. This went
through a number of internal releases ... then Almaden enhanced it for
release as "workstation datasave" (i.e. in addition to supporting cms
filesystems, it had network support for doing something similar for
workstations & PCs that ran the backup/archive app). computerworld ref
https://books.google.com/books?id=_4os_r4VnO4C&pg=PA53&lpg=PA53&dq=ibm+workstation+datasave&source=bl&ots=jlC0tGU-NG&sig=v1z2jvt6AU_1ffIhlBuFPWoSVs0&hl=en&sa=X&ved=0ahUKEwiCudGcq-TXAhUVzWMKHQznAwAQ6AEIUDAG#v=onepage&q=ibm%20workstation%20datasave&f=false

this was then enhanced and rebranded as ADSM ... adstar storage manager
... this was in the period when IBM had gone into the red and was being
reorganized into the 13 "baby blues" in preparation for breaking up the
company (later reversed by new CEO) ... and the disk division was the
furthest along being rebranded as "adstar". However, while the general
breakup was reversed ... "adstar" was eventually sold off
https://en.wikipedia.org/wiki/Adstar

... and ADSM was moved to the IBM Tivoli unit and rebranded as TSM.
https://en.wikipedia.org/wiki/IBM_Tivoli_Storage_Manager

old "CMSBACK" email
http://www.garlic.com/~lynn/lhwemail.html#cmsback
and past posts
http://www.garlic.com/~lynn/submain.html#cmsback

Quadibloc

unread,
Nov 29, 2017, 1:13:35 PM11/29/17
to
On Wednesday, November 29, 2017 at 10:38:08 AM UTC-7, John Levine wrote:

> IBM tried to make scientific mainframe computing work for a long time.
> I have a copy of the manual for the ESA/390 vector facility from the
> early 1990s, and the zSeries still has a similar vector facility, most
> recently updated to do vector decimal arithmetic. There is a full set
> of IEEE float vector instructions, too.

My understanding is that the vector arithmetic capability recently added to
z/Series is similar to SSE and AVX, while the ESA/390 vector facility was
similar to a Cray I.

So, while the vector facility of the ESA/390 was definitely intended for
scientific computing, that of the current z/Series is simply there because it
would be perceived as unusual and inappropriate for a mainframe to lack a
facility found on common garden-variety microprocessors from Intel and others.

So while I agree with your statement, which is undeniably true, that they tried
to make mainframe scientific computing work for a long time, the current vector
facility does not appear to me to be evidence of a renewed commitment to do so.

John Savard

Anne & Lynn Wheeler

unread,
Nov 29, 2017, 1:42:06 PM11/29/17
to
John Levine <jo...@iecc.com> writes:
> IBM tried to make scientific mainframe computing work for a long time.
> I have a copy of the manual for the ESA/390 vector facility from the
> early 1990s, and the zSeries still has a similar vector facility, most
> recently updated to do vector decimal arithmetic. There is a full set
> of IEEE float vector instructions, too.
>
> But it's been clear for quite a while that it's a lot more cost effective
> to build computational systems out of ordinary CPU chips without all of the
> expensive overhead of a mainframe.

IBM was trying to get back into the univ. market and in the early 80s
had initially formed ACIS with $300M to spread around univ. recent ref
http://www.garlic.com/~lynn/2017i.htmL#76 A Computer That Never Was: the IBM 7095

in the mid-80s, ... part of this was playing in NSF supercomputer
centers (at univ) ... adding vector processing option to 3090. The
engineers doing trout 1.5 (internal code name, announced as 3090)
complained that they had so increased (scalar) floating point
performance so that it ran at memory speed. They claimed a major
motivation for vector in the past was that floating point was so much
slower than memory ... that memory could support dozens of floating
point units all running concurrently. Having speeded up (scalar)
floating point to memory speed, they felt that only under very special
conditions would 3090 vector have significantly higher throughput than
scalar (since scalar was already running as fast as memory could feed
it).

There was a guy at the Palo Alto science center who had been involved in
doing the APL microcode assist for the 370/145 (ran APL with the throughput
of APL running on a 370/168). In the early 80s, he was responsible for the
FORTRAN-Q (internal reference) optimizations ... which eventually got
released as FORTRAN-HX (also doing a lot of work with SLAC up the road). In this
time-frame SLAC (& CERN) had lots of activities ... recent reference
SLAC/CERN doing their own 370 that ran FORTRAN
http://www.garlic.com/~lynn/2017j.htmL#81 A Computer That Never Was: the IBM 7095

however, IBM then outsourced some amount of compiler work and there were
a lot of internal complaints that the Q/HX optimization technology was
given away free to the contractor. some recent posts mentioning Q/HX
http://www.garlic.com/~lynn/2014g.html#71 Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
http://www.garlic.com/~lynn/2015h.html#35 high level language idea

In the late 70s and early 80s, I was blamed for online computer
conferencing on the internal network (larger than arpanet/internet from
just about the beginning until sometime mid-80s). Folklore is that when
the corporate executive committee was told about online computer
conferencing (and the internal network), 5 of 6 wanted to fire me.
computer conferencing posts
http://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
http://www.garlic.com/~lynn/subnetwork.html#internal

however, the 6th then provided funding out of his office ... including
for a project I called HSDT ... posts
http://www.garlic.com/~lynn/subnetwork.html#hsdt

which included working with the director of NSF on interconnecting the
NSF supercomputer centers. HSDT was supposed to get $20M, but then
congress cuts the budget, some other things happen, and finally NSF
releases an RFP (in part based on what we already had running). Internal
politics prevents us from bidding ... the NSF director tries to help by
writing the company a letter (with support from other agencies) but
that just makes the internal politics worse (as do statements that
what we already had running was at least 5yrs ahead of the RFP
responses). As regional networks connect into the centers, it morphs
into the NSFNET backbone, precursor to the modern internet.
https://www.technologyreview.com/s/401444/grid-computing/

old email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet
past posts
http://www.garlic.com/~lynn/subnetwork.html#nsfnet

Part of HSDT included high-speed link into Clementi's IBM E&S lab in
Kingston (not to be confused with the supercomputer effort in Kingston).
Clementi's lab included a 3090 with vector processing facility, but also
several Floating Point System boxes (which also had 40mbyte/sec disk
array support). 3090VF and FPS post, including mention that cornell
"supercomputer center" had 3090-400VF and five FPS boxes
http://www.garlic.com/~lynn/2011h.html#73 Vector Processing on the 3090
http://www.garlic.com/~lynn/2011h.html#74 Vector Processing on the 3090

other past posts mentioning Clementi's lab
http://www.garlic.com/~lynn/2013i.html#14 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2014b.html#4 IBM Plans Big Spending for the Cloud ($1.2B)
http://www.garlic.com/~lynn/2014c.html#63 11 Years to Catch Up with Seymour
http://www.garlic.com/~lynn/2014c.html#72 11 Years to Catch Up with Seymour
http://www.garlic.com/~lynn/2014j.html#35 curly brace languages source code style quides
http://www.garlic.com/~lynn/2016e.html#95 IBM History
http://www.garlic.com/~lynn/2017h.html#50 System/360--detailed engineering description (AFIPS 1964)

we were then doing cluster scaleup for our HA/CMP product ... recent
reference
http://www.garlic.com/~lynn/2017j.html#67 US to regain supercomputer supremacy with Summit
past posts
http://www.garlic.com/~lynn/subtopic.html#hacmp


working with national labs and others on scientific/technical scaleup and
with RDBMS vendors for commercial scaleup ... old reference to JAN1992
meeting in Ellison's conference room
http://www.garlic.com/~lynn/95.html#13

within a few weeks of the Ellison meeting, cluster scaleup was
transferred, announced as an IBM supercomputer, and we were told we couldn't
work on anything with more than four processors (possibly in part
because mainframe DB2 complained that if this went ahead, it would be at
least 5yrs ahead of them) ... some old email from the period until just
prior to the transfer (including discussion of work with LLNL) ... we
depart IBM a few months later.
http://www.garlic.com/~lynn/lhwemail.html#medusa

11FEB1992 press, announce for scientific and technical "ONLY"
http://www.garlic.com/~lynn/2001n.html#6000clusters1
11MAY1992 press, IBM "caught by surprise" by interest in
cluster supercomputing
http://www.garlic.com/~lynn/2001n.html#6000clusters2

hanc...@bbs.cpcn.com

unread,
Nov 29, 2017, 2:35:32 PM11/29/17
to
On Tuesday, November 28, 2017 at 10:28:29 PM UTC-5, J. Clarke wrote:

> I don't know why IBM is so down on Fortran on the mainframe--they have
> an updated Fortran but only on the Power architecture.

That is a very good question.

hanc...@bbs.cpcn.com

unread,
Nov 29, 2017, 2:40:04 PM11/29/17
to
On Wednesday, November 29, 2017 at 10:53:01 AM UTC-5, Quadibloc wrote:

> > I don't know why IBM is so down on Fortran on the mainframe--they have
> > an updated Fortran but only on the Power architecture.
>
> Since they sell the mainframes at high prices for the amount of CPU power they
> provide, basically to database users who need high reliability and security, they
> feel that few people would buy a mainframe to run Fortran on it. This is not
> unreasonable, although it shows an abandonment of the old idea of a 360-degree
> computer.

True, I doubt these days someone would go out and buy a Z series just to
run Fortran on it. I presume the few who need a supercomputer would get
a specialty machine designed for that, not a Z series.

However, as you say, there was great value in a "360-degree" computer.
I could see a great many companies having a research or engineering
unit that might have existing programs to run in Fortran on the mainframe,
and wished to continue using their existing arrangement.
Universities could easily need both business for administration and
science for research.

hanc...@bbs.cpcn.com

unread,
Nov 29, 2017, 2:57:45 PM11/29/17
to
On Wednesday, November 29, 2017 at 11:34:19 AM UTC-5, Quadibloc wrote:

> I remember from contemporary articles in Datamation that IBM's virtual memory
> was seen as... a way to sell more core memory. It let users make use of programs
> with big memory requirements, getting them hooked, but since the performance
> would not be acceptable without adequate core memory, they would end up buying
> it. So it's clear enough that IBM's early VM had its problems.

I am definitely not a technical expert on this, but I always
questioned virtual memory--as compared to other methods or choices.
I wondered if it was a marketing gimmick.

In the old days, programmers broke programs down into tiny parts
to fit on a limited size machine. They might have done their
own "virtualization" by calling subroutines in and out from disk
or using SEGMENTATION in COBOL. Either way, performance suffered
since data might have had to be processed multiple times, or code
brought in from disk.

IBM's virtual storage _supposedly_ made that all unnecessary. But
veterans of the early S/370 told me one had to code very carefully
(there are guides on coding for virtual storage) so that paging
works efficiently, and one could "squeeze in" only so much, lest the
computer get into thrashing.

I also wondered if the cost in machine and operating system overhead
to support virtual storage might have been better spent just getting
more real memory. In other words, virtual storage required more
instructions (higher cost) and more complexity in the operating system
to manage it (higher cost), and of course, more high speed disk space.
I guess I'm saying they could've taken a S/360-40 with DOS Rel 26 and
rebuilt it with faster circuits and memory (I would've added whatever
_application_ instruction enhancements S/370 offered, but left out Dynamic
Address Translation.)

I don't know how virtual storage and CICS worked together in the 1980s
in terms of performance and machine efficiency.

Indeed, I see storage memory as two issues: 1) big enough to
accommodate some extremely large complex programs and 2) big
enough to run multi-programming and fit several programs in the
machine at the same time.

Today, I think a Z series comes with terabytes of memory, so do
they even need to bother with virtual storage?



Bill Findlay

unread,
Nov 29, 2017, 6:40:59 PM11/29/17
to
Quadibloc <jsa...@ecn.ab.ca> wrote:
> On Monday, November 27, 2017 at 1:33:24 PM UTC-7, Bill Findlay wrote:
>
>> Give evidence or retract.
>
> We do have a post by John Levine in this thread:
>
> "The paging worked but the performance was pretty bad. We didn't
> understand paging algorithms like working set and LRU until the mid
> 1960s."

He provided no evidence for his claim, so I can rebut it to the same
standard:
the Atlas paging algorithm was NOT poor.

> In the case of the Atlas, its single-level store (virtual memory) system
> embraced three physical forms of storage: core storage, drum storage, and
> random-access tape storage. While core and drum storage were jointly referred to
> as "central storage", blocks of memory had to be transferred from the drum to
> the core before use.
>
> Even had the algorithms on the Atlas worked very well, using data from
> tape would have been much slower than getting it from disk ....

Atlas did NOT page to/from tape.
Where on earth did you get that idea?

--
Bill Findlay

Quadibloc

unread,
Nov 29, 2017, 9:13:45 PM11/29/17
to
An original paper describing the Atlas single-level store. There's a copy online at Chilton Computing.

J. Clarke

unread,
Nov 29, 2017, 9:23:21 PM11/29/17
to
We don't have the mainframe to run Fortran. We have the mainframe to
run our production loads, some modules of which are written in Fortran.

J. Clarke

unread,
Nov 29, 2017, 9:25:14 PM11/29/17
to
Get "scientific" out of your heads. Contrary to popular belief, in
the real world real financial people with real half-trillion-dollar
portfolios use Fortran to do a wide range of calculations involving
those portfolios. Further, they do those calculations using floating
point.

Bill Findlay

unread,
Nov 29, 2017, 9:31:19 PM11/29/17
to
Quadibloc wrote:
[claiming that Atlas paged to/from magnetic tape is stated in]:
> An original paper describing the Atlas single-level store. There's a copy
> online at Chilton Computing.

You have suffered a major failure of comprehension.

--
Bill Findlay

John Levine

unread,
Nov 29, 2017, 10:16:35 PM11/29/17
to
In article <19282b39-33f8-4a81...@googlegroups.com>,
<hanc...@bbs.cpcn.com> wrote:
>I am definitely not a technical expert on this, but I always
>questioned virtual memory--as compared to other methods or choices.
>I wondered if it was a marketing gimmick.

Yes and no. VM was originally a way to automate what we did manually
and painfully with overlay loaders. Once there were decent paging
algorithms you could dispense with the overlays. Despite some wishful
thinking at the time, you couldn't run a program in less memory than
it needed, that was what thrashing was.

>Today, I think a Z series comes with terabytes of memory, so do
>they even need to bother with virtual storage?

Yes, definitely. In the ensuing half century we've learned to use VM
hardware to manage disk I/O (map the whole file into the address space
and use page faults to trigger the actual I/O.) It also lets you use
shared libraries and shared data segments with the VM hardware
managing what's shared among what processes. Even if everything's in
RAM all the time, the VM is still useful for managing what's in what
logical place in each address space, and for keeping programs from
stomping and/or spying on each other.
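
A rough sketch of the "map the file, let page faults do the I/O" idea, using
Python's mmap as a stand-in (the file name is made up; any OS with
demand-paged mapped files works the same way):

    import mmap

    # Map a whole file into the address space.  Bytes are brought in on
    # demand by the paging hardware when touched, not by explicit read()s.
    with open("big.dat", "r+b") as f:
        mem = mmap.mmap(f.fileno(), 0)   # length 0 = map the entire file
        first = mem[0]                   # touching a byte faults its page in
        mem[4096] = 0xFF                 # a store dirties a page...
        mem.flush()                      # ...written back when flushed/evicted
        mem.close()

Shared libraries fall out of the same mechanism: the same frames get mapped
read-only into many address spaces.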

R's,
John

Bob Eager

unread,
Nov 30, 2017, 3:30:27 AM11/30/17
to
On Thu, 30 Nov 2017 03:16:34 +0000, John Levine wrote:

>>Today, I think a Z series comes with terabytes of memory, so do they
>>even need to bother with virtual storage?
>
> Yes, definitely. In the ensuing half century we've learned to use VM
> hardware to manage disk I/O (map the whole file into the address space
> and use page faults to trigger the actaul I/O.)

Hmmm. Multics and EMAS both did that in the 1960s.

--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org

Peter Flass

unread,
Nov 30, 2017, 8:05:25 AM11/30/17
to
I think that, or the reverse, is true of a lot of shops. This is where IBM
is missing the boat by not upgrading FORTRAN. PPPOE was almost all COBOL,
but someone had a simulation written in FIV that he ran occasionally.
Likewise I understand that many DEC-10 shops might have an application or
two written in COBOL which they would run alongside their mainly FORTRAN
(or other) workload.

--
Pete

jmfbahciv

unread,
Nov 30, 2017, 10:30:22 AM11/30/17
to
I thought drums were slower than disks. DEC's swapping algorithms
picked disk first, then would resort to drums if the swap space on the
disk was in active use. (note this is about 4S72 running on a 48K
KA-10).

/BAH

jmfbahciv

unread,
Nov 30, 2017, 10:30:22 AM11/30/17
to
Virtual storage and virtual memory are two different things (at least
w.r.t. DEC's definitions).

Virtual memory on PDP-10s allowed a compilation of a program larger
than physical memory.


/BAH

Joe Pfeiffer

unread,
Nov 30, 2017, 11:07:38 AM11/30/17
to
hanc...@bbs.cpcn.com writes:
>
> Indeed, I see storage memory as two issues: 1) big enough to
> accommodate some extremely large complex programs and 2) big
> enough to run multi-programming and fit several programs in the
> machine at the same time.
>
> Today, I think a Z series comes with terabytes of memory, so do
> they even need to bother with virtual storage?

"Making memory look bigger" was always something of a red herring. The
real advantage virtual memory gives you is the page table lets you
control access to different parts of your program, lets you get away
without relocatable code, and makes sharing memory between programs
practical. That parts of your program marked as inaccessible need not
exist at all, and that you could swap parts of your program to disk
if your computer was underprovisioned, are just bonuses.
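
A toy page-table lookup makes the point (a sketch only; the field layout is
invented, not any real architecture's):

    PAGE = 4096
    # virtual page number -> (frame number, writable?, present?)
    page_table = {
        0: (12, False, True),     # read-only code page
        1: (7,  True,  True),     # read/write data page
        2: (None, False, False),  # not resident (or never allocated)
    }

    def translate(vaddr, want_write=False):
        vpn, offset = divmod(vaddr, PAGE)
        frame, writable, present = page_table.get(vpn, (None, False, False))
        if not present:
            raise MemoryError("page fault")          # OS fetches it or aborts
        if want_write and not writable:
            raise PermissionError("protection trap") # write to read-only page
        return frame * PAGE + offset                 # relocation for free

    print(hex(translate(0x0234)))   # 0xc234: code page lives in frame 12

Sharing is the same mechanism: two address spaces simply point their entries
at the same frame.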

Anne & Lynn Wheeler

unread,
Nov 30, 2017, 11:27:44 AM11/30/17
to
John Levine <jo...@iecc.com> writes:
> Yes, definitely. In the ensuing half century we've learned to use VM
> hardware to manage disk I/O (map the whole file into the address space
> and use page faults to trigger the actaul I/O.) It also lets you use
> shared libraries and shared data segments with the VM hardware
> managing what's shared among what processes. Even if everything's in
> RAM all the time, the VM is still useful for mangaging what's in what
> logical place in each address space, and for keeping programs from
> stomping and/or spying on each others.

re:
http://www.garlic.com/~lynn/2017j.html#78 thrashing, was Re: A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#79 thrashing, was Re: A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#90 thrashing, was Re: A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#91 thrashing, was Re: A Computer That Never Was: the IBM 7095

I didn't find out these guys were using a lot of stuff I did as an
undergraduate until much later ... ref. gone 404, but lives on the wayback
machine.
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

we also had some rivalry between MIT project mac on 5th floor doing
multics and science center on 4th floor doing virtual machines.
http://www.garlic.com/~lynn/2017j.html#71 A Computer That Never Was: the IBM 7095
posts mentioning science center on 4th floor (of 545 tech sq)
http://www.garlic.com/~lynn/subtopic.html#545tech

also USAF data systems (and some other gov. agencies) was a big Multics customer
http://multicians.org/sites.html
so it was a small feather in our cap when AFDS was getting 210 vm/4341s
http://www.garlic.com/~lynn/2017j.html#93 It's 1983: What computer would you buy?

as to TSS/360 single level store with synchronous, blocking page
faults, and then Future System single level store with similar enormous
impact on throughput: folklore is that with the implosion of FS, some of
the FS people retreated to Rochester and did a simplified version for the
S/38. However, in the S/38 market place ... the poor throughput of single
level store synchronous blocking page faults was much less of an
issue. FS posts
http://www.garlic.com/~lynn/submain.html#futuresys
other FS reference
http://www.jfsowa.com/computer/memo125.htm

This is separate from the thrashing issue with number of concurrent
tasks simultaneously contending for real storage and driving up page
fault rates.

Higher throughput filesystem models provide for contiguous allocation
for larger multi-block transfers and concurrent, overlapped execution
with multi-buffered transfers with read-ahead and write-behind. Part of
my claim that I learned what not to do from TSS/360 single-level store
when I was doing CMS paged-mapped filesystem ... posts
http://www.garlic.com/~lynn/submain.html#mmap

Note that there is another kind of problem that shows up when running an LRU
under an LRU. This shows up in the early 70s with running one of the batch
mainframe systems supporting virtual memory ... and some flavor of
least-recently-used replacement algorithm (the page that hasn't been used
for the longest time is considered least likely to be used in the future) in
a virtual machine that CP/67 was managing with an LRU algorithm (the least
recently used virtual machine page is considered to be the least likely to
be used in the future). The LRU algorithm running in the virtual machine is
going to pick a real storage slot that contains a low-usage virtual page for
replacement ... making it the next most likely to be used page. However, the
virtual machine manager, which is also running an LRU algorithm, is likely
to pick the least recently used virtual machine page (which appears as a
real page to the system running in the virtual machine). In any case an LRU
algorithm running virtually under an LRU algorithm ... can invert the
assumption that least recently used is least likely to be used in the future.
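
A toy simulation may make the inversion easier to see (my own sketch in
Python, not anything from CP/67 or VM/370; the sizes are arbitrary):

    from collections import OrderedDict

    # Guest: 4 frames, LRU, cycling through 5 pages -- the classic pattern
    # where LRU misses on every reference, so each reference reloads the
    # guest's least-recently-used frame.  Host: keeps only 3 of the 4 guest
    # frames resident, also LRU.  The frame the guest is about to reuse is
    # exactly the frame the host has seen untouched the longest.

    GUEST_FRAMES, HOST_FRAMES = 4, 3
    guest = OrderedDict((f, None) for f in range(GUEST_FRAMES))  # frame -> page
    host = OrderedDict()                                         # resident frames
    host_faults = 0

    for ref in range(40):
        page = ref % 5
        frame, _ = guest.popitem(last=False)   # guest evicts its LRU frame...
        guest[frame] = page                    # ...and immediately touches it
        if frame in host:
            host.move_to_end(frame)
        else:
            host_faults += 1                   # host had already evicted it
            if len(host) >= HOST_FRAMES:
                host.popitem(last=False)       # host LRU victim
            host[frame] = True

    print(host_faults, "host faults for 40 guest references")   # prints 40

Every one of the 40 references double-faults: the guest's replacement choice
makes its LRU frame the most likely frame to be touched next, which is the
opposite of what the host's LRU assumes.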

This then shows up for a DBMS that has a cache of records (in operating
system LRU-managed storage) ... where the DBMS manages its own cache with an
LRU algorithm ... the least recently used cache record ... which the
DBMS wants to reuse next ... is also the candidate that the operating
system wants to replace next.
development of the original sql/relational (system/r) on 370/145 vm370
system at san jose research ... past posts
http://www.garlic.com/~lynn/submain.html#systemr
other history of original sql/relational
http://www.mcjones.org/System_R/

Something similar was seen with the initial port of apl\360 to CMS (as
CMS\APL) ... which used a strategy that allocated new storage for every
assignment ... it was guaranteed to quickly and repeatedly use every
available virtual page. A major part of CMS\APL was redoing the APL
storage management (and garbage collection) to minimize unnecessarily
touching large numbers of different virtual pages. recent reference
http://www.garlic.com/~lynn/2017j.html#86 VS/Repack

a common strategy ... for a large DBMS running in operating system LRU
managed virtual memory ... is to have a special pool of real storage for
the DBMS virtual memory cache that is fixed or semi-fixed ... biased to be
very low as a candidate for replacement. some past posts mentioning
running a DBMS LRU managed cache and/or operating system LRU managed store
in a virtual machine ... running under an operating system doing LRU
managed store ... makes 2nd level LRU look like MRU (the least recently used
page is most likely to be used next, rather than least likely).
http://www.garlic.com/~lynn/2004l.html#66 Lock-free algorithms
http://www.garlic.com/~lynn/2005k.html#38 Determining processor status without IPIs
http://www.garlic.com/~lynn/2006.html#17 {SPAM?} DCSS as SWAP disk for z/Linux
http://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
http://www.garlic.com/~lynn/2006l.html#9 virtual memory
http://www.garlic.com/~lynn/2007m.html#47 Capacity and Relational Database
http://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
http://www.garlic.com/~lynn/2012c.html#17 5 Byte Device Addresses?
http://www.garlic.com/~lynn/2014k.html#13 Question concerning running z/OS LPARs under z/VM

Nuwen

unread,
Nov 30, 2017, 11:40:28 AM11/30/17
to
Your bot is definitely thrashing now, it's descended into utter
incoherence.

nuwen

Anne & Lynn Wheeler

unread,
Nov 30, 2017, 11:54:39 AM11/30/17
to
Joe Pfeiffer <pfei...@cs.nmsu.edu> writes:
> "Making memory look bigger" was always something of a red herring. The
> real advantage virtual memory gives you is the page table lets you
> control access to different parts of your program, lets you get away
> without relocatable code, and makes sharing memory between programs
> practical. That you could have the parts of your program marked as
> inaccessible not exist, and you could swap parts of your program to disk
> if your computer was underprovisioned, are just bonuses.

as previously mentioned, the justification for moving all IBM mainframe
systems to virtual memory ... was the poor MVT storage management ...
application regions had to be four times larger than would actually be used ...
limiting a typical 370/165 1mbyte machine to four application
regions. Remapping MVT into virtual memory ... could increase the number of
regions by a factor of four ... with little or no paging operations
(increasing the number of concurrent applications executing and therefore
throughput).
http://www.garlic.com/~lynn/2017j.html#84 VS/Repack
and
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

of course, 370/165 1mbyte configurations were relatively quickly replaced
by 370/168 4mbyte configurations ... but VS2/SVS (MVT laid out in a
single 16mbyte virtual address space) was also replaced by VS2/MVS
... giving each application its own virtual address space ... allowing a
much larger number of concurrent tasks ... in part trying to keep
increasing the throughput of larger systems.

In the mid-70s, I was starting to point out that systems were increasingly
becoming I/O bottlenecked ... maintaining aggregate system throughput
required increasing the number of concurrent operations. In the early 80s, I
was saying that relative system disk throughput had declined by an
order of magnitude over a period of 15yrs (disk throughput increased 3-5
times, but large system throughput increased 40-50 times). Some disk division
executive took exception to the comments and assigned the division
performance group to refute my comments. However, after a few weeks they
came back and said that I had slightly understated the problem. This was
eventually respun as a SHARE user group presentation about optimizing
disk configurations for improved system throughput. Old references
to Share B874 presentation
http://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)
http://www.garlic.com/~lynn/2002i.html#18 AS/400 and MVS - clarification please
http://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s

posts referencing getting to play disk engineer in bldgs 14 (division
disk engineering) and 15 (division disk product test) ... across
the street from bldg 28 (san jose research)
http://www.garlic.com/~lynn/subtopic.html#disk

John Levine

unread,
Nov 30, 2017, 2:06:31 PM11/30/17
to
In article <PM00055F3...@aca418aa.ipt.aol.com>,
jmfbahciv <See....@aol.com> wrote:
>I thought drums were slower than disks. DEC's swapping algorithms
>picked disk first, then would resort to drums if the swap sapce on the
>disk was in active use. (note this is about 4S72 running on a 48K
>KA-10).

Drums were head per track so they were generally faster than disks
since there was no seek time, just rotation time. The rotation time
varied but could be very fast, e.g., the IBM 650 drum which spun at
15000 RPM.
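
To put rough numbers on that: at 15,000 RPM a revolution takes 60/15,000 s =
4 ms, so the average rotational delay is about 2 ms with no seek at all,
whereas a moving-arm disk of that era added tens of milliseconds of average
seek time on top of its own rotational delay.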

I didn't even know you could put a drum on a KA-10. They tended to be
quite small so you'd be more likely to find one on a PDP-8.

R's,
John

Quadibloc

unread,
Nov 30, 2017, 2:09:21 PM11/30/17
to
On Thursday, November 30, 2017 at 12:06:31 PM UTC-7, John Levine wrote:

> I didn't even know you could put a drum on a KA-10. They tended to be
> quite small so you'd be more likely to find one on a PDP-8.

There was a 32K head-per-track disk for the PDP-8. There was also a drum for
the PDP-5 and the early PDP-8.

John Savard

Peter Flass

unread,
Nov 30, 2017, 2:56:48 PM11/30/17
to
This is the opposite of everybody else.

--
Pete

hanc...@bbs.cpcn.com

unread,
Nov 30, 2017, 5:54:01 PM11/30/17
to
On Wednesday, November 29, 2017 at 9:25:14 PM UTC-5, J. Clarke wrote:

> >However, as you say, there was great value in a "360-degree" computer.
> >I could see a great many companies having a research or engineering
> >unit that might have existing programs to run in Fortran on the mainframe,
> >and wished to continue using their existing arrangement.
> >Universities could easily need both business for administration and
> >science for research.
>
> Get "scientific" out of your heads. Contrary to popular belief, in
> the real world real financial people with real half-trillion-dollar
> portfolios use Fortran to do a wide range of calculations involving
> those portfolios. Further, they do those calculations using floating
> point.

Could you elaborate on this? My memory is a little fuzzy on the
specifics of "floating point" as you refer to it. I thought "floating
point" was scientific notation, e.g. n.nnn X 10^y. It was used to
handle very large or very tiny numbers (e.g. Avogadro's number,
6.02214×10^23.) Also, could you give some examples of the calculations?
Many years ago I did some financial analysis stuff, but it was all
easily done in COBOL with a compute statement and maybe a PERFORM loop.
COMP-3 (packed decimal) handled it just fine. IIRC, COMP-3 has
more than enough resolution to handle trillions of dollars.
So I'm obviously missing something here. Thanks.

Dan Espen

unread,
Nov 30, 2017, 7:50:38 PM11/30/17
to
If I recall correctly COMP-1 and COMP-2 are floats.

I once consulted on a large COBOL program doing some extensive
calculations. The thing was dog slow so the client got advice from IBM
and made optimizations.

The IBM advice was:

- all fields COMP-3 (packed), all fields the same size (something like
  S9(7)V9(5))
- name all constants: (77 ONE PIC S9(7)V9(5) VALUE 1.)
- COMPUTE RESULT = (FACTORX * FACTORY) / ONE.

When I took a look at the generated Assembler, even the most basic
calculations involved converting to FLOAT, doing the calc, then
converting back to packed.

COBOL will resort to floating point when you tell it to or when it thinks it
has to.

Just getting rid of the constants (like ONE) helped stop it from using
floating point.

I've done a lot of business development.
Never needed or intentionally used floating point.

--
Dan Espen

John Levine

unread,
Nov 30, 2017, 7:51:12 PM11/30/17
to
In article <340bb55c-cedd-46ce...@googlegroups.com>,
<hanc...@bbs.cpcn.com> wrote:
>> Get "scientific" out of your heads. Contrary to popular belief, in
>> the real world real financial people with real half-trillion-dollar
>> portfolios use Fortran to do a wide range of calculations involving
>> those portfolios. Further, they do those calculations using floating
>> point.
>
>Could you elaborate on this? My memory is a little fuzzy on the
>specifics of "floating point" as you refer to it. I thought "floating
>point" was scientific notation, e.g. n.nnn X 10^y. It was used to
>handle very large or very tiny numbers (e.g. Avogadro's number,
>6.02214X×10^23.)

That's basically it. The legacy IBM floating point is actually base
16, while IEEE floating point is base 2. What makes floating point
interesting is that the results are approximate -- it does the
operation with as much precision as it has, which is often not as much
as you might want.

For reasons lost in the mists of history, the design of the legacy
floating point was botched so that it lost almost a full decimal digit
of precision on each operation compared to what a better designed
format (the previous 7094 or the modern IEEE) would do. So there was
a lot of work involved in writing code that would produce sufficiently
accurate results with IBM floating point. The simplest approach was
to turn what used to be single precision operations into double
precision, which mostly worked but doubled the storage and made things
a lot slower.

So anyway, if you are doing financial modelling as opposed to
accounting, there's plenty of places where floating point would be
what you want to use. You'd rather use IEEE than IBM floating point
because it's a lot easier to get stable accurate results.

R's,
John

PS: Then there's also the newish decimal floating point. It's
theoretically cool but I don't know anyone who uses it.
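
The behavior that makes decimal attractive is easy to demonstrate (a quick
sketch using Python's decimal module; the principle is the same whether the
decimal arithmetic is packed decimal, DFP, or a software library):

    from decimal import Decimal

    # Binary floating point cannot represent 0.1 exactly, so repeated
    # decimal-looking arithmetic drifts; decimal arithmetic does not.
    print(0.1 + 0.1 + 0.1 == 0.3)                            # False
    print(sum([0.1] * 10))                                    # 0.9999999999999999
    print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1"))   # 0.3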


John Levine

unread,
Nov 30, 2017, 7:52:07 PM11/30/17
to
In article <10ef4db1-bfb5-4e5b...@googlegroups.com>,
Right, I used the 32K disk at one point. It seemed a little odd to have a disk
that was no bigger than the core.



Charlie Gibbs

unread,
Nov 30, 2017, 8:07:00 PM11/30/17
to
On 2017-12-01, Dan Espen <dan1...@gmail.com> wrote:

> If I recall correctly COMP-1 and COMP-2 are floats.
>
> I once consulted on a large COBOL program doing some extensive
> calculations. The thing was dog slow so the client got advice from IBM
> and made optimizations.
>
> The IBM advice was:
>
> all fields COMP-3 (packed) all fields the same size. (something like
> s9(7)v9(5)) name all constants: (77 ONE PIC S9(7)V9(5) VALUE 1.)
> COMPUTE RESULT = (FACTORX * FACTORY) / ONE.
>
> When I took a look at the generated Assembler, even the most basic
> calculations involved converting to FLOAT, doing the calc, then
> converting back to packed.
>
> COBOL will resort to floating point when tell it or when it thinks it
> has to.
>
> Just getting rid of the constants (like ONE) helped stop it from using
> floating point.

The large slow COBOL program I consulted on used subscripts heavily -
and all subscripts were declared as COMP-3. Changing them to COMP-4
(binary) knocked 30% off the execution time.

> I've done a lot of business development.
> Never needed or intentionally used floating point.

I have, but I can count the number of times on the fingers of one hand.

--
/~\ cgi...@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!

Quadibloc

unread,
Nov 30, 2017, 10:15:59 PM11/30/17
to
On Thursday, November 30, 2017 at 5:51:12 PM UTC-7, John Levine wrote:

> PS: Then there's also the newish decimal floating point. It's
> theoretically cool but I don't know anyone who uses it.

This is kind of a pity. It's the perfect thing to use for... a
spreadsheet.

I don't know of any reason someone would want to use it inside any kind
of a program of the usual kind, though. But a spreadsheet lets users who
are a bit naive about computers operate directly on numbers, and
decimal floating point makes a good stab at behaving as people expect.

John Savard

J. Clarke

unread,
Nov 30, 2017, 11:06:49 PM11/30/17
to
That's actually something I'm looking forward to trying out when I get
a chance. It should be lovely for the kind of calculations we do--the
question in my mind is how great a performance penalty it entails.
>
>John Savard

Charles Richmond

unread,
Nov 30, 2017, 11:12:44 PM11/30/17
to
On 11/30/2017 7:04 PM, Charlie Gibbs wrote:
> On 2017-12-01, Dan Espen <dan1...@gmail.com> wrote:
>
> [snip...] [snip...] [snip...]
>
>> I've done a lot of business development.
>> Never needed or intentionally used floating point.
>
> I have, but I can count the number of times on the fingers of one hand.
>

That's why people use octal... because a person has 8 fingers!!! :-)
Of course, if you cheat and use your thumbs...

--
numerist at aquaporin4 dot com

Quadibloc

unread,
Dec 1, 2017, 4:40:24 AM12/1/17
to
Because Intel uses a form of DFP that requires multiplications and
divisions instead of shifts, it has a significant penalty; whereas IBM's
version, requiring BCD arithmetic, has a large hardware cost for a high-
speed implementation.

I can't fault Intel for concluding that decimal arithmetic is simply not
worth that big a share of the available die area, despite being
disappointed.

Dan Espen

unread,
Dec 1, 2017, 8:35:01 AM12/1/17
to
Charlie Gibbs <cgi...@kltpzyxm.invalid> writes:

> On 2017-12-01, Dan Espen <dan1...@gmail.com> wrote:
>
>> If I recall correctly COMP-1 and COMP-2 are floats.
>>
>> I once consulted on a large COBOL program doing some extensive
>> calculations. The thing was dog slow so the client got advice from IBM
>> and made optimizations.
>>
>> The IBM advice was:
>>
>> all fields COMP-3 (packed) all fields the same size. (something like
>> s9(7)v9(5)) name all constants: (77 ONE PIC S9(7)V9(5) VALUE 1.)
>> COMPUTE RESULT = (FACTORX * FACTORY) / ONE.
>>
>> When I took a look at the generated Assembler, even the most basic
>> calculations involved converting to FLOAT, doing the calc, then
>> converting back to packed.
>>
>> COBOL will resort to floating point when tell it or when it thinks it
>> has to.
>>
>> Just getting rid of the constants (like ONE) helped stop it from using
>> floating point.
>
> The large slow COBOL program I consulted on used subscripts heavily -
> and all subscripts were declared as COMP-3. Changing them to COMP-4
> (binary) knocked 30% off the execution time.

I'm guessing, not IBM COBOL.
IBM COBOL would be best with USAGE IS INDEX.

--
Dan Espen

Dan Espen

unread,
Dec 1, 2017, 8:42:05 AM12/1/17
to
Any base conversion pays a huge penalty.
With IBM hardware using CVB/CVD (packed to/from binary)
there was a 10 to 1 penalty when I did a simple performance test.
You need to do at least 20 binary operations on your numbers
before showing results to break even.

--
Dan Espen

jmfbahciv

unread,
Dec 1, 2017, 10:39:00 AM12/1/17
to
Nuwen wrote:
> Anne & Lynn Wheeler <ly...@garlic.com> writes:

<snip>

> Your bot is definitely thrashing now, it's descended into utter
> incoherence.
>
> nuwen

Why don't you just accept the fact that you'll never understand
what Lynn writes and stop whinging.

/BAH

jmfbahciv

unread,
Dec 1, 2017, 10:39:00 AM12/1/17
to
One thumb is used for negative and the other is a parity bit.

/BAH

jmfbahciv

unread,
Dec 1, 2017, 10:39:00 AM12/1/17
to
Shit. I fucked up. It was a fixed-head disk which was the second
choice. Sorry.

/BAH

John Levine

unread,
Dec 1, 2017, 12:09:07 PM12/1/17
to
In article <ovq91v$1jiv$1...@gal.iecc.com>, John Levine <jo...@iecc.com> wrote:
>PS: Then there's also the newish decimal floating point. It's
>theoretically cool but I don't know anyone who uses it.

I should note that although the implementation and details of DFP are
new, the idea is ancient. JOSS used decimal floating point.

R's,
John

Charlie Gibbs

unread,
Dec 1, 2017, 12:41:09 PM12/1/17
to
The compiler supported USAGE IS INDEX, but I always found the syntax
too cumbersome.

I wish I could have had a few days with that program - it was a dog's
breakfast, but I bet I could have overhauled it and made it fly.

Charlie Gibbs

unread,
Dec 1, 2017, 12:41:09 PM12/1/17
to
"Base 8 is just like base 10, really... if you're missing two fingers."
-- Tom Lehrer: New Math

Quadibloc

unread,
Dec 1, 2017, 12:45:20 PM12/1/17
to
On Friday, December 1, 2017 at 10:09:07 AM UTC-7, John Levine wrote:

> I should note that although the implementation and details of DFP are
> new, the idea is ancient. JOSS used decimal floating point.

In fact, JOSS used the kind of decimal floating point implemented in
binary that Intel uses on its chips.

John Savard

Ahem A Rivet's Shot

unread,
Dec 1, 2017, 2:00:06 PM12/1/17
to
On 1 Dec 2017 15:38:38 GMT
jmfbahciv <See....@aol.com> wrote:

> Charles Richmond wrote:

> > That's why people use octal... because a person has 8 fingers!!! :-)
> > Of course, if you cheat and use your thumbs...
> >
> One thumb is used for negative and the other is a parity bit.

Do not duct tape them together.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

hanc...@bbs.cpcn.com

unread,
Dec 1, 2017, 3:15:01 PM12/1/17
to
I remember being frustrated in my young days, when using Fortran
and expecting an answer of exactly 3.0 and getting 2.999999. We
had to do some rounding tricks.



> So anyway, if you are doing financial modelling as opposed to
> accounting, there's plenty of places where floating point would be
> what you want to use. You'd rather use IEEE than IBM floating point
> because it's a lot easier to get stable accurate results.

But returning to Mr. Clarke's point, could someone elaborate on
what kind of financial modelling would require floating point
instead of COMP-3 packed decimal on an IBM mainframe?



hanc...@bbs.cpcn.com

unread,
Dec 1, 2017, 3:20:52 PM12/1/17
to
On Friday, December 1, 2017 at 8:42:05 AM UTC-5, Dan Espen wrote:

> Any base conversion pays a huge penalty.
> With IBM hardware using CVB/CVD (packed to/from binary)
> there was a 10 to 1 penalty when I did a simple performance test.
> You need to do at least 20 binary operations on your numbers
> before showing results to break even.

I don't know about today, but years ago I ran some tests when I had
the machine to myself and confirmed the above. The base conversion
indeed had a huge penalty.

COMP-3 was best for arithmetic fields.

For subscripts, USAGE IS INDEX (or INDEXED BY) worked best,
but a binary COMP SYNC field worked nearly as well. Indeed, a
binary field had the advantage of being displayable in the event
debugging was necessary, while an indexed field could not.

As to serious number crunching, I remember a fellow who had
some medical research. He did a _lot_ of hairy calculations
for each input record, and there weren't that many input
records. In short, an ideal application for Fortran. It
actually ran well on the 1130 since it didn't use much of
the 1130's horrendously slow I/O.

That's why I'm curious as to the financial modelling referred to
above. Does it have a lot or little I/O? Is there a lot of
number crunching involved?

Anne & Lynn Wheeler

unread,
Dec 1, 2017, 3:45:53 PM12/1/17
to

recent posts mentioning single-level store, blocking, synchronous page
fault handling
http://www.garlic.com/~lynn/2017e.html#3 TSS/8, was A Whirlwind History of the Computer
http://www.garlic.com/~lynn/2017f.html#39 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#50 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#87 How a few yellow dots burned the Intercept's NSA leaker
http://www.garlic.com/~lynn/2017g.html#56 What is the most epic computer glitch you have ever seen?
http://www.garlic.com/~lynn/2017g.html#66 Is AMD Dooomed? A Silly Suggestion!
http://www.garlic.com/~lynn/2017g.html#102 SEX
http://www.garlic.com/~lynn/2017j.html#34 Tech: we didn't mean for it to turn out like this
http://www.garlic.com/~lynn/2017j.html#90 thrashing, was Re: A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#95 why VM, was thrashing


The OS/360 filesystem paradigm is large multi-block, multi-buffered,
contiguous transfers overlapped with execution, including read-ahead and
write-behind. A related issue with blocking, synchronous page fault
handling ... is a large multithreaded DBMS ... whatever transaction is
responsible for a page fault (with blocking, synchronous page fault
handling) could block the overlapped execution of other
transactions.

In the virtual machine environment, something similar applies to large
multitasking virtual guest operating systems.

For the 370 135/145 follow-ons, the 138/148, they were doing microcode
enhancements. I was told the microcode ran an avg. of ten native instructions
for every emulated 370 instruction. They had 6kbytes of available
microcode space and instructions would convert on about a byte-for-byte
basis. I needed to identify the highest-executed 6kbytes of the vm370
kernel for dropping into microcode for a 10:1 performance improvement. The
following ordered kernel instruction execution frequency ... 6kbytes
was nearly 80% of kernel cpu
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

Endicott then wanted to ship vm370 as part of every machine ordered
(sort of like LPAR in current machines). However this was in the period when
POK was working on getting corporate to kill the vm370 product and transfer
all vm370 development to POK to support MVS/XA development.

Then there is the issue of the guest operating system being blocked for any
page fault. VS1 "handshaking" was developed so that if VS1 was enabled for
interrupts and the virtual machine took a page fault, VM370 would reflect a
pseudo page fault to VS1 ... allowing it to switch tasks (overlapped
while VM370 handled the page fault). When page fault handling was
complete, a pseudo page-I/O-complete interrupt would be reflected
to VS1 (allowing it to switch back to the faulted task). This created
tighter integration with guest VS1 running in a virtual machine while
letting vm370 do the page fault handling.

also contributing, at this time, vm370 was running my page replacement
and page i/o code, which was significantly better and had shorter pathlength
than VS1's ... helping VS1 run faster in a virtual machine
under VM370 ... than on a stand-alone native machine.

page replacement, page i/o, working set, thrashing management
http://www.garlic.com/~lynn/subtopic.html#wsclock
Future System posts
http://www.garlic.com/~lynn/submain.html#futuresys

recent posts mentioning POK working to kill VM370 product and
transfer VM370 development to support MVS/XA development
http://www.garlic.com/~lynn/2017.html#79 VM370 Development
http://www.garlic.com/~lynn/2017b.html#32 Virtualization's Past Helps Explain Its Current Importance
http://www.garlic.com/~lynn/2017b.html#36 IBM LinuxONE Rockhopper
http://www.garlic.com/~lynn/2017d.html#51 CPU Timerons/Seconds vs Wall-clock Time
http://www.garlic.com/~lynn/2017f.html#25 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#35 Hitachi to Deliver New Mainframe Based on IBM z Systems in Japan
http://www.garlic.com/~lynn/2017g.html#34 Programmers Who Use Spaces Paid More
http://www.garlic.com/~lynn/2017g.html#68 MULTICS & VM370 History
http://www.garlic.com/~lynn/2017g.html#69 48-year-old Multics Operating System Resurrected

Dan Espen

unread,
Dec 1, 2017, 4:48:46 PM12/1/17
to
Hmm, just use SET instead of ADD/SUBTRACT if I remember right.
Not a big deal for me.

> I wish I could have had a few days with that program - it was a dog's
> breakfast, but I bet I could have overhauled it and made it fly.

Yep, I got to do a complete job on the COBOL monster mentioned above.
A lot of satisfaction fixing it up.

My first or second day an operator sent a note back about the job
not working right because it finished in minutes instead of hours.

--
Dan Espen

Dan Espen

unread,
Dec 1, 2017, 4:51:52 PM12/1/17
to
It would have to involve numbers too big for packed decimal.
The large program I mentioned was a financial model for an
insurance company.

--
Dan Espen

Dan Espen

unread,
Dec 1, 2017, 4:55:31 PM12/1/17
to
Even massive calculations tend to be combined with printing
of details about the calculation.

The bottom line result might involve billions of calculations
that could benefit from binary or float, but doing conversions
for each printed detail line will kill performance.

--
Dan Espen

Peter Flass

unread,
Dec 1, 2017, 6:04:58 PM12/1/17
to
I think I stopped using COBOL before USAGE IS INDEX came along.


>
>> I wish I could have had a few days with that program - it was a dog's
>> breakfast, but I bet I could have overhauled it and made it fly.
>
> Yep, I got to do a complete job on the COBOL monster mentioned above.
> A lot of satisfaction fixing it up.
>
> My first or second day an operator sent a note back about the job
> not working right because it finished in minutes instead of hours.
>



--
Pete