
the legacy of Seymour Cray


RS Wood

Oct 2, 2015, 4:17:34 PM
Nice and surprisingly long article at the Register lauding the
genius-like work done by Seymour Cray. I'm annoyed they spend three
pages comparing him to Steve Jobs, but maybe that's what gets the
Millennials to read articles like this. If you can get past that, it's
a very complimentary article.

http://www.theregister.co.uk/2015/10/02/seymour_cray_90_anniversary/?page=1

//--clip
Before Steve Jobs, there was Seymour Cray – father of the supercomputer
and regarded as something close to a God in the circles he moved in.
Jobs’ Apple Computer is reputed to have bought one of Seymour’s massive
machines back in the day: a Cray, to design the brand-new Macintosh
personal computer.
This would have been a significant moment for a man of Jobs' character,
not prone to flattering the inventions or ideas of others. In return,
Cray is said to have quipped that he'd bought a Mac to design the next
Cray.
Cray – who would have been 90 years old this week – was the engineering
brain behind a family of systems that broke ground in architecture and
performance. They broke new ground on price, too: Cray’s computers cost
hundreds of thousands of dollars, with the Cray-1 weighing in at $8.8m
– an estimated $34m in today’s money.
This meant Crays could be afforded only by the top elite boffins in the
arenas of science and the military, who used them to crunch vast models
on weather prediction and the potential fallout from nuclear bombs.
Such systems were out of the price range of even the biggest private
firms.
Irony, indeed, given Jobs’ collaborator Steve Wozniak built the
Macintosh in order to democratise computing, as he told your
correspondent back in 2011.
Remarkably, and despite the price, it was Cray’s computers which
dominated high-performance computing (HPC) for 30 years, seeing off
even the mighty IBM. Buying a Cray became like buying IBM elsewhere: it
wouldn't get you fired.
It wasn’t until the 1990s and 2000s that rivals began loosening Cray's
grip. It was IBM that drove home the nail with the watershed Roadrunner
super, which was sold, significantly, to Cray’s very first customer –
Los Alamos National Laboratory.
Cray died following injuries sustained in a car crash in 1997, but he
remains one of the most influential figures of 20th and 21st century
computing. And far from being consigned to a dwindling historical niche
or trampled by commodity cluster servers marching in lockstep, his
supercomputer legacy is thriving.
The US is now so worried that it’s slipping behind in the international
supercomputer arms race that president Barack Obama issued an executive
order in July telling his nation’s technologists to build the world’s
fastest and most powerful supercomputer – one exaflop – by 2015.
Executive orders carry the weight of law and move quickly, because they
bypass US Congress.
That’s because one nation has consistently topped the list of world’s
biggest supers for the last two-and-a-half years: China, with its
Tianhe-2, developed by the National University of Defense Technology
and capable of 33.86 petaflops.
So concerned is the US about China’s ascendance, that it stopped Intel
shipping its high-end Xeon processors to China for use in the successor
to that super – the Tianhe-2A, which is expected to hit 100 petaflops.

//--clip (two more pages follow)

hanc...@bbs.cpcn.com

Oct 2, 2015, 4:41:08 PM
On Friday, October 2, 2015 at 4:17:34 PM UTC-4, RS Wood wrote:
> Before Steve Jobs, there was Seymour Cray - father of the supercomputer ...

While I certainly don't want to deny Cray his genius and accomplishments, I would not agree with the lead sentence.

I would credit the title 'father of the supercomputer' to the inventors of Univac's LARC and IBM's Stretch. Both of those machines were intended to be the fastest number-crunchers of their day, used for the same applications as the Cray computers were later used for.

Rich

Oct 2, 2015, 4:54:32 PM
In comp.misc RS Wood <r...@therandymon.com> wrote:
> Nice and surprisingly long article at the Register lauding the
> genius-like work done by Seymour Cray. I'm annoyed they spend three
> pages comparing him to Steve Jobs,

This EE here is more than annoyed - disgusted actually. Cray was a
visionary in the computer architecture arena.

Jobs was a business guy, never designed any hardware ever (hardware was
Wozniak's arena).

The only comparison possible is then apples to oranges.

> but maybe that's what gets the Millennials to read articles like this.

Sadly, you may be right. Or it may just be yet one more example of how
the press is sadly technologically inept.

But overall, yes, it is complimentary to Cray.


A bit of trivia - I actually wrote programs for a Control Data Cyber
7600 for my assembly language CS class as part of my EE degree in
college. The 7600 (and 8600 series) were the architectural parents to
the Cray-1. The Cray-1 on the inside looked exactly like Seymour had
taken his 7600/8600 CPU design and attached the signature 'vector'
units that the Cray line of supercomputers were known for.

Here's the architectural block diagram of the Cray-1 CPU:

http://www.extremetech.com/wp-content/uploads/2014/10/cray_architecture-640x889.gif

All the parts below the words "Scalar Registers" in the diagram are
basically a Cyber 7600/8600 CPU.
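
For anyone who never programmed one of these machines, the split described
above - a conventional scalar core with vector registers bolted on - is
roughly the difference between the two C loops below. This is only an
illustrative sketch (the strip length of 64 matches the size of a Cray-1
vector register; the function names are made up), not Cray code or any
real library.

    /* Illustrative only: scalar loop vs. strip-mined "vector" loop. */
    #include <stddef.h>

    #define VLEN 64   /* a Cray-1 vector register held 64 words */

    /* Scalar version: one multiply-add per loop trip, the way a
     * 7600-style scalar core would execute it. */
    void saxpy_scalar(size_t n, double a, const double *x, double *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    /* Strip-mined version: work in 64-element strips, the way a vector
     * unit loads a full vector register, runs a pipelined multiply
     * chained into an add, and stores the whole strip as one operation. */
    void saxpy_vectorish(size_t n, double a, const double *x, double *y)
    {
        for (size_t i = 0; i < n; i += VLEN) {
            size_t len = (n - i < VLEN) ? n - i : VLEN;  /* last strip */
            for (size_t j = 0; j < len; j++)             /* one "vector op" */
                y[i + j] = a * x[i + j] + y[i + j];
        }
    }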

Al Kossow

Oct 2, 2015, 6:36:18 PM


> The US is now so worried that it’s slipping behind in the international
> supercomputer arms race that president Barack Obama issued an executive
> order in July telling his nation’s technologists to build the world’s
> fastest and most powerful supercomputer – one exaflop – by 2015.
>

That sentence doesn't make any sense.

https://www.whitehouse.gov/blog/2015/07/29/advancing-us-leadership-high-performance-computing

"the President has signed the National Strategic Computer Initiative, an
executive order that sets the lofty goal of producing an American
supercomputer with an exaflop of processing power."

It doesn't say WHEN.


Joe Morris

Oct 2, 2015, 7:05:47 PM
"Rich" <ri...@example.invalid> wrote:

> A bit of trivia - I actually wrote programs for a Control Data Cyber
> 7600 for my assembly language CS class as part of my EE degree in
> college. The 7600 (and 8600 series) were the architectual parents to
> the Cray-1. The Cray-1 on the inside looked exactly like Seymor had
> taken his 7600/8600 CPU design and attached the signature 'vector'
> units that the Cray line of supercomputers were known for.

If you want to see a couple of Cray boxes - an XMP and YMP; neither of them
operational - and be encouraged to touch them (and sit on the cushions of
the XMP base) go to the National Cryptologic Museum at Ft. Meade, Maryland.
(You can also use an Enigma to encode and decode messages.)

They also have a Storage Tech tape silo that has been set up to continuously
move tapes from one slot to another...although it has a reputation for
hardware problems.

https://www.nsa.gov/about/cryptologic_heritage/museum

The museum is free and open to the public, and while it's at the NSA it's an
uncontrolled area outside the security perimeter, with a big sign just
inside the front door reminding visitors to put their badges away. Just be
sure to follow the directions to get there; a wrong turn would put you into
the NSA security entry queue.

The building isn't particularly large, so the exhibits represent only a part
of what the museum owns, but the staff there is familiar with a lot of what
isn't being displayed.

Joe Morris


Quadibloc

Oct 2, 2015, 8:19:46 PM
On Friday, October 2, 2015 at 2:41:08 PM UTC-6, hanc...@bbs.cpcn.com wrote:

> While I certainly don't want to deny Cray his genius and accomplishments, I
> would not agree with the lead sentence.

> I would credit the 'father of the supercomputer' with the inventors of Univac's
> LARC and IBM's Stretch. Both of those machines were intended to be the fastest
> number-crunchers of their day, used for the same applications as the Cray
> computers were later used for.

I would agree that Seymour Cray was not the inventor of the big, fast computer.

As soon as computers were invented, the idea of making new ones more powerful
than the old ones was obvious.

However, Seymour Cray did invent the Cray I, and its vector register design,
notably more successful than such machines as the Illiac IV, the Texas
Instruments Advanced Scientific Computer, Control Data's own successors to his
6600, and so on, was copied by many companies. DEC (for the VAX), Univac (for
the 1100 series), and IBM (for the later 370s) all made add-ons to their
computers that followed his principles, and several other companies made
vector supercomputers of similar design.

For a few years following the Cray I, there was much talk about how the era of
the supercomputer had begun - and then, by "supercomputer", they meant vector
machines along the lines of the Cray.

Of _that_, Seymour Cray was the father.

So I will give him the right to the title "father of the supercomputer" for
_that_ meaning of the word supercomputer, by virtue of the Cray I - but I will
also agree with you that the CDC 6600 doesn't entitle him to be called the
"father of the supercomputer" in the other sense, because while it preceded the
360/91, as you rightly point out, the NORC, the LARC, and STRETCH were all
around before.

John Savard

Anne & Lynn Wheeler

Oct 2, 2015, 8:47:15 PM

RS Wood <r...@therandymon.com> writes:
> Before Steve Jobs, there was Seymour Cray – father of the
> supercomputer and regarded as something close to a God in the circles
> he moved in.
> Jobs’ Apple Computer is reputed to have bought one of Seymour’s
> massive machines back in the day: a Cray, to design the brand-new
> Macintosh personal computer.

A former co-worker at IBM left and for a time in the mid-80s had a job
programming the Cray for Apple. It had a Cray 100mbyte-channel-attached
high-resolution display ... used to simulate screens & response times
(studying human factors).

other trivia ... the former co-worker was also a member of the San Jose
astronomy club and told stories of Lucas bringing early Star Wars drafts
for the members to review.

more trivia ... my brother was a regional Apple marketing rep (largest
CONUS physical-area region) and I would sometimes get invited to
business dinners and even argue Macintosh design with Mac engineers
(before it was announced).

and ... George Michael ... periodically referred to as grandfather
of supercomputing
http://archive.computerhistory.org/resources/access/text/2012/10/102702236-05-01-acc.pdf

for other drift ... past refs to Thornton ... Cray & Thornton did the CDC 6600
http://www.garlic.com/~lynn/2012m.html#11 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
http://www.garlic.com/~lynn/2012o.html#27 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
http://www.garlic.com/~lynn/2013g.html#6 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2014c.html#80 11 Years to Catch Up with Seymour
http://www.garlic.com/~lynn/2014g.html#75 non-IBM: SONY new tape storage - 185 Terabytes on a tape

and past refs to Chen ... did the X-MP & Y-MP
http://www.garlic.com/~lynn/2001n.html#68 CM-5 Thinking Machines, Supercomputers
http://www.garlic.com/~lynn/2001n.html#70 CM-5 Thinking Machines, Supercomputers
http://www.garlic.com/~lynn/2002h.html#42 Looking for Software/Documentation for an Opus 32032 Card
http://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
http://www.garlic.com/~lynn/2004b.html#19 Worst case scenario?
http://www.garlic.com/~lynn/2006q.html#9 Is no one reading the article?
http://www.garlic.com/~lynn/2006v.html#12 Steve Chen Making China's Supercomputer Grid
http://www.garlic.com/~lynn/2006y.html#38 Wanted: info on old Unisys boxen
http://www.garlic.com/~lynn/2007n.html#1 Is Parallel Programming Just Too Hard?
http://www.garlic.com/~lynn/2008e.html#4 Migration from Mainframe to othre platforms - the othe bell?
http://www.garlic.com/~lynn/2009.html#5 Is SUN going to become x86'ed ??
http://www.garlic.com/~lynn/2009e.html#7 IBM in Talks to Buy Sun
http://www.garlic.com/~lynn/2009o.html#29 Justice Department probing allegations of abuse by IBM in mainframe computer market
http://www.garlic.com/~lynn/2009p.html#55 MasPar compiler and simulator
http://www.garlic.com/~lynn/2009p.html#58 MasPar compiler and simulator
http://www.garlic.com/~lynn/2009s.html#5 While watching Biography about Bill Gates on CNBC last Night
http://www.garlic.com/~lynn/2009s.html#42 Larrabee delayed: anyone know what's happening?
http://www.garlic.com/~lynn/2009s.html#59 Problem with XP scheduler?
http://www.garlic.com/~lynn/2010b.html#71 Happy DEC-10 Day
http://www.garlic.com/~lynn/2010e.html#42 search engine history, was Happy DEC-10 Day
http://www.garlic.com/~lynn/2010e.html#68 Entry point for a Mainframe?
http://www.garlic.com/~lynn/2010e.html#70 Entry point for a Mainframe?
http://www.garlic.com/~lynn/2010f.html#47 Nonlinear systems and nonlocal supercomputing
http://www.garlic.com/~lynn/2010f.html#48 Nonlinear systems and nonlocal supercomputing
http://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
http://www.garlic.com/~lynn/2011c.html#24 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011d.html#7 IBM Watson's Ancestors: A Look at Supercomputers of the Past
http://www.garlic.com/~lynn/2011o.html#79 Why are organizations sticking with mainframes?
http://www.garlic.com/~lynn/2012p.html#13 AMC proposes 1980s computer TV series Halt & Catch Fire
http://www.garlic.com/~lynn/2013c.html#65 What Makes an Architecture Bizarre?
http://www.garlic.com/~lynn/2013f.html#73 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013h.html#6 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013n.html#50 'Free Unix!': The world-changing proclamation made30yearsagotoday
http://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!
http://www.garlic.com/~lynn/2014c.html#72 11 Years to Catch Up with Seymour
http://www.garlic.com/~lynn/2015g.html#18 Miniskirts and mainframes
http://www.garlic.com/~lynn/2015g.html#74 100 boxes of computer books on the wall

--
virtualization experience starting Jan1968, online at home since Mar1970

philo

Oct 3, 2015, 6:37:14 AM
I have a friend who was on the design team and was fortunate enough to
get a tour of the place somewhere around 1980!

Quadibloc

Oct 3, 2015, 7:55:28 AM
On Friday, October 2, 2015 at 2:17:34 PM UTC-6, RS Wood quoted, in part:

> Jobs' Apple Computer is reputed to have bought one of Seymour's massive
> machines back in the day: a Cray, to design the brand-new Macintosh
> personal computer.
> This would have been a significant moment for a man of Jobs' character,
> not prone to flattering the inventions or ideas of others. In return,
> Cray is said to have quipped that he'd bought a Mac to design the next
> Cray.

As I pointed out in the comments section of the site, Cray's retort isn't
really valid as a criticism of Apple.

Apple sold Macintoshes by the million, so if a Cray could optimize the design
to shave off a tenth of a cent here or there in manufacturing costs, it would
be well worth it.

Seymour Cray, on the other hand, needed simply to check out the logic design of
his computers and the thermal design, to make sure that they worked. They were
being built in small quantities, and thus an elaborate effort to perfect the
design to reduce manufacturing cost to the absolute minimum for the desired
level of quality would have been a waste.

So each party had gotten exactly the right computer for the job at hand. Apple wasn't being silly and wasteful, using a Cray where a Timex-Sinclair
1000 would have served adequately; the machine's power was proportionate to the job.

John Savard

Anne & Lynn Wheeler

Oct 3, 2015, 11:47:16 AM
"Joe Morris" <j.c.m...@verizon.net> writes:
> The museum is free and open to the public, and while it's at the NSA it's an
> uncontrolled area outside the security perimeter, with a big sign just
> inside the front door reminding visitors to put their badges away. Just be
> sure to follow the directions to get there; a wrong turn would put you into
> the NSA security entry queue.
>
> The building isn't particularly large, so the exhibits represent only a part
> of what the museum owns, but the staff there is familiar with a lot of what
> isn't being displayed.

re:
http://www.garlic.com/~lynn/2015h.html#10 the legacy of Seymour Cray

folklore is that it used to be a motel ... but because listening devices
were constantly being found there, they took it over ... turning it into a
museum.
http://cryptologicfoundation.org/visit/museum/museum_history.html

they also have IBM Harvest display
https://en.wikipedia.org/wiki/IBM_7950_Harvest

the 1st time I visited, they had an MLS (multi-level security) display
(next to the STK tape library) ... I tried to con them into letting me have
a copy of the MLS video ... I had some thought of doing a voice-over parody
of MLS.

Google has been doing something weird recently (coming back with nothing
found) ... I tried a search on the NSA museum folklore and it came back
with nothing found. I tried the same search on other search engines
... and they came back with loads of NSA-related references
... including the above that I cited.

Bill Evans

Oct 3, 2015, 12:04:58 PM
Anne & Lynn Wheeler <ly...@garlic.com> wrote:
> Google has been doing something wierd recently

My favorite way of putting it is: "Do you google with Listerine?"

--
Bill Evans / Box 1224 / Mariposa, CA 95338 / (209)742-4720
Mail-To: w...@acm.org -- PGP encrypted mail preferred. --
pgpkey.mariposabill.com for public key. Key #: 8D8B521B
PGPprint: 0A9C 3545 8FFF 7501 6265 1519 40FF 76F9 8D8B 521B

Quadibloc

Oct 3, 2015, 12:58:25 PM
On Saturday, October 3, 2015 at 9:47:16 AM UTC-6, Anne & Lynn Wheeler wrote:

> Google has been doing something wierd recently (coming back with nothing
> found) ... i tried search on the nsa museum folklore and it came back
> with nothing found.

Interesting. I get results, but a lot of them are for trimt-nsa.gov.tw, the Museum of Saisiat Folklore.

John Savard

hanc...@bbs.cpcn.com

Oct 3, 2015, 3:22:27 PM
On Friday, October 2, 2015 at 7:05:47 PM UTC-4, Joe Morris wrote:
> go to the National Cryptologic Museum at Ft. Meade, Maryland.

Thanks for the reference. Seems like an interesting place to visit.


hanc...@bbs.cpcn.com

Oct 3, 2015, 3:39:19 PM
On Saturday, October 3, 2015 at 7:55:28 AM UTC-4, Quadibloc wrote:

> So each party had gotten exactly the right computer for the job at hand. They weren't being silly and wasteful at Apple, using a Cray when a Timex-Sinclair
> 1000 would have served adequately, being proportionate in power.

In his memoir, Tom Watson wrote about his frustration that CDC beat IBM to the market with a powerful high end computer. Watson said that later he realized that such high end computers were specialty items, like expensive limited edition high performance sports cars, and that IBM should focus more on the conventional market.

As we know, IBM lost serious money on STRETCH, although the R&D for STRETCH contributed greatly to later IBM work. I wonder if IBM's other high end machines, like the 85, 91, 95, and 195, also lost money due to a limited customer base of those wanting fast floating point arithmetic. Due to limited demand, my guess is that those high end machines were largely hand-built.

The IBM S/360 history generally does not tell how many machines of each model were sold, nor their costs and revenues. On the other hand, the R&D for the high-end machines may have contributed toward later developments, as STRETCH's did. (However, some may have pushed SLT, which was later replaced by monolithic circuits, to its limit.)





Quadibloc

Oct 3, 2015, 3:56:27 PM
On Saturday, October 3, 2015 at 1:39:19 PM UTC-6, hanc...@bbs.cpcn.com wrote:
> I wonder if IBM's other high end machines, like the 85, 91, 95, and 195, also
> lost money due to a limited customer base of those wanting fast floating point
> arithmetic.

The 85 did lose money - but that's because when it came out, there was a
recession.

It was not like the others. The 91 was the first pipelined machine with the
Tomasulo algorithm for full modern out-of-order operation; the 95 was a 91 with
thin-film memory of which a few were made, all for NASA, and then the 195 added
cache.

But the 85, the machine that introduced cache, was microcoded. It was big, but
it was intended for IBM's business customers. And the design got re-used, with
improvements, in the 360/165 and 360/168, and then the 3033. So the 85 wasn't a
scientific supercomputer - it was just the top of the line of the machines IBM
continued to make as part of its core business, _not_ what IBM abandoned after
the 195.

John Savard

John Levine

Oct 3, 2015, 4:29:40 PM
>As we know, IBM lost serious money on STRETCH, although the R&D for STRETCH contributed greatly to later IBM work. I wonder if IBM's other high
>end machines, like the 85, 91, 95, and 195, also lost money due to a limited customer base of those wanting fast floating point arithmetic. Due
>to limited demand, my guess is that those high end machines were largely hand-built.

The /91, /95, and /195, which were all versions of the same design, were
definitely money losers. I don't think they sold many of the /85, but
the cache paid for it a zillion times over. I gather that the cache
made the /85 as fast as the much more complex /91 for all but the most
float-intensive jobs.
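
To make that concrete with a toy model (this is purely illustrative C with
made-up sizes, not the real /85 cache geometry): even a small direct-mapped
cache catches almost every reference in the kind of sequential record-chewing
that commercial jobs do, which is why the /85 could keep pace with the /91
everywhere except heavy floating point.

    /* Toy direct-mapped cache: count hits on a sequential 4-byte scan. */
    #include <stdio.h>

    #define LINE_BYTES  64      /* bytes brought in per miss (made up)   */
    #define CACHE_LINES 256     /* 256 * 64 bytes = a 16KB toy cache     */

    int main(void)
    {
        long tags[CACHE_LINES];
        for (int i = 0; i < CACHE_LINES; i++)
            tags[i] = -1;                       /* start empty */

        long hits = 0, misses = 0;
        for (long addr = 0; addr < 1000000; addr += 4) {
            long line = addr / LINE_BYTES;
            int  slot = (int)(line % CACHE_LINES);
            if (tags[slot] == line) hits++;
            else { misses++; tags[slot] = line; }
        }

        /* 16 four-byte references per 64-byte line: 15 hits per miss,
         * i.e. a 93.75% hit rate, so core-storage waits mostly vanish. */
        printf("hits=%ld misses=%ld hit rate=%.2f%%\n",
               hits, misses, 100.0 * hits / (hits + misses));
        return 0;
    }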

Anne & Lynn Wheeler

Oct 3, 2015, 4:51:57 PM
hanc...@bbs.cpcn.com writes:
> In his memoir, Tom Watson wrote about his frustration that CDC beat
> IBM to the market with a powerful high end computer. Watson said that
> later he realized that such high end computers were specialty items,
> like expensive limited edition high performance sports cars, and that
> IBM should focus more on the conventional market.
>
> As we know, IBM lost serious money on STRETCH, although the R&D for
> STRETCH contributed greatly to later IBM work. I wonder if IBM's
> other high end machines, like the 85, 91, 95, and 195, also lost money
> due to a limited customer base of those wanting fast floating point
> arithmetic. Due to limited demand, my guess is that those high end
> machines were largely hand-built.
>
> The IBM S/360 history generally does not tell how many machines of
> each model were sold, nor cost and revenues of them. On the other
> hand, the R&D for the high end machines may have contributed toward
> later developments, as did STRETCH. (However, some may have pushed
> SLT to the limit, which was replaced by monolithic circuits.)

ACS end
http://people.cs.clemson.edu/~mark/acs_end.html

As the quote above indicates, the ACS-1 design was very much an
out-of-the-ordinary design for IBM in the latter part of the 1960s. In
his book, Data Processing Technology and Economics, Montgomery Phister,
Jr., reports that as of 1968:

Of the 26,000 IBM computer systems in use, 16,000 were S/360 models
(that is, over 60%). [Fig. 1.311.2]

Of the general-purpose systems having the largest fraction of total
installed value, the IBM S/360 Model 30 was ranked first with 12%
(rising to 17% in 1969). The S/360 Model 40 was ranked second with 11%
(rising to almost 15% in 1970). [Figs. 2.10.4 and 2.10.5]

Of the number of operations per second in use, the IBM S/360 Model 65
ranked first with 23%. The Univac 1108 ranked second with slightly over
14%, and the CDC 6600 ranked third with 10%. [Figs. 2.10.6 and 2.10.7]

... snip ...

I mentioned that while still an undergraduate, in the summer of '69 I was
brought into Boeing to help with setting up Boeing Computer Services (BCS)
... consolidating all data processing in an independent business unit to
better monetize the investment (a little like cloud computing today).

At the time, I thought the Renton datacenter was possibly the largest in
the world ... all that summer, 360/65s were arriving in Renton faster than
they could be installed. The claim was the Renton datacenter had something
like $300M (in '69 dollars) of IBM equipment.

Later I would sponsor John Boyd briefings at IBM. His biography has him
in charge of "spook base" about the time I was at Boeing ... it supposedly
was a $2.5B "windfall" for IBM (nearly ten times the Renton datacenter).
Old spook base ref (gone 404, but lives on at the wayback machine).
http://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html

Boyd posts & refs
http://www.garlic.com/~lynn/subboyd.html

Data processing technology and economics
https://books.google.com/books/about/Data_processing_technology_and_economics.html?id=MwMpAQAAMAAJ

Quadibloc

Oct 3, 2015, 11:43:11 PM
On Saturday, October 3, 2015 at 2:29:40 PM UTC-6, John Levine wrote:
> I gather that the cache
> made the /85 as fast as the much more complex /91 for all but the most
> float intensive jobs.

Yes, that's correct, so IBM knew it had a winner on its hands - and added cache
to the 91 to make the 195.

John Savard

Jon Elson

Oct 5, 2015, 1:02:59 PM
hanc...@bbs.cpcn.com wrote:


>
> As we know, IBM lost serious money on STRETCH, although the R&D for
> STRETCH contributed greatly to later IBM work. I wonder if IBM's other
> high end machines, like the 85, 91, 95, and 195
It is almost guaranteed that IBM lost money on the high-end machines, I
think especially on the 85 and 195. There were VERY few 195s sold.
I don't know how many /85s were built, but it was also a fairly small number.


> , also lost money due to a
> limited customer base of those wanting fast floating point arithmetic.
> Due to limited demand, my guess is that those high end machines were
> largely hand-built.
>
Well, actually, ALL 360 and 370 machines were, to a large extent, hand
built! There are pictures of guys in white shirts and ties wiring up 360
mainframes. Certainly, the SLT modules were machine-made, and the SLT cards
and boards (boards are what most call backplanes) were machine made. But,
all the higher-level interconnect was done with great wads of TLC
(transmission line cable) which were a special sort of ribbon cable with 3
wires per signal. They went gnd-signal-gnd-gnd-signal-gnd etc. and had 18
signals per cable, which was about 1.5" wide. The ends of the cables were
soldered to paddle cards which had the same connectors on them as SLT cards.
These cables were routed in stacks in the spaces between the boards; there
were formed sheet-metal cable trays running between the boards for this.
> The IBM S/360 history generally does not tell how many machines of each
> model were sold, nor cost and revenues of them. On the other hand, the
> R&D for the high end machines may have contributed toward later
> developments, as did STRETCH. (However, some may have pushed SLT to the
> limit, which was replaced by monolithic circuits.)
The 360/85 was later re-implemented to become the 370/165. If you look at
the timings and other detailed specs, it is pretty clear that a great deal
of the /85 was carried over to the /165. The /85 was apparently built with
ASLT, which was a discrete transistor implementation of ECL, and the 195 and
155 and 165 were then built using the integrated MST technology.

The superscalar pipelining of the model 195 was certainly used as a guide
when building later machines with faster architectures. Once IBM moved to
higher levels of integration than MST in the TCM era, the insane complexity
of the model 195 became possible in a much smaller box.

Jon

Jon Elson

Oct 5, 2015, 1:10:21 PM
Anne & Lynn Wheeler wrote:


> Of the number of operations per second in use, the IBM S/360 Model 65
> ranked first with 23%. The Univac 1108 ranked second with slightly over
> 14%, and the CDC 6600 ranked third with 10%. [Figs. 2.10.6 and 2.10.7]
Yes, the /65 was an amazing machine. If you knew to avoid LCS and only use
the internal 2365 memory, and could afford enough to have it interleaved,
the /65 really gave you a lot of bang for the money. The 2365 limited the
amount of memory you could have, but the LCS was TEN TIMES slower!

We ran our 65 at Washington University until IBM would no longer support it;
they had a government mandate to seize all the cards to keep the FAA traffic
control system running. We got two 370/145's to supplant the /65, mostly
for dial-up use, but the /65 ran RINGS around BOTH 145's combined!

Jon


Charles Richmond

Oct 5, 2015, 5:16:43 PM

"Al Kossow" <a...@bitsavers.org> wrote in message
news:mun0p9$8im$1...@dont-email.me...
Just get busy, Al!!! I'm going to think that we need this "right away"!!!
;-)

--

numerist at aquaporin4 dot com

Charles Richmond

Oct 5, 2015, 5:30:03 PM
"Quadibloc" <jsa...@ecn.ab.ca> wrote in message
news:c1dd9e31-443c-4df3...@googlegroups.com...
>
> [snip...] [snip...]
> [snip...]
No one is saying that Seymour Cray actually put his DNA into the computer by
mating with any electronic devices. (At least I don't think so.) But
"Father of the Supercomputer" (whatever it means) is a great "sound bite"
and great public relations. The news media had some reporting of the Cray
supercomputer and that elevated Cray somewhat in the eyes of the public.

And Mr. Cray being connected in the public mind with the supercomputer... is
better than being famous for *nothing* at all like Paris Hilton or the
Kardashians!!! IMHO.

It is reported that Albert Einstein makes more money every year than any
other dead person around... something like $5 million US every year.
Companies using Einstein's name or image have to pay royalties. Einstein
left his estate to Hebrew University, which receives this money each year. (At
least, this is the way I heard it...) If Seymour Cray had been a little
more famous, his estate might also still be making money.

Charles Richmond

Oct 5, 2015, 5:38:44 PM
"Jon Elson" <el...@pico-systems.com> wrote in message
news:tIydnbD6NIVhM4_L...@giganews.com...
Hey, yeah... about that!!! When is the government of the US going to stop
using the Gestapo tactics of seizing other people's property (I mean in this
limited case, *not* generally) and re-code the air traffic control system to
be run on something more modern??? Then the FAA could trade all their IBM
360's for Raspberry Pi's and save a buttload of money!!! With the execution
speed of modern processors, the system could be largely coded in a
higher-level language and be much more supportable.

At the very least, the FAA could start a "baby" project to write a new air
traffic control system *and* a traffic simulator... and thoroughly test out
a new system--even if they *never* use the new system!!!

Charles Richmond

Oct 5, 2015, 5:46:13 PM
<hanc...@bbs.cpcn.com> wrote in message
news:15142ae3-2aa4-499a...@googlegroups.com...
On Saturday, October 3, 2015 at 7:55:28 AM UTC-4, Quadibloc wrote:

>> So each party had gotten exactly the right computer for the job at hand.
>> They weren't being silly and wasteful at Apple, using a Cray when a
>> Timex-Sinclair
>> 1000 would have served adequately, being proportionate in power.
>
>In his memoir, Tom Watson wrote about his frustration that CDC beat IBM to
>the market with a powerful >high end computer. Watson said that later he
>realized that such high end computers were specialty items, like >expensive
>limited edition high performance sports cars, and that IBM should focus
>more on the conventional >market.


Are you referring to the "...including the janitor" memo???

http://www.computerhistory.org/revolution/supercomputers/10/33/62

Jon Elson

Oct 5, 2015, 5:57:10 PM
Charles Richmond wrote:


> Hey, yeah... about that!!! When is the government of the US going to stop
> using the Gestapo tactics of seizing other people's property (I mean in
> this limited case, *not* generally) and re-code the air traffic control
> system to be run on something more modern??? Then the FAA could trade all
> their IBM
> 360's for Raspberry Pi's and save a buttload of money!!!
They did, FINALLY! They kept with the unreliable legacy gear way longer
than they should have, but there were several failed projects. One was to
replace all the 360's (really FAA 9020D and 9020E computing elements) with
some bit-slice emulators. I have no idea why that project failed; it seems
like they could hardly go wrong. How could a couple of boards with modern TTL
MSI-LSI parts possibly be LESS reliable than a roomful of 360's with
thousands of PC boards? Obviously, somebody must have had no IDEA what they
were doing to botch that job.

There were at least TWO other projects to completely replace the whole
traffic control system that also failed. This was a lot more ambitious, as
they were also replacing software that had been refined over many years.
These catastrophes showed up in the regular newspapers as well as technical
magazines from time to time.

> With the
> execution speed of modern processors, the system could be largely coded in
> a higher level language and be much more supportable.
Well, I'm sure the system now in place IS written in an HLL; I'd sort of
guess it might be Ada, but I don't know.

And, of course, they are FAR from done! The next system, NextGen, is supposed
to be implemented soon, but it is apparently far from ready. But a lot of
the infrastructure needed (mostly ADS-B) is coming together and that part
IS working.

Jon

Jon Elson

Oct 5, 2015, 6:17:41 PM
Jon Elson wrote:

Here's one article from 2002 that details one of the crashed projects.

http://www.baselinemag.com/c/a/Projects-Processes/The-Ugly-History-of-Tool-Development-at-the-FAA

Jon

Quadibloc

Oct 5, 2015, 8:24:42 PM
On Monday, October 5, 2015 at 3:30:03 PM UTC-6, Charles Richmond wrote:

> No one is saying tha Seymour Cray actually put his DNA into the computer by
> mating with any eletronic devices.

Well, remember, I was replying to someone who had said flatly that he wasn't
the father of the supercomputer, because there were other supercomputers before
the CDC 6600. So I wasn't arguing against his paternity: I was arguing in favor
of it, while admitting some validity to the post to which I was replying, by
pointing out how the Cray I, which he designed, and which had a significant and
novel feature, began an era when supercomputers came to the public notice as a
transformative technology.

So I was saying that, yes, he _was_ the father of the (70's-style)
supercomputer.

John Savard

Quadibloc

Oct 5, 2015, 8:26:46 PM
On Monday, October 5, 2015 at 3:38:44 PM UTC-6, Charles Richmond wrote:
> Then the FAA could trade all their IBM
> 360's for Raspberry Pi's and save a buttload of money!!!

They did eventually recode it, long before anyone came out with anything remotely
resembling the Raspberry Pi.

I guess IBM didn't want to make a newer version of the 9020 using 370-style
monolithic technology, or the Federal Government didn't want to pay for one.

John Savard

Quadibloc

Oct 5, 2015, 8:31:32 PM
On Monday, October 5, 2015 at 3:57:10 PM UTC-6, Jon Elson wrote:

> They did, FINALLY! They kept with the unreliable legacy gear way longer
> than they should have, but there were several failed projects. One was to
> replace all the 360's (really FAA 9020D and 9020E computing elements) with
> some bit-slice emulators. I have no idea why that project failed, it seems
> like they could hardly go wrong. How could a couple boards with modern TTL
> MSI-LSI parts possible be LESS reliable than a roomfull of 360's with
> thousands of PC boards? Obviously, somebody must have had no IDEA what they
> were doing to botch that job.

Well, IBM did have a *lot* of engineers working for them to design their IBM
360 mainframes.

> There were at least TWO other projects to completely replace the whole
> traffic control system that also failed. This was a lot more ambitious, as
> they were also replacing software that had been refined over many years.
> These catastrophes showed up in the regular newspapers as well as technical
> magazines from time to time.

How did the thing get written *the first time* if nobody could rewrite it?

My guess is that it was written the first time by IBM, and the FAA tried doing
the rewrite itself because it was such a bear even for IBM that it was a case
of 'you couldn't pay us enough to try this again'.

John Savard

Anne & Lynn Wheeler

Oct 5, 2015, 9:43:10 PM
Quadibloc <jsa...@ecn.ab.ca> writes:
> They did eventually recode it, long before anyone came out with anything remotely
> resembling the Raspberry Pi.
>
> I guess IBM didn't want to make a newer version of the 9020 using 370-style
> monolithic technology, or the Federal Government didn't want to pay for one.

when we were doing HA/CMP ... some past posts
http://www.garlic.com/~lynn/subtopic.html#hacmp

IBM bid triple-redundant rs/6000 running custom software. application
implementors were told that they didn't have to worry about outages &/or
failures and recovery ... because system software would mask all (FAA
hardware) failures.

It turns out that a review of FAA failure modes found some number at the
application/business (flight control) level ... it wasn't a simple matter of
low-level hardware outages ... but more complex FAA operational issues
... which required rework of the application-level design and
implementation ... which never completed.

Before leaving IBM in 1992, we would periodically visit the technical
assistant to the Federal Systems Division president ... he was doing double
duty as TA on 1st shift, and spent 2nd shift programming Ada for the FAA
effort.

from 1993

Flying In Place: The Faa's Air Control Fiasco
http://www.bloomberg.com/bw/stories/1993-04-25/flying-in-place-the-faas-air-control-fiasco

That FAA effort was just one of a growing list of failed
gov. dataprocessing modernization efforts.

misc posts mentioning FAA & (failed) gov. dataprocessing modernization
efforts
http://www.garlic.com/~lynn/2002g.html#16 Why are Mainframe Computers really still in use at all?
http://www.garlic.com/~lynn/2003l.html#14 Cost of patching "unsustainable"
http://www.garlic.com/~lynn/2007e.html#52 US Air computers delay psgrs
http://www.garlic.com/~lynn/2007o.html#43 Flying Was: Fission products
http://www.garlic.com/~lynn/2008m.html#45 IBM--disposition of clock business
http://www.garlic.com/~lynn/2009q.html#29 Check out Computer glitch to cause flight delays across U.S. - MarketWatch
http://www.garlic.com/~lynn/2009q.html#31 Check out Computer glitch to cause flight delays across U.S. - MarketWatch
http://www.garlic.com/~lynn/2012g.html#89 FAA air traffic facility consolidation effort already late
http://www.garlic.com/~lynn/2012i.html#42 Simulated PDP-11 Blinkenlight front panel for SimH
http://www.garlic.com/~lynn/2013f.html#74 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013n.html#76 A Little More on the Computer
http://www.garlic.com/~lynn/2013o.html#56 Early !BM multiprocessors (renamed from Curiosity: TCB mapping macro name - why IKJTCB?)
http://www.garlic.com/~lynn/2015b.html#59 A-10

increasingly part of growing "success of failure" culture
http://www.garlic.com/~lynn/submisc.html#success.of.failure

Jon Elson

Oct 6, 2015, 12:26:15 AM
Quadibloc wrote:


>
> Well, IBM did have a *lot* of engineers working for them to design their
> IBM 360 mainframes.
>
Yes, but even publicly available documents were extremely detailed in
describing the architecture. And, a number of companies successfully cloned
the 360. I don't know how many plug-compatible mainframe companies there
were, but it was at least a dozen!
>> There were at least TWO other projects to completely replace the whole
>> traffic control system that also failed. This was a lot more ambitious,
>> as they were also replacing software that had been refined over many
>> years. These catastrophes showed up in the regular newspapers as well as
>> technical magazines from time to time.
>
> How did the thing get written *the first time* if nobody could rewrite it?
>
> My guess is that it was written the first time by IBM, and the FAA tried
> doing the rewrite itself because it was such a bear even for IBM that it
> was a case of 'you couldn't pay us enough to try this again'.
The first version was definitely IBM's Federal Systems Division, but then
the FAA, or some contractor(s), started to embellish it. The system grew
like topsy, and then, LATER, when somebody tried to step in and make a new
system that did everything the way the old system did it, they ran into
trouble. I have heard THAT story a few times before!

Jon

Jon Elson

Oct 6, 2015, 12:33:58 AM
Well, actually, they DID! The 9020 system was just a network of 360/50 and
360/65 CPUs with a VERY SLIGHT bit of added microcode (I think they added
something like TWO instructions to the standard set), plus a bunch of
channel-connected custom hardware to communicate between centers and radars,
and run the displays. They also had banks of shared memory between the
systems and other devices.

In 1989 (GASP! They kept 360's running until 1989!!??!!) they replaced the
360 hardware with IBM 3083's (probably didn't need as many of these) and
then replaced those in 1999 with 9672 multiprocessors. So, they kept
running the original 360 machine code for a LONG time, just upgrading CPUs
when the old stuff got creaky.

See https://en.wikipedia.org/wiki/IBM_9020 for where I got this from.

Jon

Anne & Lynn Wheeler

Oct 6, 2015, 1:32:31 AM
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> from 1993
>
> Flying In Place: The Faa's Air Control Fiasco
> http://www.bloomberg.com/bw/stories/1993-04-25/flying-in-place-the-faas-air-control-fiasco

re:
http://www.garlic.com/~lynn/2015h.html#10 the legacy of Seymour Cray
http://www.garlic.com/~lynn/2015h.html#11 the legacy of Seymour Cray
http://www.garlic.com/~lynn/2015h.html#13 the legacy of Seymour Cray
http://www.garlic.com/~lynn/2015h.html#17 the legacy of Seymour Cray

last item in the (1993) Fiasco timeline:

1993

IBM says the system will not be complete until well after 2000.

... snip ...

IBM went into the red in 1992 and was being reorganized into
the 13 Baby Blues in preparation for breaking up the company.
While the board brought in a new CEO to resurrect the company
and reverse the breakup ... there were still major changes.
The Federal Systems Division (which was responsible for the
FAA modernization contract) was sold off to Loral
http://www.nytimes.com/1993/12/14/business/ibm-to-sell-its-military-unit-to-loral.html

Mr. Schwartz also said he regarded Federal Systems' air-traffic control
software as a "hidden asset." Federal Systems is currently leading an
overhaul of the F.A.A. system, a project plagued by cost
overruns. Indeed, as the I.B.M.-Loral deal was being announced
yesterday, the F.A.A.'s chief, David Hinson, ordered a review of the
overhaul project.

The new air-traffic control system, whose cost was estimated at $2.5
billion when it was planned in 1983, is now expected to cost more than
$5 billion. But Mr. Schwartz predicted that the Federal Systems
technology would not only satisfy the F.A.A. but also find markets
abroad.

... snip ...

Status of FAA's Modernization Program RCED-94-167FS: Published: Apr 15,
1994.
http://www.gao.gov/products/RCED-94-167FS
Status of FAA's Modernization Program RCED-95-175FS: Published: May 26,
1995.
http://www.gao.gov/products/RCED-95-175FS
Status of FAA's Modernization Program RCED-99-25: Published: Dec 3, 1998
http://www.gao.gov/products/RCED-99-25

2012

ATC Program Costs, Schedules Unreliable, GAO Says
http://www.ainonline.com/aviation-news/air-transport/2012-02-23/atc-program-costs-schedules-unreliable-gao-says

Fifteen of the 30 ATC programs have experienced delays ranging from two
months at the low end to more than 14 years for Waas. Completion targets
for Eram, considered a NextGen "backbone" system, now specify August
2014, nearly four years late.

FAA "NextGen" modernization
http://www.fiercegovernmentit.com/story/transformational-faa-modernization-programs-slipping-schedule/2012-04-25

2013

What's keeping FAA's NextGen air traffic control on the runway?
http://gcn.com/articles/2013/07/22/faa-next-generation-air-transportation-system.aspx

Ten years into the program, new technology for the Federal Aviation
Administration's Next Generation Air Transportation System is gradually
coming online. But non-technical issues are delaying many of the
promised benefits and creating skepticism in the airline industry.

2014

Air Traffic Control System: Selected Stakeholders' Perspectives on
Operations, Modernization, and Structure GAO-14-770: Published: Sep 12,
2014.
http://gao.gov/products/GAO-14-770


Next Generation Air Transportation System
https://en.wikipedia.org/wiki/Next_Generation_Air_Transportation_System

The New Other Guy

Oct 6, 2015, 3:55:24 AM
On Mon, 05 Oct 2015 23:26:15 -0500, Jon Elson <el...@pico-systems.com>
wrote:

>Quadibloc wrote:
>
>
>>
>> Well, IBM did have a *lot* of engineers working for them to design their
>> IBM 360 mainframes.
>>
>Yes, but even publicly available documents were extremely detailed in
>describing the architecture. And, a number of companies successfully cloned
>the 360. I don't know how many plug-compatible mainframe companies there
>were, but it was at least a dozen!

I worked in 'quality control' at National Semiconductor in the late 70s,
on the IBM clone they were building.

Having BEEN inside, it's NO surprise to me that they failed miserably
to make inroads.






Quadibloc

Oct 6, 2015, 10:44:04 AM
I think I remember seeing advertisements for it; it was sold through ITEL.

John Savard

Quadibloc

Oct 6, 2015, 11:03:42 AM
On Monday, October 5, 2015 at 10:33:58 PM UTC-6, Jon Elson wrote:

> Well, actually, they DID! The 9020 system was just a network of 360/50 and
> 360/65 CPUs with a VERY SLIGHT bit of added microcode (I think they added
> something like TWO instructions to the standard set),

Actually, fourteen, at least.

And, in addition, they assigned functions to two unused bit positions in 360/65
microcode.

I just learned this from the following site:

http://www.ibm360.info/

with a lot of information on the 9020. (Bitsavers doesn't appear to have coverage of this machine.)

John Savard

Charlie Gibbs

Oct 6, 2015, 12:59:30 PM
Wow, these guys sound as bad as our provincial government, which has had
more than its own share of computing disasters. They sank $200 million
into their new Compass card system, which required riders to "tap on"
and "tap off" when boarding and disembarking. Poor response time and
flaky results turned the system into a nightmare. Yesterday I rode a
bus across all three fare zones at a single-zone rate because they
finally gave up even trying to handle a multizone system.

--
/~\ cgi...@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!

Al Kossow

Oct 6, 2015, 2:44:25 PM
On 10/5/15 2:58 PM, Jon Elson wrote:

> They did, FINALLY! They kept with the unreliable legacy gear way longer
> than they should have, but there were several failed projects. One was to
> replace all the 360's (really FAA 9020D and 9020E computing elements) with
> some bit-slice emulators. I have no idea why that project failed, it seems
> like they could hardly go wrong. How could a couple boards with modern TTL
> MSI-LSI parts possible be LESS reliable than a roomfull of 360's with
> thousands of PC boards? Obviously, somebody must have had no IDEA what they
> were doing to botch that job.
>
> There were at least TWO other projects to completely replace the whole
> traffic control system that also failed. This was a lot more ambitious, as
> they were also replacing software that had been refined over many years.
> These catastrophes showed up in the regular newspapers as well as technical
> magazines from time to time.
>
>> With the
>> execution speed of modern processors, the system could be largely coded in
>> a higher level language and be much more supportable.
> Well, I'm sure the system now in place IS written in an HLL, I'd sort of
> guess it might be Ada, but don't know.
>
> And, of course, they are FAR from done! The next system NexGen is supposed
> to be implemented soon, but it is apparently far from ready. But, a lot of
> the infrastructure needed (mostly ADDS-B) is coming together and that part
> IS working.
>
> Jon
>

If you can find an a.f.c. archive that hasn't been fscked up (&*#%^# YOU GOOGLE),
read the a.f.c. posts from John Varela circa 2010 on the subject.

hanc...@bbs.cpcn.com

Oct 6, 2015, 2:58:08 PM
On Tuesday, October 6, 2015 at 12:59:30 PM UTC-4, Charlie Gibbs wrote:
> Wow, thse guys sound as bad as our provincial government, which has had
> more than its own share of computing disasters. They sank $200 million
> into their new Compass card system, which required riders to "tap on"
> and "tap off" when boarding and disembarking. Poor response time and
> flaky results turned the system into a nightmare. Yesterday I rode a
> bus across all three fare zones at a single-zone rate because they
> finally gave up even trying to handle a multizone system.

SEPTA (Phila) is planning a new fare collection system; the present one is very basic--paper and tokens.

I question the cost/benefit of the new system. It will cost about $300 million. But will it generate $300 million worth of new business (riders liking the new system), or will it save $300 million in costs? I doubt it.


Read about their proposals here. Will they be more or less convenient for riders?
http://www.septa.org/key/index.html


Side note: Control Data Corp, Cray's old company, supplied Amtrak ticketing equipment.

hanc...@bbs.cpcn.com

Oct 6, 2015, 3:07:03 PM
On Friday, October 2, 2015 at 8:19:46 PM UTC-4, Quadibloc wrote:

> I would agree that Seymour Cray was not the inventor of the big, fast computer.
> As soon as computers were invented, the idea of making new ones more powerful
> than the old ones was obvious.
> However, Seymour Cray did invent the Cray I, and its vector register design,
> notably more successful than such machines as the Illiac IV, the Texas
> Instruments Advanced Scientific Computer, and Control Data's successors to his
> 6600, and so on... was copied by many companies. DEC, for the VAX, Univac, for
> the 1100 series, IBM, for the later 370s, all made add-ons to their computers
> that followed his principles, and several other companies made vector
> supercomputers of similar design.
> For a few years following the Cray I, there was much talk about how the era of
> the supercomputer had begun - and then, by "supercomputer", they meant vector
> machines along the lines of the Cray.
> Of _that_, Seymour Cray was the father.

[snip]

You make a good point. If the term 'supercomputer' is defined to be Cray's invention, then there is no argument.

I was just thinking that STRETCH and LARC were more than just the next computer in line. AFAIK, they were both specially developed to be high-powered machines with special features, the maximum available, not merely production-line computers for large organizations. In other words, while the IBM 7090 was a powerful computer and IBM's best regular product of its time, it still wasn't as powerful as STRETCH.

IMHO, the developers of STRETCH and LARC deserve more historical credit than they have gotten. Perhaps their work wasn't as ground-breaking as Cray's, but they did push the envelope to the benefit of the industry.

Al Kossow

Oct 6, 2015, 3:12:04 PM
On 10/6/15 12:07 PM, hanc...@bbs.cpcn.com wrote:
> they did push the envelope to the benefit of the industry.

Yup, where would we be today without tunnel diode memories.



Jon Elson

Oct 6, 2015, 3:58:24 PM
The New Other Guy wrote:

As far as I know, most of the plug-compatible clones of 360's DID actually
work. But, yes, I agree, other than Amdahl, nobody really was a big success
at the cloned 360's. That was largely due to marketing; IBM was a truly
masterful PURVEYOR of computers. I thought most of their hardware was
actually quite awful, but they sure knew how to sell them! The 360/30 had
an 8-bit datapath, 8-bit (plus parity) main storage, and the local store
(registers) was in a separately addressed section of main store. Depending
on the instruction mix, most people rated it at between 30K and 40K
instructions per second. We had 12-bit minicomputers that fit the complete
system in a 5' relay rack and plugged into the 120 V wall socket, that ran
about 150K instructions/second, and were made before most 360/30's. So, the
/30 (and its sort-of clones, models 22 and 25) and /40 were really
ridiculously slow. The model 50 was just a little better, mostly saddled
with horribly slow main storage. The model 65 was the lowest model that
really performed at a reasonable level.

But, IBM really dominated the computer industry, from the late 1960's
through the mid 80's or so.

Jon

Charles Richmond

Oct 6, 2015, 5:16:44 PM
"Quadibloc" <jsa...@ecn.ab.ca> wrote in message
news:178009e0-4880-43b4...@googlegroups.com...
Quadibloc: Yes, I understand you are defending Seymour Cray's paternity in
regard to supercomputers. Although I was responding to your post, most of my
assertions were aimed at those who challenge Mr. Cray's parental rights in
regard to the supercomputer.

Also I want to make the point... that "father of" this or that is largely
public relations. Seymour Cray is identified by the popular media, more
than any other computer designer, with the Cray supercomputer (usually
thinking about the Cray I). IMO Seymour Cray was probably *the* premier
designer of computers in the 1960's and likely the 1970's also. Software
was another thing altogether. The software Mr. Cray wrote was spotty and a
bit slap-dash, but he was working "outside his element".

Charles Richmond

Oct 6, 2015, 5:25:01 PM
"Al Kossow" <a...@bitsavers.org> wrote in message
news:mv14md$nuu$1...@dont-email.me...
> On 10/5/15 2:58 PM, Jon Elson wrote:
>
> [snip...] [snip...]
> [snip...]
>
> If you can find an a.f.c archive that hasn't been fscked up ((&*#%^# YOU
> GOOGLE)
> Read the a.f.c. posts from John Varela circa 2010 on the subject.
>

Mr. Kossow, ISTM that you detest incompetence. Unfortunately, incompetence
seems to be the hallmark of the 21st century world. Poor us!!! (I am *not*
being facetious here!!!)

Charles Richmond

Oct 6, 2015, 5:30:49 PM
"Quadibloc" <jsa...@ecn.ab.ca> wrote in message
news:a1a9d3b5-1d0f-4c2d...@googlegroups.com...
Besides the software maintenance nightmare, one other *big* problem is
hardware failure. With the aging 360 hardware, system outages of a day or
more are *not* uncommon. I'd like to think that the regional radar is going
to be helpful to the air traffic controllers... especially if I am on a
plane that wants to land!!!

When the air traffic control system is out, it is similar to when a traffic
light is *not* working. The aircraft increase their separation by a few
miles and at best everything gets jammed up and slowed down. At worst,
there could be a collision or multiple collisions. The air traffic
controller is confronted with two dozen or so aircraft, each going about 200
mph... and is keeping up with it all using a pencil and paper. Keeping up...
yeah, right!

Charles Richmond

Oct 6, 2015, 5:36:13 PM
"Jon Elson" <el...@pico-systems.com> wrote in message
news:bLWdnZTs9rer0o7L...@giganews.com...
>
> [snip...] [snip...]
> [snip...]
>
> In 1989 (GASP! They kept 360's running until 1989!!??!!) they replaced
> the
> 360 hardware with IBM 3083's (probably didn't need as many of these) and
> then replaced those in 1999 with 9672 multiprocessors. So, they kept
> running the original 360 machine code for a LONG time, just upgrading CPUs
> when the old stuff got creaky.
>

Couldn't we just replace it all with triple-redundant Raspberry Pi's running
in "hot standby"??? At worst, just run a bunch of 360/50 emulators on the
ARM chips and keep plugging away at the original 360 code that was written
in hieroglyphics. Total cost might be $15,000 US per airport. We can
pocket the rest of the $5 billion!!! ;-)

hanc...@bbs.cpcn.com

unread,
Oct 6, 2015, 5:47:29 PM10/6/15
to
On Tuesday, October 6, 2015 at 3:58:24 PM UTC-4, Jon Elson wrote:

> As far as I know, most of the plug-compatible clones of 360's DID actually
> work. But, yes, I agree, other than Amdahl, nobody really was a big success
> at the cloned 360's. That was largely due to marketing, IBM was a truly
> masterful PURVEYOR of computers.

I think the 370 clone vendors, e.g. Amdahl, Hitachi, etc., sold a fair number of machines. My guess is that the problem was economies of scale, making production costs high. Also, I think once the clones came out, IBM had fully unbundled, so one still had to buy system software regardless of who made the hardware. The book by Campbell-Kelly says IBM makes a lot of money from renting CICS, for example.



I thought most of their hardware was
> actually quite awful, but they sure knew how to sell them! The 360/30 had
> 8-bit datapath, 8-bit (plus parity) main storage, and the local store
> (registers) were in a separately-addresses section of main store. Depending
> on the instruction mix, most people rated it at between 30K and 40K
> instructions per second. We had 12-bit minicomputers that fit the complete
> system in a 5' relay rack and plugged into the 120 V wall socket that ran
> about 150K instructions/second, and were made before most 360/30's.


I don't think there were that many mini-computers on the market before the S/360 came along, and I'd be surprised if their _throughput_ could compare to a model 30. What mini could drive a 1,000 card-per-minute reader simultaneously with a 1,000 line-per-minute printer? Did such minis have up to 64k of memory, disks, and tapes?




> So, the
> /30 (and it's sort-of clones, models 22 and 25) and /40 were really
> ridiculously slow. The model 50 was just a little better, mostly saddled
> with horribly slow main storage. The model 65 was the lowest model that
> really performed at a reasonable level.

The 30 was a slow machine, but I think it was cost effective for small installations. It was certainly popular enough. While many customers may have been former 1401 users (about 10,000), there were some 15,000 other customers.

We had a 40 and did quite a bit of work on it (especially after installing a spooler). We even handled a few online terminals. I think the 50 was a reasonably powered machine.


> But, IBM really dominated the computer industry, from the late 1960's
> through the mid 80's or so.

Part of IBM's dominance certainly did come from momentum; it had the tab industry and early computers, and was easy to evolve into S/360 and S/370. But having worked with mainframes from other vendors, I think IBM provided good machines and support.

Jon Elson

unread,
Oct 6, 2015, 7:02:44 PM10/6/15
to
hanc...@bbs.cpcn.com wrote:

> On Tuesday, October 6, 2015 at 3:58:24 PM UTC-4, Jon Elson wrote:

>> I thought most of their hardware was
>> actually quite awful, but they sure knew how to sell them! The 360/30
>> had 8-bit datapath, 8-bit (plus parity) main storage, and the local store
>> (registers) were in a separately-addresses section of main store.
>> Depending on the instruction mix, most people rated it at between 30K and
>> 40K
>> instructions per second. We had 12-bit minicomputers that fit the
>> complete system in a 5' relay rack and plugged into the 120 V wall socket
>> that ran about 150K instructions/second, and were made before most
>> 360/30's.
>
>
> I don't think there were that many mini-computers on the market before the
> S/360 came along, and I'd be surprised if their _throughput_ could compare
> to a model 30. What mini could drive a 1,000 card/minute reader
> simultaneously with a 1,000 line a minute printer? Did such mini's have
> up to 64k in memory, disks, and tapes?
>
>
Well, architectures were so different that comparisons are hard. Absolutely
NO DOUBT I/O throughput of a 360/30 was way better than a LINC or PDP-8. You
could get high performance peripherals on 12-bit minis, if you really wanted
them, but I/O was pretty much all program-controlled, although PDP-8's did
have a DMA capability.

The LINC was developed at MIT and the first were delivered in 1962, all
discrete transistors, built out of DEC "system building blocks". The PDP-5
(predecessor to the PDP-8) came out in 1964. The PDP-5 was a BIT on the
slow side.

>
>
>> So, the
>> /30 (and it's sort-of clones, models 22 and 25) and /40 were really
>> ridiculously slow. The model 50 was just a little better, mostly saddled
>> with horribly slow main storage. The model 65 was the lowest model that
>> really performed at a reasonable level.
>
> The 30 was slow machine, but I think it was cost effective for small
> installations. It was certainly popular enough. While many customers may
> have been former 1401 users (about 10,000), there were another 15,000
> other customers.
>
Yes, sure, if you were moving from an ALL CARD 1401 shop, a 360/30 was HOT
STUFF! The card machines and printer were basically the same, so unless you
had tapes and/or disks, the 360 couldn't do anything faster, anyway.
> We had a 40 and did quite a bit of work on it (especially after installing
> a spooler). We even handled a few online terminals. I think the 50 was a
> reasonably powered machine.
>
We tried to run a whole university with 4000+ students and employees on ONE
360/50. Batch turnaround times ranged from 4 hours (which was bad) to 8
hours, which was just AWFUL. By the time you got your printout, you
totally had forgotten what the last change was supposed to fix. That was a
pretty awful experience, maybe get 2 chances to edit your program a day,
what with having classes, eating, etc. I've never run a 360 without HASP-
II, although I have heard what it was like. Spooling was certainly a major
improvement on the larger machines with serious multiprogramming.
>
>> But, IBM really dominated the computer industry, from the late 1960's
>> through the mid 80's or so.
>
> Part of IBM's dominance certainly did come from momentum; it had the tab
> industry and early computers, and was easy to evolve into S/360 and S/370.
> But having worked with mainframes from other vendors, I think IBM
> provided good machines and support.
Oh, their support was LEGENDARY! No matter what sort of problem you had,
they could probably dig up somebody who could at least help you understand
what you were doing wrong. Their manuals could teach you computer science.
Anything you could possibly want to know, except the machine schematics, was
in the manual library, and if you really needed to look at schematics, they
would show you those, too. (We had a guy that connected several custom
devices to the channel bus on our 360/50.)

They wouldn't sell you a box and leave, they would make sure you had the
tools to develop what you needed to get the job done. (Of course, there
were some legendary failures where either the customer or IBM got in way
over their heads on large projects.)

I DO give credit to IBM for making computers serviceable, distilling them
down to the minimum, and then being able to market them well.

Jon

terry+go...@tmk.com

unread,
Oct 6, 2015, 7:22:59 PM10/6/15
to
On Tuesday, October 6, 2015 at 7:02:44 PM UTC-4, Jon Elson wrote:
> Yes, sure, if you were moving from an ALL CARD 1401 shop, a 360/30 was HOT
> STUFF! The card machines and printer were basically the same, so unless you
> had tapes and or disks, the 360 couldn't do anything faster, anyway.

With a 360, you could get the 2560 MFCM (Mother ******* Card Mangler) which had 2 input hoppers, 5 output hoppers, and a non-deterministic path between them which included 2 complete 180-degree turns and a flip-over.

> I've never run a 360 without HASP-
> II, although I have heard what it was like. Spooling was certainly a major
> improvement on the larger machines with serious multiprogramming.

One of the "features" of the 2560 was that the hopper numbering changed if you were spooling or not. We had many cases of students punching their output on the next student's source deck.

> Anything you could possibly want to know except the machine schematic were
> in the manual library, and if you really needed to look at schematics, they
> would show you those, too. (We had a guy that connected several custom
> devices to the channel bus on our 360/50.)

I'm pretty sure that all of our 370s had a complete schematic set that lived with the CPU (rolling multi-level cart full of tall dark blue binders). I don't recall ever looking anything up in there myself, but I do remember the wiring lists between the modules taking up a large amount of space in the manuals. The schematic set was for that specific CPU (serial number on the CPU and manuals had to match).

I don't know if our 360s had the cart of blue manuals - I wasn't in charge of things yet at that point.

By the time the 43xx and 9370 systems came through, IBM didn't provide a manual set any more. And the legendary support had also gone by the wayside by the time of the 9370 - we never accepted the 9370 and after 6 months or so of trying to make it work (and more effort spent lobbying us to sign the acceptance and then they'd "work it out") they finally took it back. That was the end of IBM for academic computing at that site.

Joe Morris

unread,
Oct 6, 2015, 7:45:14 PM10/6/15
to
"Charles Richmond" <nume...@aquaporin4.com> wrote:

> When the air traffic control system is out, it is similar to when a
> traffic light is *not* working.

Um...no. One can wish that it was that simple, but it isn't.

> The aircraft increase their separation by a few
> miles

...with no way for the controllers to be certain that they know where
everyone is in the airspace owned by the failed enroute center. Without
that knowledge the controllers can't tell the pilots (and the pilots don't
have the information) how to set up the desired separation. ADS-B Out
certainly can help, but a controller is still necessary.

> and at best everything gets jammed up and slowed down. At worst,
> there could be a collision or multiple collisions.

The very explicit primary duty of Air Traffic Control is to provide
separation. If that cannot be assured, the airspace is closed.

> The air traffic
> controller is confronted with two dozen or so aircraft, each going about
> 200 mph...

more likely to be much closer to 500 KT than 200 MPH.

> and is keeping up with it all using a pencil and paper. Keeping
> up... yeah, right!

Earlier this year the Washington Air Route Traffic Control Center (aka ZDC)
experienced a problem with a new computer system (no, not a Raspberry Pi, and
not the 9020 or its descendants) that caused the FAA to declare ATC ZERO at
the facility - that's the term for a 100% shutdown. Aircraft under visual
flight rules could still operate, but no instrument flight rules flights
were permitted...meaning among other things no airline flights. A graphic
from "Flight Radar 24" (a good smartphone app, btw) shows lots of aircraft
elsewhere in the US, but none in the area owned by ZDC.

http://forums.jetcareers.com/attachments/image-jpeg.32442/


I've not yet seen any of the formal reports about the problem - most of what
I know is from news stories - so I won't speculate on how the new system
managed to bring down the entire house of cards.

A long-term outage - such as the arson fire at the Chicago Center (ZAU) last
year - is handled by having a large fraction of the controllers TDY at
adjacent centers; several of the ones who had been on duty at ZAU
immediately jumped into their cars and drove to the Indianapolis center
(ZID) without even bothering to stop off at their homes to pick up a
suitcase...but until they were in place at an adjacent center the ZAU
airspace was closed - again, except for visual traffic.

Joe


Quadibloc

unread,
Oct 6, 2015, 8:18:07 PM10/6/15
to
Tunnel diodes were tricky to manufacture, so to get usable ones for high-speed
computer logic, one had to manufacture them and then test and sort them to get
matching ones. Obviously, that isn't an option with integrated circuits, so
that basically killed the usefulness of the technology.

It's too bad, as if there were a way to use them, computer speeds could perhaps
be improved.

Of course, someone _did_ figure out a way to revive the Josephson junction
after even IBM gave up on it... but I haven't heard much about that project
(involving a Russian inventor, and the use of pulses instead of steady
currents) since the initial excitement.

John Savard

Anne & Lynn Wheeler

unread,
Oct 6, 2015, 8:32:27 PM10/6/15
to

hanc...@bbs.cpcn.com writes:
> I think the 370 clone vendors, e.g. Amdahl, Hitachi, etc, sold a fair
> amount of machines. My guess the problem was economies of scale,
> making production costs high. Also, I think once the clones came out,
> IBM had fully unbundled, so one still had to buy system software
> regardless of who made the hardware. The book by Campbell-Kelly says
> IBM makes a lot of money from renting CICS, for example.

2012 numbers: only about 4% of IBM revenue was from mainframe
processors ... but the mainframe division accounted for 25% of total IBM
revenue (and 40% of profit) ... i.e. mainframe software and services.

processor revenue seems to have dropped off since then ... but software
and services continue to be quite a big part of revenue.

Charlie Gibbs

unread,
Oct 6, 2015, 9:18:27 PM10/6/15
to
On 2015-10-06, hanc...@bbs.cpcn.com <hanc...@bbs.cpcn.com> wrote:

> On Tuesday, October 6, 2015 at 12:59:30 PM UTC-4, Charlie Gibbs wrote:
>
>> Wow, these guys sound as bad as our provincial government, which has had
>> more than its own share of computing disasters. They sank $200 million
>> into their new Compass card system, which required riders to "tap on"
>> and "tap off" when boarding and disembarking. Poor response time and
>> flaky results turned the system into a nightmare. Yesterday I rode a
>> bus across all three fare zones at a single-zone rate because they
>> finally gave up even trying to handle a multizone system.
>
> SEPTA (Phila) is planning a new fare collection system; the present one
> is very basic--paper and tokens.
>
> I question the cost/benefit of the new system. It will cost about $300
> million. But will it generate $300 worth of new business (riders liking
> the new system),

It probably will. Three hundred bucks, that is.

> or will it save $300 million in costs? I doubt it.

But it'll be so _sexy_!

Charlie Gibbs

unread,
Oct 6, 2015, 9:18:27 PM10/6/15
to
> On Tuesday, October 6, 2015 at 3:58:24 PM UTC-4, Jon Elson wrote:
>
>> As far as I know, most of the plug-compatible clones of 360's DID actually
>> work. But, yes, I agree, other than Amdahl, nobody really was a big success
>> at the cloned 360's. That was largely due to marketing, IBM was a truly
>> masterful PURVEYOR of computers.

That they were - you had to give them that. And they stood behind their
machines.

> I think the 370 clone vendors, e.g. Amdahl, Hitachi, etc, sold a fair amount
> of machines. My guess the problem was economies of scale, making production
> costs high. Also, I think once the clones came out, IBM had fully unbundled,
> so one still had to buy system software regardless of who made the hardware.
> The book by Campbell-Kelly says IBM makes a lot of money from renting CICS,
> for example.

At least it gave them revenue when the clones came along.

>> I thought most of their hardware was actually quite awful, but they
>> sure knew how to sell them! The 360/30 had 8-bit datapath, 8-bit
>> (plus parity) main storage, and the local store (registers) were
>> in a separately-addresses section of main store. Depending on the
>> instruction mix, most people rated it at between 30K and 40K instructions
>> per second. We had 12-bit minicomputers that fit the complete system in
>> a 5' relay rack and plugged into the 120 V wall socket that ran about 150K
>> instructions/second, and were made before most 360/30's.

Mind you, many of that model 30's 30K instructions were things like MVC or AP,
which a mini could only do with groups of instructions running in loops.

> I don't think there were that many mini-computers on the market before
> the S/360 came along, and I'd be surprised if their _throughput_ could
> compare to a model 30. What mini could drive a 1,000 card/minute reader
> simultaneously with a 1,000 line a minute printer? Did such mini's have
> up to 64k in memory, disks, and tapes?
>
>> So, the /30 (and it's sort-of clones, models 22 and 25) and /40 were really
>> ridiculously slow. The model 50 was just a little better, mostly saddled
>> with horribly slow main storage. The model 65 was the lowest model that
>> really performed at a reasonable level.
>
> The 30 was slow machine, but I think it was cost effective for small
> installations. It was certainly popular enough. While many customers
> may have been former 1401 users (about 10,000), there were another 15,000
> other customers.

It depends on what you mean by slow. As you said, it could drive a 1,000-cpm
card reader and an 1,100-lpm printer. And many of the commercial applications
running on those machines were still I/O-bound.

As for that 64K of memory, that was more an IBM marketing strategy than a
physical limitation. At a PPOE we once wanted to run a job that was too
big for our machine, so we rented time on a 360/30 with 128K of memory.
(I still have the SYSRES pack.) There were some switches and indicators
mounted on an unused portion of the panel to handle the extra address bit -
it looked kind of homebrewed, but it worked. I read that Greyhound Leasing
bought up a bunch of old /30s, refurbished them, and hung up to 512K on them.

> We had a 40 and did quite a bit of work on it (especially after installing
> a spooler). We even handled a few online terminals. I think the 50 was a
> reasonably powered machine.
>
>> But, IBM really dominated the computer industry, from the late 1960's
>> through the mid 80's or so.
>
> Part of IBM's dominance certainly did come from momentum; it had the tab
> industry and early computers, and was easy to evolve into S/360 and S/370.
> But having worked with mainframes from other vendors, I think IBM provided
> good machines and support.

Yup.

Jon Elson

unread,
Oct 7, 2015, 12:19:10 AM10/7/15
to
The New Other Guy wrote:


>
> I worked in 'quality control' at National Semiconductor in the late 70s,
> on the IBM clone they were building.
>
> Having BEEN inside, it's NO surprise to me that they failed miserably
> to make inroads.
Actually, National Semi had a long string of pretty good products that they
had very poor luck marketing. I thought the 16032 CPU (later renamed the
32016) was actually quite good. Yes, it was no match for a VAX 11/780, but
it was on ONLY 5 CHIPS! The instruction set looked quite good, compared to
the VAX, very well organized and orthogonal. (not saying the 16032 was
better than the VAX, just that it was nearly as good, as the VAX was very
fine, if you liked CISC architectures.)

I built a multiprocessor array with 7 of them attached to a 780 for I/O.
But, they never were able to get anybody to build a significant number of
systems with them.

Before that, there was the 8500 graphics chip set.

Seems like there were some other major busts, too.

Jon

Jon Elson

unread,
Oct 7, 2015, 12:22:52 AM10/7/15
to
terry+go...@tmk.com wrote:


>
> I'm pretty sure that all of our 370s had a complete schematic set that
> lived with the CPU (rolling multi-level cart full of tall dark blue
> binders). I don't recall ever looking anything up in there myself, but I
> do remember the wiring lists between the modules talking up a large
> amount of space in the manuals. The schematic set was for that specific
> CPU (serial number on the CPU and manuals had to match).
>
> I don't know if our 360s had the cart of blue manuals - I wasn't in
> charge of things yet at that point.
>
Yup, the ALDs and related documents were definitely present on 360s as well.
> By the time the 43xx and 9370 systems came through, IBM didn't provide a
> manual set any more. And the legendary support had also gone by the
> wayside by the time of the 9370 - we never accepted the 9370 and after 6
> months or so of trying to make it work (and more effort spent lobbying
> us to sign the acceptance and then they'd "work it out") they finally
> took it back. That was the end of IBM for academic computing at that
> site.
Wow, a sad tale! How far they have fallen!

Jon

Jon Elson

unread,
Oct 7, 2015, 12:29:54 AM10/7/15
to
Charlie Gibbs wrote:


>
> Mind you, many of that model 30's 30K instructions were things like MVC or
> AP, which a mini could only do with groups of instructions running in
> loops.
>
Well, consider that the /30 had ONE BYTE wide memory, at 2.5 us per cycle.
Note that the registers (local storage) were in an extension of main
storage, so there was no overlap of main store and local store. So, one of
the simplest instructions on the 360 was something like add, register to
register. This instruction was 16-bits long, so it took two memory cycles.
Then, it had to read a 4-byte source operand and read-modify-write a 4-byte
destination. So, that is 10 memory cycles at 2.5 us = 25 us total execution
time. 25 us = 40K instructions/second.
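
(Purely as an illustration, here is that arithmetic as a small Python
sketch; the cycle counts are just the ones given above, not measured
figures.)

cycle_us = 2.5      # one byte-wide storage cycle on the /30
fetch = 2           # 16-bit RR instruction = 2 bytes to fetch
src_read = 4        # read the 4-byte source "register" in local store
dst_rmw = 4         # read-modify-write the 4-byte destination
cycles = fetch + src_read + dst_rmw          # 10 cycles
exec_us = cycles * cycle_us                  # 25 us per RR add
print(exec_us, round(1_000_000 / exec_us))   # 25.0 us -> 40000 instr/sec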

True, character manipulation and decimal arithmetic were greatly accelerated
by doing in microcode what would be a subroutine on a traditional mini.

Jon

hanc...@bbs.cpcn.com

unread,
Oct 7, 2015, 10:54:44 AM10/7/15
to
On Tuesday, October 6, 2015 at 7:02:44 PM UTC-4, Jon Elson wrote:
> Yes, sure, if you were moving from an ALL CARD 1401 shop, a 360/30 was HOT
> STUFF! The card machines and printer were basically the same, so unless you
> had tapes and or disks, the 360 couldn't do anything faster, anyway.

I believe the speeds on the 1401 were 600 lines or cards per minute, and somewhat program dependent, so the effective rate could be slower. However, I believe the 360/30 was always 1,000 lines or cards per minute (unless the program was doing an awful lot between each card or line.) From what people told me, the 360/30 was a significant improvement over their prior 1401, even when running 1401 emulation, and certainly with native code.

I don't know how the 360/30 compared to similar machines (same throughput) in terms of price. I think other vendors were usually cheaper, but they didn't have the hardware, system software, or application software support that IBM provided. For example, IBM provided a whole suite of hospital-oriented accounting applications free, which I don't think other vendors could offer. The price dropped significantly if one leased a machine from a third party dealer instead of directly from IBM.

Back in the 1950s and 1960s, the general computer literature made comparisons on various timings--instruction, core access, etc. But these didn't tell the whole story. Fortunately, later on they did comparisons using wall clock time to compare the throughput of the same program running on different machines. The customer, especially management, couldn't care less about internal CPU speed; rather, they wanted to know how long it would take to get out the payroll.

[snip]

hanc...@bbs.cpcn.com

unread,
Oct 7, 2015, 11:13:27 AM10/7/15
to
On Wednesday, October 7, 2015 at 12:29:54 AM UTC-4, Jon Elson wrote:

> Well, consider that the /30 had ONE BYTE wide memory, at 2.5 us per cycle.

But many of the jobs run on a 360/30 were I/O bound. It was amazing how much work one could do between reading a card and printing out results and keeping the reader and printer running at full speed. While some of the I/O channel's work was done by stolen CPU cycles, other work was done by the printer itself and its associated control unit.
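
(To put a number on that: a rough Python sketch, taking the 30K-40K
instructions/second estimate quoted earlier in the thread as an
assumption. At 1,000 cards per minute there are about 60 ms between
cards.)

cards_per_min = 1000
ms_per_card = 60_000 / cards_per_min    # 60 ms per card at full reader speed
for ips in (30_000, 40_000):            # /30 instruction rates quoted above
    instr_per_card = int(ips * ms_per_card / 1000)
    print(ips, instr_per_card)          # roughly 1800-2400 instructions/card,
                                        # less whatever the channel steals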

Jon Elson

unread,
Oct 7, 2015, 3:22:39 PM10/7/15
to
Quadibloc wrote:

> On Tuesday, October 6, 2015 at 1:12:04 PM UTC-6, Al Kossow wrote:
>> On 10/6/15 12:07 PM, hanc...@bbs.cpcn.com wrote:
>
>> > they did push the envelope to the benefit of the industry.
>
>> Yup, where would we be today without tunnel diode memories.
>
> Tunnel diodes were tricky to manufacture, so to get usable ones for
> high-speed computer logic, one had to manufacture them and then test and
> sort them to get matching ones. Obviously, that isn't an option with
> integrated circuits, so that basically killed the usefulness of the
> technology.
Right, the threshold current would come out with a random scatter due to
where the crystal defects were, and you couldn't adjust the device. If you
just had one tunnel diode, you could set the current right where you wanted
it, but if you tried to make a big array, you couldn't. Also, having a big
array of tunnel diodes, all drawing current continuously, would make a
pretty power-hungry memory device.

Jon

Jon Elson

unread,
Oct 7, 2015, 3:32:28 PM10/7/15
to
hanc...@bbs.cpcn.com wrote:


> From what people told me, the 360/30 was a significant improvement over
> their prior 1401, even when running 1401 emulation, and certainly with
> native code.
>
Well, actually, it seems the original 360/30 was a much BETTER 1401 than a
360. Memory and data paths were all 8-bit wide! Much closer to the old
1401 and other character machines than a 360 and other word machines.
The model 30 had significantly faster memory than the 1401, and that largely
set the speedup right there.

Many shops bought the 360/30 and ran them ONLY in 1401 mode, and never even
considered converting their business DP programs over to run as 360
programs. Really, on the /30, there was little benefit, as small /30
systems did not support multiprogramming. That was due to the tight memory,
and often some of these systems were diskless. You loaded each program from
cards, so obviously only one program at a time in memory.

If you put enough memory on a /30, you could run DOS/360 on it, and that
gave you multiprocessing.

Jon

hanc...@bbs.cpcn.com

unread,
Oct 7, 2015, 5:08:35 PM10/7/15
to
On Wednesday, October 7, 2015 at 3:32:28 PM UTC-4, Jon Elson wrote:
> > From what people told me, the 360/30 was a significant improvement over
> > their prior 1401, even when running 1401 emulation, and certainly with
> > native code.
> >
> Well, actually, it seems the original 360/30 was a much BETTER 1401 than a
> 360. Memory and data paths were all 8-bit wide! Much closer to the old
> 1401 and other character machines than a 360 and other word machines.
> The model 30 had significantly faster memory than the 1401, and that largely
> set the speedup right there.

I suspect a big gain in performance was the faster card reader and printer, significant in business applications. I think the 360 versions of tape and disk (2311 vs. 1311) were faster than the 1401 as well, also speeding throughput.


> Many shops bought the 360/30 and ran them ONLY in 1401 mode, and never even
> considered converting their business DP programs over to run as 360
> programs. Really, on the /30, there was little benefit, as small /30
> systems did not support multiprogramming. That was due to the tight memory,
> and often some of these systems were diskless. You loaded each program from
> cards, so obviously only one program at a time in memory.

Indeed, many places ran 1401 code into the 1990s.

However, there were certain advantages to going native mode as opposed to emulation. Native mode ran faster, and could support larger more complex programs and data files. Most companies were growing and needed to expand basic systems to do more things and handle more volume. They certainly could do so in 1401 mode--and many did--but native mode gave them more options.


> If you put enough memory on a /30, you could run DOS/360 on it, and that
> gave you multiprocessing.

Our DOS required only 14k, so a 32k /30 user had 18k left over (maybe even more, as a simpler DOS would take up less room). I don't think too many /30 users ran multiprogramming. But DOS would allow a stacked-job capability instead of manually loading in object decks, speeding operator productivity.

The other thing is that more memory would allow bigger buffers for file I/O. A 1401 programmer might not want to waste 10k on large tape blocks (5k on each side), but a native mode programmer could make excellent use of it and significantly speed the processing of a tape file. If the file was large, it would save on mounts, too.
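
(A quick sketch of the blocking arithmetic in Python. The 800 bpi density
and 0.6-inch inter-block gap are assumptions here--roughly right for the
2400-series 9-track drives--not figures from this thread.)

bpi = 800          # assumed recording density
gap_in = 0.6       # assumed inter-block gap, in inches
for block_bytes in (80, 800, 5000):
    data_in = block_bytes / bpi                # inches of data per block
    util = data_in / (data_in + gap_in)        # fraction of tape holding data
    print(block_bytes, round(util * 100, 1))   # 14.3, 62.5, 91.2 (percent)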

A 64k machine would allow more complex programs, which could reduce the overall number of program steps within a job, saving wall clock time.

Jon Elson

unread,
Oct 7, 2015, 5:33:23 PM10/7/15
to
Jon Elson wrote:


> If you put enough memory on a /30, you could run DOS/360 on it, and that
> gave you multiprocessing.
ACK!! Multiprogramming, NOT multiprocessing!

Jon

Jon Elson

unread,
Oct 7, 2015, 5:50:50 PM10/7/15
to
hanc...@bbs.cpcn.com wrote:

> I suspect a big gain in performance was the faster card reader and
> printer, significant in business applications. I think the 360 versions
> of tape and disk (2311 vs. 1311) were faster than the 1401 as well, also
> speeding throughput.

Oh, yes, I think the guys who just STAYED in 1401 mode were running a nearly
all-card or card and tape shop. Once you had disks on the machine, the
advantage of 360 mode just had to be a huge reason to update the system.
The original 360/30 was very limited in high speed peripheral support, due
to the single byte-wide memory. You could only get 300 K bytes/second
throughput for ALL I/O combined, and that would stall the CPU until it was
done. Fast tapes at 800 BPI couldn't come close to that, but even
relatively slow disks could. You had to be very careful when designing the
system to select peripherals that could not ever overload the memory
bandwidth. We even had this on our 360/50. You would get these ghastly
tapes that had gaps of a couple character times right in the middle of the
records. Somebody had misconfigured something and it allowed too many
devices to be transferring at the same time.
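
(A rough sketch of that squeeze in Python. The 2.5 us byte cycle is the
figure quoted earlier; the device transfer rates are nominal numbers taken
as assumptions here. Every byte a device moves steals one storage cycle
from the CPU.)

cycle_us = 2.5
mem_ceiling = 1_000_000 / cycle_us          # ~400,000 bytes/sec through storage
devices = {"800-bpi tape @ 75 ips": 60_000,
           "2311 disk": 156_000,
           "2314 disk": 312_000}
for name, rate in devices.items():
    print(name, f"{100 * rate / mem_ceiling:.0f}% of memory cycles")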


> Indeed, many places ran 1401 code into the 1990s.
>

Ugh, how horrid! 1990's??? Geez, you could do whatever you wanted on a
network-connected PC by then! Even in Cobol, if you must! I can't IMAGINE
the hair-pulling hassle of trying to debug a program bug in emulated 1401 on
a 360/30 from the console switches! YIKES, what a horror movie that would
be, compared to decent debug facilities on modern OS's.

Obviously, they weren't running 1401 emulation on a 360/30 in the 1990's.


Yes, of course, there were huge advantages in moving up to larger machines
with serious OS support, better languages, comms, big disks, etc. but some
of that could NOT be done on the /30, due to some of its limitations. The
models /22 and /25 went up to 16-bit memory, and relieved some of these
bottlenecks.

Jon

Morten Reistad

unread,
Oct 7, 2015, 5:55:58 PM10/7/15
to
In article <uuCdnXjiH73W7ojL...@giganews.com>,
The sheer amount of old 1401 code we had to port at a PPOE in 1984-85
witnessed a lot of that mentality.

-- mrr

Peter Flass

unread,
Oct 7, 2015, 6:33:50 PM10/7/15
to
I'm not sure of the specifics, but as I recall the story a large customer
lost their 360 due to a fire. IBM pulled the next one off the production
line and a large group of FEs worked all weekend to get it installed. They
were back up and running Monday morning.


--
Pete

Peter Flass

unread,
Oct 7, 2015, 6:33:50 PM10/7/15
to
We had them, as I recall, thru System z.

--
Pete

Dan Espen

unread,
Oct 7, 2015, 7:30:08 PM10/7/15
to
Jon Elson <jme...@wustl.edu> writes:

> I can't IMAGINE
> the hair-pulling hassle of trying to debug a program bug in emulated 1401 on
> a 360/30 from the console switches! YIKES, what a horror movie that would
> be, compared to decent debug facilities on modern OS's.

No need to imagine something that didn't happen.

I wrote plenty of 1401 code and debugged plenty under emulation.

I never debugged 1401 code using console switches on a 360.
Maybe some shop did, but I can't imagine why.

On a related note, the S/360 and its successors were never noted
for their debug facilities. I don't consider TSO TEST, Inspect
and its successors "decent". COBOL did have a "gonum" feature
that was useful but the more elaborate debug stuff was best ignored.

--
Dan Espen

Dan Espen

unread,
Oct 7, 2015, 7:38:17 PM10/7/15
to
Multiprogramming:

the running of two or more programs or sequences of instructions
simultaneously by a computer with more than one central processor.

Multiprocessing:

is the use of two or more central processing units (CPUs) within a
single computer system. The term also refers to the ability of a
system to support more than one processor and/or the ability to
allocate tasks between them.

At least those are current definitions (from Google).

With DOS, if you had enough memory, you got more than one batch stream
going at a time. It's been so long, I can't think of a defining term
for that. Maybe it was one of the above.

On a 30, the best I saw was one batch partition and one of the FG
partitions running a BTAM multi-tasking application supporting
some terminals.

--
Dan Espen

Joe Pfeiffer

unread,
Oct 7, 2015, 8:58:09 PM10/7/15
to
Dan Espen <des...@verizon.net> writes:

> Jon Elson <jme...@wustl.edu> writes:
>
>> I can't IMAGINE
>> the hair-pulling hassle of trying to debug a program bug in emulated 1401 on
>> a 360/30 from the console switches! YIKES, what a horror movie that would
>> be, compared to decent debug facilities on modern OS's.
>
> No need to imagine something that didn't happen.
>
> I wrote plenty of 1401 code and debugged plenty under emulation.
>
> I never debugged 1401 code using console switches on a 360.
> Maybe some shop did, but I can't imagine why.

I was never in an IBM shop, so this question comes from complete
ignorance: the emulated 1401 used the 360's physical console switches?

> On a related note, the S/360 and it's successors were never noted
> for their debug facilities. I don't consider TSO TEST, Inspect
> and it's successors "decent". COBOL did have a "gonum" feature
> that was useful but the more elaborate debug stuff was best ignored.

--
"Erwin, have you seen the cat?" -- Mrs. Shrödinger

hanc...@bbs.cpcn.com

unread,
Oct 7, 2015, 10:17:08 PM10/7/15
to
On Wednesday, October 7, 2015 at 7:38:17 PM UTC-4, D_J_E wrote:

> Multiprogramming:
> Multiprocessing:

> With DOS, if you had enough memory, you got more than one batch stream
> going at a time. It's been so long, I can't think of a defining term
> for that. Maybe it was one of the above.

In everyday practice, people confused the two terms, and threw in others as well (i.e., multi-tasking).

hanc...@bbs.cpcn.com

unread,
Oct 7, 2015, 10:25:14 PM10/7/15
to
On Wednesday, October 7, 2015 at 8:58:09 PM UTC-4, Joe Pfeiffer wrote:

> I was never in an IBM shop, so this question comes from complete
> ignorance: the emulated 1401 used the 360's physical console switches?

The shops I worked at never did. Computer time was too valuable to allow an application programmer to debug a program 'live'. Indeed, the only people who used the console switches were C/E's checking out hardware.

IIRC, one could get a core dump of memory in 1401 mode. There were debugging utilities. Presumably, these utilities remained available in 1401 emulation, regardless of the host machine.

I often wondered when the last 1401 emulator was shut down, if indeed no more are running. I suspect Y2k needs killed off whatever was remaining.



hanc...@bbs.cpcn.com

unread,
Oct 7, 2015, 10:40:02 PM10/7/15
to
On Wednesday, October 7, 2015 at 5:50:50 PM UTC-4, Jon Elson wrote:

> Oh, yes, I think the guys who just STAYED in 1401 mode were running a nearly
> all-card or card and tape shop. Once you had disks on the machine, the
> advantage of 360 mode just had to be a huge reason to update the system.

The 1401 had disks, and emulation included emulating a 1401 disk on a 360 disk. Not as efficient as native mode, and wasted space. However, it meant 1401 programs could still run unchanged (more below).



> > Indeed, many places ran 1401 code into the 1990s.

> Ugh, how horrid! 1990's??? Geez, you could do whatever you wanted on a
> network-connected PC by then! Even in Cobol, if you must! I can't IMAGINE
> the hair-pulling hassle of trying to debug a program bug in emulated 1401 on
> a 360/30 from the console switches! YIKES, what a horror movie that would
> be, compared to decent debug facilities on modern OS's.

The issue was the high cost of rewriting an existing system that was working. If the existing system met the user's needs, as many did, the user would seriously question spending money to rewrite it.

Remember, on Z series, there are plenty of 30-40 year old COBOL systems still in service; a _single_ large application could cost millions of dollars to rewrite.


> Obviously, they weren't running 1401 emulation on a 360/30 in the 1990's.
Well, not on a /30, but on whatever model IBM mainframe was in use at that time.


> Yes, of course, there were huge advanges in moving up to larger machines
> with serious OS support, better languages, comms, big disks, etc. but some
> of that could NOT be done on the /30, due to some of its limitations. The
> models /22 and /25 went up to 16-bit memory, and relieved some of these
> bottlenecks.

Well, obviously if an organization's needs have grown, then it would be time to trade in the 360/30 for a larger machine. Many places did just that (ours went from a /30 to a /40 after two years as more applications were added to it). That was a key design feature of S/360--allowing upgrades without rewriting code--not as common in the pre-360 era. For instance, large machines had a different addressing structure than small machines; in S/360, addressing was universal.


Quadibloc

unread,
Oct 7, 2015, 11:27:54 PM10/7/15
to
On Wednesday, October 7, 2015 at 6:58:09 PM UTC-6, Joe Pfeiffer wrote:

> I was never in an IBM shop, so this question comes from complete
> ignorance: the emulated 1401 used the 360's physical console switches?

My ignorance is not complete, but I haven't got direct experience of this.

Basically, _emulation_ was distinct from _simulation_ - it was *not* software
only. It was an add-on hardware feature.

So you could buy a 360/30 for extra money that had extra instructions that
would switch it into 1401 mode.

And, naturally, it used the same lights and switches no matter what it was
doing - it was the same computer, after all. You had to read the manual to find
out how 1401 memory locations were assigned to 360 memory locations.

Actually, that wasn't likely to be *too* complicated; one manual on Al Kossow's
site about 7070/7074 emulation on some 370 models, though, shows that decimal
addresses were assigned to 370 binary addresses *using Chen-Ho encoding* which
would be rather hard to keep straight.

John Savard

Charlie Gibbs

unread,
Oct 8, 2015, 1:15:28 AM10/8/15
to
On 2015-10-08, hanc...@bbs.cpcn.com <hanc...@bbs.cpcn.com> wrote:

> On Wednesday, October 7, 2015 at 5:50:50 PM UTC-4, Jon Elson wrote:
>
>> Ugh, how horrid! 1990's??? Geez, you could do whatever you wanted on a
>> network-connected PC by then! Even in Cobol, if you must! I can't IMAGINE
>> the hair-pulling hassle of trying to debug a program bug in emulated 1401 on
>> a 360/30 from the console switches! YIKES, what a horror movie that would
>> be, compared to decent debug facilities on modern OS's.
>
> The issue was the high cost of rewriting an existing system that was working.
> If the existing system meant the user's needs, as many did, the user would
> seriously question spending money to rewrite it.

This is where commercial and consumer mindsets differ. In a commercial shop
(at least one which hasn't been infected by consumer thinking), "if it ain't
broke, don't fix it." In the consumer environment, on the other hand, people
are persuaded to throw away their apps - and even their OS - every year.
The replacements all have learning curves that must be climbed, and might
not even be as capable as the old systems. But the pictures sure are pretty...

One of my favourite sayings:
"This system is a great improvement on its successors."

Charlie Gibbs

unread,
Oct 8, 2015, 1:15:28 AM10/8/15
to
On 2015-10-07, Jon Elson <jme...@wustl.edu> wrote:

> hanc...@bbs.cpcn.com wrote:
>
>> I suspect a big gain in performance was the faster card reader and
>> printer, significant in business applications. I think the 360 versions
>> of tape and disk (2311 vs. 1311) were faster than the 1401 as well, also
>> speeding throughput.
>
> Oh, yes, I think the guys who just STAYED in 1401 mode were running a nearly
> all-card or card and tape shop. Once you had disks on the machine, the
> advantage of 360 mode just had to be a huge reason to update the system.
> The original 360/30 was very limited in high speed peripheral support, due
> to the single byte-wide memory. You could only get 300 K bytes/second
> throughput for ALL I/O combined, and that would stall the CPU until it
> was done. Fast tapes at 800 BPI couldn't come close to that,

Not at the time, anyway. (Later tape drives, doing 200 ips at 6250 bpi,
changed the picture, and had to be hung on the selector channel.) The
Univac 9300, which resembled a 360/20 but with the speed of a /30, was
quite a capable tape system. Mind you, the stock UNISERVO VI-C drives,
running at 34.16KB/s, weren't overly fast themselves. But they could
put through a lot of work.

> but even relatively slow disks could.

A 2314 did 312KB/s. Anything that could saturate a machine's I/O capability
wasn't "relatively slow". For that matter, a 2314 wasn't exactly slow in
any respect, at least until the 3330 came along. Not many small machines
had drums.

> You had to be very careful when designing the system to select peripherals
> that could not ever overload the memory bandwidth. We even had this
> on our 360/50. You would get these ghastly tapes that had gaps of a
> couple character times right in the middle of the records. Somebody
> had misconfigured something and it allowed too many devices to be
> transferring at the same time.

Ouch.

The 9300's internal printer used main memory as a buffer, and ate about 40% of
available memory cycles while it was scanning a line to see when the hammers
lined up. The monitor was smart enough to avoid scheduling disk I/O when the
printer was actually printing a line. (Other peripherals, being slower, were
OK to run alongside the printer.) I wrote an experimental spooler and learned
the hard way what happens when you're not careful about this; I got red lights
that I had never seen before, and triggered that printer status bit that the
programming manual described as "memory overload".

Charlie Gibbs

unread,
Oct 8, 2015, 1:15:28 AM10/8/15
to
On 2015-10-07, Dan Espen <des...@verizon.net> wrote:

> Jon Elson <jme...@wustl.edu> writes:
>
>> Jon Elson wrote:
>>
>>> If you put enough memory on a /30, you could run DOS/360 on it, and that
>>> gave you multiprocessing.
>>
>> ACK!! Multiprogramming, NOT multiprocessing!

I was wondering whether you'd catch that.

> Multiprogramming:
>
> the running of two or more programs or sequences of instructions
> simultaneously by a computer with more than one central processor.
>
> Multiprocessing:
>
> is the use of two or more central processing units (CPUs) within a
> single computer system. The term also refers to the ability of a
> system to support more than one processor and/or the ability to
> allocate tasks between them.
>
> At least those are current definitions (from Google).

On the other tentacle, I've heard people use "multiprocessing" that way
before. They come from a background that defines a "process" as a form
of heavyweight task, hence they think of "multiprocessing" as "running
multiple processes" rather than "running on multiple processors".

> With DOS, if you had enough memory, you got more than one batch stream
> going at a time. It's been so long, I can't think of a defining term
> for that. Maybe it was one of the above.
>
> On a 30, the best I saw was one batch partition and one of the FG
> partitions running a BTAM multi-tasking application supporting
> some terminals.

Not too shabby. And the BG task would be more multiprogramming.

Morten Reistad

unread,
Oct 8, 2015, 2:50:37 AM10/8/15
to
In article <60111e8e-e130-43e5...@googlegroups.com>,
<hanc...@bbs.cpcn.com> wrote:
>On Wednesday, October 7, 2015 at 5:50:50 PM UTC-4, Jon Elson wrote:
>
>> Oh, yes, I think the guys who just STAYED in 1401 mode were running a nearly
>> all-card or card and tape shop. Once you had disks on the machine, the
>> advantage of 360 mode just had to be a huge reason to update the system.
>
>The 1401 had disks, and emulation included emulating a 1401 disk on a 360 disk. Not as efficient as native mode, and wasted
>space. However, it meant 1401 programs could still run unchanged (more below).
>
>
>
>> > Indeed, many places ran 1401 code into the 1990s.
>
>> Ugh, how horrid! 1990's??? Geez, you could do whatever you wanted on a
>> network-connected PC by then! Even in Cobol, if you must! I can't IMAGINE
>> the hair-pulling hassle of trying to debug a program bug in emulated 1401 on
>> a 360/30 from the console switches! YIKES, what a horror movie that would
>> be, compared to decent debug facilities on modern OS's.
>
>The issue was the high cost of rewriting an existing system that was working. If the existing system meant the user's needs,
>as many did, the user would seriously question spending money to rewrite it.
>
>Remember, on Z series, there are plenty of 30-40 year old COBOL systems still in service; a _single_ large application could
>cost millions of dollars to rewrite.

The transaction system for national interbanking money handling in .no
still uses the ~1975 core made by IDA, expanded and maintained, for a
few more months. They got the rewrite going into production on the third
attempt this summer, and the two are running in parallel for another
year after that, with the new system feeding the old and the results
compared for correctness.

These systems grew transaction links over networks to more than 100
different systems, initially as tapes but running spooled batches over
comms links for the last 30 years. They all need to be kept in production.

We see direct benefits from this already: the daily batch runs that
update banking across banks went to 4 times a day two years ago, and now
they are run every 15 minutes.

>> Obviously, they weren't running 1401 emulation on a 360/30 in the 1990's.
>Well, not on a /30, but on whatever model IBM mainframe was in use at that time.
>
>
>> Yes, of course, there were huge advanges in moving up to larger machines
>> with serious OS support, better languages, comms, big disks, etc. but some
>> of that could NOT be done on the /30, due to some of its limitations. The
>> models /22 and /25 went up to 16-bit memory, and relieved some of these
>> bottlenecks.
>
>Well, obviously is an organization's needs have grown, then it would time to trade in the 360/30 for a larger machine. Many
>places did just that (ours went from a /30 to a /40 after two years as more applications were added to it.) That was a key
>design feature of S/360--allowing upgrades without rewriting code--not as common in the pre-360 era. For instance, large
>machines had a different addressing structure than small machines, in S/360 addressing was universal.

For the IDA systems, that worked across at least 5 different hardware generations.

-- mrr


Simon Turner

unread,
Oct 8, 2015, 6:39:36 AM10/8/15
to
On Tuesday, in article <mv1e3i$un9$1...@dont-email.me>
nume...@aquaporin4.com "Charles Richmond" wrote:

> "Al Kossow" <a...@bitsavers.org> wrote in message
> news:mv14md$nuu$1...@dont-email.me...
> > On 10/5/15 2:58 PM, Jon Elson wrote:
> >
> > [snip...] [snip...]
> > [snip...]
> >
> > If you can find an a.f.c archive that hasn't been fscked up ((&*#%^# YOU
> > GOOGLE)
> > Read the a.f.c. posts from John Varela circa 2010 on the subject.
> >
>
> Mr. Kossow, ISTM that you detest incompetence. Unfortunately, incompetence
> seems to be the hallmark of the 21st century world. Poor us!!! (I am *not*
> being facetious here!!!)

I disagree. I don't think it's incompetence (not being able to do a
good job no matter how hard you try), but rather carelessness and
laziness (not bothering to do a good job because, like, who cares?) And
anybody who *does* care about things being done properly is derided as a
sad stuck-in-the-past luddite who needs to get a life.

The work ethic of "do the best job you can" has, almost universally IME,
been replaced by "do the quickest and sloppiest job you can get away
with".

--
Simon Turner DoD #0461
si...@twoplaces.co.uk
Trust me -- I know what I'm doing! -- Sledge Hammer

Morten Reistad

unread,
Oct 8, 2015, 7:13:33 AM10/8/15
to
In article <20151008.10...@twoplaces.co.uk>,
Not so around here.

I attribute this to the marginalisation of the US middle and upper
working classes. Like in the Soviet Union; "we pretend to work, and
they pretend to pay us".

I see lots of specialist consultancies grow up. Small, but expert, and
where the large corporations have to go when the brown stuff hits the
large rotating object.

They generally rake in money when they have clients, and work on a
"hurry up and wait" schedule.

-- mrr

Dan Espen

unread,
Oct 8, 2015, 9:00:38 AM10/8/15
to
Joe Pfeiffer <pfei...@cs.nmsu.edu> writes:

> Dan Espen <des...@verizon.net> writes:
>
>> Jon Elson <jme...@wustl.edu> writes:
>>
>>> I can't IMAGINE
>>> the hair-pulling hassle of trying to debug a program bug in emulated 1401 on
>>> a 360/30 from the console switches! YIKES, what a horror movie that would
>>> be, compared to decent debug facilities on modern OS's.
>>
>> No need to imagine something that didn't happen.
>>
>> I wrote plenty of 1401 code and debugged plenty under emulation.
>>
>> I never debugged 1401 code using console switches on a 360.
>> Maybe some shop did, but I can't imagine why.
>
> I was never in an IBM shop, so this question comes from complete
> ignorance: the emulated 1401 used the 360's physical console switches?

No.

Well, yes, you could sit at a 360/30 console and single step through
execution, reading instruction addresses and displaying data
using console lights.

But the /30 was too valuable for that kind of slow exercise.
You ran a test and read the printed output.

The 1401 had some physical switches on the console a program
could read for job options, (the UPSI switches). When running
in emulation, a JCL statement allowed you to simulate the switch
settings.

The console was covered with switches.
The only one commonly used was LOAD to start an IPL.

--
Dan Espen

hanc...@bbs.cpcn.com

unread,
Oct 8, 2015, 10:47:39 AM10/8/15
to
On Thursday, October 8, 2015 at 1:15:28 AM UTC-4, Charlie Gibbs wrote:

> This is where commercial and consumer mindsets differ. In a commercial shop
> (at least one which hasn't been infected by consumer thinking), "if it ain't
> broke, don't fix it." In the consumer environment, on the other hand, people
> are persuaded to throw away their apps - and even their OS - every year.
> The replacements all have learning curves that must be climbed, and might
> not even be as capable as the old systems. But the pictures sure are pretty...

Well, 1401 programs could have a lifespan of up to 40 years (1960-1999). Not bad. Undoubtedly, there are some S/360 COBOL/CICS systems still running that are approaching 40 years of service. There may even be some S/360 batch systems or pieces that go back even further.

But the 'modern' consumer industry learned very well from Alfred Sloan's GM annual model change and planned obsolescence. Some years ago, I remember reading an article in a PC magazine questioning if waiting three years before replacing a PC was too _long_ -- and this was back when a PC was still well over a thousand bucks. I also remember people bragging that they had a new 386 compared to others' 286s or even 8086s, then 486s, then their Pentiums.

The computer industry still has its customers, even businesses, brainwashed to upgrade their operating systems, web browsers, word processor/spreadsheets, etc, every few years. Oracle puts out new versions, forcing its customers to upgrade.

Joe Pfeiffer

unread,
Oct 8, 2015, 10:57:22 AM10/8/15
to
Morten Reistad <fi...@last.name.invalid> writes:
>
> We see direct benefits from this already, The daily batch runs that
> updated banking across banks became 4 times a day two years ago, and now
> they are run every 15 minutes.

I remember many years ago when a friend of mine wanted to buy a used
car, the seller wanted cash, it was a Saturday, and his daily ATM
withdrawal limit wasn't high enough. We drove around to all the nearby
bank branches and used the ATMs to get enough cash for the car...
wouldn't work today!

Stephen Wolstenholme

unread,
Oct 8, 2015, 11:03:46 AM10/8/15
to
On Thu, 8 Oct 2015 07:47:38 -0700 (PDT), hanc...@bbs.cpcn.com wrote:

>The computer industry still has its customers, even businesses, brainwashed to upgrade their operating systems, web browsers, word processor/spreadsheets, etc, every few years. Oracle puts out new versions, forcing its customers to upgrade.

It's age related! When I was young, working on mainframes, I remember
frequent dedicated sessions testing and fixing all the hardware and
software. Now that I'm getting on a bit, I don't bother to get the latest &
greatest of everything. Real problems are obvious.

Steve

--
Neural Network Software for Windows http://www.npsnn.com

Peter Flass

unread,
Oct 8, 2015, 12:20:46 PM10/8/15
to
When I was a young'un I used to get them confused too.

Your recollections of DOS on the /30 agree with mine, except that often a
spooler was run in F1, maybe in addition to the TP application in F2 (or do
I have the Fns reversed?)

I knew shops with maybe a /40 that hacked the linkage editor and maybe some
compilers to run in foreground.

--
Pete

Peter Flass

unread,
Oct 8, 2015, 12:20:46 PM10/8/15
to
I was amazed when I watched a CE debug a bad ROS card from the console.

--
Pete

Peter Flass

unread,
Oct 8, 2015, 12:20:47 PM10/8/15
to
I believe that the emulation program took over the whole computer, so if
the 1401 program used console switches and lights the emulator would also.

--
Pete

Peter Flass

unread,
Oct 8, 2015, 12:20:48 PM10/8/15
to
"If it's important to someone, it's important." - Leroy Jethro Gibbs.

--
Pete

hanc...@bbs.cpcn.com

unread,
Oct 8, 2015, 12:45:43 PM10/8/15
to
On Thursday, October 8, 2015 at 12:20:46 PM UTC-4, Peter Flass wrote:

> Your recollections of DOS on the /30 agree with mine, except that often a
> spooler was run in F1, maybe in addition to the TP application in F2 (or do
> I have the Fns reversed.)

I'm surprised you could do all that on a /30, since I think it only went up to 64K and wasn't a super speed machine.

We did all that on our /40, but we had to add memory, up to 192k, to do so. Adding a spooler practically doubled our throughput.

hanc...@bbs.cpcn.com

unread,
Oct 8, 2015, 12:47:30 PM10/8/15
to
On Thursday, October 8, 2015 at 12:20:47 PM UTC-4, Peter Flass wrote:

> I believe that the emulation program took over the whole computer, so if
> the 1401 program used console switches and lights the emulator would also.

We ran our spooler and TP at the same time we ran emulation. No problem.

As someone mentioned, we used control cards to simulate the UPSI switch settings.

Dan Espen

unread,
Oct 8, 2015, 2:05:49 PM10/8/15
to
I saw a few spoolers.
In one shop we used GRASP (which was great) for a while, but management
later declared we'd go with POWER (IBM's spooler). POWER worked, but
I could never get over my revulsion for monsters created by IBM.
Another shop had EDOS (more to my liking).
The thing I remember is that EDOS seemed to command chain the printing
of an entire page.

I'd written a train cleaning program using command chaining, so
even though I did not have the source code to EDOS, I recognized
the way the printer reacted. It would print a page keeping the
printer screaming the whole time.

So, I would have mentioned a spooler in F2, but I wasn't 100% sure
that all the spoolers worked that way.

> I knew shops with maybe a /40 that hacked the linkage editor and maybe some
> compilers to run in foreground.

I don't remember any hacking required, but with POWER I remember the
printer and reader JCL being reversible.

You'd punch the // ASSIGN 00C on the front of the card and something
like // ASSIGN 01C on the back. Then just flip the card(s) to run
FG or BG.

--
Dan Espen

Dan Espen

unread,
Oct 8, 2015, 2:10:23 PM10/8/15
to
Nope, don't think so, unless there was a stand-alone emulator.
Ours ran under DOS because, for sure, we were running 1401 batch while
running our online system.

--
Dan Espen

Charlie Gibbs

unread,
Oct 8, 2015, 2:29:21 PM10/8/15
to
On 2015-10-08, Dan Espen <des...@verizon.net> wrote:

> The 1401 had some physical switches on the console a program
> could read for job options, (the UPSI switches). When running
> in emulation, a JCL statement allowed you to simulate the switch
> settings.

The UPSI byte lived on natively in the various 360 operating systems,
as well as its Univac counterparts. A bit of googling even shows it
living in CMS.

Walter Bushell

unread,
Oct 8, 2015, 2:36:34 PM10/8/15
to
In article <59SdnVyY-_ePzonL...@giganews.com>,
Jon Elson <jme...@wustl.edu> wrote:

> We tried to run a whole university with 4000+ students and employees on ONE
> 360/50. Batch turnaround times ranged from 4 hours (which was bad) to 8
> hours, which was just AWFUL. By the time you got your printout, you
> totally had forgotten what the last change was supposed to fix. That was a
> pretty awful experience, maybe get 2 chances to edit your program a day,
> what with having classes, eating, etc. I've never run a 360 without HASP-
> II, although I have heard what it was like. Spooling was certainly a major
> improvement on the larger machines with serious multiprogramming.
> >

Ah, but you sweated blood to make your program *RIGHT*. You learned
to concentrate or find another way of making a living. Ach now, you
just put in a change and compile, no biggie if it doesn't work or you
get a compile error.

--
Never attribute to stupidity that which can be explained by greed. Me.

hanc...@bbs.cpcn.com

unread,
Oct 8, 2015, 2:57:28 PM10/8/15
to
On Thursday, October 8, 2015 at 2:36:34 PM UTC-4, Walter Bushell wrote:

> Ah, but you sweated blood to make your program *RIGHT*. You learned
> to concentrate or find another way of making a living. Ach now, you
> just put in a change and compile, no biggie if it doesn't work or you
> get a compile error.

Students had the lowest priority in running jobs; administrative and research work took precedence.

Commuter students were at a disadvantage as they needed to run their jobs during the day when the machine was busiest. Resident students would come back in the evening when the machine was less busy.

I believe Mr. Wheeler indicated they discovered that the overhead of setting up a simple student job was much greater than the job itself, and very inefficient when tons of students were running jobs. I think he said he did an OS modification to resolve that. Some classes 'batched' student jobs together as multiple job sets in a single job, which improved efficiency.

One of the difficulties back then was waiting for your printout to be removed from the printer and delivered to your bin. A university computer room could have a multitude of bins and many jobs, keeping the print operator very busy. It was frustrating to know your job had printed but was waiting for the print operator to deliver it.

At our university, each student was given a budget to run their jobs; if you ran too many jobs to get your homework done, you ran out of money, and that was a problem. Again, resident students had an advantage over commuters in that they could do their work during evening or weekend shifts when rates were lower.



Dan Espen

unread,
Oct 8, 2015, 3:06:34 PM10/8/15
to
Charlie Gibbs <cgi...@kltpzyxm.invalid> writes:

> On 2015-10-08, Dan Espen <des...@verizon.net> wrote:
>
>> The 1401 had some physical switches on the console a program
>> could read for job options, (the UPSI switches). When running
>> in emulation, a JCL statement allowed you to simulate the switch
>> settings.
>
> The UPSI byte lived on natively in the various 360 operating systems,
> as well as its Univac counterparts. A bit of googling even shows it
> living in CMS.

So, I'm thinking, surely not in z/OS.

Nope, there it is living as an LE option and accessible in COBOL.
Even there in Micro Focus COBOL.

Some things just refuse to die.

I wouldn't use UPSI switches on the 1401, and its ugly carcass is still festering.

--
Dan Espen

hanc...@bbs.cpcn.com

unread,
Oct 8, 2015, 3:12:13 PM10/8/15
to
On Tuesday, October 6, 2015 at 7:02:44 PM UTC-4, Jon Elson wrote:

> We tried to run a whole university with 4000+ students and employees on ONE
> 360/50. Batch turnaround times ranged from 4 hours (which was bad) to 8
> hours, which was just AWFUL. By the time you got your printout, you
> totally had forgotten what the last change was supposed to fix. That was a
> pretty awful experience, maybe get 2 chances to edit your program a day,
> what with having classes, eating, etc. I've never run a 360 without HASP-
> II, although I have heard what it was like. Spooling was certainly a major
> improvement on the larger machines with serious multiprogramming.

Overloaded mainframes were very common in the 1960s and 1970s. Part of it was that manufacturers undersold in order to make the sale; that is, they knew the customer needed a bigger machine but didn't want to scare them away with sticker shock.

But another major problem was that computers were very popular and lots of groups within the organization wanted to computerize. Applications were added to the mix faster than the machine could be physically upgraded. A few fortunate organizations* were wealthy and could afford powerful machines, but most people had to make do with less.

In many organizations, a major hardware upgrade provided great test turnaround, but soon a big application was thrown onto the computer and it was overloaded again.

Anyway, it was common for programmers, either in school or on the job, to have to make do with only a few tests a day. Often programmers worked flex time to get test time in the evening or even late at night.


* I knew some people employed by our local Federal Reserve Bank, and they had a very nice setup--the latest hardware and software, online terminals before anyone else, nice offices, etc. In the 1970s, some offices for programmers were in industrial surroundings, e.g. an old factory building, and not very attractive. Using converted warehouse space wasn't uncommon. Programmers interviewing at a prospective company were warned to visit the actual worksite before accepting a job.

hanc...@bbs.cpcn.com

unread,
Oct 8, 2015, 3:17:59 PM10/8/15
to
On Thursday, October 8, 2015 at 3:06:34 PM UTC-4, D_J_E wrote:
> I wouldn't use UPSI switches on the 1401 and it's ugly carcass is still festering.

If memory serves, it was available in 360-DOS (independent of 1401), but not available in OS-360. I've seen S/360 DOS COBOL applications use them.

I think UPSI switches were an option on the 1401, not a standard item. I believe they are a throwback (a comfort feature) to process switches used on tabulating machines to set certain conditions for processing. On tabulating machines they made sense. However, on a computer, they really aren't necessary as a control card could indicate whatever conditions were needed, and be a lot more flexible.

Of course, from a programming point of view, one had to read in the control card and store it someplace, while UPSI switches on a 1401 needed only to be tested. So, using a control card meant slightly more programming work and some storage.

Jon Elson

unread,
Oct 8, 2015, 3:36:20 PM10/8/15
to
Joe Pfeiffer wrote:

> Dan Espen <des...@verizon.net> writes:

> I was never in an IBM shop, so this question comes from complete
> ignorance: the emulated 1401 used the 360's physical console switches?
>
Well, you COULD do that. But, the general mode on the /30 was you flipped a
front panel switch that set the machine to emulation mode (vs 360 mode), put
an object card deck in the reader and hit load. The card deck would be read
into memory. Then, you would put the data input cards in the reader, or put
tape(s) on the drive(s) and it would take off and run. When the printer
completed printing whatever it was supposed to print, you'd put the next
object deck in the reader and hit load again. I seem to recall the 1401 had
some sense switches on the front panel that some programs used to know when
to start reading the next tape or whatever, and I suppose the 360/30
emulation provided the same capability.

I saw a (real) 1401 run at my university in 1971 or so. As far as I know,
we did NOT use 1401 emulation on our 360's. We DID have 7094 emulation and
used it for some programs that they were loath to convert. Higher-end
360's had the ability to run emulation under OS/360, so you didn't have to
select emulation from the console switches, and you could load the 14xx/70xx
programs from disk.

I never did know how you debugged such code. If debugging 360 programs was
difficult (mostly plowing through core dumps) I can only imagine debugging
1401 code under emulation might have been even more primitive.

I do remember debugging crashes on 12-bit minis. Since there was no program
exception logic, and a halt instruction was only one code out of 4096, most
crashes left total chaos in memory, so post-mortem dumps were pretty
useless. Ahh, the bad old days, I really don't miss that part of it!

Jon

Morten Reistad

unread,
Oct 8, 2015, 3:43:22 PM10/8/15
to
In article <46dec176-76df-4a09...@googlegroups.com>,
<hanc...@bbs.cpcn.com> wrote:
>On Tuesday, October 6, 2015 at 7:02:44 PM UTC-4, Jon Elson wrote:
>
>> We tried to run a whole university with 4000+ students and employees on ONE
>> 360/50. Batch turnaround times ranged from 4 hours (which was bad) to 8
>> hours, which was just AWFUL. By the time you got your printout, you
>> totally had forgotten what the last change was supposed to fix. That was a
>> pretty awful experience, maybe get 2 chances to edit your program a day,
>> what with having classes, eating, etc. I've never run a 360 without HASP-
>> II, although I have heard what it was like. Spooling was certainly a major
>> improvement on the larger machines with serious multiprogramming.
>
>Overloaded mainframes were very common in the 1960s and 1970s. Part of it was the manufacturers
>undersold in order to make the sale, that is, they knew the customer needed a bigger machine but
>didn't want to scare them away with sticker shock.

All machines were underpowered, and consequently overloaded, in the "old days".
It was only with the "RISC revolution" circa 1987 that we got ahead somewhat, with
the SPARC-1 and the MIPS R2000; that was the first real price/performance break
for "supermini/(mini mainframe)" class machines.

And for the PCs it was the Pentium that finally broke through the performance ceiling.

>But another major problem was that computers were very popular and lots of groups within the
>organization wanted to computerize. Applications were added to the mix faster than the machine
>could be physically upgraded. A few fortunate organizations* were wealthy and could afford
>powerful machines, but most people had to make do with less.
>
>In many organizations, a major hardware upgrade provided great test turnaround, but soon a big
>application was thrown onto the computer and it was overloaded again.
>
>Anyway, it was common for programmers, either in school or on the job, to have to make do with
>only a few tests a day. Often programmers worked flex time to get test time in the evening or
>even late at night.
>
>
>* I knew some people employed by our local Federal Reserve Bank, and they had a very nice
>setup--the latest hardware and software, online terminals before anyone else, nice offices, etc.
>In the 1970s, some offices for programmers were in industrial surroundings, e.g. an old factory
>building, and not very attractive. Using converted warehouse space wasn't uncommon.
>Programmers interviewing at a prospective company were warned to visit the actual worksite
>before accepting a job.

Still valid.

-- mrr


