
"Death of the mainframe"


Peter Flass

Dec 25, 2013, 6:41:03 PM
For all the times it has been quoted, the (presumably) editorial by
Stewart Alsop in a 1991 issue of /InfoWorld/ that contains the famous
statement "On March 15, 1996, an InfoWorld Reader will unplug the last
mainframe" doesn't appear to exist anywhere online. Does anyone have
the original they could post? Thanks.

--
Pete

Quadibloc

Dec 26, 2013, 3:18:44 AM
In

http://books.google.ca/books?id=XTwEAAAAMBAJ

there is a reference from 1993 to that prediction. InfoWorld is on Google Books with full view, so it should be accessible, although it wasn't among my search results.

John Savard

Quadibloc

Dec 26, 2013, 1:16:23 AM
Here's a later column referencing that prediction:

http://books.google.ca/books?id=XTwEAAAAMBAJ

InfoWorld is on Google Books with full view, but my search didn't find the original prediction.

John Savard

Quadibloc

Dec 26, 2013, 1:05:37 AM
InfoWorld is on Google Books with full view. A search on the words unplug the last mainframe turned up Stewart Alsop referring to the prediction in 1993, but not the prediction itself, so I'm browsing individual issues, starting with the month of March.

John Savard

Peter Flass

Dec 26, 2013, 7:52:02 AM
On 12/26/2013 1:05 AM, Quadibloc wrote:
> InfoWorld is on Google Books with full view. A search on the words unplug the last mainframe turned up Stewart Alsop referring to the prediction in 1993, but not the prediction itself, so I'm browsing individual issues, starting with the month of March.
>
> John Savard
>

Google only has a few issues, AFAIK.

--
Pete

Quadibloc

Dec 26, 2013, 3:22:18 AM
Ah, here we are.

On page 4 of

http://books.google.ca/books?id=vjsEAAAAMBAJ

it's noted that the prediction in question was made during a panel discussion, not in an InfoWorld column.

John Savard

Anne & Lynn Wheeler

Dec 26, 2013, 10:34:15 AM
part of the issue was in the mid-80s, top IBM executives were predicting
that IBM revenue was going to double ... primarily based on mainframe
sales ... and there was huge internal building program to double
mainframe product manufacturing capacity ... even tho that business was
already starting to head in the opposite direction ... pointing that out
wasn't exactly career enhancing.

about that time, a senior disk engineer got a talk scheduled at a
communication group annual, world-wide, internal conference
... supposedly on 3174 performance ... but opened the talk with
statement that the communication group was going to be responsible for
the demise of the disk division. the issue was that the communication
group had corporate strategic ownership of everything that crossed the
datacenter walls and were fiercely protecting their dumb terminal
paradigm and install base ... fighting off client/server and distributed
computing. The disk division was seeing the effects with downturn in
disk sales as data was fleeing the datacenters to more distributed
computing friendly platforms. The disk division had come up with several
solutions to reverse the problem, but they were constantly vetoed by the
communication group.

a few years later, the company had gone into the red ... and the same
executives had reorganized the company into the 13 "baby blues" in
preparation for breaking up the company. time magazine stories from
12/28/92 ... including "fall of ibm" article "How IBM Was Left Behind"
http://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html

The board then brings in Gerstner to reverse the breakup and resurrect
the company ... by redirecting the company into services ... including
acquiring lots of consulting & services companies ... all hardware
products now account for only something like 17% of corporate revenue
... and of that, mainframe processor sales have been running only 4-5% of
the total.

after the initial flight from mainframe datacenters, what was left was a
small core of mainframe customers with large, very high value legacy
applications ... mostly in the financial industry ... the risk of
converting/migrating them was higher than the cost of continuing to pay a
large premium for mainframe hardware

past posts mentioning gerstner
http://www.garlic.com/~lynn/submisc.html#gerstner

one might conjecture that the Wall Street financial industry was
instrumental in getting the board to bring in Gerstner (who came from
the financial industry) ... to keep those mainframe processors
coming. However, the resulting IBM was heavily oriented towards large
compensation for top executives ... heavily loaded with a culture similar to
the too-big-to-fail
http://www.garlic.com/~lynn/submisc.html#too-big-to-fail
and private equity
http://www.garlic.com/~lynn/submisc.html#private.equity

recent threads with Gerstner history, too-big-to-fail, private equity
http://www.garlic.com/~lynn/2013i.html#20 Louis V. Gerstner Jr. lays out his post-IBM life
http://www.garlic.com/~lynn/2013i.html#26 Louis V. Gerstner Jr. lays out his post-IBM life
http://www.garlic.com/~lynn/2013l.html#60 Retirement Heist

for other drift, there has been discussion in some IBM groups about IBM
using stock buybacks and other measures to prop up the share price (and
boost top executive compensation).
http://www.garlic.com/~lynn/2013l.html#60 Retirement Heist
http://www.garlic.com/~lynn/2013m.html#37 Why is the mainframe so expensive?
http://www.garlic.com/~lynn/2013m.html#84 3Q earnings are becoming the norm at IBM. What is IBM management overlooking?
http://www.garlic.com/~lynn/2013m.html#85 How do you feel about IBM passing off it's retirees to ObamaCare?
http://www.garlic.com/~lynn/2013n.html#1 IBM board OK repurchase of another $15B of stock
http://www.garlic.com/~lynn/2013n.html#60 Bridgestone Sues IBM For $600 Million Over Allegedly 'Defective' System That Plunged The Company Into 'Chaos'
http://www.garlic.com/~lynn/2013o.html#14 Microsoft, IBM lobbying seen killing key anti-patent troll proposal
http://www.garlic.com/~lynn/2013o.html#15 IBM Shrinks - Analysts Hate It
http://www.garlic.com/~lynn/2013o.html#16 IBM Shrinks - Analysts Hate It

Stockman in "The Great Deformation: The Corruption of Capitalism in
America" pg464/loc9995-10000:

IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company
spent a staggering $67 billion repurchasing its own shares, a figure
that was equal to 100 percent of its net income.

pg465/10014-17:

Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.

... snip ...

other recent posts mentioning the "baby blues" reorg in preparation
for breaking up IBM:
http://www.garlic.com/~lynn/2013.html#76 mainframe "selling" points
http://www.garlic.com/~lynn/2013c.html#53 What Makes an Architecture Bizarre?
http://www.garlic.com/~lynn/2013d.html#11 relative mainframe speeds, was What Makes an Architecture Bizarre?
http://www.garlic.com/~lynn/2013d.html#20 Y2K hacks
http://www.garlic.com/~lynn/2013d.html#33 IBM Spent A Million Dollars Renovating And Staffing Its Former CEO's Office
http://www.garlic.com/~lynn/2013d.html#35 Ex-Bailout Watchdog: JPMorgan's Actions "Entirely Consistent With Fraud"
http://www.garlic.com/~lynn/2013d.html#76 IBM Spent A Million Dollars Renovating And Staffing Its Former CEO's Office
http://www.garlic.com/~lynn/2013e.html#17 The Big, Bad Bit Stuffers of IBM
http://www.garlic.com/~lynn/2013e.html#79 As an IBM'er just like the Marines only a few good men and women make the cut,
http://www.garlic.com/~lynn/2013f.html#46 As an IBM'er just like the Marines only a few good men and women make the cut,
http://www.garlic.com/~lynn/2013f.html#63 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013g.html#43 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013h.html#40 The Mainframe is "Alive and Kicking"
http://www.garlic.com/~lynn/2013h.html#76 DataPower XML Appliance and RACF
http://www.garlic.com/~lynn/2013h.html#77 IBM going ahead with more U.S. job cuts today
http://www.garlic.com/~lynn/2013i.html#2 IBM commitment to academia
http://www.garlic.com/~lynn/2013i.html#7 IBM commitment to academia
http://www.garlic.com/~lynn/2013i.html#14 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013i.html#17 Should we, as an industry, STOP using the word Mainframe and find (and start using) something more up-to-date
http://www.garlic.com/~lynn/2013k.html#28 Flag bloat
http://www.garlic.com/~lynn/2013k.html#29 The agency problem and how to create a criminogenic environment
http://www.garlic.com/~lynn/2013k.html#31 China mulls probe into IBM, Oracle, EMC after NSA hack claims - report
http://www.garlic.com/~lynn/2013l.html#49 The Original IBM Basic Beliefs for those that have never seen them
http://www.garlic.com/~lynn/2013m.html#6 Voyager 1 just left the solar system using less computing powerthan your iP
http://www.garlic.com/~lynn/2013m.html#35 Why is the mainframe so expensive?
http://www.garlic.com/~lynn/2013m.html#46 50,000 x86 operating system on single mainframe
http://www.garlic.com/~lynn/2013m.html#66 NSA Revelations Kill IBM Hardware Sales In China
http://www.garlic.com/~lynn/2013n.html#17 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
http://www.garlic.com/~lynn/2013n.html#78 wtf ? - was Catalog system for Unix et al


recent posts mentioning prediction that communication group was going
to be responsible for demise of disk division (and major factor in
the whole IBM and mainframe downturn):
http://www.garlic.com/~lynn/2013.html#75 mainframe "selling" points
http://www.garlic.com/~lynn/2013b.html#32 Ethernet at 40: Its daddy reveals its turbulent youth
http://www.garlic.com/~lynn/2013b.html#57 Dualcase vs monocase. Was: Article for the boss
http://www.garlic.com/~lynn/2013c.html#75 Still not convinced about the superiority of mainframe security vs distributed?
http://www.garlic.com/~lynn/2013d.html#76 IBM Spent A Million Dollars Renovating And Staffing Its Former CEO's Office
http://www.garlic.com/~lynn/2013e.html#17 The Big, Bad Bit Stuffers of IBM
http://www.garlic.com/~lynn/2013f.html#57 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013f.html#58 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013f.html#70 How internet can evolve
http://www.garlic.com/~lynn/2013g.html#17 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
http://www.garlic.com/~lynn/2013g.html#34 What Makes code storage management so cool?
http://www.garlic.com/~lynn/2013h.html#10 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013i.html#2 IBM commitment to academia
http://www.garlic.com/~lynn/2013i.html#17 Should we, as an industry, STOP using the word Mainframe and find (and start using) something more up-to-date
http://www.garlic.com/~lynn/2013l.html#44 Teletypewriter Model 33
http://www.garlic.com/~lynn/2013l.html#49 The Original IBM Basic Beliefs for those that have never seen them
http://www.garlic.com/~lynn/2013m.html#5 Voyager 1 just left the solar system using less computing powerthan your iP
http://www.garlic.com/~lynn/2013n.html#78 wtf ? - was Catalog system for Unix et al

--
virtualization experience starting Jan1968, online at home since Mar1970

Anne & Lynn Wheeler

Dec 26, 2013, 11:08:55 AM
Quadibloc <jsa...@ecn.ab.ca> writes:
> http://books.google.ca/books?id=vjsEAAAAMBAJ

re:
http://www.garlic.com/~lynn/2013o.html#64 "Death of the mainframe"

the issue is dated 22Feb1993 ... which is only two months after the 28Dec1992 article
about IBM reorg into 13 "Baby Blues" in preparation for breaking up the
company.

Gerstner isn't brought into IBM until April1993 to reverse the breakup
and resurrect the company
http://en.wikipedia.org/wiki/Louis_V._Gerstner,_Jr.

posts mentioning Gerstner
http://www.garlic.com/~lynn/submisc.html#gerstner

Quadibloc

Dec 26, 2013, 1:25:31 AM
Incidentally, on March 22, 1993, Intel came out with the Pentium.

That chip:

* was pipelined

* had an on-chip cache

* was hardwired, not microcoded

* used advanced multiplication and division techniques

Architecturally, therefore, it was a lot like a System 360/195.

At that point, it was no longer possible to achieve gains in performance by building a CPU out of multiple chips.

So the mainframe *did* die on schedule. IBM built reliable servers out of 370 ISA microprocessors and called them mainframes. Maybe that's too much of a generalization, and the term "mainframe" still has a real meaning - but I'm not so sure IBM really proved Stewart Alsop wrong simply because they're still selling things they call mainframes.

John Savard

Quadibloc

Dec 26, 2013, 12:27:55 PM
On Thursday, December 26, 2013 5:52:02 AM UTC-7, Peter Flass wrote:

> Google only has a few issues, AFAIK.

Google seems to have nearly all the issues for the period in question. My posts haven't been appearing; one column seemed to imply he made the prediction in a panel discussion, not in InfoWorld's pages, but that appears to refer to a time he repeated his prediction in 1993.

John Savard

hanc...@bbs.cpcn.com

Dec 26, 2013, 1:46:35 PM
On Thursday, December 26, 2013 10:34:15 AM UTC-5, Anne & Lynn Wheeler wrote:

> pointing that out
> wasn't exactly career enhancing.


So much for Tom Watson, Jr's statement, "IBM encourages its wild ducks".

IMHO, companies that get too insular, too smart for themselves, too arrogant, are in bad shape. IBM came close to dying back then. The US auto and steel industries suffered severely as a result of those attitudes. To some extent, US railroads suffered.





hanc...@bbs.cpcn.com

Dec 26, 2013, 1:54:40 PM
On Thursday, December 26, 2013 1:25:31 AM UTC-5, Quadibloc wrote:

> Incidentally, on March 22, 1993, Intel came out with the Pentium...

> So the mainframe *did* die on schedule. IBM built reliable servers out of 370 ISA microprocessors and called them mainframes. Maybe that's too much of a generalization, and the term "mainframe" still has a real meaning - but I'm not so sure IBM really proved Stewart Alsop wrong simply because they're still selling things they call mainframes.

I don't think the "technology under the hood" is the issue. Today's Pentiums are far, far faster than those of 1993. So are today's mainframes.

Rather, I think the difference between a "mainframe" vs. other computing devices is in terms of functionality. IMHO, the mainframe to this day still offers certain operating features and functions that are significantly superior to that of other computers, which is a big reason they remain in wide use, as described.

On the other hand, the other machines certainly offer advantages, too, which is why they're very widely used as well.

Note that railroads, especially passenger trains, were predicted to be obsolete, but remain a key part of our transportation network.

We should note that when non-mainframes came out, their supporters were most enthusiastic and predicted the death of the mainframe (first was the mini computer, then the PC). But as we've seen over the years, while those machines have their niche, they aren't one-size-fits-all.

We don't use a tractor-trailer to deliver goods when a VW will do, and of course vice-versa. The mainframe has its place among the "tractor trailer" applications.

Anne & Lynn Wheeler

Dec 26, 2013, 2:14:11 PM
hanc...@bbs.cpcn.com writes:
> So much for Tom Watson, Jr's statement, "IBM encourages its wild ducks".
>
> IMHO, companies that get too insular, too smart for themselves, too
> arrogant, are in bad shape. IBM came close to dying back then. The
> US auto and steel industries suffered severely as a result of those
> attitudes. To some extent, US railroads suffered.

re:
http://www.garlic.com/~lynn/2013o.html#64 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#65 "Death of the mainframe"

after the failure of FS
http://www.garlic.com/~lynn/submain.html#futuresys

somebody did a post with a large number of ducks in formation and the caption
"wild ducks are tolerated as long as they fly in formation" ... and
another about "how to stuff a wild duck" (a multitude of the antithesis of
the "wild duck")
http://www.users.cloud9.net/~bradmcc/GO/wildDuck.html

more recently, among the videos for the IBM 100th anniv ... there was
one about "wild ducks" ... but there was absolutely no reference to
employees ... it was all about "wild duck" customers (obfuscation and
mis-direction).

past posts mentioning wild ducks:
http://www.garlic.com/~lynn/2007b.html#38 'Innovation' and other crimes
http://www.garlic.com/~lynn/2007h.html#25 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2008h.html#18 IT full of 'ducks'? Declare open season
http://www.garlic.com/~lynn/2011h.html#30 IBM Centennial Film: Wild Ducks
http://www.garlic.com/~lynn/2011h.html#33 Happy 100th Birthday, IBM!
http://www.garlic.com/~lynn/2011i.html#79 Innovation and iconoclasm
http://www.garlic.com/~lynn/2011m.html#1 What is IBM culture?
http://www.garlic.com/~lynn/2011m.html#45 What is IBM culture?
http://www.garlic.com/~lynn/2011n.html#93 John R. Opel, RIP
http://www.garlic.com/~lynn/2011p.html#105 5 ways to keep your rockstar employees happy
http://www.garlic.com/~lynn/2011p.html#121 The Myth of Work-Life Balance
http://www.garlic.com/~lynn/2012b.html#59 Original Thinking Is Hard, Where Good Ideas Come From
http://www.garlic.com/~lynn/2012b.html#72 Original Thinking Is Hard, Where Good Ideas Come From
http://www.garlic.com/~lynn/2012f.html#3 Time to Think ... and to Listen
http://www.garlic.com/~lynn/2012h.html#7 Leadership Trends and Realities: What Does Leadership Look Like Today
http://www.garlic.com/~lynn/2012h.html#17 Hierarchy
http://www.garlic.com/~lynn/2012i.html#26 Top Ten Reasons Why Large Companies Fail To Keep Their Best Talent
http://www.garlic.com/~lynn/2012k.html#19 SnOODAn: Boyd, Snowden, and Resilience
http://www.garlic.com/~lynn/2012k.html#23 How to Stuff a Wild Duck
http://www.garlic.com/~lynn/2012k.html#24 How to Stuff a Wild Duck
http://www.garlic.com/~lynn/2012k.html#26 How to Stuff a Wild Duck
http://www.garlic.com/~lynn/2012k.html#28 How to Stuff a Wild Duck
http://www.garlic.com/~lynn/2012k.html#31 History--punched card transmission over telegraph lines
http://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy
http://www.garlic.com/~lynn/2012k.html#49 1132 printer history
http://www.garlic.com/~lynn/2012k.html#56 1132 printer history
http://www.garlic.com/~lynn/2012k.html#65 How do you feel about the fact that India has more employees than US?
http://www.garlic.com/~lynn/2012m.html#70 Long Strange Journey: An Intelligence Memoir
http://www.garlic.com/~lynn/2012n.html#15 System/360--50 years--the future?
http://www.garlic.com/~lynn/2012n.html#16 System/360--50 years--the future?
http://www.garlic.com/~lynn/2013.html#12 How do we fight bureaucracy and bureaucrats in IBM?
http://www.garlic.com/~lynn/2013g.html#49 A Complete History Of Mainframe Computing
http://www.garlic.com/~lynn/2013n.html#52 Bridgestone Sues IBM For $600 Million Over Allegedly 'Defective' System That Plunged The Company Into 'Chaos'
http://www.garlic.com/~lynn/2013n.html#72 In Command, but Out Of Control
http://www.garlic.com/~lynn/2013o.html#3 Inside the Box People don't actually like creativity
http://www.garlic.com/~lynn/2013o.html#4 Inside the Box People don't actually like creativity

Anne & Lynn Wheeler

Dec 26, 2013, 2:33:18 PM
hanc...@bbs.cpcn.com writes:
> I don't think the "technology under the hood" is the issue. Today's
> Pentiums are far, far faster than those of 1993. So are today's
> mainframes.
>
> Rather, I think the difference between a "mainframe" vs. other
> computing devices is in terms of functionality. IMHO, the mainframe
> to this day still offers certain operating features and functions that
> are significantly superior to that of other computers, which is a big
> reason they remain in wide use, as described.
>
> On the other hand, the other machines certainly offer advantages, too, which is why they're very widely used as well.
>
> Note that railroads, especially passenger trains, were predicted to be obsolete, but remain a key part of our transportation network.
>
> We should note that when non-mainframes came out, their supporters
> were most enthusiastic and predicted the death of the mainframe (first
> was the mini computer, then the PC). But as we've seen over the
> years, while those machines have their niche, they aren't
> one-size-fits-all.
>
> We don't use a tractor-trailer to deliver goods when a VW will do, and
> of course vice-versa. The mainframe has its place among the "tractor
> trailer" applications.

re:
http://www.garlic.com/~lynn/2013o.html#64 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#65 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#68 "Death of the mainframe"

I've pontificated that (v1) e5-2600 blades have a processor rating of
400-500 BIPS (and an IBM base list price of $1815, or about $3.50/BIPS)
... compared to 50 BIPS for a max configured z196 mainframe (and a price of
$28M, or $560,000/BIPS) and 75 BIPS for the newer max configured EC12
mainframe.
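
For anyone who wants to check the arithmetic, a rough back-of-the-envelope
sketch in Python, using only the figures above (the blade is taken at the
top of its 400-500 BIPS range, so the ratio is approximate):

# rough $/BIPS from the list prices and ratings quoted above
blade_price, blade_bips = 1815, 500          # (v1) e5-2600 blade, top of its 400-500 BIPS range
z196_price, z196_bips   = 28_000_000, 50     # max configured z196

print(f"blade: ${blade_price / blade_bips:.2f} per BIPS")    # ~$3.63, roughly the $3.50 quoted
print(f"z196 : ${z196_price / z196_bips:,.0f} per BIPS")     # $560,000
print(f"ratio: {(z196_price / z196_bips) / (blade_price / blade_bips):,.0f}x")  # ~150,000x, hardware only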

Major mainframe operating system MVS (zOS) still requires CKD DASD,
which hasn't been manufactured for decades ... instead being simulated on
commodity, industry standard disks.
http://www.garlic.com/~lynn/submain.html#dasd

The mainframe I/O channel is FICON, a heavyweight protocol layer that
drastically reduces the throughput of the native industry standard fibre
channel standard. Peak I/O benchmark for z196 is 2M IOPS with 104 FICON
(the FICON protocol layer on top of 104 FCS). Recently there was an
announcement of a (single) FCS for e5-2600 claiming over a million IOPS
(two such would have greater throughput than all 104 FICON).
http://www.garlic.com/~lynn/submisc.html#ficon
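
A similarly rough sketch of that I/O comparison, taking the "over a million
IOPS" claim conservatively at 1M (an assumption, not a benchmark figure):

# z196 peak I/O benchmark spread over its FICON channels vs a single native FCS adapter
z196_peak_iops, ficon_channels = 2_000_000, 104
native_fcs_iops = 1_000_000          # conservative reading of "over a million IOPS"

print(f"per-FICON throughput: {z196_peak_iops / ficon_channels:,.0f} IOPS")              # ~19,200
print(f"native FCS adapters to match z196 peak: {z196_peak_iops / native_fcs_iops:.0f}")  # 2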

large cloud operators are now starting to deploy newer V2 e5-2600 in
their blades ... and for a decade they've been claiming that they build
their own servers for 1/3rd the price of brand name vendors (now
possibly under $1/BIPS). there have been references that chip
manufacturers are now shipping more server chips directly to cloud
operators than to brand name vendors.

and for the fun of it ... news about major DBMS throughput increases on
these platforms (using GPUs)

Fast Database Emerges from MIT Class, GPUs and Student's Invention
http://data-informed.com/fast-database-emerges-from-mit-class-gpus-and-students-invention
Red Fox: An Execution Environment for Relational Query Processing on
GPUs
http://gpuocelot.gatech.edu/publications/redfox/

recent posts mentioning e5-2600
http://www.garlic.com/~lynn/2013f.html#35 Reports: IBM may sell x86 server business to Lenovo
http://www.garlic.com/~lynn/2013f.html#37 Where Does the Cloud Cover the Mainframe?
http://www.garlic.com/~lynn/2013f.html#38 Reports: IBM may sell x86 server business to Lenovo
http://www.garlic.com/~lynn/2013f.html#51 Reports: IBM may sell x86 server business to Lenovo
http://www.garlic.com/~lynn/2013f.html#57 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013f.html#64 What Makes an Architecture Bizarre?
http://www.garlic.com/~lynn/2013f.html#72 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013f.html#73 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013f.html#74 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013g.html#2 A Complete History Of Mainframe Computing
http://www.garlic.com/~lynn/2013g.html#4 A Complete History Of Mainframe Computing
http://www.garlic.com/~lynn/2013g.html#5 SAS Deserting the MF?
http://www.garlic.com/~lynn/2013g.html#7 SAS Deserting the MF?
http://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
http://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
http://www.garlic.com/~lynn/2013g.html#43 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013g.html#49 A Complete History Of Mainframe Computing
http://www.garlic.com/~lynn/2013g.html#50 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013g.html#93 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013h.html#3 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013h.html#5 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013h.html#6 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013h.html#40 The Mainframe is "Alive and Kicking"
http://www.garlic.com/~lynn/2013h.html#79 Why does IBM keep saying things like this:
http://www.garlic.com/~lynn/2013h.html#80 Minicomputer Pricing
http://www.garlic.com/~lynn/2013i.html#47 Making mainframe technology hip again
http://www.garlic.com/~lynn/2013i.html#59 Making mainframe technology hip again
http://www.garlic.com/~lynn/2013i.html#60 Making mainframe technology hip again
http://www.garlic.com/~lynn/2013j.html#86 IBM unveils new "mainframe for the rest of us"
http://www.garlic.com/~lynn/2013k.html#53 spacewar
http://www.garlic.com/~lynn/2013l.html#31 model numbers; was re: World's worst programming environment?
http://www.garlic.com/~lynn/2013l.html#50 Mainframe On Cloud
http://www.garlic.com/~lynn/2013l.html#51 Mainframe On Cloud
http://www.garlic.com/~lynn/2013l.html#53 Mainframe On Cloud
http://www.garlic.com/~lynn/2013l.html#54 Mainframe On Cloud
http://www.garlic.com/~lynn/2013l.html#70 50,000 x86 operating system on single mainframe
http://www.garlic.com/~lynn/2013m.html#33 Why is the mainframe so expensive?
http://www.garlic.com/~lynn/2013m.html#35 Why is the mainframe so expensive?
http://www.garlic.com/~lynn/2013m.html#78 'Free Unix!': The world-changing proclamation made 30 years agotoday
http://www.garlic.com/~lynn/2013m.html#94 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
http://www.garlic.com/~lynn/2013n.html#38 Making mainframe technology hip again
http://www.garlic.com/~lynn/2013n.html#54 rebuild 1403 printer chain
http://www.garlic.com/~lynn/2013n.html#61 Bet Cloud Computing to Win

Quadibloc

Dec 26, 2013, 2:31:00 AM
I've been trying to reply to your post, but I have had difficulties.

InfoWorld is on Google Books with full view. I tried searching, though, and only got a later reference to that statement from 1993.

John Savard

Sam

Dec 26, 2013, 3:37:46 PM


<hanc...@bbs.cpcn.com> wrote in message
news:45a436f0-59c1-4788...@googlegroups.com...
But it remains to be seen just how many of those "tractor trailer"
applications there are left now that all of the stuff like amazon
and ebay and google etc are no longer done with mainframes.



hanc...@bbs.cpcn.com

Dec 26, 2013, 4:48:02 PM
On Thursday, December 26, 2013 3:37:46 PM UTC-5, Sam wrote:

> But it remains to be seen just how many of those "tractor trailer"
> applications there are left now that all of the stuff like amazon
> and ebay and google etc are no longer done with mainframes.

Good point.

I have no idea of the relative transactions-per-second handled by big mainframes vs. that of modern day servers of large organizations like the above. More importantly, I don't know the accuracy of the functions, that is, how many users are forced to redo their entry due to some sort of glitch.

For what it's worth, my own experience on the systems mentioned above is that they often work on the principle of "fuzzy matches". Sometimes this is helpful in finding related materials to what I'm seeking, but often it is a total waste of time and distraction. I don't know if that is a programming issue (done purposely) or an issue with the servers they use.

I don't know what kind of computer systems FedEx and UPS utilize or even if they're an issue, but they've had problems meeting their deliveries this year.

http://usnews.nbcnews.com/_news/2013/12/26/22058674-ups-fedex-scramble-to-deliver-delayed-christmas-packages?lite

Anne & Lynn Wheeler

Dec 26, 2013, 6:09:22 PM
"Sam" <sam_...@gmail.nospam.com> writes:
> But it remains to be seen just how many of those "tractor trailer"
> applications there are left now that all of the stuff like amazon
> and ebay and google etc are no longer done with mainframes.

re:
http://www.garlic.com/~lynn/2013o.html#64 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#65 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#68 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#69 "Death of the mainframe"

note that nearly any one of the large cloud "megadatacenters" will have
more processing power than the aggregate of all the mainframes in the
world today.

note that while mainframe processor sales have been running only 4-5% of
total IBM revenue ... the mainframe group has been earning a total of
$6.25 for every dollar in processor sales ... aka while the calculation has
mainframe hardware running around $560,000/BIPS ... total mainframe revenue
(with services and software) comes closer to $3.5M/BIPS.

This is compared to (v1) e5-2600 blades at around $3.50/BIPS from brand
name vendors (a million times less) and possibly closer to a dollar/BIPS for
large cloud operations building their own (three million times
less). The newer v2 e5-2600 may further reduce that by another factor
of two.
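
The ratios work out as claimed; a minimal sketch using the post's own
estimates (none of these are measured prices):

hw_per_bips    = 560_000              # z196 hardware, $/BIPS
total_per_bips = hw_per_bips * 6.25   # $6.25 of total revenue per $1 of processor sales
blade_per_bips = 3.50                 # brand-name (v1) e5-2600 blade
cloud_per_bips = 1.00                 # cloud operator building its own (rough guess)

print(f"total mainframe revenue per BIPS: ${total_per_bips:,.0f}")       # $3,500,000
print(f"vs brand-name blade: {total_per_bips / blade_per_bips:,.0f}x")   # 1,000,000x
print(f"vs cloud self-build: {total_per_bips / cloud_per_bips:,.0f}x")   # 3,500,000x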

One of the factors for the big cloud megadatacenters, with the radical drop
in system/computer costs, is that all the other megadatacenter costs have
become a much larger percentage of total costs ... power, cooling, human
administration and maintenance, etc. As a result the big cloud
megadatacenters have been on the bleeding edge of reducing these other
costs.

The radical reduction in system/computer costs has also allowed them to
aggressively move into a large amount of "on-demand" services ... as long
as the system power & cooling drop to zero when idle ... but it is possible to
come up to full operation "on-demand".

recent posts mentioning large cloud megadatacenters:
http://www.garlic.com/~lynn/2013.html#16 From build to buy: American Airlines changes modernization course midflight
http://www.garlic.com/~lynn/2013.html#17 Still think the mainframe is going away soon: Think again. IBM mainframe computer sales are 4% of IBM's revenue; with software, services, and storage it's 25%
http://www.garlic.com/~lynn/2013b.html#7 mainframe "selling" points
http://www.garlic.com/~lynn/2013b.html#8 mainframe "selling" points
http://www.garlic.com/~lynn/2013b.html#10 FW: mainframe "selling" points -- Start up Costs
http://www.garlic.com/~lynn/2013b.html#15 A Private life?
http://www.garlic.com/~lynn/2013b.html#25 Still think the mainframe is going away soon: Think again. IBM mainframe computer sales are 4% of IBM's revenue; with software, services, and storage it's 25%
http://www.garlic.com/~lynn/2013c.html#84 What Makes an Architecture Bizarre?
http://www.garlic.com/~lynn/2013c.html#91 What Makes an Architecture Bizarre?
http://www.garlic.com/~lynn/2013f.html#19 Where Does the Cloud Cover the Mainframe?
http://www.garlic.com/~lynn/2013f.html#28 Reports: IBM may sell x86 server business to Lenovo
http://www.garlic.com/~lynn/2013f.html#35 Reports: IBM may sell x86 server business to Lenovo
http://www.garlic.com/~lynn/2013f.html#37 Where Does the Cloud Cover the Mainframe?
http://www.garlic.com/~lynn/2013f.html#51 Reports: IBM may sell x86 server business to Lenovo
http://www.garlic.com/~lynn/2013f.html#57 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013f.html#61 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013f.html#73 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013f.html#74 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013g.html#12 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013g.html#21 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013g.html#43 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013g.html#45 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013h.html#40 The Mainframe is "Alive and Kicking"
http://www.garlic.com/~lynn/2013i.html#60 Making mainframe technology hip again
http://www.garlic.com/~lynn/2013i.html#66 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013j.html#23 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013j.html#24 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013j.html#32 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013j.html#62 Mainframe vs Server - The Debate Continues
http://www.garlic.com/~lynn/2013j.html#63 Mainframe vs Server - The Debate Continues
http://www.garlic.com/~lynn/2013j.html#70 Internet Mainframe Forums Considered Harmful
http://www.garlic.com/~lynn/2013k.html#53 spacewar
http://www.garlic.com/~lynn/2013k.html#56 spacewar
http://www.garlic.com/~lynn/2013l.html#70 50,000 x86 operating system on single mainframe
http://www.garlic.com/~lynn/2013m.html#33 Why is the mainframe so expensive?
http://www.garlic.com/~lynn/2013m.html#35 Why is the mainframe so expensive?
http://www.garlic.com/~lynn/2013n.html#38 Making mainframe technology hip again

Anne & Lynn Wheeler

Dec 26, 2013, 6:20:33 PM
hanc...@bbs.cpcn.com writes:
> I have no idea of the relative transactions-per-second handled by big
> mainframes vs. that of modern day servers of large organizations like
> the above. More importantly, I don't know the accuracy of the
> functions, that is, how many users are forced to redo their entry due
> to some sort of glitch.

http://www.garlic.com/~lynn/2013o.html#71 "Death of the mainframe"

the tpc-c benchmark ... top number is 8.5M tpmC (trans/min) and
$.55 per trans/min
http://www.tpc.org/tpcc/results/tpcc_perf_results.asp

tpc-c has recently been reworked since there are no cluster tpc-c
benchmarks ... which until recently had a top number around 32M tpmC

the GPU references possibly improving that by a factor of seven times.

it has been a long, long time since there have been any TPC numbers for a
mainframe. there was a webpage from a couple years ago that referenced
some estimate for the possible peak throughput of a 2005 mainframe ... but it
has disappeared. I had tried to run that forward; it has the mainframe at
less than current tpc numbers (and possibly 10-100 times more expensive per
trans/min).

for instance, the announcement of EC12 (peak 75 BIPS with 101 processors)
compared to z196 (peak 50 BIPS with 80 processors) says that one can expect
about 30% more DBMS throughput from EC12 than from z196 (even tho it has
50% more processor power).
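
Put as a quick calculation, using just the figures above:

# EC12 vs z196, from the numbers quoted above
z196_bips, z196_procs = 50, 80
ec12_bips, ec12_procs = 75, 101
dbms_gain = 0.30                      # "about 30% more DBMS throughput"

print(f"aggregate processor power: +{(ec12_bips / z196_bips - 1) * 100:.0f}%")   # +50%
print(f"per processor: z196 {1000 * z196_bips / z196_procs:.0f} vs "
      f"EC12 {1000 * ec12_bips / ec12_procs:.0f} MIPS")                          # 625 vs 743
print(f"DBMS throughput per BIPS, EC12 relative to z196: "
      f"{(1 + dbms_gain) * z196_bips / ec12_bips:.2f}")                          # ~0.87

i.e. DBMS throughput is not scaling with the raw processor numbers.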

Jon Elson

Dec 26, 2013, 6:25:29 PM
hanc...@bbs.cpcn.com wrote:


> I don't know what kind of computer systems FedEx and UPS utilize or even
> if they're an issue, but they've had problems meeting their deliveries
> this year.
The "point of sale" terminals they use in the FedEx stores (formerly
Kinkos) are definitely Windows. And, the local place seems to always be
rebooting the terminal - like 2 out of 3 times I go there they are
either rebooting or have to reboot after checking in my package.
(I ship a couple packages a week from my little business.)
That reveals nothing about their back-end servers, of course.

Jon

Peter Flass

Dec 26, 2013, 7:00:18 PM
I'm getting them all (it seems), thanks John. Maybe your news server
has blown a gasket. I read the 1993 reference you gave, but I'm still
trying to find out if the original prediction was made in 1991 as has
been widely reported. I'd really like to see his whole rationale
besides just that quote.

For a good laugh see this photo:
http://www.computerhistory.org/revolution/mainframe-computers/7/182/734

I can't recall if I mentioned, I wanted to do a Wiki article on the
controversy:
https://en.wikipedia.org/wiki/Death_of_the_mainframe

--
Pete

Sam

Dec 26, 2013, 7:26:11 PM


<hanc...@bbs.cpcn.com> wrote in message
news:c3647b88-ae0d-42ba...@googlegroups.com...
> On Thursday, December 26, 2013 3:37:46 PM UTC-5, Sam wrote:
>
>> But it remains to be seen just how many of those "tractor trailer"
>> applications there are left now that all of the stuff like amazon
>> and ebay and google etc are no longer done with mainframes.
>
> Good point.
>
> I have no idea of the relative transactions-per-second
> handled by big mainframes vs. that of modern day
> servers of large organizations like the above.

But presumably those operations have worked that
out, or more likely worked out the more important
transactions-per-second-per-$1K anyway.

> More importantly, I don't know the accuracy of the functions, that is,
> how many users are forced to redo their entry due to some sort of glitch.

It must be quite low, I have never seen a problem with amazon or ebay or
google.

> For what it's worth, my own experience on the systems mentioned
> above is that they often work on the principle of "fuzzy matches".

Not with the actual transaction, the purchase etc.

> Sometimes this is helpful in finding related materials to what
> I'm seeking, but often it is a total waste of time and distraction.

I hardly ever get that result with google particularly. Some
things are quite hard to search for properly when they don't
have a particularly unique keyword involved, like for example
the different measures of unemployment that the government
does actually calculate routinely, but it's surprisingly common
to get a useful hit on the first page quite a bit of the time.

I do find amazon much less useful than ebay for buying
small consumer items, even tho amazon does often have
a much better range of items available, but that is because
amazon doesn't let you sort by price including postage
unless you specify the category and doesn't put the
price of the shipping on the original hit list either.

> I don't know if that is a programming issue (done purposely)

It does appear to be with google, and done deliberately
so that it attempts to show you what you meant rather
than what you actually asked for. That does work pretty
well IMO and I quite often do a domain restricted
advanced search when the site's own search engine
is rather poor, like plenty of them are. It's much better
than wikipedia's too.

> or an issue with servers they use.

I think that is unlikely.

> I don't know what kind of computer systems FedEx and UPS utilize

From the doco on FedEx I saw it doesn't appear to be mainframes
at the lowest level of the stuff being physically moved between
planes and trucks to move stuff around.

> or even if they're an issue, but they've had problems meeting their
> deliveries this year.

> http://usnews.nbcnews.com/_news/2013/12/26/22058674-ups-fedex-scramble-to-deliver-delayed-christmas-packages?lite

That's more likely just due to the number of humans involved physically
moving stuff at a time of much higher than normal demand.

Anne & Lynn Wheeler

Dec 26, 2013, 11:16:12 PM
Quadibloc <jsa...@ecn.ab.ca> writes:
> Architecturally, therefore, it was a lot like a System 360/195.
>
> At that point, it was no longer possible to achieve gains in
> performance by building a CPU out of multiple chips.
>
> So the mainframe *did* die on schedule. IBM built reliable servers out
> of 370 ISA microprocessors and called them mainframes. Maybe that's
> too much of a generalization, and the term "mainframe" still has a
> real meaning - but I'm not so sure IBM really proved Stewart Alsop
> wrong simply because they're still selling things they call
> mainframes.

http://www.garlic.com/~lynn/2013o.html#72 "Death of the mainframe"

At the time of Alsop's statements ... Gerstner had not yet been hired; IBM
had gone into the red, re-orged into the 13 "baby blues" and was on the
verge of breaking up the company ... which also was highly likely to
result in completing the demise of the mainframe. The board then hired
Gerstner to reverse the breakup and resurrect the company ...
misc. posts
http://www.garlic.com/~lynn/submisc.html#gerstner

the company was still in the red in 93 but got slightly out in 94. we
had left in 92 ... but were told folklore that corporate hdqtrs spent
much of 93 shifting expenses from 94 into 93 ... putting 93 further into
the red ... but allowing the 94 numbers to show a profit ... slightly
different explanation here:
http://articles.sun-sentinel.com/1994-10-21/business/9410200765_1_mainframe-ibm-net-income

In 1980, I was con'ed into doing channel extender for STL ... which was
moving 300 people from the IMS group to an off-site bldg. Part of that
support was downloading the mainframe channel program to a remote channel
emulator and running the (simulated) channel program remotely
... significantly cutting the latency and overhead for the i/o. then the
vendor tried to get my support released ... but there was a group in pok
that got that squashed. This group was playing with some serial
fiber-optic stuff ... and they were afraid that if the channel extender
support was in the market, it would make it harder to get their stuff
released.

They finally get their support out a decade later in 1990 with es/9000
as escon ... by which time it is already obsolete. note this article about
the end of ACS/360 ... which discusses features of acs/360 showing up in es/9000
more than 20yrs later
http://people.cs.clemson.edu/~mark/acs_end.html

in 1988, i was asked if i could help llnl standardize some serial stuff
they were working with ... this eventually morphs into the fibre channel
standard. Later, some pok channel engineers get involved with FCS and
define a heavyweight protocol for FCS that drastically reduces the native
i/o thruput ... this eventually ships as FICON
http://www.garlic.com/~lynn/submisc.html#ficon

es/9000  1990       6 processors
z900     Dec2000   16 processors,  2.5 BIPS, 156 MIPS/proc
z990     2003      32 processors,    9 BIPS, 281 MIPS/proc
z9       July2005  54 processors,   18 BIPS, 333 MIPS/proc
z10      Feb2008   64 processors,   30 BIPS, 469 MIPS/proc
z196     July2010  80 processors,   50 BIPS, 625 MIPS/proc
ec12     Aug2012  101 processors,   75 BIPS, 743 MIPS/proc

during the 90s, ibm mainframe shifts from bipolar to cmos
http://www.cbronline.com/news/ibm_numbers_bipolars_days_with_g5_cmos_mainframes

the above mentions $6k/MIP for G5 ... or $6M/BIPS ... z196 at 50 BIPS and
$28M or $560,000/BIPS ... slightly better than a factor of ten improvement
... compared to possibly a dollar or less per BIPS for an e5-2600 blade.
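
A quick consistency check on those numbers, using only the figures above:

# MIPS/processor from the generation list above, plus the G5 -> z196 $/BIPS improvement
generations = [("z900", 16, 2.5), ("z990", 32, 9), ("z9", 54, 18),
               ("z10", 64, 30), ("z196", 80, 50), ("ec12", 101, 75)]
for name, procs, bips in generations:
    print(f"{name:5s} {1000 * bips / procs:4.0f} MIPS/processor")   # 156, 281, 333, 469, 625, 743

g5_per_bips   = 6_000_000          # $6k/MIP -> $6M/BIPS
z196_per_bips = 28_000_000 / 50    # $28M / 50 BIPS -> $560,000/BIPS
print(f"G5 -> z196: {g5_per_bips / z196_per_bips:.1f}x improvement in $/BIPS")   # ~10.7x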

For decades, RISC processors have had superscalar, out-of-order, branch
prediction, speculative execution, etc ... and a significant performance
advantage over i86. however, the past several generations of i86
processors have gone to RISC cores with a hardware layer that translates
i86 instructions into RISC micro-ops ... mitigating the performance
advantage of RISC processors.

note that much of the processor throughput increase from z10 to z196 is
attributable to the introduction of risc-like out-of-order execution.
Some amount of the processor improvement from z196 to ec12 is further
additions of risc-like processor capability. However, the mainframes are
still at a significant throughput disadvantage compared to RISC (and
i86 with RISC cores).

trivia ... 370/195 had out-of-order execution ... but not speculative
execution ... so conditional branches drained the pipeline ... peak
throughput was 10MIPS ... but most codes ran at half that because of
conditional branches. I got asked to help with an effort to add a
"hyper-threading" 2nd i-stream (which never shipped) ... it added a 2nd psw,
a 2nd set of registers, etc ... to simulate two-processor operation
... but kept the same pipeline and execution units. It assumed a pair of
i-streams ... each operating at 5 MIPS ... would keep the machine running at
peak throughput (instructions in the pipeline had a one-bit flag added
indicating which i-stream they belonged to).
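
The arithmetic behind the two-i-stream idea is simple; a toy sketch (the
5 MIPS per-stream figure is just the "half of peak" number above, not a
measured value):

peak_mips       = 10    # 370/195 pipeline peak
per_stream_mips = 5     # typical single i-stream, limited by branch-caused pipeline drains

for streams in (1, 2):
    sustained = min(streams * per_stream_mips, peak_mips)
    print(f"{streams} i-stream(s): ~{sustained} MIPS ({100 * sustained / peak_mips:.0f}% of peak)")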

recent posts mentioning 370/195
http://www.garlic.com/~lynn/2013.html#58 Was MVS/SE designed to confound Amdahl?
http://www.garlic.com/~lynn/2013b.html#73 One reason for monocase was Re: Dualcase vs monocase. Was: Article for the boss
http://www.garlic.com/~lynn/2013c.html#67 relative speeds, was What Makes an Architecture Bizarre?
http://www.garlic.com/~lynn/2013d.html#22 Query for Destination z article -- mainframes back to the future
http://www.garlic.com/~lynn/2013f.html#29 Delay between idea and implementation
http://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
http://www.garlic.com/~lynn/2013g.html#93 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013h.html#17 Supercomputers face growing resilience problems
http://www.garlic.com/~lynn/2013h.html#35 Some Things Never Die
http://www.garlic.com/~lynn/2013i.html#31 DRAM is the new Bulk Core
http://www.garlic.com/~lynn/2013i.html#33 DRAM is the new Bulk Core
http://www.garlic.com/~lynn/2013k.html#8 OT? IBM licenses POWER architecture to other vendors
http://www.garlic.com/~lynn/2013m.html#51 50,000 x86 operating system on single mainframe

Quadibloc

Dec 26, 2013, 6:50:58 PM
On Thursday, December 26, 2013 11:54:40 AM UTC-7, hanc...@bbs.cpcn.com wrote:

> We don't use a tractor-trailer to deliver goods when a VW will do, and of
> course vice-versa. The mainframe has its place among the "tractor trailer"
> applications.

After I posted that, I was reminded of the big difference between a mainframe and a server, even though both use single-chip CPUs. The mainframe has extensive RAS features.

However, Sun claims SPARC chips offer RAS features; Intel claims Itanium chips offer RAS features, and they recently came out with a new series of Xeon chips that brought those features to the x86 architecture. They're not using the word "mainframe" yet, and it may be those features aren't utilized by operating systems for servers made with those chips to anything like the extent that IBM does.

Even so, I still am not sure that "mainframes" today have much in common with mainframes circa, say, 1981. In 1981, there was a gulf between the mainframe and the micro; the capabilities of an IBM PC were making PDP-11s obsolete, not IBM 370s. So one could argue that the mainframe as it was known then *did* die, but no one noticed because the name got put on a different kind of machine, a high-reliability microprocessor-based server.

Of course, even if the "mainframe" died, the 370 ISA lived on continuously; but then there were definitely non-mainframe machines with that ISA.

John Savard

greymausg

Dec 27, 2013, 8:55:19 AM
The difference was, I suppose, that if you needed railway transport, you had to deal with US railways;
if steel, you could buy it anywhere, and the same for computers.


--
. Maus
.
...

jmfbahciv

Dec 27, 2013, 9:14:27 AM
Nowadays, everyone has an SST jet but its power gets wasted on fluff
instead of real work.

/BAH

hanc...@bbs.cpcn.com

Dec 27, 2013, 10:09:32 AM
On Thursday, December 26, 2013 6:09:22 PM UTC-5, Anne & Lynn Wheeler wrote:

> note that while mainframe processor sales has been running only 4-5% of
> total IBM revenue ... its mainframe group has been earning total of
> $6.25 for every dollar in processor sales ... aka the calculation that
> mainframe is running around $560,000/BIPS ... total mainframe revenue
> (with services and software) comes closer to $3.5M/BIPS.

IIRC, the Campbell-Kelly book said system software (like renting CICS) was a significant source of revenue for IBM.

Quadibloc

Dec 27, 2013, 12:45:56 AM
On Thursday, December 26, 2013 11:54:40 AM UTC-7, hanc...@bbs.cpcn.com wrote:

> I don't think the "technology under the hood" is the issue. Today's Pentiums
> are far, far faster than those of 1993. So are today's mainframes.

> Rather, I think the difference between a "mainframe" vs. other computing
> devices is in terms of functionality. IMHO, the mainframe to this day still
> offers certain operating features and functions that are significantly
> superior to that of other computers, which is a big reason they remain in
> wide use, as described.

You are doubtless correct - but _in_ 1990, the technology under the hood of mainframes was CPUs that were built from many different chips. So, when people were forecasting the death of the mainframe, they were thinking in terms of discrete logic becoming obsolete.

They didn't quite imagine that a mainframe could be put on a single chip... and that IBM could make chips just about as well as Intel.

This is why I think both sides were right. The microprocessor won, but systems including quality design features such as RAS can still command a price premium. As I've noted, SPARC, Itanium, and now EX-series Xeons have RAS features too, so are we going to call the same kind of computer a mainframe or a micro depending on whether or not it comes from a traditional mainframe vendor?

John Savard

Charlie Gibbs

Dec 27, 2013, 11:56:48 AM
In article <6410e9e2-adf7-4821...@googlegroups.com>,
jsa...@ecn.ab.ca (Quadibloc) writes:

> Incidentally, on March 22, 1993, Intel came out with the Pentium.
>
> That chip:
>
> * was pipelined
>
> * had an on-chip cache
>
> * was hardwired, not microcoded
>
> * used advanced multiplication and division techniques

FSVO "advanced".

"I am Pentium of Borg. Division is futile. You will be approximated."

--
/~\ cgi...@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!

Rod Speed

Dec 27, 2013, 12:59:16 PM
jmfbahciv <See....@aol.com> wrote
> hanc...@bbs.cpcn.com wrote
Anyone with even half a clue uses its power to use the net
instead of wasting time running down to the library to
check the most basic info or to buy something conveniently.


Anne & Lynn Wheeler

Dec 27, 2013, 4:31:34 PM
Quadibloc <jsa...@ecn.ab.ca> writes:
> Even so, I still am not sure that "mainframes" today have much in
> common with mainframes circa, say, 1981. In 1981, there was a gulf
> between the mainframe and the micro; the capabilities of an IBM PC
> were making PDP-11s obsolete, not IBM 370s. So that one could argue
> that the mainframe as it was known then *did* die, but no one noticed
> because the name got put on a different kind of machine, a
> high-reliabilty microprocessor-based server.

http://www.garlic.com/~lynn/2013o.html#73 "Death of the mainframe"

4300s sold into the same mid-range market as vax machines ... and in
similar numbers for small-quantity orders ... the big difference for
4300 was the large corporate orders for multiple hundreds at a time
... sort of the leading edge of the distributed computing tsunami. Later
in the 80s, large PCs and workstations moving up into the mid-range took
over that market ... decade of vax numbers sliced & diced by year,
model, US/non-US
http://www.garlic.com/~lynn/2002f.html#0

something similar happened to 4300s ... they were expecting continued
explosion in sales for 4331/4341 follow-on ... the 4361s & 4381s ... but
by that time the mid-range market was already starting to shift to large
PCs and workstations. for a little drift, old email mentioning 4300
http://www.garlic.com/~lynn/lhwemail.html#43xx

as in other posts in this thread ... the communication group stranglehold
more & more isolating the mainframe datacenter had a similar effect on the
high-end 370s ... with data fleeing the datacenter to more distributed
computing friendly platforms ... contributing significantly to the drop-off
in high-end 370 sales going into the early 90s and the company going
into the red.

as previously mentioned ... this describes the high-end mainframes
moving from bi-polar to cmos in the 90s (but with much smaller market
and sales)
http://www.cbronline.com/news/ibm_numbers_bipolars_days_with_g5_cmos_mainframes

then in the last decade ... i86 chips move to risc cores ... largely
eliminating the difference in throughput between i86 chips and risc chips.
Even the last two generations of mainframe cmos have introduced an
increasing number of features that have been part of risc for decades.

note that jim's early 80s studies detail that by then ... the causes of
outages were largely shifting from hardware to software and environmental
characteristics. jim's '84 presentation
http://www.garlic.com/~lynn/grayft84.pdf

in ha/cmp in the late 80s and early 90s ...
http://www.garlic.com/~lynn/subtopic.html#hacmp

we were showing that an N+1 ha/cmp cluster (with standard hardware) could
provide better "nines" availability than a purely hardware fault-tolerant
system
http://www.garlic.com/~lynn/submain.html#available
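
As an illustration of the "nines" claim, here is the standard k-out-of-n
availability calculation, assuming independent node failures; the 0.999
node availability is a made-up example, not a number from HA/CMP:

from math import comb

def availability(k_required: int, n_total: int, node_avail: float) -> float:
    # probability that at least k_required of n_total nodes are up
    return sum(comb(n_total, i) * node_avail**i * (1 - node_avail)**(n_total - i)
               for i in range(k_required, n_total + 1))

node = 0.999                                           # hypothetical single node: three nines
print(f"single node         : {node}")
print(f"4-of-5 (N+1) cluster: {availability(4, 5, node):.6f}")   # ~0.99999, roughly five nines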

in the early 90s ... I was asked to write a section for the corporate
continuous availability strategy document ... however both rochester
(as/400) and POK (high-end mainframe) complained that they couldn't meet
the description ... and the section was pulled.

also, i've referenced this post with a comment about ha/cmp cluster
scaleup (complementary to ha/cmp availability) in ellison's conference
room early Jan1992
http://www.garlic.com/~lynn/95.html#13

the mainframe DB2 people were complaining that if I was allowed to go
ahead, I would be a minimum of five yrs ahead of them. The problem was
IBM didn't have a non-mainframe cluster product ... so I was working with
various RDBMS vendors that had a somewhat common implementation for their
unix and vax/vms platforms (that included vax/cluster support) ... and
was doing an HA/CMP implementation with semantics similar to vax/cluster
to simplify the porting. some recent mentions
http://www.garlic.com/~lynn/2013m.html#86 'Free Unix!': The world-changing proclamation made 30 yearsagotoday
http://www.garlic.com/~lynn/2013m.html#87 'Free Unix!': The world-changing proclamation made 30 yearsagotoday
http://www.garlic.com/~lynn/2013o.html#44 the suckage of MS-DOS, was Re: 'Free Unix!

and various old email on cluster scalup from the period
http://www.garlic.com/~lynn/lhwemail.html#medusa

and I was also heavily involved in working with national labs supporting
scientific and numerical intensive computing. as previously mentioned ... possibly
only hrs after the last email in above ... the effort was transferred
and we were told we couldn't work on anything with more than four
processors. then almost immediately the cluster scaleup was announced
as a supercomputer for scientific and numerical intensive computing *ONLY* ... press
item from 17Feb1992
http://www.garlic.com/~lynn/2001n.html#6000clusters1
and item later in the spring claiming that the interest in cluster stuff
caught them by *SURPRISE*
http://www.garlic.com/~lynn/2001n.html#6000clusters2

The old 4300 email includes references to having done benchmarks for LLNL in
1979 when they were looking at a large cluster compute farm of 4300s
... and then later in 1988 being asked to help LLNL standardize some
high-speed serial stuff they had (so I had a long relation with them, going
back quite a while, prior to ha/cmp).

Quadibloc

Dec 27, 2013, 9:35:51 PM
On Friday, December 27, 2013 9:56:48 AM UTC-7, Charlie Gibbs wrote:
> In article <6410e9e2-adf7-4821...@googlegroups.com>,
> jsa...@ecn.ab.ca (Quadibloc) writes:

> > * used advanced multiplication and division techniques

> FSVO "advanced".

> "I am Pentium of Borg. Division is futile. You will be approximated."

Yes, unfortunately there was the division bug in the initial Pentiums, partly because the advanced technique Intel chose was SRT division, which depended on a table, instead of Goldschmidt division, as used by IBM.
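
For reference, the idea behind Goldschmidt division is to multiply numerator
and denominator by the same factor F = 2 - D each step, driving the
denominator toward 1 and the numerator toward the quotient; a minimal sketch
of the idea (not IBM's actual hardware implementation):

def goldschmidt(n: float, d: float, iterations: int = 5) -> float:
    # assumes d has already been scaled into (0.5, 1]; convergence is quadratic
    for _ in range(iterations):
        f = 2.0 - d
        n *= f
        d *= f
    return n

print(goldschmidt(0.6, 0.8))   # ~0.75, i.e. 0.6 / 0.8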

But that only slightly delayed, at worst, the availability of microprocessors so powerful that multiple-chip designs could no longer improve on them.

John Savard

jmfbahciv

unread,
Dec 28, 2013, 9:54:25 AM12/28/13
to
I always considered a "mainframe" based on what kinds of computing it would do.
I wouldn't call a Cray a mainframe if it was only used for compute-intensive
work.

/BAH

Quadibloc

unread,
Dec 28, 2013, 3:58:37 PM12/28/13
to
On Saturday, December 28, 2013 7:54:25 AM UTC-7, jmfbahciv wrote:

> I always considered a "mainframe" on what kinds of computing it would do.
> I wouldn't call a Cray a mainframe if it was only used for compute-intensive
> work.

A Cray I would indeed typically have been called a "supercomputer".

I recall the vocabulary developing as follows:

First, there were just computers. Made out of vacuum tubes, of course all of them were big.

Eventually, the minicomputer came along.

Even before the minicomputer, there were computers like the Recomp II, the LGP-30, the Bendix G-15, and the PDP-5, that were noticeably smaller in scale and more affordable than large computers.

The term "mainframe" originated as a synonym for CPU - this is the main part of a computer system, in which electronics, held in a frame, are housed.

But if you look at, say, a typical PDP-8/I system, there isn't a *main* frame, there's one rack which includes the CPU, memory, and even tape and disk drives.

So the term "mainframe" became associated with large computers. And it didn't matter if they were commercial 7010 computers or scientific 7090 computers. Both were mainframes.

Since the RAMAC was a vacuum-tube computer system, you could get hard drives for the 7010 and 7090, but the database in its modern sense hadn't been invented yet. Large quantities of data were on punched cards or tape; disk space was used for program files and working storage.

So the Control Data 1604A and the Control Data 6600 were mainframes; big computers that had punched cards and line printers. How could a System/360 Model 91 or Model 44 get to be called a mainframe while they weren't?

The PDP-6 came from a company traditionally known as a mini manufacturer, and so some people might have wondered if it was really a mainframe or... a supermini (except that term hadn't been coined yet), but universities that bought a PDP-10, while patting themselves on the back for saving money, would have bristled at the thought that they didn't have a real mainframe computer.

For that matter, I've seen an SDS 940 with my own eyes, and it needed a raised floor just like any other mainframe.

While, therefore, I think most people thought of a Cray I as a mainframe that was more specifically a supercomputer, when you move forwards in time a bit, and get supercomputers like the ETA minisupers or the Fujitsu SX-8r, with IBM making the only remaining mainframes, the idea that a mainframe is a machine primarily used for database work made sense.

So my point is that if the mainframe got reinvented, the old mainframe could have died on schedule without anyone noticing.

John Savard

hanc...@bbs.cpcn.com

unread,
Dec 28, 2013, 4:33:11 PM12/28/13
to
On Friday, December 27, 2013 4:31:34 PM UTC-5, Anne & Lynn Wheeler wrote:

> 4300s sold into the same mid-range market as vax machine ... and in
> similar numbers for orders with small numbers ... big difference for
> 4300 was the large corporate orders for multiple hundreds at a time
> ... sort of the leading edge of the distributed computing tsunami. Later
> in the 80s, large PCs and workstations moving up into the mid-range took
> over that market ... decade of vax numbers sliced & diced by year,
> model, US/non-US

The hospital I worked at bought a 4300 to replace its S/360. Obviously the investment in programs was preserved.

As an aside, eventually that hospital replaced its internal I.T. office with a service bureau specializing in hospital applications. (I just checked to see if that service bureau was still in existence, and Google said it was purchased by Siemens back in 2000 for $2.1 billion.)

As to the mainframe marketplace, it seemed that the larger organizations were developing major complex (multi-million dollar) applications in the 1980s and 1990s. This greatly expanded the functionality of existing on-line processes or converted batch to on-line. Naturally, these applications required bigger mainframes, which were purchased.




Anne & Lynn Wheeler

unread,
Dec 28, 2013, 6:07:21 PM12/28/13
to
hanc...@bbs.cpcn.com writes:
> The hospital I worked at bought a 4300 to replace its S/360.
> Obviously the investment in programs was preserved.
>
> As an aside, eventually that hospital replaced its internal
> I.T. office with a service bureau specializing in hospital
> applications. (I just checked to see if that service bureau was still
> in existence, and Google said it was purchased by Siemens back in 2000
> for $2.1 billion.)
>
> As to the mainframe marketplace, it seemed that the larger
> organizations were developing major complex (multi-million dollar)
> applications in the 1980s and 1990s. This greatly expanded the
> functionality of existing on-line processes or converted batch to
> on-line. Naturally, these applications required bigger mainframes,
> which were purchased.

http://www.garlic.com/~lynn/2013o.html#75 "Death of the mainframe"

totally unrelated, we did lots of work with the Siemens Infineon chip
spinoff on the AADS chip strawman ... even doing a walk-through of their
(then brand new) security chip fab in dresden
http://www.garlic.com/~lynn/x959.html#aads

and then tried to do some stuff with the Siemens medical services group
on DBMS and medical record technology.

... however, in the 90s, there was a major effort by the remaining core of
mainframe use ... the financial industry ... to move to large numbers of
"killer micros". The issue was that online transactions had been added
over the years ... but were really just queueing up transactions to be
settled in the traditional batch system ... that ran overnight.

the problem was that globalization was both increasing the amount of work to
be done overnight and shortening the length of the overnight
window. The rewrites were to move to "straight through processing" using
large numbers of parallel processing. However, the parallelization
libraries they were using introduced a factor of 100 times overhead
(compared to the mainframe cobol batch) ... totally swamping the
throughput increases anticipated from the combination of
straight through processing and large numbers of parallel processors.
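(A rough sketch of why that factor of 100 was fatal -- with made-up
numbers, and assuming perfect parallel speedup across the "killer
micros", the per-transaction overhead still leaves the rewrite slower
than the batch baseline:)

  batch_cost = 1.0      # relative cost per transaction, cobol batch
  overhead   = 100.0    # relative cost under the parallelization libraries
  micros     = 64       # hypothetical number of processors
  parallel_cost = overhead / micros   # ideal, perfectly scalable case
  print(parallel_cost)  # 1.5625 -- still worse than the 1.0 batch baseline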

They did toy demos and then failed to do the speeds&feeds numbers about
scaleup ... and even with warnings about what was going to happen,
several went to pilot deployments before the magnitude of the problem
was realized/appreciated. There was significant backlash from the failed
efforts regarding attempts to make further moves off the mainframe.

in the last half of the last decade we took some technology to financial
industry standards groups ... that approached the parallelization and
scaleup (for straight through processing) from a totally different
direction. Rather than lots of ROI application code calling
parallelization libraries ... this leveraged the significant work that
had been done on parallizing by the major RDBMS. The implementation took
high-level design specification and decomposed it into fine-grain SQL
statements that could be efficiently parallelized. Initially the
technology saw high acceptance ... but then hit a brick wall ... the
comments that eventually came back were that there was still a large number
of executives who carried significant scars from the failures in the
90s ... and it would have to wait for a whole new generation before it
could be tried again.

Earlier in this thread, I made references to early Jan1992 meeting in
Ellison's conference room about having 128-way by ye1992
http://www.garlic.com/~lynn/95.html#13

and then its getting co-opted for scientific and numerical intensive
*ONLY* and being told we couldn't work on anything with more than four
processors (at which point we decide to leave).

Past reference to announcements about parallel processing advances
("From the annals of release no software before its time"):
http://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time
http://www.garlic.com/~lynn/2009p.html#46 From The Annals of Release No Software Before Its Time
http://www.garlic.com/~lynn/2011m.html#46 From The Annals of Release No Software Before Its Time
http://www.garlic.com/~lynn/2011m.html#47 From The Annals of Release No Software Before Its Time
http://www.garlic.com/~lynn/2011m.html#59 From The Annals of Release No Software Before Its Time

recent posts mentioning "straight through processing" work to replace
the "overnight batch window"
http://www.garlic.com/~lynn/2013b.html#42 COBOL will outlive us all
http://www.garlic.com/~lynn/2013c.html#84 What Makes an Architecture Bizarre?
http://www.garlic.com/~lynn/2013f.html#57 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013f.html#64 What Makes an Architecture Bizarre?
http://www.garlic.com/~lynn/2013g.html#6 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013g.html#50 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013h.html#42 The Mainframe is "Alive and Kicking"
http://www.garlic.com/~lynn/2013i.html#49 Internet Mainframe Forums Considered Harmful
http://www.garlic.com/~lynn/2013m.html#35 Why is the mainframe so expensive?

jmfbahciv

unread,
Dec 29, 2013, 10:52:09 AM12/29/13
to
Quadibloc wrote:
> On Saturday, December 28, 2013 7:54:25 AM UTC-7, jmfbahciv wrote:
>
>> I always considered a "mainframe" on what kinds of computing it would do.
>> I wouldn't call a Cray a mainframe if it was only used for compute-intensive
>> work.
>
> A Cray I would indeed typically have been called a "supercomputer".

I think of it as a computing device.
That implies a lot of cables are necessary for delivering computing services.
It signifies that the computer is doing many different things during the
same wall-clock second.
>
> While, therefore, I think most people thought of a Cray I as a mainframe
> that was more specifically a supercomputer, when you move forwards in time a
> bit, and get supercomputers like the ETA minisupers or the Fujitsu SX-8r, with
> IBM making the only remaining mainframes, the idea that a mainframe is a
> machine primarily used for database work made sense.
>
> So my point is that if the mainframe got reinvented, the old mainframe could
> have died on schedule without anyone noticing.

Perhaps the physical mainframe, but not mainframe computing. I'm not trying
to correct the industry definitions but trying to describe how I looked at it.
A PDP-10 could easily be used as a computing device. If it were, I would not
consider that system as a mainframe. The minis weren't mainframes because
their abilities to deliver computing services were severely constrained
compared to mainframes.

/BAH

Patrick Scheible

unread,
Dec 31, 2013, 6:45:53 PM12/31/13
to
Quadibloc <jsa...@ecn.ab.ca> writes:

> Incidentally, on March 22, 1993, Intel came out with the Pentium.
>
> That chip:
>
> * was pipelined
>
> * had an on-chip cache
>
> * was hardwired, not microcoded
>
> * used advanced multiplication and division techniques
>
> Architecturally, therefore, it was a lot like a System 360/195.
>
> At that point, it was no longer possible to achieve gains in
> performance by building a CPU out of multiple chips.
>
> So the mainframe *did* die on schedule. IBM built reliable servers out
> of 370 ISA microprocessors and called them mainframes. Maybe that's
> too much of a generalization, and the term "mainframe" still has a
> real meaning - but I'm not so sure IBM really proved Stewart Alsop
> wrong simply because they're still selling things they call
> mainframes.

I would say that being a "mainframe" doesn't refer to the underlying
architecture. Rather, it's what it's used for:

CPU-bound and storage to CPU bandwidth bound problems

High reliability environment, whether for one computer or several
working together. Commodity PC hardware doesn't cut it.

Usually multiple user

-- Patrick




Michael Black

unread,
Dec 31, 2013, 9:46:31 PM12/31/13
to
I'd say so too.

There obviously was a difference initially, the small computer not coming
close to a mainframe or a minicomputer. But they merged at some point.

I have that O'Reilly book (published by someone else) from about 1988
about Unix system administration, and early on it mentions small, medium
and large Unix systems. A quarter century later the descriptions are
a laugh: tiny, puny systems compared to what someone with a recent
computer has at home.

But, those systems were intended for a large number of users, and a bunch
of them connected at the same time. That has to change the architecture,
in the form of terminal handling if nothing else. Probably handling disk
I/O too, if everyone wants to save a file at the same time, that will be a
bottleneck. I saw an ad for a used server the other day, lots of RAM (a
server/mainframe would have tried for as much RAM as possible when a
desktop wouldn't need as much that early on), and something like six 80gig
SCSI drives. Presumably splitting the drives helps to handle multiple
users at the same time.

Michael

Charlie Gibbs

unread,
Jan 2, 2014, 1:48:58 AM1/2/14
to
In article <86r48st...@chai.my.domain>, k...@zipcon.net
(Patrick Scheible) writes:

> I would say that being a "mainframe" doesn't refer to the underlying
> architecture. Rather, it's what it's used for:
>
> CPU-bound and storage to CPU bandwidth bound problems
>
> High reliability environment, whether for one computer or several
> working together. Commodity PC hardware doesn't cut it.
>
> Usually multiple user

There seem to be as many definitions of "mainframe" as there are
people. My definition of mainframe is a machine designed to process
large volumes of data in a not-so-interactive form. I/O is record-
rather than byte-oriented, terminals run in block mode, applications
are batch rather than interactive (you can do interactive work and
character-at-a-time I/O, but it's going against the grain).

And then there's JCL.

Mainframes are arguably more self-sufficient; they're designed
to keep running with a minimum of user interaction. Remember,
"interactive" is a synonym for "manual".

Morten Reistad

unread,
Jan 3, 2014, 2:53:22 AM1/3/14
to
In article <1258.149T29...@kltpzyxm.invalid>,
Charlie Gibbs <cgi...@kltpzyxm.invalid> wrote:
>In article <86r48st...@chai.my.domain>, k...@zipcon.net
>(Patrick Scheible) writes:
>
>> I would say that being a "mainframe" doesn't refer to the underlying
>> architecture. Rather, it's what it's used for:
>>
>> CPU-bound and storage to CPU bandwidth bound problems
>>
>> High reliability environment, whether for one computer or several
>> working together. Commodity PC hardware doesn't cut it.
>>
>> Usually multiple user
>
>There seem to be as many definitions of "mainframe" as there are
>people. My definition of mainframe is a machine designed to process
>large volumes of data in a not-so-interactive form. I/O is record-
>rather than byte-oriented, terminals run in block mode, applications
>are batch rather than interactive (you can do interactive work and
>character-at-a-time I/O, but it's going against the grain).

This has always been a form factor measurement. I half-jokingly
say that you can throw a laptop, carry a pc, cart a mini, and
with a mainframe you can enter it and close the door after you.

Only half joking.

>And then there's JCL.
>
>Mainframes are arguably more self-sufficient; they're designed
>to keep running with a minimum of user interaction. Remember,
>"interactive" is a synonym for "manual".

And what exactly makes this differ from high end "pc" servers,
easily showing uptimes of several years?

-- mrr

Rod Speed

unread,
Jan 3, 2014, 1:17:52 PM1/3/14
to
Morten Reistad <fi...@last.name> wrote
> Charlie Gibbs <cgi...@kltpzyxm.invalid> wrote
>> k...@zipcon.net (Patrick Scheible) wrote

>>> I would say that being a "mainframe" doesn't refer to
>>> the underlying architecture. Rather, it's what it's used for:

>>> CPU-bound and storage to CPU bandwidth bound problems

>>> High reliability environment, whether for one computer or several
>>> working together. Commodity PC hardware doesn't cut it.

>>> Usually multiple user

>> There seem to be as many definitions of "mainframe" as there are
>> people. My definition of mainframe is a machine designed to process
>> large volumes of data in a not-so-interactive form. I/O is record-
>> rather than byte-oriented, terminals run in block mode, applications
>> are batch rather than interactive (you can do interactive work and
>> character-at-a-time I/O, but it's going against the grain).

> This has always been a form factor measurement. I half-jokingly
> tell that you can throw a laptop, carry a pc, cart a mini, and
> with a mainframe you can enter it and close the door after you.

Trouble is that you can do that last with a PDP9 and it was
never anything like a mainframe.

Quadibloc

unread,
Jan 3, 2014, 3:06:50 PM1/3/14
to
On Friday, January 3, 2014 11:17:52 AM UTC-7, Rod Speed wrote:

> Trouble is that you can do that last with a PDP9 and it was
> never anything like a mainframe.

A PDP-9 isn't that much bigger than a PDP-8/I; both have to be carted, neither fills a room.

John Savard

Rod Speed

unread,
Jan 3, 2014, 3:32:18 PM1/3/14
to
Quadibloc <jsa...@ecn.ab.ca> wrote
> Rod Speed wrote
>> Morten Reistad <fi...@last.name> wrote
>>> Charlie Gibbs <cgi...@kltpzyxm.invalid> wrote
>>>> k...@zipcon.net (Patrick Scheible) wrote

>>>>> I would say that being a "mainframe" doesn't refer to
>>>>> the underlying architecture. Rather, it's what it's used for:

>>>>> CPU-bound and storage to CPU bandwidth bound problems

>>>>> High reliability environment, whether for one computer or several
>>>>> working together. Commodity PC hardware doesn't cut it.

>>>>> Usually multiple user

>>>> There seem to be as many definitions of "mainframe" as there are
>>>> people. My definition of mainframe is a machine designed to process
>>>> large volumes of data in a not-so-interactive form. I/O is record-
>>>> rather than byte-oriented, terminals run in block mode, applications
>>>> are batch rather than interactive (you can do interactive work and
>>>> character-at-a-time I/O, but it's going against the grain).

>>> This has always been a form factor measurement. I half-jokingly
>>> tell that you can throw a laptop, carry a pc, cart a mini, and
>>> with a mainframe you can enter it and close the door after you.

>> Trouble is that you can do that last with a PDP9
>> and it was never anything like a mainframe.

> A PDP-9 isn't that much bigger than a PDP-8/I;

The 9 is MUCH wider.

http://www.piercefuller.com/collect/pdp8pix/pdp8i-3p.jpg
You can't enter that and close the door after you.

I can't find a decent picture of a 9 with the back door open.
You really could enter it and close the door after you.

> both have to be carted,

You don't cart a 9.

> neither fills a room.

That wasn't even mentioned.

Charlie Gibbs

unread,
Jan 3, 2014, 1:33:10 PM1/3/14
to
In article <is6ipa-...@wair.reistad.name>, fi...@last.name
Well, my definition might be becoming somewhat dated. On the
other hand, since my definition is usage-oriented, those "PC"
servers are actually doing things that traditional mainframes
did, and they live in the same big air-conditioned rooms that
the mainframes did - as opposed to a box that's architecturally
very similar but which sits on a user's desk and is turned off
at night (or rebooted regularly at Windows' whim).

On the other other hand, our software is running 24/7 - with
minimal need for human intervention - on a number of those
desktop machines...

Anne & Lynn Wheeler

unread,
Jan 3, 2014, 4:52:53 PM1/3/14
to
"Charlie Gibbs" <cgi...@kltpzyxm.invalid> writes:
> Well, my definition might be becoming somewhat dated. On the
> other hand, since my definition is usage-oriented, those "PC"
> servers are actually doing things that traditional mainframes
> did, and they live in the same big air-conditioned rooms that
> the mainframes did - as opposed to a box that's architecturally
> very similar but which sits on a user's desk and is turned off
> at night (or rebooted regularly at Windows' whim).
>
> On the other other hand, our software is running 24/7 - with
> minimal need for human intervention - on a number of those
> desktop machines...

http://www.garlic.com/~lynn/2013o.html#80 "Death of the mainframe"


the low & mid range 4300s ... in part because of their small
environmental footprint, started to move out into departmental supply and
conference rooms (inside ibm this was a significant contributor to conference
rooms becoming a scarce commodity) ... leading edge of the distributed
computing tsunami

other recent post in ibm-main discussion
http://www.garlic.com/~lynn/2014.html#4 Application development paradigms [was: RE: Learning Rexx]

Lon

unread,
Jan 3, 2014, 7:37:52 PM1/3/14
to
High end PC servers don't typically require a staff of a few dozen
high priced systems analysts to keep them running.
And they don't take 48 hours or longer to cold start.




Lon

unread,
Jan 3, 2014, 7:39:53 PM1/3/14
to
If it will fit in a single room, it may not really qualify for mainframe
status.


Quadibloc

unread,
Jan 3, 2014, 10:34:36 PM1/3/14
to
On Friday, January 3, 2014 1:32:18 PM UTC-7, Rod Speed wrote:

> http://www.piercefuller.com/collect/pdp8pix/pdp8i-3p.jpg
> You can't enter that and close the door after you.

Here's a picture of the PDP-9:
http://www.csse.monash.edu.au/museum/PDP9_cab.jpg

I was thinking of a dual-rack PDP-8/I system being about the same size - and such configurations were very common.

John Savard

Rod Speed

unread,
Jan 4, 2014, 12:25:31 AM1/4/14
to
Quadibloc <jsa...@ecn.ab.ca> wrote
> Rod Speed wrote

>> http://www.piercefuller.com/collect/pdp8pix/pdp8i-3p.jpg
>> You can't enter that and close the door after you.

> Here's a picture of the PDP-9:
> http://www.csse.monash.edu.au/museum/PDP9_cab.jpg

Don't need one, I ran one for years and did all the maintenance
on it and added lots of stuff to it, including stealing the design
of the mag tape controller and doing my own with wire-wrapped
socketed TTL in a massive great drawer of wire wrap sockets.

Like I said, its MUCH wider than an 8/I

> I was thinking of a dual-rack PDP-8/I system being about the same size

Not as far as being able to enter it and close the door after you is
concerned.

The 9 had the logic on a full width door at the back that
swung out, and you could stand in the rack - and needed
to, to get at the back of the control panel etc.

You couldn't do that with a dual rack 8/I system.

> - and such configurations were very common.

Sure, but you couldn't stand in the rack and close the door behind you.


Stephen Wolstenholme

unread,
Jan 4, 2014, 4:17:11 AM1/4/14
to
On Fri, 03 Jan 2014 17:37:52 -0700, Lon <lon.s...@comcast.net>
wrote:

>High end PC servers don't typically require a staff of a few dozen
>high priced systems analysts to keep them running.

The mainframes I worked on for years did not need anyone to keep them
running apart from operators to load mag tapes and disc packs when the
machine "asked". The systems analysts were long gone!

>And they don't take 48 hours or longer to cold start.
>

Only if they get switched off. I worked on a system once that needed
the cooling and air conditioning system running for an hour before the
six mainframes could be switched on.

Steve

--
Neural Planner Software http://www.npsnn.com
EasyNN-plus neural network software http://www.easynn.com
SwingNN prediction software http://www.swingnn.com


Nick Spalding

unread,
Jan 4, 2014, 5:34:06 AM1/4/14
to
Stephen Wolstenholme wrote, in <dnjfc95551h6rijcg...@4ax.com>
on Sat, 04 Jan 2014 09:17:11 +0000:

> On Fri, 03 Jan 2014 17:37:52 -0700, Lon <lon.s...@comcast.net>
> wrote:
>
> >High end PC servers don't typically require a staff of a few dozen
> >high priced systems analysts to keep them running.
>
> The mainframes I worked on for years did not need anyone to keep them
> running apart from operators to load mag tapes and disc packs when the
> machine "asked". The systems analysts were long gone!
>
> >And they don't take 48 hours or longer to cold start.
> >
>
> Only if they get switched off. I worked on a system once that needed
> the cooling and air conditioning system running for a hour before the
> six mainframes could be switched on.

I remember a (probably apocryphal) story of the Stretch at Aldermaston where
some bean counter insisted on shutting down everything including the air
conditioning over some particularly long public holiday break. It took three
weeks to get it running again.
--
Nick Spalding

jmfbahciv

unread,
Jan 4, 2014, 10:05:46 AM1/4/14
to
Which systems took that long to cold start?

/BAH

jmfbahciv

unread,
Jan 4, 2014, 10:05:47 AM1/4/14
to
Charlie Gibbs wrote:
> In article <is6ipa-...@wair.reistad.name>, fi...@last.name
> (Morten Reistad) writes:
>
>> In article <1258.149T29...@kltpzyxm.invalid>,
>> Charlie Gibbs <cgi...@kltpzyxm.invalid> wrote:
>>
>>> Mainframes are arguably more self-sufficient; they're designed
>>> to keep running with a minimum of user interaction. Remember,
>>> "interactive" is a synonym for "manual".
>>
>> And what exactly makes this differ from high end "pc" servers,
>> easily showing uptimes of several years?
>
> Well, my definition might be becoming somewhat dated. On the
> other hand, since my definition is usage-oriented, those "PC"
> servers are actually doing things that traditional mainframes
> did,

YUp.

> and they live in the same big air-conditioned rooms that
> the mainframes did - as opposed to a box that's architecturally
> very similar but which sits on a user's desk and is turned off
> at night (or rebooted regularly at Windows' whim).
>
> On the other other hand, our software is running 24/7 - with
> minimal need for human intervention - on a number of those
> desktop machines...

The services provided by a laptop are similar to what mainframes
did.

/BAH

Walter Bushell

unread,
Jan 4, 2014, 10:24:00 AM1/4/14
to
In article <2oofc9lfc31tlfbea...@4ax.com>,
Nick Spalding <spal...@iol.ie> wrote:

> I remember a (probably apocryphal) story of the Stretch at Aldermaston where
> some bean counter insisted on shutting down everything including the air
> conditioning over some particularly long public holiday break. It took three
> weeks to get it running again.

Way back, a couple of fiscal panics ago, there was a fiscal firm a
block or so from where I worked that turned the heat off over
Presidents day (or was it still George's birthday?). It seems the heating
system froze and flooded their office on the first floor and ruined
all their equipment and put them, I think, at least temporarily out of
business.

We've had a lot of comments about physical plant people and their
obliviousness to switching things off and on. Training them to give
warning is like trying to potty train a horse.

--
Never attribute to stupidity that which can be explained by greed. Me.

osmium

unread,
Jan 4, 2014, 11:24:37 AM1/4/14
to
"jmfbahciv" wrote:

>> High end PC servers don't typically require a staff of a few dozen
>> high priced systems analysts to keep them running.
>> And they don't take 48 hours or longer to cold start.
>
> Which systems took that long to cold start?

The Univac File Computer took 20 minutes for the photocells in the tape
units to heat up adequately. They were used for end of tape detection -
each end. They were lead oxide cells and often seemed better at detecting
heat than detecting light.


Scott Lurndal

unread,
Jan 4, 2014, 2:19:18 PM1/4/14
to
"Charlie Gibbs" <cgi...@kltpzyxm.invalid> writes:
>In article <is6ipa-...@wair.reistad.name>, fi...@last.name
>(Morten Reistad) writes:
>
>> In article <1258.149T29...@kltpzyxm.invalid>,
>> Charlie Gibbs <cgi...@kltpzyxm.invalid> wrote:
>>
>>> Mainframes are arguably more self-sufficient; they're designed
>>> to keep running with a minimum of user interaction. Remember,
>>> "interactive" is a synonym for "manual".
>>
>> And what exactly makes this differ from high end "pc" servers,
>> easily showing uptimes of several years?
>
>Well, my definition might be becoming somewhat dated. On the
>other hand, since my definition is usage-oriented, those "PC"
>servers are actually doing things that traditional mainframes
>did, and they live in the same big air-conditioned rooms that
>the mainframes did - as opposed to a box that's architecturally
>very similar but which sits on a user's desk and is turned off
>at night (or rebooted regularly at Windows' whim).

From 2001 to 2010, I ran a web site and sendmail server on a
2000-vintage sony vaio laptop. It was rebooted once in that
10 year span (Power failure and the laptop battery was old and
tired - a UPS was added). After that single interruption, it
ran for another 5 years without any intervention or even reboot
(running Redhat 8).

It was replaced with a 2" x 4" x 4" ZOTEC box running a dual-core
AMD64 processor hosting four virtual machines - two websites and
an email server. Draws 11 watts. Up for over a year now.

I'd call them both servers in function, if not in form.

scott

Ahem A Rivet's Shot

unread,
Jan 4, 2014, 2:44:18 PM1/4/14
to
Raised floor, air conditioning and a forklift entrance are common
features of rooms suitable for mainframes.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

Charlie Gibbs

unread,
Jan 4, 2014, 9:38:25 PM1/4/14
to
In article <proto-04D28F....@news.panix.com>, pr...@panix.com
(Walter Bushell) writes:

> In article <2oofc9lfc31tlfbea...@4ax.com>,
> Nick Spalding <spal...@iol.ie> wrote:
>
>> I remember a (probably apocryphal) story of the Stretch at
>> Aldermaston where some bean counter insisted on shutting
>> down everything including the air conditioning over some
>> particularly long public holiday break. It took three weeks
>> to get it running again.

At one PPOE I forgot to turn off the air conditioner after
shutting down the system for a long weekend. It didn't have
a thermostat (actually, we later found the thermostat in the
crawl space under the machine room, and it was turned all the
way down) and when we came back in to start it up our breath
was condensing. The oil in the disk drives' actuators had
congealed, and we had to let the drives spin for an hour
before the heads would load.

> Way back when a couple of fiscal panics ago, there was a fiscal
> firm a block or so from where I worked that turned the heat off
> over Presidents day or was it still George's birthday. It seems
> the heating system froze and flooded their office on the first
> floor and ruined all there equipment and put them I think at
> least temporarily out of business.
>
> We've had a lot of comments about physical plant people and their
> obliviousness to switching things off and on. Training them to give
> warning is like trying to potty train a horse.

An electrician was in the machine room working on a circuit unrelated
to the computer equipment. He traced wires by using the time-honoured
technique of switching off breakers and seeing which lights went out.
And sure enough, when he switched off the breaker in the separate box
marked COMPUTER POWER ONLY, all the lights on the computer went out.

Charlie Gibbs

unread,
Jan 4, 2014, 9:45:02 PM1/4/14
to
In article <biqqu6...@mid.individual.net>, r124c...@comcast.net
I once used a Honeywell tape drive where if the temperature exceeded
a certain value the BOT photocell failed to work. If you then tried
to rewind a tape, it would pull the tape right off the takeup reel.
The end of the tape would shoot to the bottom of the vacuum column
with a loud THWUP, the drive would shut down, and you'd have to
re-thread the tape.

Morten Reistad

unread,
Jan 4, 2014, 9:57:33 PM1/4/14
to
In article <W8Zxu.169906$_n7....@fx20.iad>,
Servers indeed. Laptops make stellar servers, they even
have built-in UPSes.

If you have basic tier-3 (1 level of redundancy, everywhere) operations
with an expected uptime of 99.85-ish percent, or half a day of downtime
per year, then one senior linux/bsd sysadmin should be able to handle
around 150-200 hosted servers, staying current with fixes etc. Nearly
three times that number with unlimited access to "janitor-like" hands
and eyes for connecting KVM and install media.
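(A quick Python sanity check of that uptime figure -- the 99.85 percent
and half-a-day numbers are the ones given above:)

  availability = 0.9985
  downtime_hours = (1 - availability) * 24 * 365
  print(downtime_hours)   # about 13.1 hours/year, i.e. roughly half a day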

Go for tier4 and you divide that by around three, i.e. 1 sysadmin
per 50-ish servers.

This is for keeping OS and basic functions alive, like having LAMP
installed and functioning, replacing disks, powers etc as they burn
out.

From the looks of it above you were running a tier-2-like approach
(i.e. mirror likely-to-fail components like power and disk) and used
around a day or two of your time during that decade. But you didn't
stay current with patches, either.

-- mrr

William Pechter

unread,
Jan 4, 2014, 11:09:24 PM1/4/14
to
In article <02ba6634-9610-4805...@googlegroups.com>,
<hanc...@bbs.cpcn.com> wrote:
>On Friday, December 27, 2013 4:31:34 PM UTC-5, Anne & Lynn Wheeler wrote:
>
>> 4300s sold into the same mid-range market as vax machine ... and in
>> similar numbers for orders with small numbers ... big difference for
>> 4300 was the large corporate orders for multiple hundreds at a time
>> ... sort of the leading edge of the distributed computing tsunami. Later
>> in the 80s, large PCs and workstations moving up into the mid-range took
>> over that market ... decade of vax numbers sliced & diced by year,
>> model, US/non-US
>
>The hospital I worked at bought a 4300 to replace its S/360. Obviously
>the investment in programs was preserved.
>
>As an aside, eventually that hospital replaced its internal I.T. office
>with a service bureau specializing in hospital applications. (I just
>checked to see if that service bureau was still in existence, and Google
>said it was purchased by Siemens back in 2000 for $2.1 billion.)
>
>As to the mainframe marketplace, it seemed that the larger organizations
>were developing major complex (multi-million dollar) applications in the
>1980s and 1990s. This greatly expanded the functionality of existing
>on-line processes or converted batch to on-line. Naturally, these
>applications required bigger mainframes, which were purchased.
>
>
>
>

Good old SMS -- Shared Medical Systems IIRC.

A pretty good source of Field Service income to DEC via their PDP
11/70 systems installed in all the hospitals around me here...

Siemens also controlled their CAT scanners with RT11 on PDP11's
until they changed to newer hardware (Sun IIRC) like Xerox did as well.


Bill
--
--
Digital had it then. Don't you wish you could buy it now!
pechter-at-pechter.dyndns.org http://xkcd.com/705/

Shmuel Metz

unread,
Jan 4, 2014, 10:42:21 PM1/4/14
to
In <0p-dnQAI9qTvxVrP...@giganews.com>, on 01/03/2014
at 05:39 PM, Lon <lon.s...@comcast.net> said:

>If it will fit in a single room, it may not really qualify for
>mainframe status.

Then there has never been a mainframe.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to spam...@library.lspace.org

Shmuel Metz

unread,
Jan 4, 2014, 10:41:23 PM1/4/14
to
In <0p-dnQEI9qRnylrP...@giganews.com>, on 01/03/2014
at 05:37 PM, Lon <lon.s...@comcast.net> said:

>High end PC servers don't typically require a staff of a few dozen
>high priced systems analysts to keep them running.
>And they don't take 48 hours or longer to cold start.

K3wl! In over half a century of working with mainframes, I have yet to
see one of them take 48 hours to cold start.

jmfbahciv

unread,
Jan 5, 2014, 11:31:27 AM1/5/14
to
Charlie Gibbs wrote:
> In article <biqqu6...@mid.individual.net>, r124c...@comcast.net
> (osmium) writes:
>
>> "jmfbahciv" wrote:
>>
>>>> High end PC servers don't typically require a staff of a few dozen
>>>> high priced systems analysts to keep them running.
>>>> And they don't take 48 hours or longer to cold start.
>>>
>>> Which systems took that long to cold start?
>>
>> The Univac File Computer took 20 minutes for the photocells in the
>> tape units to heat up adequately. They were used for end of tape
>> detection - each end. They were lead oxide cells and often seemed
>> better at detecting heat than detecting light.
>
> I once used a Honeywell tape drive where if the temperature exceeded
> a certain value the BOT photocell failed to work. If you then tried
> to rewind a tape, it would pull the tape right off the takeup reel.
> The end of the tape would shoot to the bottom of the vacuum column
> with a loud THWUP, the drive would shut down, and you'd have to
> re-thread the tape.
>
That's why TW said that god never meant for there to be magtapes.

/BAH

jmfbahciv

unread,
Jan 5, 2014, 11:31:25 AM1/5/14
to
Was he still breathing 10 minutes later?

/BAH

Morten Reistad

unread,
Jan 5, 2014, 2:01:05 PM1/5/14
to
In article <PM0004EF3...@aca21ea5.ipt.aol.com>,
This is a DECism, no one else[*] had that hate relationship with tapes.

* Well, ND, Tandem, Prime, Sun, SGI all used tapes all the time without
much incident. Of the 9" variety I was especially fond of the Kennedy
OEM brand that Prime, Tandem and some ND and Sun computers used. They
were workhorses running years of 10x5x252 (hours/day,days/week,days/year)
without any incident at all. IBM and the seven dwarves also used tapes
all the time.

It is probably among the "corporate fetishes", although a negative one.

Like the "coax&radio" one of our national incumbent telco, Telenor.
Where everyone else would put in fiber they always put in some coax
and radio link if they can.

Or our co-flag air carrier SAS that would rather land in Copenhagen
than fly direct. (I gather several of the US air carriers have similar
traits).

-- mrr

Ahem A Rivet's Shot

unread,
Jan 6, 2014, 3:57:51 AM1/6/14
to
On Sat, 04 Jan 2014 09:17:11 +0000
Stephen Wolstenholme <st...@easynn.com> wrote:

> On Fri, 03 Jan 2014 17:37:52 -0700, Lon <lon.s...@comcast.net>
> wrote:
>
> >High end PC servers don't typically require a staff of a few dozen
> >high priced systems analysts to keep them running.
>
> The mainframes I worked on for years did not need anyone to keep them
> running apart from operators to load mag tapes and disc packs when the
> machine "asked". The systems analysts were long gone!

No monthly causative maintenance ?

Stephen Wolstenholme

unread,
Jan 6, 2014, 6:09:45 AM1/6/14
to
On Mon, 6 Jan 2014 08:57:51 +0000, Ahem A Rivet's Shot
<ste...@eircom.net> wrote:

>On Sat, 04 Jan 2014 09:17:11 +0000
>Stephen Wolstenholme <st...@easynn.com> wrote:
>
>> On Fri, 03 Jan 2014 17:37:52 -0700, Lon <lon.s...@comcast.net>
>> wrote:
>>
>> >High end PC servers don't typically require a staff of a few dozen
>> >high priced systems analysts to keep them running.
>>
>> The mainframes I worked on for years did not need anyone to keep them
>> running apart from operators to load mag tapes and disc packs when the
>> machine "asked". The systems analysts were long gone!
>
> No monthly causative maintenance ?

Not by systems analysts. The only maintenance was on the mainframes
and peripherals and that was done by the operators and engineers.

Walter Bushell

unread,
Jan 6, 2014, 7:14:33 AM1/6/14
to
In article <homopa-...@wair.reistad.name>,
Morten Reistad <fi...@last.name> wrote:

> Or our co-flag air carrier SAS that would rather land in Copenhagen
> than fly direct. (I gather several of the US air carriers have similar
> traits).

Perhaps they offer stopovers to boost tourism with pressure from the
government?

jmfbahciv

unread,
Jan 6, 2014, 8:34:37 AM1/6/14
to
Did any of those use the TU45 pertec drive?

>
> It is probably among the "corporate fetishes", although a negative one.


It is strange.

<snip>

/BAH

Anne & Lynn Wheeler

unread,
Jan 6, 2014, 9:32:54 AM1/6/14
to
Stephen Wolstenholme <eas...@googlemail.com> writes:
> Not by systems analysts. The only maintenance was on the mainframes
> and peripherals and that was done by the operators and engineers.

one of the big issues in getting cp67 to 7x24 was the required regular
preventive maintenance ... and another was that the machines were
leased, with charges based on the cpu meter.

an early problem providing off-shift 7x24 was that initially there was
very little use ... and the monthly lease costs (based on cpu meter running)
were recovered based on use. the off-shift costs exceeded the off-shift
use ... but to encourage 7x24 off-shift use the machine had to be left
up all the time.

cp67 software enhancements were to automatically boot and come up with
no human intervention ... along with other changes ... it allowed
off-shift operation with no operator present (reducing the costs of
leaving the system up and available off-shift).

The cpu meter issue was that it ran whenever the cpu was active and/or
there was any active channel program. One of the channel programming
tricks was to have a channel program that wouldn't run the cpu meter when
nothing was going on ... but would wake up for terminal connections and
in-coming characters on demand. A characteristic of the cpu meter was
that it would continue to run for 400ms after all activity had stopped
... before it finally stopped. At least well into the late 70s ... long after the
switch to sales ... with lease charges no longer based on cpu
meter readings ... the POK favorite son operating system (MVS) had a task
that would wake up every 400ms (guaranteeing that if the system was
available ... even if doing nothing ... the cpu meter would never stop).

Another issue with off-shift manual requirements was the increasing use
of "service virtual machines" (common current terminology "virtual
appliance") which required somebody to connect and startup. This is
recent post about the autolog command that I had originally developed
for automatic benchmarking ... but was quickly picked up for automatic
initiation of "service virtual machines" ... and included product
shipped to customers ... recent discussion of "service virtual machine"
and "autolog" command
http://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
past posts mentioning automated benchmarking
http://www.garlic.com/~lynn/submain.html#benchmark

one of the enhancements made for cp67/vm370 by (virtual machine) based
online service bureaus ... some past posts
http://www.garlic.com/~lynn/submain.html#timeshare

was loosely-coupled support and non-disruptive migration (which failed
to show up in the product). One of the 7x24 availability issues was the
requirement to take down systems for regularly scheduled preventive
maintenance. The virtual-machine based online service bureaus could
*drain* a machine with transparent, non-disruptive migration (in a
loosely-coupled complex) before taking a system offline for regularly
scheduled preventive maintenance.

other recent posts mentioning autolog command:
http://www.garlic.com/~lynn/2012d.html#24 Inventor of e-mail honored by Smithsonian
http://www.garlic.com/~lynn/2012d.html#38 Invention of Email
http://www.garlic.com/~lynn/2012k.html#17 a clock in it, was Re: Interesting News Article
http://www.garlic.com/~lynn/2013j.html#38 1969 networked word processor "Astrotype"

past posts in this thread:
http://www.garlic.com/~lynn/2013o.html#64 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#65 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#68 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#69 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#71 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#72 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#73 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#75 "Death of the mainframe"
http://www.garlic.com/~lynn/2013o.html#80 "Death of the mainframe"
http://www.garlic.com/~lynn/2014.html#10 "Death of the mainframe"

--
virtualization experience starting Jan1968, online at home since Mar1970

Scott Lurndal

unread,
Jan 6, 2014, 10:12:24 AM1/6/14
to
Of course not, they used drives engineered to be fast and reliable[*]. When
they did OEM drives, they were from STC, memorex, or later Fujitsu
(e.g. the BT3200 and 18-track 1/2" FIPS cartridges).

[*] all vendors had a crappy drive at one point in their existence. The
burroughs cluster drives in the 1960's, for example.

Jon Elson

unread,
Jan 7, 2014, 3:29:22 PM1/7/14
to
Scott Lurndal wrote:

> jmfbahciv <See....@aol.com> writes:

>>Did any of those use the TU45 pertec drive?
>
> Of course not, they used drives engineered to be fast and reliable[*].
> When they did OEM drives, they were from STC, memorex, or later Fujitsu
> (e.g. the BT3200 and 18-track 1/2" FIPS cartridges).
Is the TU-45 the Pertec T9000 vacuum column drive? I have a real Pertec
version of that drive, and it was a VERY reliable drive with excellent
tape handling. It has tachometer rollers on the tape right off the
reels, so it can gauge the tape speed going on/off the reels. The tape
rarely ever touches the vacuum sensor holes. I've never had any problem
with mine. We used it with a Datum controller on an 11/45 and later a
Vax 780, and then used it on my home Microvax with an MDB controller.
The T9000 is a 75 IPS start-stop drive.
There was a 45 IPS spring-arm drive that came before, was that the TM-11?
It definitely was not in the same class as a vacuum column drive.
We actually wore out the capstan on ours (also direct from Pertec, not
the DEC-labeled version.)

Jon

timca...@aol.com

unread,
Jan 13, 2014, 10:04:57 AM1/13/14
to
On Friday, December 27, 2013 4:31:34 PM UTC-5, Anne & Lynn Wheeler wrote:

> then in the last decade ... i86 chips move to risc cores ... largely
>
> eliminating difference in throughput between i86 chips and risc chips.
>
> Even the last two generations of mainframe cmos have introduced
>
> increasingly amount of features that have been part of risc for decades.
>

Sometimes I think Lynn has an automated reply system.

The x86 chips started to have a RISC-like core starting with the 486
(released 1989), and the Pentium Pro introduced most of the high
performance characteristics you talk about (branch prediction,
speculative execution, out of order execution, L1 & L2 caches, etc.)
and that was released in 1996 (well, late 95). So, you might want
to update this entry to two decades, not just one.

- Tim

Anne & Lynn Wheeler

unread,
Jan 13, 2014, 10:24:01 AM1/13/14
to
timca...@aol.com writes:
> Sometimes I think Lynn has a automated reply system.
>
> The x86 chips started to have a RISC like core starting with the 486
> (released 1989), and the Pentium Pro introduced most of the high
> performance characteristics you talk about (branch prediction,
> speculative execution, out of order execution, L1 & L2 caches, etc.)
> and the was released in 1996 (well, late 95). So, you might want
> to update this entry to two decades, not just one.

sorry, I shouldn't have kept saying a single decade ... make that two
decades
http://en.wikipedia.org/wiki/Comparison_of_instruction_set_architectures

AMD K5 ... (March 1996) Out-of-order execution, register renaming,
speculative execution based on 29K RISC
http://en.wikipedia.org/wiki/AMD_K5

P6/Pentium Pro ... (Nov. 1995) Speculative execution, Register
renaming, superscalar design with out-of-order execution
http://en.wikipedia.org/wiki/Pentium_Pro

Dan Espen

unread,
Jan 13, 2014, 11:11:46 AM1/13/14
to
timca...@aol.com writes:

> On Friday, December 27, 2013 4:31:34 PM UTC-5, Anne & Lynn Wheeler wrote:
>
>> then in the last decade ... i86 chips move to risc cores ... largely
>> eliminating difference in throughput between i86 chips and risc chips.
>> Even the last two generations of mainframe cmos have introduced
>> increasingly amount of features that have been part of risc for decades.
>
> Sometimes I think Lynn has a automated reply system.

I thought so too, but not too long ago I spotted a typo in his
periodic post about being hijacked into the disk division.
Perhaps his copy paste machine has some randomness built in
but I doubt he's gone that far.

But the URL blizzard is damn annoying.
When I see one of those, I usually don't read any farther.

--
Dan Espen

Walter Bushell

unread,
Jan 13, 2014, 12:36:16 PM1/13/14
to
In article <5747a411-58ad-42f9...@googlegroups.com>,
timca...@aol.com wrote:

> On Friday, December 27, 2013 4:31:34 PM UTC-5, Anne & Lynn Wheeler wrote:
>
> > then in the last decade ... i86 chips move to risc cores ... largely
> >
> > eliminating difference in throughput between i86 chips and risc chips.
> >
> > Even the last two generations of mainframe cmos have introduced
> >
> > increasingly amount of features that have been part of risc for decades.
> >
>
> Sometimes I think Lynn has a automated reply system.

Alternative theory Lynn is an automated reply system, perhaps among
other attributes.

Aren't we all automated systems? Biological, of course, and more
sophisticated than our computer driven system, but still run by
physics.

>
> The x86 chips started to have a RISC like core starting with the 486
> (released 1989), and the Pentium Pro introduced most of the high
> performance characteristics you talk about (branch prediction,
> speculative execution, out of order execution, L1 & L2 caches, etc.)
> and the was released in 1996 (well, late 95). So, you might want
> to update this entry to two decades, not just one.
>
> - Tim

Michael Black

unread,
Jan 13, 2014, 1:36:59 PM1/13/14
to
They do it on the web. You read a story at your local newspaper site, and
then they have a bunch of "maybe you'd like to see these..." and those too
lead to a similar selection of "try these...". You could spend forever just
jumping from link to link.

Sometimes there is relevance, sometimes there isn't. But even the
relevant ones seem to be a dumbing down. Let the newspaper people make
the connection deliberately, rather than have the software randomly find
articles that somehow connect.

A few years back there were a couple of associated stories connected with
the local police, about uniforms (and at the same time, the police were
wearing incomplete uniforms as a strike tactic). One blog that thought
they needed news would just pick random stories out of the local paper and
mention that story, not adding commentary or linking it to anything. I
made a comment about how this story connected with a past story and got a
reply "Dude, we don't have the time to keep track of stories". Well, then
don't post the stories you just cannibalized from the local paper. And
the original source, that everyone should be following rather than
"getting all the news they need from the local blog", should be seeing
that this story connects to that story, one of the great benefits of paid
journalism is that people are at the job for many years, so they should be
able to keep track of stories on their beat.

Michael

Dan Espen

unread,
Jan 13, 2014, 2:23:36 PM1/13/14
to
Michael Black <et...@ncf.ca> writes:

> On Mon, 13 Jan 2014, Dan Espen wrote:
>
>> timca...@aol.com writes:
>>
>>> On Friday, December 27, 2013 4:31:34 PM UTC-5, Anne & Lynn Wheeler wrote:
>>>
>>>> then in the last decade ... i86 chips move to risc cores ... largely
>>>> eliminating difference in throughput between i86 chips and risc chips.
>>>> Even the last two generations of mainframe cmos have introduced
>>>> increasingly amount of features that have been part of risc for decades.
>>>
>>> Sometimes I think Lynn has a automated reply system.
>>
>> I thought so too, but not too long ago I spotted a typo in his
>> periodic post about being hijacked into the disk division.
>> Perhaps his copy paste machine has some randomness built in
>> but I doubt he's gone that far.
>>
>> But the URL blizzard is damn annoying.
>> When I see one of those, I usually don't read any farther.
>>
> They do it on the web. You read a story at your local newspaper site,
> and then they have a bunch "maybe you'd like to see these..." and
> those too lead to similar selection of "try these...". You could
> spend forever just jumping from link to link.

They do it on the web with HTML.
You don't see the link, just some underlined words.

In Lynn's case, it's rare that one of the multitude of links
has ANY relevance to the topic at hand.

He's using some kind of algorithm to pull out links
and it doesn't really work. The few times I was foolish enough
to follow some of the links, they led nowhere relevant.

--
Dan Espen

JimP.

unread,
Jan 13, 2014, 4:10:47 PM1/13/14
to
On Mon, 13 Jan 2014 13:36:59 -0500, Michael Black <et...@ncf.ca>
wrote:
>On Mon, 13 Jan 2014, Dan Espen wrote:
>
>> timca...@aol.com writes:
>>
>>> On Friday, December 27, 2013 4:31:34 PM UTC-5, Anne & Lynn Wheeler wrote:
>>>
>>>> then in the last decade ... i86 chips move to risc cores ... largely
>>>> eliminating difference in throughput between i86 chips and risc chips.
>>>> Even the last two generations of mainframe cmos have introduced
>>>> increasingly amount of features that have been part of risc for decades.
>>>
>>> Sometimes I think Lynn has a automated reply system.
>>
>> I thought so too, but not too long ago I spotted a typo in his
>> periodic post about being hijacked into the disk division.
>> Perhaps his copy paste machine has some randomness built in
>> but I doubt he's gone that far.
>>
>> But the URL blizzard is damn annoying.
>> When I see one of those, I usually don't read any farther.
>>
>They do it on the web. You read a story at your local newspaper site, and
>then they have a bunch "maybe you'd like to see these..." and those too
>lead to similar selection of "try these...". You could spend forever just
>jumping from link to link.

A number of bloggers just post crap to their blogs... and pretend to
themselves it's relevant. Or maybe they actually think it's relevant to
something?

JimP
--
"Brushing aside the thorns so I can see the stars." from 'Ghost in the Shell'
http://www.linuxgazette.net/ Linux Gazette
http://travellergame.drivein-jim.net/ December, 2013

William Pechter

unread,
Jan 15, 2014, 12:17:03 PM1/15/14
to
In article <WJSdnX-3ufge_lHP...@giganews.com>,
Was it a split vacuum column where the tapes would lose vacuum very
quickly? They were ok for light use, but on PDP10's and Vaxes doing lots
of backups they were failure prone.

Jon Elson

unread,
Jan 28, 2014, 2:43:04 PM1/28/14
to
No, I seem to remember a drive that had a little triangular vacuum
buffer to the side of the main vacuum column, but this isn't it.
We did a lot of heavy tape use on the T9000 and it seemed quite
reliable.

Jon