Ferranti Atlas paging


Bill Findlay

Nov 28, 2017, 12:36:01 AM
For the avoidance of doubt, and in response to the misinformation
repeatedly posted by Lynn Wheeler in his ongoing tsunami of
self-congratulation, here are the facts, confirmed by original documents
found on the WWW:

a. The Atlas DID have page usage bits, exposed in the V store.

b. The Atlas DID allow for multiprogramming, using a lock bit in
the associative memory to exclude a program from any page frame
it had no right to access. Setting these bits on a context switch
ensured a trap if a program touched a page allocated to another,
enabling the Supervisor to swap it out and reallocate it (see the
sketch after this list).

c. The Atlas DID NOT use LRU page replacement.

d. The Atlas replacement algorithm was simulated in runs of matrix handling
codes, in competition with hand optimised algorithms, and out-performed them.
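
To make (a) and (b) concrete, here is a minimal sketch, with names and
layout invented (the real V-store layout is in the original documents), of
how use bits and lock bits in an associative page store behave:

/* Hypothetical sketch (not from the Atlas documents; names and layout are
 * invented) of use bits and lock bits in an associative page store,
 * illustrating points (a) and (b).  Real hardware matches all registers
 * in parallel; the loop is only illustrative. */
#include <stdbool.h>

#define NFRAMES 32   /* Atlas core: 16K words / 512-word blocks */

struct par {                 /* one page address register per core frame */
    unsigned block;          /* virtual block number held in this frame  */
    bool used;               /* usage bit, readable via the V store      */
    bool locked;             /* frame not accessible to current program  */
};

static struct par frames[NFRAMES];

/* Associative lookup: frame index on success, -1 to signal a trap. */
int translate(unsigned block)
{
    for (int f = 0; f < NFRAMES; f++) {
        if (frames[f].block == block) {
            if (frames[f].locked)
                return -1;           /* trap: frame belongs to another program */
            frames[f].used = true;   /* record the reference */
            return f;
        }
    }
    return -1;                       /* trap: block not in core */
}

/* On a context switch, lock every frame the incoming program may not touch
 * (note: cost is proportional to the number of frames, on every switch). */
void context_switch(const bool may_access[NFRAMES])
{
    for (int f = 0; f < NFRAMES; f++)
        frames[f].locked = !may_access[f];
}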

--
Bill Findlay

maus

Nov 28, 2017, 3:28:02 AM
I am unaware of the issues involved, but I would support Lynn in
any controversy, based on his input on the issues I do understand.



--
greymaus.ireland.ie
Just_Another_Grumpy_Old_Man

Ahem A Rivet's Shot

Nov 28, 2017, 6:30:33 AM
On 28 Nov 2017 05:35:59 GMT
Bill Findlay <findl...@blueyonder.co.uk> wrote:

> For the avoidance of doubt, and in response to the misinformation
> repeatedly posted
> by Lynn Wheeler in his ongoing tsunami of self-congratulation, here are
> the facts,
> confirmed by original documents found on the WWW:

Or in short - the Atlas worked well enough to be useful and would
almost certainly have worked better with more sophisticated paging
algorithms. An achievement to be proud of at a time when nobody really knew
what made a good paging algorithm.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

Bill Findlay

Nov 28, 2017, 8:36:21 AM
Ahem A Rivet's Shot <ste...@eircom.net> wrote:
> On 28 Nov 2017 05:35:59 GMT
> Bill Findlay <findl...@blueyonder.co.uk> wrote:
>
>> For the avoidance of doubt, and in response to the misinformation
>> repeatedly posted
>> by Lynn Wheeler in his ongoing tsunami of self-congratulation, here are
>> the facts,
>> confirmed by original documents found on the WWW:
>
> Or in short - the Atlas worked well enough to be useful and would
> almost certainly have worked better with more sophisticated paging
> algorithms. An achievement to be proud of at a time when nobody really knew
> what made a good paging algorithm.
>

Exactly.

--
Bill Findlay

Dan Espen

Nov 28, 2017, 8:58:38 AM
Really?

I find his canned responses biased.

I also find his link-laden posts hard to read.
Often his posts are more than half links.
Click on the links and you'll find they lead to
irrelevant pages. Clearly Lynn is trying to overload
search engines with links to his own stuff. Very annoying.

For almost all my career I was exposed to IBM's very poorly
designed mainframe software. Lynn thinks he's above all that
but he was there writing that crap.

--
Dan Espen

Peter Flass

Nov 28, 2017, 2:07:17 PM
Dan Espen <dan1...@gmail.com> wrote:
> maus <ma...@mail.com> writes:
>
>> On 2017-11-28, Bill Findlay <findl...@blueyonder.co.uk> wrote:
>>> For the avoidance of doubt, and in response to the misinformation
>>> repeatedly posted
>>> by Lynn Wheeler in his ongoing tsunami of self-congratulation, here are the
>>> facts,
>>> confirmed by original documents found on the WWW:
>>>
>>> a. The Atlas DID have page usage bits, exposed in the V store.
>>>
>>> b. The Atlas DID allow for multiprogramming, using a lock bit in
>>> the associative memory to exclude a program from any page frame
>>> it had no right to access. Setting these bits on a context switch ensured
>>> a trap if a program transgressed a page allocated to another.
>>> This enabled the Supervisor to swap it out and reallocate it.
>>>
>>> c. The Atlas DID NOT use LRU page replacement.
>>>
>>> d. The Atlas replacement algorithm was simulated in runs of matrix handling
>>> codes in competition with hand optimised algorithms and out-performed them.
>>
>> I am unaware of the issues involved, but I would support Lynn in
>> any controversy, from his inputs in any issue I do understand.
>
> Really?
>
> I find his canned responses biased.

Obviously we're all biased based on our experience and inclinations. Just
like watching TV news, the reader has to account for the bias.

>
> I also find his link laden posts hard to read.
> Often his posts are more than half links.
> Click on the links and you'll find they lead to
> irrelevant pages. Clearly Lynn is trying to overload
> search engines with links to his own stuff. Very annoying.

Yes.

>
> For almost all my career I was exposed to IBM's very poorly
> designed mainframe software. Lynn thinks he's above all that
> but he was there writing that crap.
>

IBM software was all over the map. VM (along with HASP) was clean and
well-written, much other stuff was junk. Like everything else, it all
depends.

To the point, I didn't know much about the Atlas. Doing a quick search I
see it was a pretty big machine for its time - 96K core, writable control
store, and drum. It might be worth reading more about.


--
Pete

Jorgen Grahn

Nov 28, 2017, 2:32:17 PM
On Tue, 2017-11-28, Bill Findlay wrote:
> For the avoidance of doubt, and in response to the misinformation
> repeatedly posted by Lynn Wheeler in his ongoing tsunami of
> self-congratulation, here are the facts, confirmed by original
> documents found on the WWW:

When you're relying on documents which you claim are freely available
on the web, it looks good if you also show the URL.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Scott Lurndal

Nov 28, 2017, 2:39:21 PM
Peter Flass <peter...@yahoo.com> writes:
>Dan Espen <dan1...@gmail.com> wrote:

>> For almost all my career I was exposed to IBM's very poorly
>> designed mainframe software. Lynn thinks he's above all that
>> but he was there writing that crap.
>>
>
>IBM software was all over the map. VM (along with HASP) was clean and
>well-written, much other stuff was junk. Like everything else, it all
>depends.

When compared with the competition (e.g. Burroughs' various
flavors of MCP), VM (and MVS et alia) were very user-
and programmer-unfriendly. CHS? SYSGEN? Really?

Anne & Lynn Wheeler

Nov 28, 2017, 3:53:22 PM
sc...@slp53.sl.home (Scott Lurndal) writes:
> When compared with the competition to VM (e.g. Burroughs various
> flavors of MCP), VM (and MVS et alia) were very user
> and programmer unfriendly. CHS? SYSGEN? Really?

note that most of VM ... CP40/CMS and then CP67/CMS (when the 360/67
became available) ... came out of MIT CTSS. Some of the CTSS people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System

had gone to the 5th flr to do multics.
https://en.wikipedia.org/wiki/Multics

others went to the science center on the 4th flr and did virtual
machines, cp40/cms, cp67/cms, internal network, online applications,
performance tools (some of which evolved into capacity planning), invented
GML, etc.
https://en.wikipedia.org/wiki/CP/CMS
past posts mentioning science center, 4th flr, 545 tech sq.
http://www.garlic.com/~lynn/subtopic.html#545tech

All of this had little relationship with the product groups doing the
"mainline" batch systems.

When I was at the science center, I had a little rivalry with the 5th flr.
The number of IBM customer batch systems was significantly larger than
the number of virtual machine systems ... however, virtual machine systems
were growing enormously inside IBM for all development (including
development for all the batch systems). Just comparing the number of
internal IBM virtual machine systems against the total number of Multics
systems that ever existed ... still wasn't fair.

However, I had hobby of producing and supporting enhanced operating
systems for internal datacenters ... and for some period, the number of
my enhanced systems for internal datacenters was about 50% more than the
total number of MULTICS systems
http://multicians.org/sites.html

CP67's & VM370's biggest "SYSGEN" issue was that they were shipped
supporting full source maintenance ... and customers were given the option
of building the system from full source (which a lot did) ... rather than
using the distributed binaries. The full source maintenance option also
resulted in a large number of customers doing their own source
modifications. At one point there was a claim that the (user contribution)
Univ. of Waterloo library had more lines of code than the official
distributed system (although there was some amount of duplication of
features in the Waterloo library).

trivia: GML was invented at the science center in 1969. Then GML tag
formatting was added to CMS SCRIPT (which was a re-implementation of CTSS
RUNOFF that used period/dot formatting). A decade later GML morphs into
the ISO standard SGML ... and then after another decade it morphs into HTML
at CERN. CERN was using the Waterloo implementation of CMS SCRIPT for
the SGML that morphs into HTML. some past posts
http://www.garlic.com/~lynn/submain.html#sgml

Note that with the decision to redo CP67 as VM370, a group of people
split off from the science center and initially moved to the 3rd flr.
Then as the VM370 group outgrew the 3rd flr, they moved out to the
old SBS bldg at Burlington mall. Then with the failure of the Future
System project ... past posts
http://www.garlic.com/~lynn/submain.html#futuresys

there was mad rush to revive 370 efforts and 3033 and 3081 (& 370/xa)
were kicked off in parallel. detailed description
http://www.jfsowa.com/computer/memo125.htm

the head of POK then managed to convince corporate to kill the vm370
product and move all the VM370 Burlington mall people to POK to work on
MVS/XA (supposedly he otherwise wouldn't be able to ship MVS/XA on schedule
7-8yrs later). The plan was to not inform the Burlington Mall people
until just before the shutdown ... however, the information managed to
leak and lots of people managed to escape. One of the jokes is that the
head of POK was one of the biggest contributors to (DEC) VMS, because of
all the people that escaped to DEC rather than move to POK.

Eventually Endicott (138/148, 4331/4341, etc) managed to save the VM370
product mission ... but Endicott had to reconstitute a development
group from scratch ... which significantly impacted code quality for a
time. Some of this shows up in VMSHARE conferencing, archives here
http://vm.marist.edu/~vmshare

i.e. in the late 60s and early 70s a number of commercial
service bureaus formed to offer CP67-based (and then VM370) online
offerings ... some past posts
http://www.garlic.com/~lynn/submain.html#online

NSCC was a spinoff from the science center, IDC mostly a spinoff from
MIT Lincoln Labs. Both NSCC & IDC quickly moved up the value stream to
offering online financial information to Wall Street. Then there
was TYMSHARE on the west coast.
https://en.wikipedia.org/wiki/Tymshare

it was TYMSHARE that started offering their VM370/CMS based online
computer conferencing system to the (user group) SHARE as VMSHARE in
AUG1976.

other recent discussion was of the 4th generation languages developed
for deployment on CP67 (& VM370) in the 60s & 70s
http://www.garlic.com/~lynn/2017j.html#29 Db2! was: NODE.js for z/OS
http://www.garlic.com/~lynn/2017j.html#39 The complete history of the IBM PC, part two: The DOS empire strikes; The real victor was Microsoft, which built an empire on the back of a shadily acquired MS-DOS
wiki
https://en.wikipedia.org/wiki/Ramis_software
https://en.wikipedia.org/wiki/Nomad_software
https://en.wikipedia.org/wiki/Nomad_software#Development:_Late_1970s
https://en.wikipedia.org/wiki/FOCUS
also original SQL/Relational ... some past posts
http://www.garlic.com/~lynn/submain.html#systemr

Boeing also included CP67 (& later VM370) online offerings in its BCS
spinoff. I was one of the first half dozen people brought into Boeing
(when I was still an undergraduate) to form Boeing Computer Services
(consolidating all dataprocessing into an independent business unit to
monetize its investment, including offering dataprocessing to non-Boeing
entities). This was out of corporate hdqtrs, which had a single 360/30 at
the time for doing payroll. The politics with the Renton datacenter were
interesting ... since they had something like $200M-$300M in IBM 360
mainframes (in 60s dollars) ... which I thought was possibly the largest
in the world. It was then being replicated at the new 747 plant up at
Paine Field ... for the disaster scenario where Mt. Rainier warms up and
a mudslide takes out Renton.

Later I would sponsor Boyd's briefings at IBM ... and one of Boyd's
biographies mentions he did a stint at spook base (about the same time I
was at Boeing), which was a $2.5B "windfall" for IBM (ten times
Renton). The ref has gone 404, but lives on at the wayback machine:
http://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html

Boyd was very vocal that the sensors on the trail wouldn't work ... so
putting him in charge of spook base was possibly punishment.

--
virtualization experience starting Jan1968, online at home since Mar1970

hanc...@bbs.cpcn.com

Nov 28, 2017, 4:24:28 PM
On Tuesday, November 28, 2017 at 12:36:01 AM UTC-5, Bill Findlay wrote:

> by Lynn Wheeler in his ongoing tsunami of self-congratulation, here are the

Most posters in this newsgroup avoid personal attacks.

Mike Causer

Nov 28, 2017, 4:30:28 PM
On 28 Nov 2017 19:32:15 GMT
Jorgen Grahn <grahn...@snipabacken.se> wrote:

> When you're relying on documents which you claim are freely available
> on the web, it looks good if you also show the URL.

Here is a start point:
http://elearn.cs.man.ac.uk/~atlas/


For 2 years I had to walk past a heavily used Atlas to get to the
Prime 300 I was working on, but never touched anything on the Atlas
itself. It was decommissioned while I was there and I have old
friends I could ask about it, but....


Mike

hanc...@bbs.cpcn.com

Nov 28, 2017, 4:32:33 PM
On Tuesday, November 28, 2017 at 8:58:38 AM UTC-5, Dan Espen wrote:

[snip]

> For almost all my career I was exposed to IBM's very poorly
> designed mainframe software. Lynn thinks he's above all that
> but he was there writing that crap.

Over my career I used different brands of computers. In my humble
opinion, IBM was the market leader because, overall, they earned
that spot.

Our Z mainframe continues to run 30-year-old programs unchanged.

As reported many times, I used a Univac. Their software was
inferior to comparable IBM mainframes'. While their people
were personally very nice and good to work with, their support
was much weaker than IBM's.

I also used a Burroughs and there, too, the support was weak.

We did use an HP-2000 timeshared BASIC system that worked very
well (at least it did until some kids found a hidden bug that
caused it to crash).




Anne & Lynn Wheeler

Nov 28, 2017, 5:48:55 PM
Mike Causer <m.r.c...@gogglemail.com> writes:
> Here is a start point:
> http://elearn.cs.man.ac.uk/~atlas/

google search doesn't turn up anything there on "one level store", etc
... these are the references that I've been using; I've found the most
information here:
http://www.chilton-computing.org.uk/acl/technology/atlas/overview.htm
http://www.chilton-computing.org.uk/acl/technology/atlas/p002.htm
http://www.chilton-computing.org.uk/acl/pdfs/atlas-1-level.pdf
http://www.chilton-computing.org.uk/acl/technology/atlas/p019.htm

says 512-word "blocks" ... 16k words of core ... 32 blocks
(pages) ... compared to the cambridge 768kbyte 360/67 (192 4kbyte pages)
... recent reference (compared to the grenoble 360/67 with 256 pages, or
the cp40 360/40 with 64 pages)
http://www.garlic.com/~lynn/2017j.html#84 VS/Repack
and
http://www.garlic.com/~lynn/cp40seas1982.txt
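
A quick check of those block/page counts (my arithmetic, using only the
figures quoted above):

/* Quick check of the block/page counts quoted above (my arithmetic). */
#include <stdio.h>

int main(void)
{
    int block_words = 512;                                       /* Atlas block size */
    printf("atlas core: %d blocks\n", 16 * 1024 / block_words);  /* 32  */
    printf("atlas drum: %d blocks\n", 96 * 1024 / block_words);  /* 192 */
    printf("cambridge 360/67: %d pages\n", 768 * 1024 / 4096);   /* 192 */
    return 0;
}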

each (real) page position has a P.A.R. (page address register) and does an
associative lookup on the virtual address (akin to the 360/40
implementation)

and 96k words (192 blocks-pages) of drum store split across four drums

Compare the 360/67's 2301 drum ... which held 4mbytes. It was similar to
the 2303 drum but read/wrote four heads in parallel for a data rate of
1.2mbytes/sec. There could be multiple 2301 drums per controller. On CP/67,
"overflow" paging (more than can be accommodated on the drums) would be
done to 2314 disks. A heavily loaded CSC CP/67 with 75-80 users might have
10k-20k or more allocated pages on a combination of drum and disks. One of
the things I did at the science center was page migration for fixed head
drums ... moving low-use pages off the fixed head drums to moveable arm
disks.

I don't find any process id ... like that used in the 360/40
implementation ... which greatly reduces overhead in context switching
between concurrent processes, as well as supporting multiprocessor
operation (different processors could share the same real storage while
concurrently running different processes). Using the lock bit hack to do a
pseudo invalidate when switching tasks drives context switch cost up
nonlinearly (overhead increases in proportion to the number of pages on
every context switch). detailed 360/67 functional characteristics
http://bitsavers.org/pdf/ibm/360/funcChar/GA27-2719-2_360-67_funcChar.pdf

The 360/67 and smaller 370s had a cached map ... typically 8 entries ...
for fast virtual->real lookup and translation. If an entry wasn't in the
cache, the hardware would look it up in the (active process specific)
segment/page tables and replace a current entry. A process/task switch
involved clearing the cache. Larger 370s like the 168 went to a 128-entry
table (32 sets, indexed, 4-way associative) for virtual->real page mapping,
with entries tagged by STO (process specific segment table origin) ... so
the table lookaside buffer didn't have to be reset on a process/context
switch ... i.e. each look-aside buffer entry had a process specific tag,
avoiding the reset, in a similar way to the 360/40's process id on each
associative entry for virtual->real address mapping.
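
A hedged sketch of that 168-style STO-tagged lookup (field widths and the
set index are simplified, and the names are mine, not IBM's):

/* Hedged sketch of a 370/168-style table lookaside buffer as described
 * above: 128 entries as 32 sets x 4 ways, each entry tagged with the STO
 * (segment table origin) of its owning address space, so no purge is
 * needed on a context switch. */
#include <stdint.h>

#define SETS 32
#define WAYS 4

struct tlb_entry {
    uint32_t sto;     /* segment table origin: identifies the address space */
    uint32_t vpage;   /* virtual page number */
    uint32_t rframe;  /* real frame number */
    int valid;
};

static struct tlb_entry tlb[SETS][WAYS];

/* Look up a virtual page for the address space identified by 'sto'.
 * Returns the real frame, or -1 on a miss (walk the tables and reload). */
int64_t tlb_lookup(uint32_t sto, uint32_t vpage)
{
    struct tlb_entry *set = tlb[vpage % SETS];   /* simple set index */
    for (int w = 0; w < WAYS; w++)
        if (set[w].valid && set[w].sto == sto && set[w].vpage == vpage)
            return set[w].rframe;
    return -1;   /* miss; entries from other STOs simply fail the tag match */
}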

posts in this and related threads:
http://www.garlic.com/~lynn/2017j.html#66 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#71 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#72 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#76 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#78 thrashing, was Re: A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#79 thrashing, was Re: A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#81 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#82 A Computer That Never Was: the IBM 7095
http://www.garlic.com/~lynn/2017j.html#83 Ferranti Atlas paging

trivia: for a time, before VM370 multiprocessor support, they had a hack
for pages concurrently shared across multiple address spaces ... that I
fought against because it went horrible in a multiprocessor environment.
Moving to multiprocessor, they had to have shared pages that were
processor specific. If a process was redispatched on a different processor
... all its virtual memory tables had to be scanned and the shared page
entries changed to processor specific entries ... overhead that
increased non-linearly as the number of shared pages increased.

Charles Richmond

Nov 28, 2017, 5:57:25 PM
ISTM that the Ferranti Pegasus should have been called the Atlas, and
the Ferranti Atlas should have been called the Pegasus... because it
could *fly* through instructions!!! :-)


--
numerist at aquaporin4 dot com

Peter Flass

Nov 28, 2017, 6:35:30 PM
I was talking about the quality of the code. From what I saw of the
5500 MCP, you're right. OTOH, having to do a SYSGEN has nothing to do with
user-friendliness, since it's done once or twice a year by the sysprog and
the user never sees it. In general (IMO) IBM software favored flexibility
and multiplicity of options over simplicity, while Burroughs went the other
way.

--
Pete

Peter Flass

Nov 28, 2017, 6:35:30 PM
<hanc...@bbs.cpcn.com> wrote:
> On Tuesday, November 28, 2017 at 8:58:38 AM UTC-5, Dan Espen wrote:
>
> [snip]
>
>> For almost all my career I was exposed to IBM's very poorly
>> designed mainframe software. Lynn thinks he's above all that
>> but he was there writing that crap.
>
> Over my career I used different brands of computers. In my humble
> opinion, IBM was the market leader because, overall, they earned
> that spot.
>
> Our Z mainframe continues to run 30-year-old programs unchanged.
>
> As reported many times, I used a Univac. Their software was
> inferior to comparable IBM mainframes. While their people
> were personally very nice and good to work with, their support
> was much weaker than IBM.

I was always less than impressed by UNIVAC; they seemed (to me) to be about
ten years behind, although part of this might be their large government
user base. The 1100 used three different character sets, really? (ASCII,
FIELDATA, and something else I forget just now). Their command language was
all positional and featured lots of commas, much worse than OS JCL (or even
DOS).

>
> I also used a Burroughs and there, too, the support was weak.

Hardware not as good, either. Nobody had support to compare to IBM.

>
> We did use an HP-2000 timeshared basic system that worked very
> well (at least it did until some kids found a hidden bug that
> caused it crash.)
>

Never used HP.

--
Pete

Anne & Lynn Wheeler

Nov 28, 2017, 9:25:43 PM
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> trivia: for a time, before VM370 multiprocessor support, they had a
> hack for pages concurrently shared across multiple address spaces
> ... that I fought against because it went horrible in a multiprocessor
> environment. Moving to multiprocessor, they had to have shared pages
> that were processor specific. If a process was redispatched on a
> different processor ... all its virtual memory tables had to be scanned
> and the shared page entries changed to processor specific entries ...
> overhead that increased non-linearly as the number of shared pages
> increased.

re:
http://www.garlic.com/~lynn/2017j.html#85 Ferranti Atlas paging

the "problem" was because original the 370 virtual memory architecture
had a lot more features ... including being able to do r/o shared
segments ... either 1mbyte segments or 64kbyte segments (16 4kbyte pages
or 32 2kbyte pages). for 370 shared pages, CMS was reorgnized into
64kbyte chunks for r/o shared code and data.
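
For illustration only (macro names are mine, not IBM's), here is how a
24-bit 370 virtual address splits under the 64kbyte-segment / 4kbyte-page
option just described:

/* Illustration (macro names are mine, not IBM's) of how a 24-bit 370
 * virtual address splits under the 64kbyte-segment / 4kbyte-page option:
 * 8-bit segment index, 4-bit page index (16 pages/segment), 12-bit offset. */
#include <stdint.h>
#include <stdio.h>

#define SEG(a)    (((a) >> 16) & 0xFF)   /* 8-bit segment index */
#define PAGE(a)   (((a) >> 12) & 0x0F)   /* 4-bit page index    */
#define OFFSET(a) ((a) & 0xFFF)          /* 12-bit byte offset  */

int main(void)
{
    uint32_t va = 0x123456;              /* sample 24-bit address */
    printf("segment %u, page %u, offset 0x%03X\n",
           SEG(va), PAGE(va), OFFSET(va));   /* segment 18, page 3, offset 0x456 */
    return 0;
}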

virtual memory had to be retrofitted to the existing non-virtual-memory
370s. this was very straightforward for the lower end 370s ... but
retrofitting virtual memory to the 370/165 ran into lots of problems.
Eventually the 370/165 people said that the virtual memory announcement
would have to be delayed by 6 months because of the problems ... but that
they could gain that back if a lot of features were dropped ... including
r/o shared segments. The OS/360 MVT->virtual memory VS2 (1st SVS and then
MVS) group said they had no need for any of those features ... and so they
got dropped ... the justification for moving all 370s to virtual
memory was based on the poor application storage management in MVT
... referenced here
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
in this thread post
http://www.garlic.com/~lynn/2017j.html#84 VS/Repack

The full 370 virtual memory architecture was supported on
internal 370/145s, and a lot of software had already been developed for the
full architecture ... including vm370/cms shared segments. Then when the
additional features were dropped (because of the 370/165), all software
using those features had to be redone ... and CMS came up with a
real kludge for shared pages.

past posts mention the 370/165 being responsible for dropping the full 370
virtual memory architecture (and the software & other models already
supporting the full architecture, which had to be reworked).

Dan Espen

Nov 29, 2017, 7:34:39 AM
Peter Flass <peter...@yahoo.com> writes:

> Dan Espen <dan1...@gmail.com> wrote:
>> For almost all my career I was exposed to IBM's very poorly
>> designed mainframe software. Lynn thinks he's above all that
>> but he was there writing that crap.
>>
>
> IBM software was all over the map. VM (along with HASP) was clean and
> well-written, much other stuff was junk. Like everything else, it all
> depends.

My exposure was primarily to DOS, DOS/VS, MVS. Lots of crap there.
The little while I spent as a VM user wasn't that bad.
The System/3, System/34 stuff was pretty nice.
So sometimes IBM could run a reasonable software effort, but
I think the bureaucracy tended to mess things up.

--
Dan Espen

Dan Espen

Nov 29, 2017, 7:43:53 AM
hanc...@bbs.cpcn.com writes:

> On Tuesday, November 28, 2017 at 8:58:38 AM UTC-5, Dan Espen wrote:
>
> [snip]
>
>> For almost all my career I was exposed to IBM's very poorly
>> designed mainframe software. Lynn thinks he's above all that
>> but he was there writing that crap.
>
> Over my career I used different brands of computers. In my humble
> opinion, IBM was the market leader because, overall, they earned
> that spot.

No doubt their 1403 printer could really print.
That does not make MVS a work of elegance.

> Our Z mainframe continues to run 30-year-old programs unchanged.

Hmm, I think I could make that happen by doing absolutely nothing
to my software.

--
Dan Espen

Peter Flass

Nov 29, 2017, 7:56:22 AM
Except that there have been a lot of changes: virtual memory, 31-bit
addressing, now 64-bit, IEEE FP and now DFP, complete turnover of
peripheral lines several times, ESCON and FICON channels, Sysplex, etc.

--
Pete

jmfbahciv

Nov 29, 2017, 8:31:21 AM
Those are hardware. A recompilation of HLLs would provide seamless
computing services. The other thing which has changed is data
formats. Those can be a bitch to adjust to, especially w.r.t.
record terminators.

/BAH

Joe Pfeiffer

Nov 29, 2017, 9:54:37 AM
Yes, but what IBM has gotten really good at is running geriatric
code on new hardware *without* the need to recompile.

Dan Espen

Nov 29, 2017, 10:18:52 AM
Ahh, the subject is software.
Still not impressed.

Half-assed job on the long-delayed JCL improvements (IF/SET).
System symbols are still subject to all kinds of restrictions,
and you still can't submit a shell script without a JOB card and
a dozen DD statements. Recently on IBM-MAIN they were still discussing
a DISP enhancement that would let you create OR replace a dataset without
resorting to jumping out of JCL.

Years ago I wrote my own DOS JCL replacement.
One very simple modification I made was FORMAT=DUMPx:
any existing output file could be formatted and dumped to print
simply by adding a FORMAT parameter.

--
Dan Espen

Scott Lurndal

Nov 29, 2017, 10:56:17 AM
Peter Flass <peter...@yahoo.com> writes:
> In general (IMO) IBM software favored flexibility
>and multiplicity of options over simplicity, while Burroughs went the other
>way.

I'm not sure I'd characterize it that way. We had plenty of
options, but there was no need to rebuild the MCP to enable
or disable them. They'd be loaded on demand once the
option was enabled via the operator 'SO' (Set Option) command.

For example, until the operator (or coldstart configuration
deck) specified 'SO DCP', none of the data communications
code was loaded into memory. The SO command (and corresponding
RO command) could be issued at any time. RO would unload
the code module.

Likewise, new peripherals could be added to the MCP and old
peripherals removed with operator commands without a halt/load
(reboot).

Scott Lurndal

Nov 29, 2017, 10:57:54 AM
hanc...@bbs.cpcn.com writes:

>
>I also used a Burroughs and there, too, the support was weak.

Perhaps your local field office was weak. Our customers
generally felt differently than you about Burroughs field engineering.

Scott Lurndal

Nov 29, 2017, 10:58:35 AM
hanc...@bbs.cpcn.com writes:
>On Tuesday, November 28, 2017 at 8:58:38 AM UTC-5, Dan Espen wrote:
>
>[snip]
>
>> For almost all my career I was exposed to IBM's very poorly
>> designed mainframe software. Lynn thinks he's above all that
>> but he was there writing that crap.
>
>Over my career I used different brands of computers. In my humble
>opinion, IBM was the market leader because, overall, they earned
>that spot.
>
>Our Z mainframe continues to run 30-year-old programs unchanged.

So do the Univac machines and Burroughs machines.

Anne & Lynn Wheeler

Nov 29, 2017, 11:50:34 AM
Peter Flass <peter...@yahoo.com> writes:
> Except that there have been a lot of changes: virtual memory, 31-bit
> addressing, now 64-bit, IEEE FP and now DFP, complete turnover of
> peripheral lines several times, ESCON and FICON channels, Sysplex, etc.

In 1980, I was con'ed into doing channel extender support for STL, which
was looking at moving 300 people from the IMS DBMS group out of STL to an
offsite bldg. They had tried "remote" 3270 but found the human factors
totally intolerable (considering they were used to vm370/cms with local
channel attached 3270 controllers). Then the hardware vendor tried to
get IBM to let them release my support. There was a group in POK that
was playing with some serial stuff; that got veto'ed because they
were afraid that if it was in the market, it would make it harder to
justify releasing what they were doing. past channel extender
posts
http://www.garlic.com/~lynn/submisc.html#channel.extender

then in 1988, I was asked to help LLNL standardize some serial stuff
that they were working with, which quickly becomes the fibre channel
standard (including some stuff I had done back in 1980 for I/O program
latency compensation). finally in 1990, the POK people get their stuff
released as ESCON with the ES9000, when it was already obsolete.

Then some of the POK people start playing with the fibre channel standard
and define a heavyweight protocol that drastically reduces the
throughput of the native fibre channel standard ... which is eventually
released as FICON.

The most recent published peak I/O mainframe benchmark I can find is for
the z196, which gets 2M IOPS using 104 FICON (running over 104 fibre
channel standard links). About the same time as the z196 peak I/O
benchmark, there was a fibre channel announced for an E5-2600 blade
claiming over a million IOPS (two native fibre channel links getting
higher throughput than 104 FICON).
past FICON posts
http://www.garlic.com/~lynn/submisc.html#ficon

IBM has announced TCW/zHPF, which is a little bit like what I did in 1980
for channel extender ... but says it's only a 30% improvement over standard
FICON (it would possibly achieve the peak 2M IOPS using 80 FICON instead of
104 ... compared to two native fibre channel standard links).
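
A back-of-envelope check of the per-channel numbers (the figures are the
ones quoted above; the comparison arithmetic is mine):

/* Back-of-envelope check of the per-channel numbers quoted above; the
 * figures are the ones cited in this post, the comparison is mine. */
#include <stdio.h>

int main(void)
{
    double z196_iops = 2.0e6;   /* peak z196 benchmark, 104 FICON channels */
    double fc_iops   = 1.0e6;   /* E5-2600 blade claim, 2 native FC links  */
    double per_ficon = z196_iops / 104;   /* ~19,231 IOPS per FICON        */
    double per_fc    = fc_iops / 2;       /* 500,000 IOPS per native link  */

    printf("per FICON: %.0f IOPS, per native FC: %.0f IOPS (~%.0fx)\n",
           per_ficon, per_fc, per_fc / per_ficon);   /* ~26x */
    return 0;
}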

also back in late 70s & early 80s, they would let me play disk engineer
in bldgs 14&15 (across street from san jose research). IBM had come out
with 3370 fixed block architecture for mid-range ... and was working on
3380 for high-end ... past posts getting to play disk engineer
http://www.garlic.com/~lynn/subtopic.html#disk

In the early 80s, mid-range 4341s were selling in 100-1000 unit orders at
a time to large corporations for placing out in departmental areas
(conference rooms, stock rooms, etc) ... sort of the leading edge of the
coming distributed computing tsunami. MVS wanted to play in this market,
but it only supported CKD DASD ... which meant the high-end 3380. The only
disk for the non-datacenter mid-range was the 3370 FBA. Eventually they
came out with the 3375 CKD, which was simulated on 3370 FBA. It didn't
really help MVS a lot ... companies with 100-1000 distributed vm/4341s
were looking at how many systems per support person ... while MVS was
still stuck at how many support persons per system. zOS (MVS) still
requires CKD ... but no real CKD disks have been made for decades, all
being simulated on industry standard fixed-block disks.

I had offered MVS fixed-block support ... but they said that even if I
provided them with thoroughly tested and documented support, I needed an
incremental $26M revenue business plan ($200M-$300M in incremental disk
sales) to cover training and documentation. They claimed that since they
were selling every disk as fast as it could be made, any switch from CKD
to FBA would just be the same amount of disk and no incremental
revenue. I wasn't allowed to use things like lifetime cost savings,
etc. Past posts mentioning CKD dasd, multi-track search, FBA, etc
http://www.garlic.com/~lynn/submain.html#dasd

J. Clarke

Nov 29, 2017, 9:20:43 PM
On Wed, 29 Nov 2017 15:58:34 GMT, sc...@slp53.sl.home (Scott Lurndal)
wrote:
He's not talking about keeping an antique alive. He's talking about a
brand new 5+GHz machine with 240 cores and 32 terabytes of RAM.

Where do you find a Univac or Burroughs machine with those specs?

terry-...@glaver.org

Nov 29, 2017, 9:55:05 PM
On Wednesday, November 29, 2017 at 11:50:34 AM UTC-5, Anne & Lynn Wheeler wrote:
> In the early 80s, mid-range 4341s were selling 100-1000 orders at a time
> to large corporations for placing out in departmental areas (conference
> rooms, stock rooms, etc) ... sort of the leading edge of the coming
> distributed computing tsunami. MVS wanted to play in this market, but
> they only supported CKD DASD ... which was the high-end 3380. The only
> disk for non-datacenter mid-range was 3370 FBA. Eventually they came out
> with 3375 CKD which was simulated on 3370 FBA. It didn't really help MVS
> a lot ...

One of the good things (at least depending on your point of view) about IBM
was that they supported attaching all sorts of obsolete peripherals to
[then] new systems like the 4341. We were still running CKD 3340 Data
Module drives on the 4300, at least until the rubber bushings in the
"Starship Enterprise" pack / module started rotting. And the 1403N1 got
attached via a 2821, IIRC. On the previous 3138, it had been attached via
an Integrated Mumble Adapter, along with the 3340 drives (on a different
IMA). We held onto that so long that by the time I replaced it with a
Dataproducts B600 and an IRMAprint hacked to do Dataproducts instead of
Centronics protocol, and called to say "I know we have a year of payments
left on the lease, but we're not using it - come pick it up or we'll put
it outside under a tarp", they just came and pulled a few boards they
wanted for spares and told us to junk the rest of it. They did take the
1403N1, but I ended up with a bunch of print trains left over somehow.

jmfbahciv

Nov 30, 2017, 10:30:22 AM
East coast vs. west coast? That would be interesting.

/BAH

jmfbahciv

Nov 30, 2017, 10:30:22 AM
Only if it had "owned" the previous hardware with no local modifications.
Recompiling is one method; there are others which are more problematic.

/BAH

jmfbahciv

Nov 30, 2017, 10:30:22 AM
One of the good things about IBM was they refused to attach heterogeneous
hardware to their systems; this gave DEC a lot of business. :-)

/BAH

Scott Lurndal

Nov 30, 2017, 11:04:26 AM
First, I was at the decommissioning of a V380 in 2010 that had been
installed in 1987. That system is still running at the LCM
in Seattle. Those machines run unmodified binaries from 1966
even today. So they too "continue to run 30-year-old programs unchanged".

Second, all the modern Univac and Burroughs machines are running
in emulation on arrays of modern high-end intel 64-bit processors;
until recently (last year), they were custom CMOS processors with
characteristics similar to the Z series chips.