Multiple Virtual Memory


Anne & Lynn Wheeler

Mar 10, 2011, 2:33:27 PM

from ibm jargon:

MVM - n. Multiple Virtual Memory. The original name for MVS (q.v.),
which fell foul of the fashion of changing memory to storage.

MVS - n. Multiple Virtual Storage, an alternate name for OS/VS2
(Release 2), and hence a direct descendent of OS. OS/VS2 (Release 1)
was in fact the last release of OS MVT, to which paging had been
added; it was known by some as SVS (Single Virtual Storage). MVS is
one of the big two operating systems for System/370 computers (the
other being VM (q.v.)). n. Man Versus System.

... snip ...

part of Future System effort was "single-level store"
http://en.wikipedia.org/wiki/Single-level_store

being able to treat everything as (virtual) address space ... which then
shows up later in S/38 (and as/400 followon). It somewhat came from
TSS/360 (the "other" operating system for 360/67) and other virtual
memory systems from the 60s.

"single pool" of data was scaling issue for s/38 ... since there was
scatter allocation across all disks ... the whole infrastructure had to
be backedup and restored as single entity ... single disk failure
required restoring the whole infrastructure. this problem was possibly
major motivation for s/38 being early RAID adopter.
http://en.wikipedia.org/wiki/System/38

During "single-level store" focus of Future System possibly contributed
to the corporation moving to "storage" ... misc. past posts mentioning
Future System
http://www.garlic.com/~lynn/submain.html#futuresys

Problems that TSS/370 had scaling its memory mapped filesystem
... didn't appear to have been corrected/improved with Future
System. The convoluted FS hardware execution contributed to the analysis
that applications running on an FS machine built with the fastest
available technology (370/195) would have the thruput of a 370/145
(about a factor of 30 times thruput hit). The combination of performance,
extreme complexity, and many (complex) areas not even being fully defined
... all contributed to the demise of FS.

single-level store webpage also references "IBM i" (current incarnation
of as/400):
http://en.wikipedia.org/wiki/IBM_i5/OS

and multics
http://en.wikipedia.org/wiki/Multics

which was on 5th flr of 545 tech sq ... while science center was on 4th
flr
http://www.garlic.com/~lynn/subtopic.html#545tech

for whatever reason the single-level store wiki also mentions
EROS
http://en.wikipedia.org/wiki/Extremely_Reliable_Operating_System

which is a descendent of KeyKOS. Precursor to KeyKOS was GNOSIS, developed
for 370 at Tymshare. When MD (McDonnell Douglas) bought Tymshare, I was
brought in to evaluate GNOSIS as part of its spinning off to Key Logic.
http://www.cis.upenn.edu/~KeyKOS/

for additional drift, Coyotos is successor to EROS:
http://www.coyotos.org/history/index.html

above mentions person having done a stint at HaL ... which was turning
out a 64bit sparc system. The "H" in HaL was my former manager at IBM and
the "L" in HaL was a former employee at SUN ... during the initial
formation of HaL, SUN objected strenuously to "L" being part of HaL.

wiki HaL page:
http://en.wikipedia.org/wiki/HAL_Computer_Systems

Recent mention of having done paged-map filesystem for cp67/cms
http://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company

that tried to avoid several of the issues I had watched in the tss/360
implementation. other past posts mentioning doing paged-map filesystem
http://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

Quadibloc

Mar 11, 2011, 12:02:21 AM
On Mar 10, 12:33 pm, Anne & Lynn Wheeler <l...@garlic.com> wrote:
> from ibm jargon:
>
> MVM - n. Multiple Virtual Memory. The original name for MVS (q.v.),
> which fell foul of the fashion of changing memory to storage.
>
> MVS - n. Multiple Virtual Storage, an alternate name for OS/VS2
> (Release 2), and hence a direct descendent of OS. OS/VS2 (Release 1)
> was in fact the last release of OS MVT,

Since that was Multiprogramming with a Variable number of Tasks,
wasn't MVS Multiprogramming with Virtual Storage?

John Savard

Joe Morris

Mar 11, 2011, 6:37:50 AM

I don't know the official answer, but since MVS was the successor to SVS
("Single Virtual Storage"), and the names "Single" and "Multiple" actually
described the virtual memory implementation, the definition Lynn quoted from
Mike's "IBM Jargon" list sounds more likely.

While we're on the subject of the old system names, and the older ones that
were dropped from usage, does someone recall the original names for
PCP/MFT/MVT that until very late in the life of OS/360 were enshrined in the
comments of the CVT macro (possibly buried in the macro but not in the
expansion)? This thread made me remember their presence in the macro text
but I'm drawing a blank on just what they were. (Something like "MS2" for
MVT - I don't think that was the name but it was something like that.)


Joe Morris


Anne & Lynn Wheeler

Mar 11, 2011, 9:46:22 AM

Quadibloc <jsa...@ecn.ab.ca> writes:
> Since that was Multiprogramming with a Variable number of Tasks,
> wasn't MVS Multiprogramming with Virtual Storage?

re:
http://www.garlic.com/~lynn/2011d.html#71 Multiple Virtual Memory

the MVM/MVS was OS/VS2 Release 2 ... to differentiate it from OS/VS2
Release 1 "SVS" (single virtual storage) ... which was MVT laid out in a
single (16mbyte) virtual address space ... somewhat similar to running
MVT in a 16mbyte virtual machine ... except MVT had some stub code that
handled the page faults and page I/O ... instead of having CP67 do it

I recently got some background history on what was to have been OS/VS2
Release 3 ... the operating system for FS. Note that MFT transition to
virtual memory was called OS/VS1 (which was MFT typically laid out in
single 4mbyte virtual address space).

The biggest code change for MVT to virtual memory had to do with I/O
operations. The traditional MVT paradigm had applications (and/or
libraries called by applications, aka bsam, qsam, bdam, etc) build channel
programs in application space and then invoke the "EXCP/SVC0" system
call. channel programs used "real addresses" ... the issue for SVS
(and/or cp67 with virtual machines) was that the passed channel programs
now all had virtual addresses.

In CP67, the virtual machine channel programs were processed by
"CCWTRANS" which scanned the virtual machine channel programs
... creating a duplicate ... replacing the virtual addresses with real
addresses. The initial pass for making MVT support its own single
16mbyte virtual address space (for "SVS") was to "borrow" CP67's
CCWTRANS ... and craft it into MVT's EXCP processing (I remember being
in POK machine room offshift for some long forgotten reason, and Ludlow
was busily getting MVT running with its own virtual memory support on
360/67 ... initial work for SVS).
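
as an aside ... a minimal sketch of the idea behind CCWTRANS-style
channel program translation (in C, with a simplified CCW layout and a
hypothetical virt_to_real() pin-and-translate helper ... a real
implementation also handles TIC branches and data areas that cross page
boundaries, which this ignores):

#include <stdint.h>
#include <stdlib.h>

/* simplified 360-style CCW: real CCWs pack an opcode, a 24-bit data
   address, flags, and a 16-bit count into 8 bytes */
struct ccw {
    uint8_t  op;
    uint32_t addr;      /* 24-bit data address */
    uint8_t  flags;
    uint16_t count;
};
#define CCW_CD 0x80     /* chain data */
#define CCW_CC 0x40     /* chain command */

/* hypothetical: pin the virtual page and return the real address */
extern uint32_t virt_to_real(uint32_t vaddr);

/* build a shadow copy of the virtual channel program, replacing
   virtual data addresses with real addresses */
struct ccw *ccw_translate(const struct ccw *vprog, size_t max)
{
    struct ccw *shadow = malloc(max * sizeof *shadow);
    for (size_t i = 0; i < max; i++) {
        shadow[i] = vprog[i];
        shadow[i].addr = virt_to_real(vprog[i].addr);
        if (!(vprog[i].flags & (CCW_CD | CCW_CC)))
            break;      /* no chaining bits: end of channel program */
    }
    return shadow;      /* the real I/O is started on this copy */
}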

There was a big challenge going from "SVS" to "MVS": OS/360 had a heavy
"pointer passing" API paradigm. The result started out with each
application getting its own 16mbyte virtual address space ... but an
8mbyte image of the os/360 kernel occupied every application's 16mbyte
virtual address space.

The other major problem was MVT had a lot of "subsystems" that sat
outside the kernel. An application would generate a subsystem call, pass
through the kernel and show up in the subsystem ... the passed
application pointer could be used by the subsystem (because they were
all in a single address space). In the transition to MVS, all
applications got their own virtual address space ... but so did all these
"subsystems" ... which no longer had access to the application address
spaces.

The solution to this was the "common segment" ... an area that was
common to every virtual address space (analogous to the 8mbyte kernel
image), applications could acquire a dedicated part of the "common
segment" ... stuff its parameters there, call the subsystem ... and the
passed pointer was accessible to the subsystems. This area started out as
1mbyte, which along with the 8mbyte kernel area, left the application
only 7mbyte. However, as systems grew and subsystems were added ... the
common segment size had to grow. Prior to the transition to 370/xa & 31bit
virtual addressing ... many large customer systems had 4-5mbyte "common
segment" ... threatening to grow to 5-6mbytes (in some cases leaving
2mbytes for application use).

As a temporary expedient, a small subset of 370/xa architecture was
retrofitted to 3033s, called "dual-address space". This provided new
instructions for semi-privileged "subsystems" which could be used to
access a secondary virtual address space ... a subsystem would be entered
with a secondary address space pointer for the calling application. This
required rewriting all the subsystems ... some of which still hasn't
been done ... so common segment support has continued up until the
current day.

things have since gotten more complex than dual-address space
... some discussed here
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/3.2.1?DT=20040504121320

For other drift ... charlie had invented compare&swap instruction when
he was doing work on cp67 fine-grain multiprocessor locking at the
science center (compare&swap was chosen because CAS are charlie's
initials). An attempt was made to get the instruction added to 370. This
was initially rebuffed because the POK favorite son operating system
(aka MVT) claimed the test&set instruction was more than sufficient. The
owners of 370 architecture gave a challenge to come up with uses for
compare&swap other than multiprocessor locking (to justify instruction
inclusion in 370). Thus was born the uses for application
multiprogramming (multithreaded) ... examples that still survive in
current principles of operation. multiprogramming and multiprocessing
examples:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/A.6?DT=20040504121320
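
the flavor of those multiprogramming examples, as a hedged C11 analog
(not the principles-of-operation code itself ... just the same
compare&swap retry idiom, using stdatomic):

#include <stdatomic.h>

/* update a shared word without holding a lock: reread and retry if
   another thread changed the word between the load and the swap */
static _Atomic unsigned long counter;

void add_to_counter(unsigned long delta)
{
    unsigned long old = atomic_load(&counter);
    /* store (old + delta) only if counter still equals old;
       on failure, old is refreshed with the current value */
    while (!atomic_compare_exchange_weak(&counter, &old, old + delta))
        ;   /* retry with the refreshed value */
}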

past posts mentioning smp &/or compare&swap
http://www.garlic.com/~lynn/subtopic.html#smp

compare&swap semantics, in some form, has been picked up by many other
platforms and typically is heavily used by, at least, large DBMS
implementations (for multithreaded operations, whether or not running in
multiprocessor environment).

another item from science center with people's initials is "GML" which
was invented at the science center in 1969.
http://www.garlic.com/~lynn/submain.html#sgml

where GML are the last name initials of the three inventors. "GML"
morphed into ISO standard "SGML" in the late 70s, and then into "HTML"
in the late 80s.

misc. past posts mentioning science center
http://www.garlic.com/~lynn/subtopic.html#545tech

Anne & Lynn Wheeler

Mar 11, 2011, 10:30:02 AM

Anne & Lynn Wheeler <ly...@garlic.com> writes:
> I recently got some background history on what was to have been OS/VS2
> Release 3 ... the operating system for FS. Note that MFT transition to
> virtual memory was called OS/VS1 (which was MFT typically laid out in
> single 4mbyte virtual address space).

re:

http://www.garlic.com/~lynn/2011d.html#72 Multiple Virtual Memory

small piece from last summer (customer asked me if I could contact
people with more details regarding history in the period ... reply
was both to me and to the original requester):

Of course, the estimates for OS/VS were based on a misperception. The
Kingston estimate for OS/VS2 Release 1 (SVS) had an estimate for the
work needed for Release 2 (MVS), but it was couched as release 1 cost
plus a delta - in other words, the same cost as release 1 plus some
more. Since the Kingston resources were being redeployed to FS, that
meant that there weren't going to be enough people to do both. Since MVS
was supposed to be the glide path for FS (which would be OS/VS2 Release
3), this was unwelcome news. xxxxx and yyyyy modified the plan to reuse
some of the SVS resources plus people transitioning from OS/360. Bob
Evans did his part by cutting a year off the MVS development schedule.

... snip ...

misc. past posts mentioning FS
http://www.garlic.com/~lynn/submain.html#futuresys

reference about Ludlow offshift work (mentioned in previous note):

Lynn is right about Ludlow. They installed a full duplex Model 67 in the
706 computing center (705 had a model room). Don worked out the channel
program translation techniques. He and bbbbb had a patent on real-time
channel program translation that probably worked, but had a lot of
moving parts. ccccc and I had an alternative proposal which was closed
because it was felt that the bbbbbb-Ludlow patent was all we
needed. Moot point, because we never built the hardware anyway. Don
spent every waking minute on the model 67 (which was configured as two
systems).

... snip ...

another piece:

Note to Lynn - I have always given zzzzz the credit for turning Bob
Evans around. For reasons unknown to me, the TSO group had the flip
charts and wallboard zzzzz used. The clincher was the ability to run 16
initiators simultaneously on a 1 megabyte system, taking advantage of
the fact that MVT normally used only 25% of the memory in a
partition. The resulting throughput gain (compared to real hardware) was
substantial enough to convince Bob. It helped that Tom Simpson and Bob
Crabtree had hosted an MFT II system TSS-Style and shown similar
performance gains. Of course, since CP67 was a pickup group they weren't
considered and we had the OS/VS adventure instead.

... snip ...

Note Simpson & Crabtree had done HASP. another piece:

HASP (Houston Automatic Spooling Priority System) was developed for the
Houston Manned Space Center by Tom Simpson, Bob Crabtree and a couple of
others who used their experience with the Moonlight (DCS) system. It
overcame some of the horrendous design decisions that crippled MFT (many
of these were fixed in MFT II). It was released as a type 3 program (I
still have one of the source tapes, long since changed to chocolate) and
turned the OS/360 program around. At the same time the West Region used
their experience with Moonlight to create ASP. We used the BSC-B release
when I was at Ohio State in 1969. Prior to that the RJE package was
awful, although they all were due to the STR protocol. Lynn's workaround
was one of the usable RJE packages - aaaaaa's team never did get it, and
produced some real stones.

... snip ...

As undergraduate in the 60s, I had ripped out the HASP 2780/STR support
in part to reduce HASP real storage footprint ... and replaced it with
2741&TTY terminal support along with editor (that mostly re-implemented
CMS editor syntax ... none of the code was re-useable since the CMS and
HASP environments were so different).

misc. past posts mentioning HASP:
http://www.garlic.com/~lynn/submain.html#hasp

HASP support (& some people) move to g'burg for morphing HASP into JES2.
ASP support was also moved into the same g'burg. My wife did a stint in
the group (after FS) ... inclduing part of the "catchers" for ASP to
"JES3". She was co-author (along with person that sent the note) for
"JESUS" ... "JES Unified System" ... that included all the things from
JES2 & JES3 that nether customer camps could live without. Various
reasons ... it never happened ... and both JES2 & JES3 continue to exist
today.

the "TSS-Style" thing was called "RASP". Simpson later leaves and
appears as a "Fellow" in Dallas working for Amdahl ... and redoing
"RASP" (in "clean room"). There was some legal action that attempted to
find any RASP code included in the new stuff being done at Amdahl.

more HASP stuff from the note:

To find SPOOL you'll have to get some old 7070 marketing material. The
problem being addressed was that the peripheral equipment (card reader,
punch, printer) were all 150 docs/minute devices, whilst Univac and
others were touting fast card readers (basically model 85 collators
wired directly into the computer) and printers - 300 cards/minute and
600 lines/minute. The mainframes (704/709 and 705) used off-line card to
tape, tape to card, and tape to printer equipment (with a ghastly wire
printer that would print at 1000 lines a minute for a couple of seconds
or so). This was an unacceptable solution for a mid-range system that
was meant to replace the IBM 650. The solution took advantage of the
interrupt facility built into the 7070 - the off line operations
executed in the background while an application was running, so the 7070
became a tape-in, tape-out system, just like its big brothers, but
without requiring so much extra equipment. This was heavily touted until
1959, when the IBM 1401 was announced. Since the 1401 could do
simultaneous card-tape, tape-card, and tape-print, had the wonderful
1403 train printer and a fast card reader-punch (the 1402), and was
incredibly cheap (less than the cost of the card equipment for the
7070), SPOOL suddenly became a bad word, to the dismay of the sales
force.

... snip ...

My first student programming job was re-implementing 1401 "MPIO" in 360.
Univ. had 709/1401 combination where operators manually moved tapes
between 709 & 1401. Univ. was on plan to replace whole thing with 360/67
running tss/360. As part of transition, 1401 was replaced with 360/30.
The 360/30 could run 1401 hardware emulation and needed no new software
... but for whatever reasons, I was paid to redo the app in 360. I got
to design & write my own monitor, device drivers, storage management,
interrupt handlers, task management, error recovery, etc. The only
criteria was that card->tape output was the same tape format as created
by the 1401 ... and that 709 tapes for tape->printer/punch were handled
the same. One of the things was being able to concurrently do card->tape
and tape->printer/punch operations.

Before doing the HASP 2741/TTY CRJE thing ... I had added TTY/ASCII
support to cp67 (which came with 2741 & 1052 support). The cp67
2741/1052 terminal support did automatic terminal type identification
(leveraging the 2702 SAD command being able to switch the line-scanner
type associated with port/line). I attempted to preserve the dynamic
terminal type identification ... and it worked for leased/direct
lines. However, the 2702 had taken a hardware shortcut and hardwired the
oscillator/line-speed to each port. My dreams of having a single dial-up
(hunt group) number for all dial-in terminals were blocked.

This was somewhat the motivation for the univ. to start clone controller
effort, reverse engineer channel interface ... and build channel
interface board for Interdata/3 programmed to emulate 2702 ... but
supporting both dynamic terminal type and dynamic line-speed
identification. Later four of us are written up being blamed for clone
controller business. misc. past posts
http://www.garlic.com/~lynn/subtopic.html#360pcm

Anne & Lynn Wheeler

Mar 11, 2011, 4:08:23 PM

Anne & Lynn Wheeler <ly...@garlic.com> writes:
> HASP support (& some people) move to g'burg for morphing HASP into JES2.
> ASP support was also moved into the same g'burg. My wife did a stint in
> the group (after FS) ... inclduing part of the "catchers" for ASP to
> "JES3". She was co-author (along with person that sent the note) for
> "JESUS" ... "JES Unified System" ... that included all the things from
> JES2 & JES3 that nether customer camps could live without. Various
> reasons ... it never happened ... and both JES2 & JES3 continue to exist
> today.

re:

http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

my wife was then con'ed into going to pok to be in charge of (mainframe)
loosely-coupled (cluster) architecture. while there she did
"peer-coupled shared data" ... which saw little uptake (except for IMS
hot-standby) until sysplex (and parallel sysplex) ... contributing to
her not remaining long. misc. past posts mentioning her "peer-coupled
shared data" architecture

another factor was the skirmishes with the communication group, which was
demanding that she use SNA for loosely-coupled operation ... there would
be temporary truces where she would be allowed to use whatever she
wanted (but the communication group "owned" everything that crossed the
wall of the datacenter).

a decade later, she does a short stint as chief architect for Amadeus ...
but doesn't last long ... the communication group gets her removed when
she backs the use of x.25 (over sna); it didn't do them much good,
Amadeus goes with x.25 anyway.

misc. recent refs mentioning Amadeus
http://www.garlic.com/~lynn/2011.html#17 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
http://www.garlic.com/~lynn/2011.html#41 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows
http://www.garlic.com/~lynn/2011d.html#14 Sabre; The First Online Reservation System
http://www.garlic.com/~lynn/2011d.html#43 Sabre; The First Online Reservation System

Mike Hore

Mar 12, 2011, 8:27:03 PM
On 11/03/11 6:33 AM, Anne & Lynn Wheeler wrote:

> ...


> During "single-level store" focus of Future System possibly contributed
> to the corporation moving to "storage"

Ummm, I think it goes a loooong way further back - IBM had a tradition
of avoiding words that made computers sound in any way human. I just
checked the 701 manual (from Bitsavers), dated 1953, and they refer to
"electrostatic storage". Likewise the 704 manual talks about "core
storage". Memory has always been "storage" in IBM-speak.

Cheers, Mike.

---------------------------------------------------------------
Mike Hore mike_h...@OVE.invalid.aapt.net.au
---------------------------------------------------------------

Anne & Lynn Wheeler

Mar 12, 2011, 9:33:08 PM
Mike Hore <mike_h...@OVE.invalid.aapt.net.au> writes:
> Ummm, I think it goes a loooong way further back - IBM had a tradition
> of avoiding words that made computers sound in any way human. I just
> checked the 701 manual (from Bitsavers), dated 1953, and they refer to
> "electrostatic storage". Likewise the 704 manual talks about "core
> storage". Memory has always been "storage" in IBM-speak.

re:

http://www.garlic.com/~lynn/2011d.html#74 Multiple Virtual Memory

take it up with IBM Jargon ... it was the one that claimed MVM
was the original name for MVS.

of course there is DASD (direct access storage device) ... which comes
from a period when there were a number of different kinds of devices and
it wasn't even clear which kind would come to dominate.

i remember memory being used in the early 70s

there is this:
http://www-03.ibm.com/ibm/history/history/year_1970.html

from above:

In IBM's most important product announcement since the System/360 in
1964, the IBM System/370 is introduced. Able to run System/360 programs,
the System/370 is one of the first lines of computers to include
"virtual memory" technology, a technique developed in England in 1962 to
expand the capabilities of the computer by using space on the hard drive
to accommodate the memory requirements of software.

... snip ...

of course it skates over the existence of the 360/67 and tss/360 ... as
well as other systems for the 360/67: cp67 by the science center, mts at
michigan, etc. misc. past posts mentioning science center
http://www.garlic.com/~lynn/subtopic.html#545tech

then there is this ... from the science center in the 60s, suggesting ibm
might not have known what it was getting into with tss/360 ... "VM and
the VM Community: Past, Present, and Future" ... several formats can be
found here:
http://web.me.com/melinda.varian

from above:

What was most significant was that the commitment to virtual memory was
backed with no successful experience. A system of that period that had
implemented virtual memory was the Ferranti Atlas computer, and that was
known not to be working well. What was frightening is that nobody who
was setting this virtual memory direction at IBM knew why Atlas didn't
work.

... snip ...

of course, the same could be said of the (later) Future System effort
... misc. past posts mentioning Future System
http://www.garlic.com/~lynn/submain.html#futuresys

little more drift into managing virtual memory ... and global vs local
replacement ... recent references
http://www.garlic.com/~lynn/2011.html#44 CKD DASD
http://www.garlic.com/~lynn/2011.html#70 Speed of Old Hard Disks
http://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)
http://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011c.html#72 A History of VM Performance
http://www.garlic.com/~lynn/2011c.html#74 A History of VM Performance
http://www.garlic.com/~lynn/2011c.html#87 A History of VM Performance
http://www.garlic.com/~lynn/2011c.html#88 Hillgang -- VM Performance
http://www.garlic.com/~lynn/2011c.html#90 A History of VM Performance

misc. past posts mentioning virtual memory management and replacement
http://www.garlic.com/~lynn/subtopic.html#wsclock

misc. recent references mentioning Melinda:
http://www.garlic.com/~lynn/2011.html#15 545 Tech Square
http://www.garlic.com/~lynn/2011.html#64 Two terrific writers .. are going to write a book
http://www.garlic.com/~lynn/2011.html#72 Speed of Old Hard Disks
http://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons
http://www.garlic.com/~lynn/2011.html#76 Speed of Old Hard Disks - adcons
http://www.garlic.com/~lynn/2011.html#98 History of copy on write
http://www.garlic.com/~lynn/2011b.html#4 Rare Apple I computer sells for $216,000 in London
http://www.garlic.com/~lynn/2011b.html#13 Rare Apple I computer sells for $216,000 in London
http://www.garlic.com/~lynn/2011b.html#18 Melinda Varian's history page move
http://www.garlic.com/~lynn/2011b.html#23 A brief history of CMS/XA, part 1
http://www.garlic.com/~lynn/2011b.html#25 Melinda Varian's history page move
http://www.garlic.com/~lynn/2011b.html#29 A brief history of CMS/XA, part 1
http://www.garlic.com/~lynn/2011b.html#33 A brief history of CMS/XA, part 1
http://www.garlic.com/~lynn/2011b.html#35 Colossal Cave Adventure in PL/I
http://www.garlic.com/~lynn/2011b.html#39 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
http://www.garlic.com/~lynn/2011b.html#61 VM13025 ... zombie/hung users
http://www.garlic.com/~lynn/2011b.html#81 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011c.html#0 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011c.html#2 Other early NSFNET backbone
http://www.garlic.com/~lynn/2011c.html#3 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011c.html#4 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011c.html#31 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011c.html#32 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011c.html#71 IBM and the Computer Revolution
http://www.garlic.com/~lynn/2011c.html#87 A History of VM Performance
http://www.garlic.com/~lynn/2011c.html#90 A History of VM Performance

Bill Findlay

Mar 12, 2011, 10:27:54 PM
On 13/03/2011 02:33, in article m3k4g36...@garlic.com, "Anne & Lynn
Wheeler" <ly...@garlic.com> wrote:

> What was most significant was that the commitment to virtual memory was
> backed with no successful experience. A system of that period that had
> implemented virtual memory was the Ferranti Atlas computer, and that was
> known not to be working well.

Known by whom? What is your evidence for this claim?

--
Bill Findlay
with blueyonder.co.uk;
use surname & forename;

Anne & Lynn Wheeler

Mar 12, 2011, 11:06:16 PM

Bill Findlay <ne...@findlayw.plus.com> writes:
> Known by whom? What is your evidence for this claim?

re:
http://www.garlic.com/~lynn/2011d.html#81 Multiple Virtual Memory

it was a quote from melinda's document ... attributed to somebody at the
science center in the mid-60s(?) ... before my time.

atlas wiki page ... doesn't mention anything about performance:
http://en.wikipedia.org/wiki/Atlas_Computer_%28Manchester%29

maybe somebody here has knowledge from early-60s about atlas performance
and what was being referenced.

note however, the use of the quote from melinda's history was to show
example of the use of "virtual memory" (as opposed to "virtual storage")

wiki "virtual memory" page:
http://en.wikipedia.org/wiki/Virtual_memory
wiki "paging" page
http://en.wikipedia.org/wiki/Paging

above have references to atlas.

the other point of the previous post ... was global vis-a-vis local LRU
... in the late 60s, there was some amount of academic work in "local
LRU" ... when i was doing "global LRU" as an undergraduate at the univ.

more than a decade later, Jim Gray asked me to help a co-worker at
Tandem who was being blocked from getting his PHD thesis in the area of
"global LRU" ... recent post
http://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)

references this older post
http://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?
with this old explanation
http://www.garlic.com/~lynn/2006w.html#email821019

i.e. for whatever reason, corporate management blocked my sending a
reply for almost a year (request had been made during acm sigops
conference dec81).

old post with lots of extracts from Melinda's history
http://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)

other past posts with Atlas reference:
http://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
http://www.garlic.com/~lynn/2001h.html#26 TECO Critique
http://www.garlic.com/~lynn/2002.html#42 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
http://www.garlic.com/~lynn/2003.html#72 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003b.html#1 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003m.html#34 SR 15,15 was: IEFBR14 Problems
http://www.garlic.com/~lynn/2005o.html#4 Robert Creasy, RIP
http://www.garlic.com/~lynn/2006i.html#30 virtual memory
http://www.garlic.com/~lynn/2007e.html#1 Designing database tables for performance?
http://www.garlic.com/~lynn/2007g.html#36 Wylbur and Paging
http://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation
http://www.garlic.com/~lynn/2007r.html#51 Translation of IBM Basic Assembler to C?
http://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
http://www.garlic.com/~lynn/2007t.html#54 new 40+ yr old, disruptive technology
http://www.garlic.com/~lynn/2007u.html#77 IBM Floating-point myths
http://www.garlic.com/~lynn/2007u.html#79 IBM Floating-point myths
http://www.garlic.com/~lynn/2007u.html#84 IBM Floating-point myths
http://www.garlic.com/~lynn/2009i.html#55 Publisher of Geek's Atlas to help save Bletchley Park
http://www.garlic.com/~lynn/2010g.html#71 Interesting presentation
http://www.garlic.com/~lynn/2011.html#44 CKD DASD
http://www.garlic.com/~lynn/2011d.html#77 End of an era

Anne & Lynn Wheeler

Mar 13, 2011, 2:25:28 PM

Mike Hore <mike_h...@OVE.invalid.aapt.net.au> writes:
> Ummm, I think it goes a loooong way further back - IBM had a tradition
> of avoiding words that made computers sound in any way human. I just
> checked the 701 manual (from Bitsavers), dated 1953, and they refer to
> "electrostatic storage". Likewise the 704 manual talks about "core
> storage". Memory has always been "storage" in IBM-speak.

re:

http://www.garlic.com/~lynn/2011d.html#82 Multiple Virtual Memory

the other issue contributing to the ambiguity in that period was
rotational drums as computer memory ("storage") ... as well as the
craft/art of carefully placing instructions on the drum surface to
maximize instructions per revolution:
http://www.columbia.edu/acis/history/650.html
http://en.wikipedia.org/wiki/IBM_650

and 701
http://en.wikipedia.org/wiki/IBM_701

the scarcity of electronic storage/memory contributed to the CKD storage
architecture for 360 ... misc. past posts
http://www.garlic.com/~lynn/submain.html#dasd

360 CKD had i/o programs and arguments all in processor memory ... which
were sequentially fetched (in some cases repeatedly) ... requiring
dedicated I/O resources during i/o operations. the paradigm allowed
file&library directories resident on disks and i/o programs that
searched the disk resident directories for specific files/members (the
i/o "search" operation would scan the disk resident directory entries;
for each entry, it would fetch the match argument from processor
memory/storage for comparison ... repeating the process for each
directory entry until a match was found). this traded off relatively
abundant i/o resources for extremely scarce electronic memory/storage.
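
a minimal sketch (in C, with a simplified CCW encoding) of the classic
multi-track search loop described above ... the SEARCH fetches the match
argument from processor memory for every directory entry on the track;
on a miss the TIC branches back to the SEARCH, and on a hit the status
modifier causes the channel to skip the TIC and fall through to the READ
(the opcodes are the standard CKD ones, the rest is illustrative):

#include <stdint.h>

/* simplified 8-byte CKD-era CCW (opcode, 24-bit address, flags, count) */
struct ccw { uint8_t op; uint32_t addr; uint8_t flags; uint16_t count; };

#define OP_SEARCH_KEY_EQ 0x29   /* SEARCH KEY EQUAL */
#define OP_TIC           0x08   /* transfer in channel (branch) */
#define OP_READ_DATA     0x06
#define CC               0x40   /* command chain to next CCW */

/* build the 3-CCW directory search; channel, control unit and device
   all stay busy for as long as the search loop runs */
void build_search(struct ccw prog[3], uint32_t key_addr, uint8_t keylen,
                  uint32_t buf_addr, uint16_t buflen)
{
    prog[0] = (struct ccw){ OP_SEARCH_KEY_EQ, key_addr, CC, keylen };
    /* TIC target would be the real address of prog[0] */
    prog[1] = (struct ccw){ OP_TIC, 0, 0, 0 };
    prog[2] = (struct ccw){ OP_READ_DATA, buf_addr, 0, buflen };
}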

however, I've pontificated frequently that by at least the mid-70s, the
trade-off was starting to invert ... with the dedicated i/o resources
for CKD (& search) i/o programming becoming a major system
bottleneck. In a later example, I claimed that between the 360/67 and
3081 time-frames, relative disk system thruput had declined by an order
of magnitude (processor & memory resources increased by 40-50 times while
disk thruput increased by only 3-5 times) ... and needed to change
paradigms ... increasingly using electronic memory to compensate for the
disk bottleneck ... old post with reference:

some disk division executives took offense at my claims and assigned
their performance group to refute them. a few weeks later they came back
and effectively said that I had slightly understated the situation. the
analysis then turned into a (user group) SHARE presentation on organizing
disks for improved thruput (B874 at SHARE 64, 18Aug84)
http://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
http://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s
http://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on anti

As undergraduate I watched as IBMers did os/360 sysgens up thru release
9.5. The univ. turned it over to me starting with release 11 ... but
rather than do a straight sysgen, I started doing highly modified
operations ... re-organizing all the copy/move statements from the
distribution material to the new system disks ... so that the system
components were carefully ordered on disk to optimize arm motion during
system operation (achieving nearly a three times increase in thruput for
typical univ. workload).

In addition to doing a lot of pathlength work and new features for
cp67 ... I also replaced the disk & drum FIFO operation with ordered arm
scheduling and multiple request chaining. CP67 FIFO paging on the 2301
drum would peak around 80 pages/sec. With ordered multiple requests, I
could get nearly 300 pages/sec. Similarly, changing disk operation from
FIFO nearly doubled typical thruput ... along with much more graceful
degradation as workload increased (the advantage of ordered seek queuing
tended to increase as the length of the queue increased). Some of this
shows up in an old SHARE presentation that I made in '68
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
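
a minimal sketch of the ordered-seek idea (in C; illustrative, not the
cp67 code ... a one-directional sweep with wraparound, i.e. the classic
elevator discipline, which is why the advantage grows with queue depth):

#include <stdlib.h>

struct req { unsigned cyl; struct req *next; };

/* keep the pending-request queue sorted by cylinder number */
void enqueue_ordered(struct req **q, struct req *r)
{
    while (*q && (*q)->cyl < r->cyl)
        q = &(*q)->next;
    r->next = *q;
    *q = r;
}

/* service the next request at or beyond the current arm position;
   wrap to the lowest cylinder when the sweep passes the last request */
struct req *next_request(struct req **q, unsigned arm_cyl)
{
    struct req **p = q;
    while (*p && (*p)->cyl < arm_cyl)
        p = &(*p)->next;
    if (!*p)
        p = q;              /* wrap: restart sweep at low cylinders */
    struct req *r = *p;
    if (r)
        *p = r->next;       /* unlink the chosen request */
    return r;
}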

In the early 70s at the science center, I did a paged mapped flavor for
CMS filesystem ... trying to avoid the shortcomings that I saw in
tss/360 single-level-store implementation (and considered what I was
doing was better than what was being formulated for Future System).
While lots of the stuff leaked out in product releases ... the paged
mapped stuff never did ... old email mentioning doing port of work from
cp67 to vm370 ... and doing my own "csc/vm" product distribution for
internal customers:
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

misc. past posts mentioning paged map filesystem work
http://www.garlic.com/~lynn/submain.html#mmap

Bill Findlay

Mar 13, 2011, 5:27:27 PM
On 13/03/2011 04:06, in article m3fwqr6...@garlic.com, "Anne & Lynn
Wheeler" <ly...@garlic.com> wrote:

> Bill Findlay <ne...@findlayw.plus.com> writes:
>> Known by whom? What is your evidence for this claim?
>
> re:
> http://www.garlic.com/~lynn/2011d.html#81 Multiple Virtual Memory
>
> it was quote from melinda's document ... attributed to somebody at
> science center in the mid-60s(?) ... before my time.

So, hearsay?

Can you suggest why IBM bought out the patents for paging from Manchester
University, if they believed that it didn't work?

Anne & Lynn Wheeler

Mar 13, 2011, 9:28:29 PM
Bill Findlay <ne...@findlayw.plus.com> writes:
> So, hearsay?
>
> Can you suggest why IBM bought out the patents for paging from Manchester
> University, if they believed that it didn't work?

re:

didn't say that it didn't work ... said that it wasn't working well
(the statement would have predated the cp40 work in 1966); then TSS/360
single-level-store turned out to be not working well ... and I would
claim that cp40 & cp/67, while somewhat better than TSS/360, also didn't
work very well until I rewrote the cp67 implementation ... including
global LRU replacement ... past post/reference to global LRU
http://www.garlic.com/~lynn/subtopic.html#wsclock

on the assumption that TSS/360 and/or CP67 at least used what was known
from Atlas ... that would be evidence that none of them were
working all that well. I would contend that at least quality page
thrashing control technology and quality page replacement strategies
were still lacking in the mid-60s.

page thrashing was still an issue in the late 60s (i.e. it apparently
hadn't been addressed earlier) ... both in academia and in what I was
doing: various mechanisms for adding effective controls to limit page
thrashing (especially in large/high multitasking environments). The
academic page thrashing controls from the late 60s were coupled with
local LRU replacement algorithms. What I was doing was simultaneous, but
coupled page thrashing controls with global LRU replacement algorithms.
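
a minimal sketch of a reference-bit "clock" style approximation to
global LRU (in C; illustrative only ... the actual cp67 rewrite also had
the dynamic adaptive thrashing controls, which this ignores). the point
is that the hand sweeps all page frames regardless of which address
space owns them, which is what makes it "global":

#include <stdbool.h>
#include <stddef.h>

struct frame { bool in_use; bool referenced; /* owner, vpage, ... */ };

/* select a victim frame: a frame referenced since the last sweep gets
   its reference bit reset (a second chance); an unreferenced frame is
   stolen, whichever address space it belongs to */
size_t clock_select(struct frame frames[], size_t nframes, size_t *hand)
{
    for (;;) {
        struct frame *f = &frames[*hand];
        size_t victim = *hand;
        *hand = (*hand + 1) % nframes;   /* advance the clock hand */
        if (!f->in_use)
            continue;
        if (f->referenced)
            f->referenced = false;       /* second chance */
        else
            return victim;               /* steal this frame */
    }
}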

since I don't have access to the material that would have prompted the
original statement ... I can only make some inference about the early
60s state-of-the-art ... based on the subsequent state-of-the-art from
the mid-60s and late-60s.

The dustup over the stanford PHD thesis for global LRU in the early 80s
... was because ... at least the local vis-a-vis global page replacement
issue was still going on nearly 20yrs after Atlas (although I had done
my work less than a decade after Atlas).

CP67 Release 1, delivered to the univ. Jan68, appeared to have lots of
stuff from CTSS ... although CTSS swapped tasks ... not paged. It totally
lacked page thrashing control and basically used FIFO replacement.

CP67 Release 2 was shipped with changes from Lincoln Labs that
drastically simplified dispatching and reduced overhead ... also added
primitive page thrashing controls (fixed limit on multitasking based on
real storage size). Page replacement was still pretty much FIFO.

I put in a form of dynamic adaptive working set page thrashing control
(but different from what was going on in academia and published in ACM
at the time) as well as global LRU replacement.

misc. stuff:
http://en.wikipedia.org/wiki/Belady%27s_anomaly

this has the paging IBM Systems Journal article from 1966 by Belady that
makes mention of including some part of Atlas in simulation but doesn't
provide any substantial description (and there is no mention of page
thrashing controls):
http://users.informatik.uni-halle.de/~hinnebur/Lehre/Web_DBIIb/uebung3_belady_opt_buffer.pdf

IBM has moved all its online System and R&D Journals to IEEE ... and
accessing now requires IEEE membership (or be current IBM employee).
This is 1981 "History of Memory Management" by Belady, Parmelee, and
Scalzi (Parmelee was at science center in 60s and is mentioned in
Melinda's history).
http://domino.research.ibm.com/tchjr/journalindex.nsf/0b9bc46ed06cbac1852565e6006fe1a0/39ddbeca15ffafed85256bfa0067f4d7!OpenDocument
article at IEEE
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5390584

I just found this reference:
http://www.ics.uci.edu/~bic/courses/JaverOS/ch8.pdf

from above:

Paging can be credited to the designers of the ATLAS computer, who
employed an associative memory for the address mapping [Kilburn, et al.,
1962]. For the ATLAS computer, |w| = 9 (resulting in 512 words per
page), |p| = 11 (resulting in 2048 pages), and f = 5 (resulting in 32
page frames). Thus a 2^20-word virtual memory was provided for a
2^14-word machine. But the original ATLAS operating system employed
paging solely as a means of implementing a large virtual memory;
multiprogramming of user processes was not attempted initially, and thus
no process id's had to be recorded in the associative memory. The search
for a match was performed only on the page number p.

... snip ...

The science center initial CP40 implementation was adding associative
virtual memory hardware to 360/40. A difference between the Atlas
hardware implementation and the implementation for 360/40 was that the
360/40 implementation included a process identifier.

The implication from the above could be that ATLAS totally swapped all
virtual pages anytime it switched users/virtual-address space ... not
attempting concurrent users/tasks. Such an implementation would also not
need a dynamic limit on concurrently executing tasks as page thrashing
control (aka some form of working set control). If it did LRU
replacement, there would be no difference between local & global.

Bill Findlay

Mar 13, 2011, 10:38:11 PM
On 14/03/2011 01:28, in article m3fwqqh...@garlic.com, "Anne & Lynn
Wheeler" <ly...@garlic.com> wrote:

> Bill Findlay <ne...@findlayw.plus.com> writes:
>> So, hearsay?
>>
>> Can you suggest why IBM bought out the patents for paging from Manchester
>> University, if they believed that it didn't work?
>

> didn't say that it didn't work ... said that it wan't working well

For all practical purposes, paging "not working well" == paging "not
working" 8-)

> (statement would have predated cp40 work in 1966); then TSS/360
> single-level-store turned out to be not working well ... and I would
> claim that cp40 & cp/67, while somewhat better than TSS/360, also didn't
> work very well until I rewrote cp67 implementation ... including global
> LRU replacement ... past post/reference to global LRU
> http://www.garlic.com/~lynn/subtopic.html#wsclock

That is evidence about IBM's systems not working well, not about Atlas.

> some assumption that TSS/360 and/or CP67 at least used what was known
> from Atlas ... then that would be evidence that none of them were
> working all that well.

That is quite an assumption. AFAIR, CP40 used FIFO page replacement; if so,
it was nothing like the Atlas algorithm.

> I would contend that at least quality page
> thrashing control technology and quality page replacement strategies
> were still lacking in the mid-60s.

Agreed.

> The dustup over stanford PHD thesis for global LRU in the early 80s ...
> was because ... at least the local vis-a-vis global page replacement
> issue was still going on nearly 20yrs after Atlas

The EMAS system originally had a global replacement algorithm and changed it
to local replacement for better performance, IIRC, so I don't think the
issue is as clear-cut as you make it seem.

> I just found this reference:
> http://www.ics.uci.edu/~bic/courses/JaverOS/ch8.pdf
>
> from above:
>

> ... But the original ATLAS operating system employed paging


> solely as a means of implementing a large virtual memory;
> multiprogramming of user processes was not attempted initially, and thus
> no process id's had to be recorded in the associative memory. The search
> for a match was performed only on the page number p.

And it used an inverted page table, with one associative register per core
store page frame.



> The implication from above could be that ATLAS totally swapped all

> virtual pages anytime it switched users/virtual-address space ... Not

Yes.

> attempting concurrent users/tasks. Such a implementation would also not
> need dynamic limit on concurrent executing tasks as page thrashing
> control (aka some form of working set control). If it did LRU
> replacement, there would be no difference between local & global.

Atlas did not do classic LRU replacement, although there was certainly an
LRU flavor to its method. The actual algorithm is described in the paper:

<http://www.icl1900.co.uk/techpub/atlas-r67.pdf>

See Section V.

Anne & Lynn Wheeler

Mar 13, 2011, 10:49:37 PM

Anne & Lynn Wheeler <ly...@garlic.com> writes:
> The science center initial CP40 implementation was adding associative
> virtual memory hardware to 360/40. A difference between the Atlas
> hardware implementation and the implementation for 360/40 was that the
> 360/40 implementation included a process identifier.

re:

http://www.garlic.com/~lynn/2011e.html#2 Multiple Virtual Memory

360/67 had CR0 which was the segment table pointer (or STO, segment
table origin) and eight associative array entries. on task switch, CR0
was reloaded for a different virtual address space ... which would
automatically invalidate all array entries (individual entries didn't
carry a process id).

370 moved the STO to CR1 (using CR0 for bits specifying 2kbyte or 4kbyte
pages and 64kbyte or 1mbyte segments). the 370/165 had a 7-entry "STO
stack" and a 128-entry look-aside buffer (instead of an associative
array) that was four-way associative; five bits from the virtual address
indexed one of the 32 sets of four entries ... followed by a lookup on
those four entries. Each entry had a 3-bit virtual space identifier
(mapped to the 7-entry "STO stack") and the virtual to real address
translation. Loading a new CR1 STO (switching task/virtual address space)
... would find its corresponding entry in the STO-stack and then use
that 3-bit identifier. If it wasn't already in the STO-stack ... the
hardware would choose one of the seven entries to replace and invalidate
all TLB entries with that 3-bit identifier.
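
a minimal sketch of that lookup (in C; the 32x4 geometry, the 3-bit STO
id, and the 5 index bits are straight from the above ... exactly which
five address bits the 165 used is a detail I'm not claiming):

#include <stdint.h>
#include <stdbool.h>

struct tlb_entry {
    bool     valid;
    uint8_t  sto_id;    /* 3-bit index into the 7-entry STO stack */
    uint32_t vpage;     /* virtual page number (tag) */
    uint32_t rpage;     /* real page frame */
};
static struct tlb_entry tlb[32][4];   /* 128 entries, 4-way */

/* look up a virtual page for the current address space; a miss means
   the hardware walks the segment & page tables and replaces one of
   the four entries in the set */
bool tlb_lookup(uint8_t cur_sto_id, uint32_t vpage, uint32_t *rpage)
{
    struct tlb_entry *set = tlb[vpage & 0x1f];   /* 5 index bits */
    for (int way = 0; way < 4; way++) {
        if (set[way].valid &&
            set[way].sto_id == cur_sto_id &&
            set[way].vpage == vpage) {
            *rpage = set[way].rpage;
            return true;    /* hit */
        }
    }
    return false;           /* miss */
}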

MVS design had 8mbytes for the kernel and supposedly 8mbytes for the
application ... so one of the TLB index bits was the 8mbyte bit (allowing
64 TLB entries for the kernel and 64 TLB entries for the application).
some more detail discussed here

however CMS started at low address and worked up, most applications of
the period rarely got over 1mbyte ... so at least half the 370/165 TLB
entries would typically go unused in a CMS intensive environment.

I'm giving my 7Oct1986 (European user group) SEAS "VM Performance
History" presentation next weds. at the local (user group) HILLGANG
meeting. The original was in (foildoc) script (GML) ... GML controls for
what was "large" print and the backup "small" print text (aka notes). It
would be printed on plain paper for handouts ... and then just the
"foils" section would be printed on plain paper ... the "foils" would
then be run thru a foil copier. Early foil copiers had a manual
transparency laid over the plain paper copy, then run through a machine
that heated the combination, transferring the black lettering to the
transparency. Later on, you could load a stack of transparencies in a
copier machine and make transparency copies similar to the way regular
copies were made.

In any case, I've been manually cut&pasting the GML image into powerpoint.
Current notes from a "Work as undergraduate" "foil":

Over the two years that I worked on CP/67 at WSU, I designed and
implemented numerous modifications to CP and CMS, many in the area of
performance (I was also very active in several other areas, in editors,
I modified the standard CMS editor to drive a 2250-3 for full-screen
support. I also rewrote the editor to be completely re-entrant and
imbedded it in HASP for CRJE support. I wrote the original ASCII
terminal support for CP and someplace I am blamed with being part of
the team that developed OEM control unit for IBM 360s

In the performance arena, I worked on several areas, a) generalized path
length reduction, b) fastpath - specialized paths for most frequently
encountered cases, c) control data structures that would minimize CPU
overhead, d) identifying closed CP/67 subroutines and modifying them to
use pre-allocated savearea in page 0, and changing their callers to use
BALR rather than SVC, e) improving the page replacement algorithm to use
reference bits & global LRU (rather than FIFO), f) implementing
feedback/feedforward controls in decision making. The dispatcher
changes implemented code that implicitly took advantage of which
possible virtual machines might require status updates. CPEXBLOKs were
also placed on a master chain instead of being chained off the UTABLE.
Finally an explicit in-q chain was created

... snip ...

Anne & Lynn Wheeler

Mar 13, 2011, 11:11:17 PM
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> The science center initial CP40 implementation was adding associative
> virtual memory hardware to 360/40. A difference between the Atlas
> hardware implementation and the implementation for 360/40 was that the
> 360/40 implementation included a process identifier.

re:

the previous post discussed the 360/67 associative array ... hardware
dynamic LRU replacement of one of the eight associative array entries
from the segment & page table entries in real storage. The 370/165
table-look-aside buffer was somewhat similar, but with hardware dynamic
LRU replacement of one of the four indexed entries (for a virtual
address not currently loaded in hardware).
http://www.garlic.com/~lynn/2011e.html#3 Multiple Virtual Memory

the 360/40 implementation was different ... it had one hardware
associative entry for each real page. when a virtual page was "assigned"
to a real page, the corresponding real page entry was loaded with the
virtual page number along with the process (i.e. virtual address space)
ID (making the 360/40 and Atlas hardware implementations apparently very
similar, except for the 360/40 having a process id ... as noted above).

from Melinda's history

Virtual memory on the 360/40 was achieved by placing a 64-word
associative array between the CPU address generation circuits and the
memory addressing logic. The array was activated via mode-switch logic
in the PSW and was turned off whenever a hardware interrupt occurred.

The 64 words were designed to give us a relocate mechanism for each 4K
bytes of our 256K-byte memory. Relocation was achieved by loading a user
number into the search argument register of the associative array,
turning on relocate mode, and presenting a CPU address. The match with
user number and address would result in a word selected in the
associative array. The position of the word (0-63) would yield the
high-order 6 bits of a memory address. Because of a rather loose cycle
time, this was accomplished on the 360/40 with no degradation of the
overall memory cycle. The modifications to the 360/40 would prove to be
quite successful, but it would be more than a year before they were
complete. Dick Bayles has described the process that he and Comeau and
Giesin went through in debugging the modifications.

... snip ...

360/40 modifications and cp40 was done well before availability of
standard 360/67 product with virtual memory. when 360/67 became
available, cp40 morphed into cp67 (and handling of virtual memory was
modified to correspond to 360/67 hardware).
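
a minimal sketch of the relocation scheme from the quote (in C; a
software model of what the hardware did in parallel within a single
memory cycle ... 64 associative words, one per 4K frame of the 256K
machine, with the matching word's position supplying the high-order 6
bits of the real address):

#include <stdint.h>
#include <stdbool.h>

struct assoc_word { bool valid; uint8_t user; uint8_t vpage; };
static struct assoc_word array[64];   /* one word per 4K real frame */

/* search argument is (user number, virtual page); the frame number is
   the position of the matching word, not a field stored in it */
bool relocate(uint8_t user, uint32_t vaddr, uint32_t *raddr)
{
    uint8_t vpage = (vaddr >> 12) & 0x3f;     /* 4K pages, 256K space */
    for (int frame = 0; frame < 64; frame++) {
        if (array[frame].valid &&
            array[frame].user == user &&
            array[frame].vpage == vpage) {
            *raddr = ((uint32_t)frame << 12) | (vaddr & 0xfff);
            return true;
        }
    }
    return false;   /* no match: fault, and CP40 pages it in */
}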

John Levine

Mar 13, 2011, 11:20:39 PM
>For all practical purposes, paging "not working well" == paging "not
>working" 8-)

I used TSS at Princeton in the late 1960s. The paging worked, but
the performance was awful, so it didn't work well.

IBM apparently said it'd support 50 users, but the reality was about
eight of us at a time. Fortunately the 2741 terminals were unreliable
enough that there were rarely more than 8 working at once.

R's,
John

Charles Richmond

Mar 13, 2011, 11:34:32 PM

Right, 50 users... six shifts taken approximately 8 at a time... ;-)


--
+----------------------------------------+
| Charles and Francis Richmond |
| |
| plano dot net at aquaporin4 dot com |
+----------------------------------------+

Anne & Lynn Wheeler

Mar 13, 2011, 11:51:47 PM
John Levine <jo...@iecc.com> writes:
> I used TSS at Princeton in the late 1960s. The paging worked, but
> the performance was awful, so it didn't work well.
>
> IBM apparently said it'd support 50 users, but the reality was about
> eight of us at a time. Fortunately the 2741 terminals were unreliable
> enough that there were rarely more than 8 working at once.

re:

http://www.garlic.com/~lynn/2011e.html#1 Multiple Virtual Memory
http://www.garlic.com/~lynn/2011e.html#3 Multiple Virtual Memory
http://www.garlic.com/~lynn/2011e.html#4 Multiple Virtual Memory

The SE at the univ. was doing a lot of TSS testing at the same time I was
doing cp67 ... so we had to share & trade off the 360/67 on weekends.

We did a synthetic script for fortran program edit, compile and execute
with wait times inserted between emulated terminal operations.

I had better thruput and response for 30 CMS users running the script
than he did with 4 TSS users running the (essentially same) script ...
this was early in spring '68, before I had done hardly any improvements
to cp67.

There was huge bloat in the TSS/360 pathlength, there was lots of stuff
that was being done by rote w/o really good understanding of why, the
fixed kernel real storage size had bloated, and the single-level-store
stuff had gone really crazy (if you have 16mbytes to do anything, try to
use it all ... regardless of what you do).

tss/360 was originally supposed to run on a 512kbyte 360/67 ... but
kernel bloat required that the 360/67s be upgraded to 768kbyte (minimum).
A 360/67 single processor was effectively the same as a 360/65 and could
only get 1mbyte max ... but a pair of 360/67s in a multiprocessor would
have 2mbyte max.

one of the major tss players tried to gloss over a huge problem by
claiming that a two processor, 2mbyte tss system having 3.8 times the
throughput of a one processor, 1mbyte tss system ... was because tss had
the best multiprocessor support on the planet.

The real problem was that tss/360 bloat was so bad it pretty much page
thrashed in a 1mbyte machine (regardless of what you did). A 2mbyte
system finally had enough storage (after fixed kernel requirements) to
actually get some work done (the 3.8 times improvement wasn't because the
multiprocessor support was the best on the planet ... 3.8 times on twice
the hardware ... it was because it was no longer page thrashing ... but
even at 3.8 times, it was still much worse than cp67).

Note that while 2mbytes eliminated the worst of the page constrained
operation ... the pathlength bloat was still enormous (compared to
cp67/cms) and every application on tss tended to have several times the
working set of the cms equivalent (on a 2mbyte machine, while it was no
longer page thrashing to death ... it still required a very large number
of page operations to get anything done).

Much later in the tss/370 days ... there was significant work done on
tss pathlength bloat ... while vm370 pathlength bloat was increasing.
shows up in some of this '80s comparison I did ... included in this
past post
http://www.garlic.com/~lynn/2001m.html#53 TSS/360

other posts mentioning above TSS analysis
http://www.garlic.com/~lynn/2001n.html#18 Call for folklore - was Re: So it's cyclical.
http://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
http://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
http://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
http://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
http://www.garlic.com/~lynn/2002e.html#47 Multics_Security
http://www.garlic.com/~lynn/2002f.html#42 Blade architectures
http://www.garlic.com/~lynn/2002l.html#14 Z/OS--anything new?
http://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
http://www.garlic.com/~lynn/2002o.html#15 Home mainframes
http://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
http://www.garlic.com/~lynn/2002p.html#44 Linux paging
http://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
http://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
http://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
http://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
http://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
http://www.garlic.com/~lynn/2004c.html#9 TSS/370 binary distribution now available
http://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
http://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
http://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
http://www.garlic.com/~lynn/2007c.html#23 How many 36-bit Unix ports in the old days?
http://www.garlic.com/~lynn/2007f.html#18 What to do with extra storage on new z9
http://www.garlic.com/~lynn/2007g.html#72 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007k.html#46 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007p.html#65 Translation of IBM Basic Assembler to C?
http://www.garlic.com/~lynn/2008m.html#63 CHROME and WEB apps on Mainframe?
http://www.garlic.com/~lynn/2009f.html#37 System/360 Announcement (7Apr64)
http://www.garlic.com/~lynn/2009l.html#55 IBM halves mainframe Linux engine prices
http://www.garlic.com/~lynn/2010.html#86 locate mode, was Happy DEC-10 Day
http://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
http://www.garlic.com/~lynn/2010k.html#51 Information on obscure text editors wanted
http://www.garlic.com/~lynn/2011.html#20 IBM Future System
http://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons
http://www.garlic.com/~lynn/2011b.html#78 If IBM Hadn't Bet the Company

Charlie Gibbs

unread,
Mar 14, 2011, 1:49:42 AM3/14/11
to
In article <m3fwqqh...@garlic.com>, ly...@garlic.com (Anne & Lynn Wheeler) writes:

> The dustup over stanford PHD thesis for global LRU in the early 80s ...
> was because ... at least the local vis-a-vis global page replacement
> issue was still going on nearly 20yrs after Atlas (although I had done
> my work less than decade after Atlas).

I could see global LRU being opposed on general principles
by any self-respecting Structured Programming zealot,
to whom "global" is almost as bad a profanity as GOTO.

--
/~\ cgi...@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!

Dave Wade

unread,
Mar 14, 2011, 5:04:43 AM3/14/11
to

"John Levine" <jo...@iecc.com> wrote in message
news:ilk1i7$316h$1...@gal.iecc.com...

I used MTS on the 360/67 at Newcastle University and when we started to page
it slowed. Demand Paging as a way to extend real store still gives issues
today. The paging rate is the first counter you look at on a sad Windows
server. One thing that made it worse in the 60's was the fact that Fortran
stores its arrays in such a way that varying the last subscript fastest tends
to access multiple storage pages, and as that's the natural way to write
code, much naive fortran sucked in performance terms.


> R's,
> John

Joe Morris

unread,
Mar 14, 2011, 6:37:54 AM3/14/11
to
"John Levine" <jo...@iecc.com> wrote:

> IBM apparently said it'd support 50 users, but the reality was about
> eight of us at a time. Fortunately the 2741 terminals were unreliable
> enough that there were rarely more than 8 working at once.

Interesting comment on the 2741s...at my shop (running CALL and ATS, not
TSS) the only ongoing problem we had was the seeming inability of students
to keep their cotton-pickin' fingers off the typeball. I don't recall the
replacement frequency, but we were never short of typeballs with a chipped
tooth.

Funny coincidence...last week I was cleaning up at home and ran across a
couple of typeballs, still sealed in their original shrink-wrap packaging; I
had grabbed them from the wastebasket at my PPOE when the last of the 2741s
were decommissioned.

Joe Morris


Anne & Lynn Wheeler

unread,
Mar 14, 2011, 11:55:25 AM3/14/11
to
"Dave Wade" <dave....@gmail.com> writes:
> I used MTS on the 360/67 at Newcastle University and when we started
> to page it slowed. Demand Paging as a way to extend real store still
> gives issues today. The paging rate is the first counter you look at
> on a sad Windows server. One thing that made it worse in the 60's was
> the fact that Fortran stores its arrays in such a way that varying the
> last subscript fastest tends to access multiple storage pages, and as
> that's the natural way to write code, much naive fortran sucked in
> performance terms.


processor caches are now larger than 360/67 storage/memory.

note that one of the previous references mentioned that for some number
of recent windows releases ... they were doing a FIFO replacement
algorithm

recent mention of rewriting "routes" for major airline res system
http://www.garlic.com/~lynn/2011c.html#42 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011d.html#43 Sabre; The First Online Reservation System

and getting a 100 times performance improvement. the secret was compressing
information for all flt segments for all commercial scheduled flts in
the world (everything on the OAG tape) and all commercial airports ... so
that it could be memory resident. Initial implementation was only 20
times faster. I then re-org'ed the operation so that it was aligned with
the processor cache operation and got another factor of five improvement
(for 100 times overall).
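
purely illustrative C sketch (not the actual "routes" code; the record
layout and field names are invented) ... but it shows the kind of
cache-line alignment that buys the extra speedup: keep each compressed
flt-segment record inside a single cache line so a lookup touches one
line instead of straddling two:

#include <stdalign.h>
#include <stdio.h>

struct segment {                      /* invented compressed flt-segment  */
    alignas(64) unsigned short from;  /* origin airport index             */
    unsigned short to;                /* destination airport index        */
    unsigned short flight;            /* flight number                    */
    unsigned char  days;              /* day-of-week mask                 */
    unsigned char  pad[57];           /* fill out the 64-byte cache line  */
};

int main(void) {
    printf("record size %zu bytes (one cache line: %s)\n",
           sizeof(struct segment),
           sizeof(struct segment) == 64 ? "yes" : "no");
    return 0;
}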

the science center had ported apl\360 to cms for cms\apl ... eliminating
all the code in apl\360 for doing its own terminal support,
multi-tasking, swapping, etc. However, the base apl\360 environment used
16kbyte (sometimes 32kbyte) workspaces that were swapped as a single unit.
apl\360 workspace storage management involved allocating a new storage
location on every assignment ... and when the workspace was exhausted,
doing garbage collection to compress all allocated storage to a contiguous
area and then starting all over. this worked reasonably well in a swapping
environment, but was a disaster in a larger virtual memory, demand paged
environment. every apl application was just about guaranteed to quickly
touch nearly every page in the virtual address space repeatedly over a
small period of time (in effect, the working set size always became the
same as the virtual memory size, regardless of the size of the apl
application). so one of the other things that had to be done was adapt
cms\apl storage operation to the large virtual memory, demand paged
environment.
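
a toy C model (not the apl\360 code; sizes and the "live set" handling
are invented) ... showing why allocate-on-every-assignment plus
compacting garbage collection makes the working set equal to the
workspace size under demand paging:

#include <stdio.h>
#include <string.h>

#define WS_SIZE (16 * 1024)        /* one 16kbyte workspace               */
#define PAGE    4096

static char workspace[WS_SIZE];
static size_t next_free = 0;       /* bump pointer                        */

/* every assignment allocates a fresh cell; the old value becomes garbage */
static void *assign(size_t n) {
    if (next_free + n > WS_SIZE) {
        /* workspace exhausted: compact live storage to the front and
         * start over -- the sweep touches every page of the workspace
         * (in this toy the "live set" is just the most recent cell)      */
        memmove(workspace, workspace + next_free - n, n);
        next_free = n;
        printf("GC swept %d pages\n", (int)(WS_SIZE / PAGE));
    }
    void *p = workspace + next_free;
    next_free += n;
    return p;
}

int main(void) {
    for (int i = 0; i < 200; i++)
        memset(assign(256), i, 256);   /* repeated assignments force GCs  */
    return 0;
}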

science center had a bunch of virtual memory monitoring tools, as well
as virtual memory modeling tools and simulators (i.e. take real traces and
run them thru simulators testing a large variety of different algorithms).
one of the tools took an application I&D memory trace ... along with the
application load map ... and did semi-automatic program re-organization
for optimal performance in a demand page virtual memory environment. Part
of this tool included display & analysis of the I&D memory trace ... and
it was used in the cms\apl storage rework.

the tool was later released in the mid-70s as a customer product called
VS/Repack (for its semi-automated program reorganization). It was also
used internally by major os/360 compilers, applications, subsystems, and
dbms for their migration from the real storage/memory environment to the
370 virtual memory environment.
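
hedged C sketch of the kind of reordering VS/Repack automated ... the
real tool worked from I&D storage traces plus the load map; here the
trace, routine sizes, and the greedy first-touch placement are all
invented for illustration:

#include <stdio.h>

#define NROUT 8
#define PAGE  4096

/* invented routine sizes and an invented reference trace of routine ids */
static int size[NROUT] = {900, 700, 1200, 500, 800, 600, 1000, 400};
static int trace[]     = {0, 3, 0, 3, 5, 1, 5, 1, 2, 6, 2, 6, 4, 7};

int main(void) {
    int ntrace = sizeof trace / sizeof trace[0];
    int placed[NROUT] = {0}, order[NROUT], n = 0;
    /* walk the trace in time order: first touch decides placement order,
     * so routines referenced close together land adjacent (and tend to
     * share a 4k page instead of being scattered across many pages)      */
    for (int i = 0; i < ntrace; i++)
        if (!placed[trace[i]]) {
            placed[trace[i]] = 1;
            order[n++] = trace[i];
        }
    for (int i = 0, addr = 0; i < n; addr += size[order[i]], i++)
        printf("routine %d at offset %5d (page %d)\n",
               order[i], addr, addr / PAGE);
    return 0;
}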

past posts in this thread:

http://www.garlic.com/~lynn/2011e.html#5 Multiple Virtual Memory

misc. past posts mentioning vs/repack:
http://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
http://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
http://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
http://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
http://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
http://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
http://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
http://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
http://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
http://www.garlic.com/~lynn/2002f.html#50 Blade architectures
http://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
http://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
http://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
http://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
http://www.garlic.com/~lynn/2004.html#14 Holee shit! 30 years ago!
http://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
http://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
http://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
http://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
http://www.garlic.com/~lynn/2005.html#4 Athlon cache question
http://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
http://www.garlic.com/~lynn/2005d.html#48 Secure design
http://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
http://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
http://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
http://www.garlic.com/~lynn/2005o.html#5 Code density and performance?
http://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
http://www.garlic.com/~lynn/2006b.html#23 Seeking Info on XDS Sigma 7 APL
http://www.garlic.com/~lynn/2006e.html#20 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
http://www.garlic.com/~lynn/2006i.html#37 virtual memory
http://www.garlic.com/~lynn/2006j.html#18 virtual memory
http://www.garlic.com/~lynn/2006j.html#22 virtual memory
http://www.garlic.com/~lynn/2006j.html#24 virtual memory
http://www.garlic.com/~lynn/2006l.html#11 virtual memory
http://www.garlic.com/~lynn/2006o.html#23 Strobe equivalents
http://www.garlic.com/~lynn/2006o.html#26 Cache-Size vs Performance
http://www.garlic.com/~lynn/2006r.html#12 Trying to design low level hard disk manipulation program
http://www.garlic.com/~lynn/2006x.html#1 IBM sues maker of Intel-based Mainframe clones
http://www.garlic.com/~lynn/2006x.html#16 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2007g.html#31 Wylbur and Paging
http://www.garlic.com/~lynn/2007m.html#55 Capacity and Relational Database
http://www.garlic.com/~lynn/2007o.html#53 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#57 ACP/TPF
http://www.garlic.com/~lynn/2007s.html#41 Age of IBM VM
http://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
http://www.garlic.com/~lynn/2008c.html#78 CPU time differences for the same job
http://www.garlic.com/~lynn/2008d.html#35 Interesting Mainframe Article: 5 Myths Exposed
http://www.garlic.com/~lynn/2008e.html#16 Kernels
http://www.garlic.com/~lynn/2008f.html#36 Object-relational impedence
http://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us
http://www.garlic.com/~lynn/2008m.html#69 Speculation ONLY
http://www.garlic.com/~lynn/2008q.html#65 APL
http://www.garlic.com/~lynn/2010j.html#48 Knuth Got It Wrong
http://www.garlic.com/~lynn/2010j.html#81 Percentage of code executed that is user written was Re: Delete all members of a PDS that is allocated
http://www.garlic.com/~lynn/2010k.html#8 Idiotic programming style edicts
http://www.garlic.com/~lynn/2010k.html#9 Idiotic programming style edicts
http://www.garlic.com/~lynn/2010m.html#5 Memory v. Storage: What's in a Name?

Anne & Lynn Wheeler

unread,
Mar 14, 2011, 4:18:01 PM3/14/11
to
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> The science center initial CP40 implementation was adding associative
> virtual memory hardware to 360/40. A difference between the Atlas
> hardware implementation and the implementation for 360/40 was that the
> 360/40 implementation included a process identifier.

re:


the virtual memory hardware for the 360/40 ... at least appeared to add a
"process identifier" for its "inverted table" implementation
... allowing for multiple concurrent executing tasks/users (aka multiple
different virtual address spaces) ... separate from the operational
issues of dealing with multiple concurrent executing tasks/users ... like
choice of replacement algorithm and mechanism for controlling page
thrashing (caused by contention from excessive/high multi-task/user
levels).

http://www.garlic.com/~lynn/2011e.html#5 Multiple Virtual Memory
http://www.garlic.com/~lynn/2011e.html#8 Multiple Virtual Memory

one of the downsides in the single-level-store, demand page paradigm
... are large commercial applications that sequentially process large
amounts of data. In the file paradigm ... it is relatively
straight-forward to do large block asynchronous double buffering (both
read ahead and write behind) ... allowing overlap between processing and
transfers as well as larger/more efficient transfers (also processed data
is quickly discarded by being overlaid with subsequent i/o operations).
In the single-level-store, demand paging paradigm ... this processing
slows down significantly with synchronous operation, one page read at a
time, and data that is finished with tends to linger around. To boost
single-level-store, demand page performance for this type of operation
requires some sort of application operational hints ... and/or
sophisticated system heuristics to recognize things like sequential
processing.
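
a minimal modern analog in C (assuming a POSIX system; error handling
elided; file name taken from the command line) ... madvise with
MADV_SEQUENTIAL standing in for the "application operational hints":

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv) {
    int fd = open(argv[1], O_RDONLY);        /* error handling elided     */
    struct stat st;
    fstat(fd, &st);
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    /* the hint: without it each touch is a synchronous one-page fault;
     * with it the kernel can do the large asynchronous read-ahead (and
     * discard-behind) that a file-paradigm application codes explicitly */
    madvise(p, st.st_size, MADV_SEQUENTIAL);
    long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];                         /* sequential scan           */
    printf("%ld\n", sum);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}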

for other topic drift ... later 801/risc inverted tables used
segment/pto associative (instead of process/address space associative).

i've frequently contended that a lot of 801 features were hardware
simplification trade-offs ... the opposite of what had been attempted
in the failed "Future System" effort.

Part of this was "inverted tables" ... but with "segment associative"
instead of process/address-space associative ... i.e. there were 16
"segment registers" ... each containing a "segment id". 801, with a
32bit virtual address, used the high four bits of the virtual
address to select a "segment id" from the corresponding segment register.
The remaining page number bits (4k pages, so 32-12-4 ... 16bits) would be
combined with the "segment id" (12 bits on ROMP, used in the pc/rt) to
find the corresponding real page. This allowed a process to have some
number of process-specific, "private" segments ... but also some number
of "shared" segments ... where all processes that shared the same
segments would use the same "segment-id".
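
hedged C sketch of the translation just described ... the inverted-table
search is faked with a function and the frame numbers are invented; it
just shows the bit-slicing (4-bit segment register select, 12-bit
segment id, 16-bit page number, 12-bit offset):

#include <stdint.h>
#include <stdio.h>

static uint16_t seg_reg[16];                  /* 12-bit segment ids       */

/* stand-in for the inverted-table search done on a tlb miss */
static uint32_t inverted_lookup(uint32_t segid, uint32_t page) {
    return (segid ^ page) & 0xFFFF;           /* invented real frame      */
}

static uint32_t translate(uint32_t vaddr) {
    uint32_t seg    = vaddr >> 28;            /* high 4 bits pick a reg   */
    uint32_t page   = (vaddr >> 12) & 0xFFFF; /* 16-bit page number       */
    uint32_t offset = vaddr & 0xFFF;          /* 4k page offset           */
    uint32_t segid  = seg_reg[seg] & 0xFFF;   /* 12 bits on ROMP          */
    return (inverted_lookup(segid, page) << 12) | offset;
}

int main(void) {
    seg_reg[1] = 0x123;  /* processes sharing a segment load the same id  */
    printf("%08x\n", translate(0x10005abcU));
    return 0;
}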

In the 70s, I was doing all this stuff with paged-mapped and "segment
sharing" on 360/370 (first with cp67 and then with vm370). In the late
70s, i got into a small dustup with the 801 people over their relatively
small number of segments. Setting up shared objects in a virtual address
space required privileged kernel calls to validate permissions ... so
there tended to be some trend toward relatively long-lived sharing to
amortize the cost of the kernel call ... but there also tended to be a
relatively large number of possible shared things ... so there needed to
be smaller granularity and more of them (24bit 370 addressing could have
256 64kbyte shared segments).
http://www.garlic.com/~lynn/submain.html#mmap
and
http://www.garlic.com/~lynn/submain.html#adcon

The 801/risc counter was that it was designed to do all privilege
validation in the compiler ... and the loader would guarantee only
"correct" programs were loaded for execution ... thus the
hardware/system needed no protection feature ... and applications could
have inline switching of segment values ... as easy as changing an
address pointer in general purpose registers (no kernel call overhead to
amortize; changing part of the address space being accessed was as easy
as changing address/pointer values).

This sort of fell apart when the 801/romp displaywriter follow-on was
killed and it was decided to retarget to the unix workstation market.
Running unix required a hardware protection paradigm (between kernel and
application) for permission enforcement. It then also lost the ability to
do inline application switching of segment-id values ... and required
kernel calls and permission validation. So one of the things investigated
for the unix market was how to aggregate lots of small (shared) objects
into a much larger (shared) application library ... that was a better fit
with the (relatively small number of) 256mbyte segments.

misc. past email mentioning 801/risc ... including some referring to
methodology for shared object packing.
http://www.garlic.com/~lynn/lhwemail.html#801

Anne & Lynn Wheeler

unread,
Mar 14, 2011, 5:26:52 PM3/14/11
to
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> for other topic drift ... later 801/risc inverted tables used
> segment/pto associative (instead of process/address space associative).
>
> i've frequently contended that a lot of 801 features were hardware
> simplification trade-offs to be the opposite of what had been attempted
> in failed "Future System" effort.

re:
http://www.garlic.com/~lynn/2011e.html#10 Multiple Virtual Memory

trying to work out packing "shared objects" for 801/romp:

Date: 11/16/84 07:22:14
From: wheeler

re: relocate;

what I'm looking for is a data flow of which lines go where out of the
virtual address, thru the segment regs, thru the tlb, and off to the
cache (along with the size & "how many associative" for the tlb and
the cache). With that information I can verify that the design will
work in the hardware. I also need a detailed description of how the
tlb miss hardware will search the inverted table ... to make sure the
software can do its job.

I think we have it all worked out ... but we need detailed specs. to
verify what we have will work.

... snip ...

Date: 11/27/84 06:24:44
From: wheeler

i got xxxxx to explain the inverted table to me yesterday. Will have
replacement for RMP001 sometime later today ... with possibly some of
the software area discussed.

... snip ...

"xxxxx" was "father" of 801/risc

"RMP001" document that I was doing that included "processor cluster"

email from couple days earlier (14nov) ("romp small shared segments")
http://www.garlic.com/~lynn/2006y.html#email841114c
and another followup from the 27th
http://www.garlic.com/~lynn/2006y.html#email841127

from this old post
http://www.garlic.com/~lynn/2006y.html#36 Multiple mappings

old post referencing "processor cluster" part:
http://www.garlic.com/~lynn/2004m.html#17 mainframe and microprocessor

recent references to "processor clusters"
http://www.garlic.com/~lynn/2011b.html#48 Speed of Old Hard Disks
http://www.garlic.com/~lynn/2011b.html#50 Speed of Old Hard Disks
http://www.garlic.com/~lynn/2011b.html#55 Speed of Old Hard Disks
http://www.garlic.com/~lynn/2011c.html#6 Other early NSFNET backbone
http://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011c.html#20 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011c.html#27 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011c.html#38 IBM "Watson" computer and Jeopardy
http://www.garlic.com/~lynn/2011c.html#54 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011c.html#69 The first personal computer (PC)
http://www.garlic.com/~lynn/2011d.html#7 IBM Watson's Ancestors: A Look at Supercomputers of the Past
http://www.garlic.com/~lynn/2011d.html#24 IBM Watson's Ancestors: A Look at Supercomputers of the Past
http://www.garlic.com/~lynn/2011d.html#65 IBM100 - Rise of the Internet

Quadibloc

unread,
Mar 15, 2011, 4:55:14 AM3/15/11
to
On Mar 12, 7:27 pm, Mike Hore <mike_hore...@OVE.invalid.aapt.net.au>
wrote:

> Ummm, I think it goes a loooong way further back - IBM had a tradition
> of avoiding words that made computers sound in any way human.  I just
> checked the 701 manual (from Bitsavers), dated 1953, and they refer to
> "electrostatic storage".  Likewise the 704 manual talks about "core
> storage".  Memory has always been "storage" in IBM-speak.

Yes, indeed. And, thus, they long avoided the word "computer". Thus,
one sees the IBM 704 Electronic Data Processing Machine.

John Savard

Anne & Lynn Wheeler

unread,
Mar 15, 2011, 10:32:43 AM3/15/11
to
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> one of the downsides in the single-level-store, demand page paradigm
> ... are large commercial applications that sequentially process large
> amounts of data. In the file paradigm ... it is relatively
> straight-forward to do large block asynchronous double buffering (both
> read ahead and write behind) ... allowing overlap between processing and
> transfers as well as larger/more efficient transfers (also processed
> data is quickly discarded by being overlaid with subsequent i/o
> operations). In the single-level-store, demand paging paradigm ... this
> processing slows down significantly with synchronous operation, one page
> read at a time, and data that is finished with tends to linger around.
> To boost single-level-store, demand page performance for this type of
> operation requires some sort of application operational hints ... and/or
> sophisticated system heuristics to recognize things like sequential
> processing.

re:

so I had done quite a bit of that for the cms paged-mapped filesystem ...
which contained two parts ... the stuff in cms ... and other stuff in
the cp kernel that provided the interface to the low-level paging
subsystem.

now the internal network (larger than arpanet/internet from just about
the beginning until late '85 or early '86) was primarily vm rscs/vnet.
the rscs/vnet implementation leveraged the cp "spool" filesystem ... which
underneath mapped into low-level paging 4k block transfers. part of the
character of the spool operation was "synchronous" 4k/page
transfers ... thruput characteristics very akin to synchronous
demand-page page faults (rscs/vnet non-runnable during block transfers).
This would limit typical RSCS/VNET to an aggregate thruput of around
30kbytes (5-8 4k pages) per second (say 300kbits). In the days of 9.6kbit
links it wasn't a significant issue.

however, for the hsdt stuff i was doing
http://www.garlic.com/~lynn/subnetwork.html#hsdt

single full-duplex T1 was 300kbit aggregate ... with multiple and faster
links ... needed closer to multi-mbyte thruput.

so looking at nsfnet backbone type stuff for multi-mbyte thruput
http://www.garlic.com/~lynn/subnetwork.html#nsfnet

but for vnet/rscs to support such aggregate thruput ... the backend
spool bottleneck had to be "fixed" ... basically provide vnet/rscs with a
"spool" paradigm that operated more like the cms paged-mapped filesystem
... allowing asynchronous operation, multi-page transfers, contiguous
block allocation, read-aheads, and write-behinds.
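
hedged C sketch of that fix, using POSIX AIO as a stand-in for the CP
paging interface ... double-buffered, multi-page, asynchronous read-ahead
so processing overlaps the transfer (error handling and the actual
processing elided; file name taken from the command line):

#include <aio.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define BLK (16 * 4096)                    /* multi-page transfer unit    */

int main(int argc, char **argv) {
    int fd = open(argv[1], O_RDONLY);      /* error handling elided       */
    static char buf[2][BLK];
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf[0];
    cb.aio_nbytes = BLK;
    aio_read(&cb);                         /* prime the first buffer      */
    off_t off = BLK;
    for (int cur = 0;; cur ^= 1) {
        while (aio_error(&cb) == EINPROGRESS)
            ;                              /* spin kept for brevity       */
        ssize_t n = aio_return(&cb);
        if (n <= 0)
            break;
        memset(&cb, 0, sizeof cb);         /* start the next read-ahead   */
        cb.aio_fildes = fd;
        cb.aio_buf    = buf[cur ^ 1];
        cb.aio_nbytes = BLK;
        cb.aio_offset = off;
        aio_read(&cb);
        off += BLK;
        /* ... process buf[cur] here, overlapped with the transfer ...    */
    }
    close(fd);
    return 0;
}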

While I could deploy on HSDT backbone nodes ... i then tried to get the
changes deployed on the corporate RSCS/VNET backbone ... to really break
free the rest of corporate network ... old email
http://www.garlic.com/~lynn/2011.html#email870306
in this (linkedin) Greater IBM discussion/post
http://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

as referenced in the above, there were enormous forces in-play narrowly
focused on converting internal network to SNA (some of it was
mis-information to the executive board ... like "PROFS was a VTAM
application" ... in the same time-frame there was also a bunch of
mis-information about the applicability of SNA for the NSFNET backbone).

other posts in the above thread:
http://www.garlic.com/~lynn/2011.html#1 Is email dead? What do you think?
http://www.garlic.com/~lynn/2011.html#5 Is email dead? What do you think?
http://www.garlic.com/~lynn/2011d.html#2 Is email dead? What do you think?
http://www.garlic.com/~lynn/2011d.html#4 Is email dead? What do you think?
http://www.garlic.com/~lynn/2011d.html#5 Is email dead? What do you think?
http://www.garlic.com/~lynn/2011d.html#41 Is email dead? What do you think?

old email mentioning internal network
http://www.garlic.com/~lynn/lhwemail.html#vnet

old email mentioning nsfnet backbone
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

for other drift ... (linkedin) Greater IBM thread on NSFNET backbone:
http://www.garlic.com/~lynn/2011d.html#65 IBM100 - Rise of the Internet
http://www.garlic.com/~lynn/2011d.html#66 IBM100 - Rise of the Internet
http://www.garlic.com/~lynn/2011e.html#6 IBM100 - Rise of the Internet

Anne & Lynn Wheeler

unread,
Mar 15, 2011, 11:34:54 AM3/15/11
to
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> single full-duplex T1 was 300kbit aggregate ... with multiple and faster
> links ... needed closer to multi-mbyte thruput.

re:
http://www.garlic.com/~lynn/2011e.html#12 Multiple Virtual Memory

finger-slip/typo ... T1 is 1.5mbit, full-duplex 3mbit, 300kbyte
aggregate ... ten times the typical RSCS/VNET thruput ... and I needed
at least ten times that ... 100 times typical RSCS/VNET thruput from the
spool file system ... discussed in (linkedin) Greater IBM post
http://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

--

Anne & Lynn Wheeler

unread,
Mar 15, 2011, 7:44:35 PM3/15/11
to

Anne & Lynn Wheeler <ly...@garlic.com> writes:
> While I could deploy on HSDT backbone nodes ... i then tried to get the
> changes deployed on the corporate RSCS/VNET backbone ... to really break
> free the rest of corporate network ... old email
> http://www.garlic.com/~lynn/2011.html#email870306
> in this (linkedin) Greater IBM discussion/post
> http://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

re:

http://www.garlic.com/~lynn/2011e.html#13 Multiple Virtual Memory

part of the (linkedin) Greater IBM thread (on the internal VNET/RSCS
network)
http://www.garlic.com/~lynn/2011d.html#2 Is email dead? What do you think?

reference from ibm jargon in above

notwork - n. VNET (q.v.), when failing to deliver. Heavily used in 1988,
when VNET was converted from the old but trusty RSCS software to the new
strategic solution. To be fair, this did result in a sleeker, faster
VNET in the end, but at a considerable cost in material and in human
terms. nyetwork, slugnet
slugnet - n. VNET (q.v.) on a slow day. Some say on a fast day, and
especially in 1988. notwork, nyetwork

... snip ...

there were several comments that eventually the SNA conversion was a
"better" network ... however, they put in enormous additional resources.

my counter (in the thread) was that it would have been significantly
more cost-effective and efficient to have converted the RSCS/VNET links
to TCP/IP (rather than to SNA; the conversion to SNA was a pointless effort
... despite the enormous mis-information to the contrary).

The base mainframe tcp/ip support may have had some performance issues
... but in that time-frame ... I had done RFC1044 support in the mainframe
product, and in some tuning tests at Cray Research ... was getting
channel media mbyte/sec thruput between a 4341 and a Cray, using only a
modest amount of the 4341 (possibly 500 times improvement in the ratio of
cpu instructions executed per byte transmitted). misc. past post mentioning
RFC1044 support:
http://www.garlic.com/~lynn/subnetwork.html#1044

other past post mentioning internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

Quadibloc

unread,
Mar 18, 2011, 10:20:20 AM3/18/11
to
On my old Windows ME system, I couldn't get Cygwin to work, so I had
given up on Hercules there. Now, there's Hercules 3.07 which doesn't
require Cygwin.

If it's installed in the /mvs38j/hercules directory, it seems to
start, but I don't get very far. My attempts to use wc3270, which also
is needed to avoid Cygwin, don't seem to be getting anywhere.

John Savard

Anne & Lynn Wheeler

unread,
Mar 18, 2011, 12:07:34 PM3/18/11
to

re:

from long ago and far away, earlier this week I (re-)gave a user group
oct86 presentation on vm performance history (with a couple additional
background history comments ... many of which could be familiar from old
email I've posted here over the past several years). the original was in
gml "foildoc" ... converted to powerpoint (actually open office impress
but saved in ".ppt" format). posted here is "pdf" export (both the
"overheads" version followed by the "notes" version)
http://www.garlic.com/~lynn/hill0316g.pdf

I even figured out how to do a green-bar, continuous-feed paper
background.

some foildoc trivia:

:foildoc size=20.
.* Ignore blank lines
.dm bl /.*
.dm lb /.*

.ms on

.if &LL@FoIl = FOIL .go fld1
.cm * if foil tags not supported, default to standard gml
:gdoc
.if &@pass = 2 .go fld2
.ty *** Using default, foil tags not supported
.dm xfoil /.pa /
.aa foil xfoil
.go fld2

...fld1
.df stxt type ('gothic' 9 medium normal) codepage t1d0base
.dm stxt(1) /.bf stxt = /
.dm etxt(1) /.pf /

...fld2
.dm stxt(4) /.fo on/
.dm etxt(4) /.cm * /

:titlep.
:title stitle='VM Performance History (86.10 SEAS)'
:title.VM Performance History
:title.86.10 SEAS
:docnum.VMP.DD.003
:date.Oct. 7, 1986
:author.Lynn Wheeler
:address.
:aline.K83/801
:aline.IBM Almaden Research
:aline.1-408-927-2680
:aline.Wheeler@ALMVMA
:eaddress.
:etitlep.

... snip ...

foildoc script (announce and description)

:frontm.
:titlep.
:title.GML for Foils
:date.August 24, 1984
:author.GMB
:author.MER
:author.RPO
:author.MHK
:address.
:aline.T.J. Watson Research Center
:aline.P.O. Box 218
:aline.Yorktown Heights, New York
:aline.&rbl.
:aline.San Jose Research Lab
:aline.5600 Cottle Road
:aline.San Jose, California
:eaddress.
:etitlep.

... snip ...

from ibm jargon:

foil - n. Viewgraph, transparency, viewfoil - a thin sheet or leaf of
transparent plastic material used for overhead projection of
illustrations (visual aids). Only the term Foil is widely used in
IBM. It is the most popular of the three presentation media (slides,
foils, and flipcharts) except at Corporate HQ, where even in the 1980s
flipcharts are favoured. In Poughkeepsie, social status is gained by
owning one of the new, very compact, and very expensive foil projectors
that make it easier to hold meetings almost anywhere and at any
time. The origins of this word have been obscured by the use of lower
case. The original usage was FOIL which, of course, was an
acronym. Further research has discovered that the acronym originally
stood for Foil Over Incandescent Light. This therefore seems to be IBM's
first attempt at a recursive language.

... snip ...

there was folklore that there was a full-time department in armonk that
specialized in turning presentations into flipcharts (flipcharts were
the required form for presentations to armonk executives).

Anne & Lynn Wheeler

unread,
Mar 19, 2011, 11:45:16 PM3/19/11
to

Anne & Lynn Wheeler <ly...@garlic.com> writes:
> from long ago and far away, earlier this week I (re-)gave a user group
> oct86 presentation on vm performance history (with a couple additional
> background history comments ... many of which could be familiar from old
> email I've posted here over the past several years). the original was in
> gml "foildoc" ... converted to powerpoint (actually open office impress
> but saved in ".ppt" format). posted here is "pdf" export (both the
> "overheads" version followed by the "notes" version)
> http://www.garlic.com/~lynn/hill0316g.pdf

re:
http://www.garlic.com/~lynn/2011e.html#20 Multiple Virtual Memory

while putting together the presentation ... I was also doing hsdt/nsf stuff

Date: 09/15/86 11:59:48
From: wheeler
To: somebody in paris

re: hsdt; another round of meetings with head of the national science
foundation ... funding of $20m for HSDT as experimental communications
(although it would be used to support inter-super-computer center
links). NSF says they will go ahead and fund. They will attempt to
work with DOE and turn this into federal government inter-agency
network standard (and get the extra funding).

... snip ...

the above was before some of the internal politics really took hold
(shutting down stuff with outside organizations). other past email
mentioning stuff for NSF using HSDT
http://www.garlic.com/~lynn/lhwemail.html#nsfnet
and
http://www.garlic.com/~lynn/lhwemail.html#hsdt

some related discussion in this (linkedin) Greater IBM thread:
http://www.garlic.com/~lynn/2011d.html#65 IBM100 - Rise of the Internet
http://www.garlic.com/~lynn/2011d.html#66 IBM100 - Rise of the Internet
http://www.garlic.com/~lynn/2011e.html#6 IBM100 - Rise of the Internet

the (above/later) "NSFNET backbone" rfp was for $11.2m. Announce
28Mar86:
http://www.garlic.com/~lynn//2002k.html#12

from the posted program announcement:

A total of up to 40 awards are planned for the two years 1986 and
1987. Support for this program is contingent on the availability of
funds. This announcement does not obligate the NSF to make any awards
if such funding is not available.

... snip ...

also from above:

NSFnet will be built as an Internet, or "network of networks", rather
than as a separate, new network.

... snip ...

the final (major) award didn't exactly turn out like the original
program announcement ... and there was never any funding allowed for
HSDT. misc. past post mentioning HSDT
http://www.garlic.com/~lynn/subnetwork.html#hsdt

besides still doing vm performance stuff ... and the presentation on vm
performance history (the user group SEAS meeting was held on the isle of
Jersey), the same presentation was also scheduled for a VM specialist
meeting in the UK (the following is from 2oct86)

Date: 02/10/86 17:26:55
To: speaker distribution
Subject: VM Specialist event 13th Oct.

Hi,
The venue for the event has been set at CROYDON. Please make sure you
contact xxxx5 or xxxx6 while you are in Jersey to get details of how
to get there.
(Earlier location had to be changed due to the number of attendees).

The agenda looks like this...

09.30 Introduction
09.45 History of VM Performance -Lynn Wheeler (Almaden)
10.45 Coffee
11.00 CMS Update -xxxx1 (Endicott)
12.00 Lunch
13.00 Advanced Function Printing -xxxx2 (Almaden)
13.45 PC and VM Cooperative Processing -xxxx3 (Almaden)
14.30 Tea
14.45 MVS Recovery under VM/XA SF -xxxx4 (Jo'burg)
15.45 Announcements & SEAS report -xxxx5 & xxxx6 (B'stoke)
16.30 Open Forum
17.00 Close

See you on 13th...

... snip ...

Anne & Lynn Wheeler

unread,
Mar 21, 2011, 12:08:30 AM3/21/11
to
http://www.garlic.com/~lynn/2011e.html#22 Multiple Virtual Memory

Date: FRI, 02/20/87 10:21:36 PST
From: wheeler

Following is the tentative wrkshp schedule ... I'm giving two talks,
one on the history of VM Performance (previously given at SEAS; easily
ran 3-4 hrs), the other on Network Research (previously given at
Baybunch and numerous other places) ... I may also be participating in
BOFs on debugging (DUMPRX) and spooling (HSDTSFS).

... snip ...

from vmshare:
http://vm.marist.edu/~vmshare/browse?fn=VMWK:87A&ft=MEMO

from above:

The Asilomar VM Workshop of 1987 will be held February 23-27 at
Asilomar State Park on Monterey Bay in California. Registration will
be all day Monday, Feb. 23rd, with the sessions being held on the 24th
through the 26th. The setup of the workshop will be as in the past.
Dormitory rooms will be available.

... snip ...

and:
http://vm.marist.edu/~vmshare/browse?fn=VMWKABSA&ft=MEMO

misc past posts mentioning dumprx (problem analysis implemented in
rexx)
http://www.garlic.com/~lynn/submain.html#dumprx

misc. past posts mentioning hsdt:
http://www.garlic.com/~lynn/subnetwork.html#hsdt

recent post discussing hsdt "sfs" in (linkedin) Greater IBM thread:
http://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

also includes this old email ... referencing converting the internal
network to sna/vtam
http://www.garlic.com/~lynn/2011.html#email870306

the post also references lots of mis-information regarding sna/vtam
applicability for the internal network as well as the nsfnet backbone.
misc. other recent posts mentioning sna/vtam misinformation:
http://www.garlic.com/~lynn/2011b.html#65 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company


http://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company

http://www.garlic.com/~lynn/2011c.html#16 Other early NSFNET backbone
http://www.garlic.com/~lynn/2011c.html#92 A History of VM Performance
http://www.garlic.com/~lynn/2011d.html#4 Is email dead? What do you think?
http://www.garlic.com/~lynn/2011d.html#58 IBM and the Computer Revolution

misc. posts mentioning internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

--

Anne & Lynn Wheeler

unread,
Mar 21, 2011, 10:25:26 AM3/21/11
to

Anne & Lynn Wheeler <ly...@garlic.com> writes:
> the "TSS-Style" thing was called "RASP". Simpson later leaves and
> appears as a "Fellow" in Dallas working for Amdahl ... and redoing
> "RASP" (in "clean room"). There was some legal action that attempted to
> find any RASP code included in the new stuff being done at Amdahl.

re:


old email referencing RASP ... from long ago and far away:

Date: 04/08/81 08:42:17
From: wheeler

got my hands full all day today and tonight between here, stl, & cub
scouts. Another bet might be TSS, especially with the stripped down PRPQ
they did for UNIX interface. RASP may just be IH (opposite of NIH) for
him. I haven't heard of RASP being used for anything but demos (in some
ways put it on par with VMTOOL as non-production system so far -- have
you heard how much problem STL is having with both hardware & software?
-- of course RASP has had higher quality people working on it).

... snip ...

a little later ...

Date: 09/07/82 14:42:21
From: wheeler

talked to somebody about Amdahl RASP. IBM has had quite a large
attrition rate in the RASP group ... a large portion going to Amdahl.
Apparently IBM legal is gearing up to sue as soon as Amdahl announces.
Comment is that big IBM legal talent is going over every line of IBM
RASP code & applying for PATENTS on every possible thing they can come
up with.

... snip ...

later followup

Date: MON, 03/02/87 10:25:35 PST
From: wheeler

re: amdahl; implication was that it was something similar to RASP but
couldn't talk about it. There was an oblique comment regarding IBM
suing Amdahl over RASP and that in a detailed comparison of the code,
only one small section was even remotely similar and that got recoded.

The group is looking for people but is avoiding approaching any
ibm'ers (although I've heard from various IBM people about contacts
from other Amdahl areas; Amdahl appears to be offering $$ in excess of
the IBM going rate for VM system programmers). The Simpson group is
even kept isolated from other Amdahl people who are working in VM
and/or other IBM related software areas.

... snip ...

past posts mentioning RASP
http://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
http://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
http://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs)
http://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
http://www.garlic.com/~lynn/2002g.html#0 Blade architectures
http://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
http://www.garlic.com/~lynn/2002j.html#75 30th b'day
http://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader?
http://www.garlic.com/~lynn/2003e.html#65 801 (was Re: Reviving Multics
http://www.garlic.com/~lynn/2005p.html#44 hasp, jes, rasp, aspen, gold
http://www.garlic.com/~lynn/2006f.html#19 Over my head in a JES exit
http://www.garlic.com/~lynn/2006w.html#24 IBM sues maker of Intel-based Mainframe clones
http://www.garlic.com/~lynn/2006w.html#28 IBM sues maker of Intel-based Mainframe clones
http://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing
http://www.garlic.com/~lynn/2007m.html#69 Operating systems are old and busted
http://www.garlic.com/~lynn/2010i.html#44 someone smarter than Dave Cutler
http://www.garlic.com/~lynn/2010o.html#0 Hashing for DISTINCT or GROUP BY in SQL
http://www.garlic.com/~lynn/2010p.html#42 Which non-IBM software products (from ISVs) have been most significant to the mainframe's success?
http://www.garlic.com/~lynn/2011.html#85 Two terrific writers .. are going to write a book

Anne & Lynn Wheeler

unread,
Mar 21, 2011, 10:54:28 AM3/21/11
to

re:

as highlighted in the SEAS performance history presentation ... there
were significant problems introduced by "enhancements" in the
HPO2.5-HPO3.4 period which were taking a while to clean up (reverting to a
"clean" global LRU being just one)

from long ago and far away

Date: FRI, 03/20/87 09:27:21 PST
From: wheeler

re: big pages; for additional background see vmperf script on vmpcd.
Another set of fixes that went into hpo5 is for >16meg support (on
vmpcd also see >16meg forum and also section in vmperf script).

re: >16meg; when pok originally contacted me about >16meg hardware, I
was told that the development group's planned support was to perform
"bring-downs" by writing to DASD and then bringing back in. I
suggested an alternative approach based on the fact that CP (almost)
never operates directly on any data in virtual storage, but always
copies it first to a field in some CP control block. I provided a
subroutine that would be placed in DMKPSA (somewhat similar to the
fetch protect check subroutine already in DMKPSA) that would do the
copying; if necessary it would fix up a dummy page table, change
CR0/CR1, enter supervisor state, translate mode and use an MVCL to
copy the necessary data.

The development group decided to stick with their original plan, but
substituted my subroutine for the DASD write/read. That got them into
the <16meg constraint they are in today. HPO5 contains clean-up of the
page replacement algorithm per VMPERF SCRIPT along with minor support for
limited copying rather than bring-down (priv. instruction copying and
one or two others).

re: big pages; big pages represent a performance "benefit" from the
stand-point that more data is moved per operation, this somewhat
optimizes CPU overhead and DASD access time (3380 seek/rotation-delay
is done once per group of pages). On the other hand it represents a
performance "cost" in terms of channel capacity and real storage
utilization to transfer pages that wouldn't otherwise have been
required at that point in time. The "benefit"/"cost" trade-off
determines whether big pages help or hinder. Prior to HPO5 there was
also a "cost" associated with the "big page" code doing a significantly
poorer job of managing the real pages associated with it.

A trivial example is customer running 3081, hpo4.2, and STC electronic
drum. With the STC drum allocated as SWAP, the system ran at 70% cpu
utilization, changing the STC drum to PAGE, the system ran at 100% cpu
utilization (with essentially the same ratio of prob/supervisor). The
3081 CPU advantage of block reading 400-600 pages a second was rather
negligible (paging CPU overhead is rather quite small in VM compared
to other operations ... although quite large compared to what it use
to be in cp/67). There was negligible "benefit" associated with DASD
access since the STC drum had no seek and/or rotational delay. The
big difference was in the "overhead/cost" associated with 3.4/4.2 big
pages vis-a-vis small pages (a cost differential that will be
substantially lower in 5.0).

... snip ...
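
a toy C simulation of the copy-through-a-temporary-mapping trick
described in the email above ... "real storage" is just an array and
map_window stands in for fixing up the dummy page-table entry; the real
subroutine used CR0/CR1, supervisor state, translate mode, and MVCL:

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define NFRAMES   16               /* pretend real storage: 16 frames     */

static unsigned char real_storage[NFRAMES][PAGE_SIZE];

/* one reserved "window" mapping; in the real trick this is a dummy
 * page-table entry repointed at the target frame before the MVCL        */
static unsigned char *window;

static void map_window(int frame) {
    window = real_storage[frame];  /* stand-in for updating the PTE       */
}

/* copy data from a frame the kernel cannot otherwise address directly   */
static void copy_from_high_frame(int frame, size_t off,
                                 void *dst, size_t len) {
    map_window(frame);             /* temporarily make the frame visible  */
    memcpy(dst, window + off, len);/* the MVCL in the original subroutine */
}

int main(void) {
    char buf[16];
    memcpy(real_storage[12], "hello from >16M", 16);
    copy_from_high_frame(12, 0, buf, 16);
    printf("%s\n", buf);
    return 0;
}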

A big part of HPO5 was better composition of the "big pages" and
alignment with global LRU replacement (as well as making the treatment
of pages above & below the 16mbyte line more uniform).

misc past posts mentioning "big pages":
http://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
http://www.garlic.com/~lynn/2002c.html#48 Swapper was Re: History of Login Names
http://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
http://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
http://www.garlic.com/~lynn/2002f.html#20 Blade architectures
http://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
http://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
http://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
http://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#16 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
http://www.garlic.com/~lynn/2003g.html#12 Page Table - per OS/Process
http://www.garlic.com/~lynn/2003o.html#61 1teraflops cell processor possible?
http://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
http://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
http://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
http://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
http://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
http://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
http://www.garlic.com/~lynn/2005l.html#41 25% Pageds utilization on 3390-09?
http://www.garlic.com/~lynn/2005n.html#18 Code density and performance?

http://www.garlic.com/~lynn/2005n.html#19 Code density and performance?
http://www.garlic.com/~lynn/2005n.html#21 Code density and performance?
http://www.garlic.com/~lynn/2005n.html#22 Code density and performance?
http://www.garlic.com/~lynn/2006j.html#2 virtual memory
http://www.garlic.com/~lynn/2006j.html#3 virtual memory
http://www.garlic.com/~lynn/2006j.html#4 virtual memory
http://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor
http://www.garlic.com/~lynn/2006l.html#13 virtual memory
http://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#39 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006t.html#18 Why magnetic drums was/are worse than disks ?
http://www.garlic.com/~lynn/2006v.html#43 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006y.html#9 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2007o.html#32 reading erased bits
http://www.garlic.com/~lynn/2008f.html#6 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
http://www.garlic.com/~lynn/2008k.html#80 How to calculate effective page fault service time?
http://www.garlic.com/~lynn/2010g.html#23 16:32 far pointers in OpenWatcom C/C++
http://www.garlic.com/~lynn/2010g.html#42 Interesting presentation
http://www.garlic.com/~lynn/2010g.html#72 Interesting presentation

Anne & Lynn Wheeler

unread,
Mar 21, 2011, 1:03:33 PM3/21/11
to

Anne & Lynn Wheeler <ly...@garlic.com> writes:
> from vmshare:
> http://vm.marist.edu/~vmshare/browse?fn=VMWK:87A&ft=MEMO
>
> from above:
>
> The Asilomar VM Workshop of 1987 will be held February 23-27 at
> Asilomar State Park on Monterey Bay in California. Registration will
> be all day Monday, Feb. 23rd, with the sessions being held on the 24th
> through the 26th. The setup of the workshop will be as in the past.
> Dormitory rooms will be available.

re:
http://www.garlic.com/~lynn/2011e.html#25 Multiple Virtual Memory

this is post about some of the HSDT performance issues


http://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

this is in response to email that both the VM Performance History and
HSDT/NSFNET talks had been accepted ... suggesting that they might
also be interested in the spool-file rewrite for HSDT RSCS operation.

Date: WED, 02/04/87 15:28:52 PST
From: wheeler
To: 87 VM workshop organizer

re: hsdt-sfs; I also have a talk on proto-type hsdt spool file system
that I'll be giving at the ibm vmite two weeks later (week 3/10) ...
but i'm not sure i could get clearance to give that talk in the time
left.

proto-type hsdt-sfs is implemented in pascal/vs extended and operates
in a virtual machine. data records are format compatible with the
existing vm spool system but with several added bells and whistles.
spool file checkpointing is totally eliminated (both the performance
overhead of doing it and the start-up overhead ... even much faster
than the announced hpo 5 support start-up enhancements) by adding a
slight amount of data to each record which essentially makes each
record self-describing and making sure that all i/o is performed in a
consistent manner. all single points of failure are eliminated and
data in the spool file area is recoverable if it can be read (virtual
machine, cp, and/or hardware can have catastrophic failures at any
point and hsdt-sfs system is recoverable) since data & control
information is self-describing and written consistently. As such, the
hsdt-sfs has a much higher reliability than the existing cp spool
system.

Contiguous allocation and imbedding index blocks in the file are
supported for increased performance via multi-block read/write i/o.
there are no limitations on number of spool files, either in the system
or per userid. in-core ssbloks (abbreviated sfbloks, approx. 50 bytes)
are currently chained from a userid specific chain and a master system
chain. The master system chain will shortly be replaced red/black tree.
userid specific anchors are hung off a hash table and contain userid
specific summary information such as total number of files and total
number of 4k blocks allocated. Files &/or file information can be found
either by userid hash and/or thru the red/black tree.

The abbreviated ssbloks require less virtual memory and the associated
virtual pages can either reside in >16meg and/or be paged out.

An offshoot of the hsdt-sfs technology is a set of pascal/vs application
programs that can read a pid cp spool checkpoint area & spool disks.
One such application supports importing pid cp spool files into
hsdt-sfs. Another application will simulate the cp SPTAPE DUMP
command but with more function and better performance.

... snip ...

one of the pathlength improvements in HSDT-SFS was that the native CP
implementation had a sequential chain ... and for large systems
could have 10K elements (the overhead issue is analogous to CP/67
kernel storage management before subpools).
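
hedged C sketch of the lookup side described in the email ... userid hash
anchors replacing one long sequential chain; the master red/black tree is
omitted and all names and sizes are invented:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NHASH 256

struct ssblok {                    /* abbreviated sfblok, ~50 bytes       */
    char userid[9];
    int  blocks4k;
    struct ssblok *next;           /* userid-specific chain               */
};

static struct ssblok *anchor[NHASH];

static unsigned hash(const char *u) {
    unsigned h = 0;
    while (*u)
        h = h * 31 + (unsigned char)*u++;
    return h % NHASH;
}

static void add_file(const char *u, int blocks) {
    struct ssblok *s = calloc(1, sizeof *s);
    strncpy(s->userid, u, 8);
    s->blocks4k = blocks;
    s->next = anchor[hash(u)];
    anchor[hash(u)] = s;
}

static int user_blocks(const char *u) {  /* per-userid summary info       */
    int total = 0;
    for (struct ssblok *s = anchor[hash(u)]; s; s = s->next)
        if (!strcmp(s->userid, u))
            total += s->blocks4k;
    return total;
}

int main(void) {
    add_file("WHEELER", 12);
    add_file("WHEELER", 30);
    printf("%d\n", user_blocks("WHEELER"));   /* 42 */
    return 0;
}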

trivia ... mainframe tcp/ip was also implemented in pascal/vs ... and
while it had some thruput/performance issues ... I had done the changes
to support RFC1044 and in some testing at Cray Research was able to get
sustained channel speed thruput between 4341 and Cray ... using only
modest amount of the 4341 (possibly 500 times improvement in the
instructions executed per byte moved) ... some past posts
http://www.garlic.com/~lynn/subnetwork.html#1044

Anne & Lynn Wheeler

unread,
Mar 22, 2011, 11:14:17 PM3/22/11
to

re:
http://www.garlic.com/~lynn/2011e.html#29 Multiple Virtual Memory

for some IPO RSCS "humor":

Date: 24 June 1987, 13:00:18 PDT
Subject: ipo timers

The IPO RSCS timer code uses fields in the task control block to control
the timer, so you only get one per task. I guess you'll have to set up
your own timer queue (simplified since it only has two elements) within
the line driver to get more than one timer.

I was talking to xxxxx about the IPO timer code a while ago and he
thought that the implementation violates a basic RSCS design principle.
The VM development group decided to use a similar approach in 1975, in
spite of attempts by xxxxx and yyyyy to persuade them otherwise. xxxxx
and yyyyy felt strongly enough about it that they responded by
abandoning the product version of RSCS as their VNET development base.
Later, when VNET was released as the RSCS Networking program, the bad
timer code in the SCP RSCS was stabilized out of existence. xxxxx
suspects that the motivation for messing up the RSCS design by putting
in the IPO timer code was the same as what makes kids decorate wet
cement.

... snip ...


The IPO RSCS FDX driver had a special y-connector cable to take 56kbit
full-duplex into two separate ports/addresses ... one dedicated for read
and one dedicated for write.

I needed to use the FDX driver for T1/1.5mbit/sec (and faster)
full-duplex links ... both terrestrial and satellite. To handle satellite
delay ... I needed to do rate-based pacing (as opposed to a "window"
pacing paradigm) ... using timer services in a way consistent with the
original rscs/vnet implementors' design.

Later, I was on the XTP technical advisory board ... past posts
mentioning XTP (we also took it to ANSI x3s3.3 trying for HSP
standardization)
http://www.garlic.com/~lynn/subnetwork.html#xtphsp

and I did the following write-up/specification for xtp rate-based pacing
http://www.garlic.com/~lynn/xtprate.html

XTP wiki
http://en.wikipedia.org/wiki/Xpress_Transport_Protocol

above claims that XTP does not employ congestion avoidance ... but I
managed congestion avoidance with (adaptive) rate-based pacing.

I've periodically claimed that tcp/ip slow-start and sliding-window
congestion control ... came about because many of the platforms of the
period lacked timer facilities adequate for implementing rate-based
operation. About the same time slow-start was presented at an IETF
meeting, ACM SIGCOMM had a paper on how slow-start & sliding-window
operation was non-stable in a large multi-hop network. One "failure" mode
was that returning ACKs (in the window paradigm) tended to "batch up"
... opening up a large number of windows, resulting in transmitting
multiple back-to-back packets (aggravating congestion).

One of the characteristics is that XTP can do a reliable transmit in a
minimum 3-packet exchange ... while TCP requires a minimum 7-packet
exchange (VMTP was in between, at a minimum 5-packet exchange).
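
hedged C sketch of adaptive rate-based pacing ... not the XTP code; the
feedback stub, constants, and the increase/decrease adjustment are
invented for illustration ... the point is that the send clock is a
timer-derived inter-packet gap rather than returning ACKs:

#include <stdio.h>
#include <time.h>

static double rate_bps = 1.5e6;            /* start near T1 speed         */

static void send_packet(int seq) { printf("send %d\n", seq); }
static int  congested(void)      { return 0; }  /* feedback stub          */

int main(void) {
    const int pkt_bits = 4096 * 8;
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int seq = 0; seq < 10; seq++) {
        /* sleep until the timer says the next packet may go out */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        send_packet(seq);
        if (congested())
            rate_bps *= 0.875;             /* back off on feedback        */
        else
            rate_bps += 1e4;               /* gently probe upward         */
        double gap = pkt_bits / rate_bps;  /* seconds between packets     */
        next.tv_nsec += (long)(gap * 1e9);
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
    }
    return 0;
}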
