
MVS vs HASP vs JES (was 2821)


Kevin McQuiggin

Jul 28, 1999

"HASP" was an acronym for "Houston Automatic Spooling Program". It was a
batch process manager. It queued jobs for processing, submitted each
when it reached the head of the queue, stored and printed output. Most
input in those days was in the form of punched cards.

I still have a HASP manual in the basement. If you need anything more
specific, let me know.

"JES" stood for "Job Entry Subsystem" as I recall, and was another batch
process handler.

Kevin

Lisa or Jeff wrote:
>
> One thing that always confused me was the difference between
> "MVS", "HASP", and "JES".
>
> I always thought the three terms were interchangeable--that they
> basically referred to IBM's S/370-390 operating system that
> grew out of OS/MVT.
>
> At work, we refer to the printout of dataset use and disposition
> and condition codes as the "HASP listing". Actually, we never
> use the term "JES".
>
> I presume the spooler, job scheduler, resource allocator (both
> CPU and peripheral), and actual CPU supervisor were all part of
> "MVS".

Lisa or Jeff

Jul 29, 1999

Jim Saum

Jul 29, 1999

Depending upon how precise you want to be, the terms are not
interchangeable. The spooler and the job scheduler are parts of either
JES2 or JES3. Managing CPU time is done by MVS itself, as is
allocating I/O devices, although JES3 gets involved in the latter.

IBM's term "JES" (Job Entry Subsystem), when used loosely, refers to a
job scheduling and spooling system supporting an OS. More
pedantically, over the years there have been three distinct JES
systems. These were quite different packages; they were not just
different release levels of the same software:

JES (there never was a "JES1") was the spooling system for OS/VS1, the
virtual-storage descendant of OS/360 MFT.

JES2 was the spooling system for most MVS systems. It was derived from
HASP, a spooling system for OS/360 MFT or MVT. HASP4 was the spooling
system for OS/VS2 Release 1 ("SVS"), the virtual-storage descendant of
OS/360 MVT. MVS -- originally announced as OS/VS2 Release 2 and soon
renamed -- was an extensive reworking of the MVT base system. An MVS
system could choose either JES2 or JES3 as its spooling system.

JES3 was the other spooling and job management system for MVS.
Spooling was only one part of this very elaborate system, which was
designed mostly for multi-CPU environments. In its original form it
assumed that driving the unit record equipment would be the job of a
separate CPU. JES3 was derived from ASP (Attached Support Processor),
which ran under OS/360.

The other IBM 370 OS's introduced in 1972 avoided the "JES"
terminological confusion: DOS/VS, derived from DOS/360, could run
IBM's optional POWER spooler. VM/370, derived from CP/67, had its own
built-in spooler.

- Jim Saum

Scott Peterson

Jul 29, 1999

hanc...@bbs.cpcn.com (Lisa or Jeff) wrote:

>One thing that always confused me was the difference between
>"MVS", "HASP", and "JES".
>
>I always thought the three terms were interchangeable--that they
>basically referred to IBM's S/370-390 operating system that
>grew out of OS/MVT.
>
>At work, we refer to the printout of dataset use and disposition
>and condition codes as the "HASP listing". Actually, we never
>use the term "JES".
>

MVS was the operating system. HASP (Houston Automatic Spooling, if I
remember correctly) was one of the input/output/spooling/scheduling
components developed for MVT that would also run under early MVS.
JES was the spooling system for MVS. It came in two very different
flavors, JES2 and JES3. While the basic JCL was the same, there were
special JES control cards to handle routing and output processing that
were very different between the two versions.
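
To give a feel for it (a sketch from memory, so treat the destination
names as made up), the same routing request was spelled quite
differently under the two:

/*ROUTE PRINT RMT5          JES2: send the output to remote 5
//*ROUTE XEQ DENVER         JES3: send the job to another node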

Many companies extensively modified HASP (they had source in those
days) and were unable to move to JES for many years as the existing
systems satisfied their needs. I think it was MVS SP3 that finally
broke the links and wouldn't run HASP any more. But by then HASP had
pretty much gone by the wayside anyway.

>I presume the spooler, job scheduler, resource allocator (both
>CPU and peripheral), and actual CPU supervisor were all part of
>"MVS".

No, you had your choice. JES3 was for the really big or widespread
sites.

Scott Peterson


If your dog is fat, you aren't getting
enough exercise

Doctor Memory

Jul 29, 1999

Lisa or Jeff wrote in message <7nod0t$q...@netaxs.com>...

>One thing that always confused me was the difference between
>"MVS", "HASP", and "JES".


What, no RJE? ;)

Alan T. Bowler

Jul 29, 1999

OS/360 (MFT and MVT) was lousy at directly handling unit record
devices. HASP (Houston Automatic Spooling Program) was a privileged
user job (OS/360 thought it was just a job) that was created to cure
this situation. It would allocate the real card readers and printers
to itself, and create a bunch of fake devices. It read the jobs from
the card readers and rewrote the JCL to point at these fake devices.
Then whenever the user program issued an I/O to one of these fake
devices, HASP would intercept the I/O and satisfy the request using
its disk spools. It was smart about all this, and a program that
needed or wanted direct access to a unit record device could easily
get it with the right JCL.

HASP was distributed through the user's group and was not officially
supported by IBM. The official line was that it was not needed,
and they tried hard to ignore it out of existence. The customers
disagreed and essentially all the big users installed it.

When the /370 systems with paging came out and OS/360 MVT evolved
into MVS, IBM threw in the towel on the old spooling techniques
in OS/360 and brought out JES. JES was rumoured to have evolved
from the HASP source, and was more closely integrated with the OS.

>
> What, no RJE? ;)

RJE was one of the services supplied by HASP.

Anne & Lynn Wheeler

Jul 29, 1999

HASP was done in Houston by Simpson, Crabtree, et al.

I worked on HASP starting with OS release 11 ... up thru OS release
18 (HASP-III). One of the things was putting in CMS edit syntax as
well as 2741 & TTY support for an early form of CRJE.

there was also ASP (Attached Support Processor), done out on the
west coast.

Migrating HASP into an official IBM product (instead of a TYPE-III
program) ... the group finally landed in G'burg and HASP was renamed
JES2. The G'burg group also eventually picked up ASP ... renamed
JES3. My wife worked on JES2 & JES3 in the G'burg group until she was
conned into going to POK to be responsible for loosely-coupled (she
originated peer-coupled shared data ... original basis for IMS hot
standby ... and then parallel sysplex).


--
--
Anne & Lynn Wheeler | ly...@garlic.com, finger for pgp key
http://www.garlic.com/~lynn/

Joe Morris

Jul 29, 1999

"Alan T. Bowler" <atbo...@thinkage.on.ca> writes:

>HASP was distributed through the user's group

No. It was distributed through the normal IBM channel (PID, or
Program Information Department). For some reason I still recall
the program ID number for the original version: 5.1.014.

> and was not officially
>supported by IBM. The official line was that it was not needed,
>and they tried hard to ignore it out of existence. The customers
>disagreed and essentially all the big users installed it.

True. HASP *was* distributed as an "unofficial, unsupported program
written informally by IBM employees," which was the official definition
of so-called "Type 3" programs. (Quickie Quiz: what type of vehicle
was the favorite of the HASP development team?)

There was no "official" support, but as a practical matter IBM management
recognized that HASP was for all intents and purposes critical to the
success of the OS/360 system, and by extension of the large IBM hardware
product line. There were HASP newsletters sent to customer sites, and
Tom Simpson's team was available to help users who ran into trouble.

Over the years I had several occasions to call for support and would
wind up talking directly to the original programmer of the code in
question. It was a sad day when IBM put other people between the
users and the authors.

And for many years IBM sent Dick Hitt (one of Tom's people on the HASP
team) to SHARE meetings to play the piano at the HASP songfest.

>When the /370 systems with paging came out and OS/360 MVT evolved
>into MVS, IBM threw in the towel on the old spooling techniques
>in OS/360 and brought out JES. JES was rumoured to have evolved
>from the HASP source, and was more closely integrated to the OS.

That's JES-2 that was the outgrowth of HASP. And in any case, the
"Type-1 spooling" that shipped with MVT was incapable of providing
useful function in most shops, so OS/360 users ran either ASP (if
you had lots of computer power) or HASP (if you didn't).

ASP was huge, and begat the similarly huge JES-3. HASP, on the
other hand, was quite compact, and some wags claim that its name
was a direct result of comparisons of the sizes of the two competing
spoolers: the memory requirement of HASP was Half-ASP.

Joe Morris (who still occasionally hums "HASPy Days are Here Again")

Joe Morris

Jul 29, 1999

Anne & Lynn Wheeler <ly...@adcomsys.net> writes:

>I worked on HASP starting with OS release 11 ... up thru OS release
>18 (HASP-III). One of the things was putting in CMS edit syntax as
>well as 2741 & TTY support for early form of CRJE support.

Um...that was HASP-II version 3, no?

Joe Morris

Jim Saum

Jul 30, 1999

In article <7nqf2l$8jr$1...@top.mitre.org>, jcmo...@linus.mitre.org wrote:

>ASP was huge, and begat the similarly huge JES-3. HASP, on the
>other hand, was quite compact, ...

In fairness to any ASP/JES3 fans who may be around, ASP (Attached
Support Processor) also did a lot more than HASP, e.g., controlling
cross-system job flow, managing dataset and device allocation so that
OS wouldn't get into allocation delays, and so on.

>..., and some wags claim that its name
>was a direct result of comparisons of the sizes of the two competing
>spoolers: the memory requirement of HASP was Half-ASP.

Another (possible) meaning of the term "half-ASP": ASP usually ran in
a multi-CPU complex. The simplest setup had two CPUs, called a "local"
and a "main". For testing or fallback purposes you could run both
functions on a single CPU, a configuration I remember hearing called
"half-ASP". (This is second-hand; I never worked in an ASP shop
myself.)

Another option for ASP testing was in virtual machines. ASP and JES3
used channel-to-channel adapters (CTCA's) to communicate among CPUs in
the complex. VM/370 (and CP/67 before it, I assume) could simulate
virtual CTCA's connecting virtual machines. So one could test an ASP
complex of several virtual machines all in one real machine under CP.

>Joe Morris (who still occasionally hums "HASPy Days are Here Again")

"To run without it is a sin, ..."

- Jim Saum

Eric Sosman

Jul 30, 1999

Joe Morris wrote:
> And for many years IBM sent Dick Hitt (one of Tom's people on the HASP
> team) to SHARE meetings to play the piano at the HASP songfest.

That wasn't a piano; it was a S/360 Model 88. There was a story
going around at the time that only one 88 existed and it was
shipped from one SHARE meeting to the next, but I don't know how
much credence to attach to such a tale.

--
Eric Sosman
eso...@acm.org

Jim Saum

Jul 31, 1999

In article <37A16385...@acm.org>, Eric Sosman <eso...@acm.org> wrote:

>That wasn't a piano; it was a S/360 Model 88. There was a story
>going around at the time that only one 88 existed and it was
>shipped from one SHARE meeting to the next, ...

One unfortunate trend is the reuse of historic product names,
e.g., IBM's reuse of the terms "RAMAC" (for both the 305 product of
the mid-1950's and a recent RAID product) and "360" (for both S/360
and the ThinkPad 360). Sentimentalists like me would rather see
historic product names retired, the way teams retire the numbers of
legendary athletes.

Now I see that IBM reused the "88" model number, too, for both the
SHARE-RPQ S/360 Model 88 and the rebranded Stratus fault-tolerant
servers they sold as IBM System/88's.

- Jim Saum

Anne & Lynn Wheeler

Aug 1, 1999

posted here previously .... work I did as an undergraduate ... presented
at the fall share meeting in boston.


the university was running student fortran jobs on the 709... tape-to-tape
ibsys ... with 1401 providing front end unit-record<->tape support. I
believe thruput was in jobs per second.

initial conversion to OS MFT Release 9.5 on a 360/67 (running in 65
mode) resulted in changing from jobs per second (on 709) to minutes
per job (on 360).

adding hasp got the times to around 20-30 seconds per job, i.e. w/o
hasp the system was running synchronous unit-record input (card
reader) ... processed by the compiler almost as each card was read
... and synchronous unit-record (printer) output. HASP provided
asynchronous processing for the unit-record gear ... allowing "jobs"
to be run effectively at asynchronous buffered disk-to-disk speed.

However that was still slower than the 709 with ibsys & fortran
monitor running tape-to-tape.

waterloo introduced a fortran monitor (watfor) that would accept
multiple student jobs for compile and execution ... and would do it
very efficiently ... and finally we started to see fortran student
workload thruput on the 360 surpass the 709.


--
OS Performance Studies With CP/67


OS MFT 14, OS nucleus with 100 entry trace table, 105 record
in-core job queue, default IBM in-core modules, nucleus total
size 82k, job scheduler 100k.

HASP 118k Hasp with 1/3 2314 track buffering

Job Stream 25 FORTG compiles

Bare machine times:   Time to run: 322 sec. (12.9 sec/job)
                      Time to run just JCL for above: 292 sec. (11.7 sec/job)

Orig. CP/67 times:    Time to run: 856 sec. (34.2 sec/job)
                      Time to run just JCL for above: 787 sec. (31.5 sec/job)


Ratio CP/67 to bare machine

2.65 Run FORTG compiles
2.7 to run just JCL
2.2 Total time less JCL time


1 user, OS on with all of core available less CP/67 program.


Note: No jobs run with the original CP/67 had ratio times higher than
the job scheduler. For example, the same 25 jobs were run under WATFOR,
where they were compiled and executed. Bare machine time was 20 secs.,
CP/67 time was 44 sec. or a ratio of 2.2. Subtracting 11.7 sec. for
bare machine time and 31.5 for CP/67 time, a ratio for WATFOR less
job scheduler time was 1.5.

I hand built the OS MFT system with careful ordering of
cards in the stage-two sysgen to optimize placement of data sets,
and members in SYS1.LINKLIB and SYS1.SVCLIB.

MODIFIED CP/67

OS run with one other user. The other user was not active, was just
available to control amount of core used by OS. The following table
gives core available to OS, execution time and execution time ratio
for the 25 FORTG compiles.


CORE (pages) OS with Hasp OS w/o HASP

104 1.35 (435 sec)
94 1.37 (445 sec)
74 1.38 (450 sec) 1.49 (480 sec)
64 1.89 (610 sec) 1.49 (480 sec)
54 2.32 (750 sec) 1.81 (585 sec)
44 4.53 (1450 sec) 1.96 (630 sec)



--
Anne & Lynn Wheeler | ly...@adcomsys.net, ly...@garlic.com
http://www.garlic.com/~lynn/ http://www.adcomsys.net/lynn

Anne & Lynn Wheeler

Aug 1, 1999

re: hasp-II version 3;

ok .... hasp-i would work on pcp & mft ... and could take over the
machine by modifying the svc new psw directly.

for mvt there was storage protection support and some misc. convention
changes ... which required the installation of a TYPE-1 svc handler
for HASP to assume the privileges it needed.

--

Joe Morris

Aug 2, 1999

Eric Sosman <eso...@acm.org> writes:

>Joe Morris wrote:
>> And for many years IBM sent Dick Hitt (one of Tom's people on the HASP
>> team) to SHARE meetings to play the piano at the HASP songfest.

>That wasn't a piano; it was a S/360 Model 88. There was a story
>going around at the time that only one 88 existed and it was
>shipped from one SHARE meeting to the next, but I don't know how
>much credence to attach to such a tale.

The "model 88" joke started, IIRC, after a SHARE meeting in New York
when the unions made a huge stink over the idea that a guest in the
hotel would be so stupid as to want to play a piano. (Speakers at
the sessions were warned to avoid doing anything that would appear to
infringe on the unions' territory -- such as moving boxes of handouts
from a hotel room into a meeting room.)

Anyway, there was a movement to end the HASP Songfest, but the result
was that the Board of Directors decided that it was appropriate to
continue the tradition, and that it would ensure that Thursday night
in SCIDS (the boozefest) it would provide a "HASP Model 88 processor,
with no strings attached."

(At least up to the time we got rid of our IBM mainframe and stopped
going to SHARE, no further meetings were held in New York. SHARE
members had better things to do than argue with unions.)

Joe Morris

Jcwill...@yahoo.com

May 23, 2017

In the 70's and 80's we had CRJE (conversational remote job entry). In the 90's we went to TSO/ISPF and we are still on it in 2017. Yes - I should be retired, but I'm not. I like my job; I don't like being on call. I support virtual tape - specifically IBM VTS - mostly model TS7760T. You'll have to Google it to get capacities and the millions of volsers these things can hold (multiple VTS's). Mind-boggling from when I mounted my first reel-to-reel tape on an IBM 360/20 CPU (if one even called a 360/20 a CPU). Just an old guy rambling.

Jon Elson

May 23, 2017

Well, in the mid 70's, I started working on PDP-11s, and then moved to VAX.
360's were not too bad hardware, although pretty high cost for what you got.
The /20 was at least a 16-bit architecture. The 360/30 had an 8-bit
datapath and fairly slow 8-bit memory, which severely limited what I/O
devices you could attach. The CPU had to stop completely during selector
channel operations, too.

But, JCL shudder, shudder! Maybe just because I didn't know it real well,
but it seemed to be the worst thing on OS 360 systems. At least some of the
rest of OS 360 was pretty decent, but that JCL syntax just seemed way
too cryptic.

So, mostly due to JCL, I was glad to migrate to a command language that was
easy to remember, read and write on the fly.

In about 1980-82 I tried to build a 360 clone, and was making good progress
on the CPU, but when I realized how much system programming lay ahead, I
eventually stopped working on it.

Jon

Peter Flass

May 24, 2017

Jon Elson <el...@pico-systems.com> wrote:
> Jcwill...@yahoo.com wrote:
>
>> In the 70's and 80's we had CRJE (conversational remote job entry). In the
>> 90's we went to TSO/ISPF and we are still on it in 2017. Yes - I should be
>> retired, but I'm not. I like my job; I don't like being on call. I support
>> virtual tape - specifically IBM VTS - mostly model TS7760T. You'll have to
>> Google it to get capacities and the millions of volsers these things can
>> hold (multiple VTS's). Mind-boggling from when I mounted my first reel-to-
>> reel tape on an IBM 360/20 CPU (if one even called a 360/20 a CPU). Just
>> an old guy rambling.
>
> Well, in the mid 70's, I started working on PDP-11s, and then moved to VAX.
> 360's were not too bad hardware, although pretty high cost for what you got.
> The /20 was at least a 16-bit architecture. The 360/30 had an 8-bit
> datapath and fairly slow 8-bit memory, which severely limited what I/O
> devices you could attach. The CPU had to stop completely during selector
> channel operations, too.
>
> But, JCL shudder, shudder! Maybe just because I didn't know it real well,
> but it seemed to be the worst thing on OS 360 systems. At least some of the
> rest of OS 360 was pretty decent, but that JCL syntax just seemed way too
> cryptic.

I think I'm the only person in the world who _likes_ JCL. It gives you a
degree of control of the system that you don't get with other systems. My
impression is that you might have to write a program on, e.g., Unix to do
what you can do on MVS with a few lines of JCL. Of course most of such
programs have already been written, but JCL is standard as opposed to a
random collection of programs you might have installed on your particular
system.

>
> So, mostly due to JCL, I was glad to migrate to a command language that was
> easy to remember, read and write on the fly.
>
> In about 1980-82 I tried to build a 360 clone, and was making good progress
> on the CPU, but when I realized how much system programming lay ahead, I
> eventually stopped working on it.
>
> Jon
>



--
Pete

Dan Espen

May 24, 2017

Peter Flass <peter...@yahoo.com> writes:

> Jon Elson <el...@pico-systems.com> wrote:
>> Jcwill...@yahoo.com wrote:
>>
>>> In the 70's and 80's we had CRJE (conversational remote job
>>> entry). In the 90's we went to TSO/ISPF and we are still on it in
>>> 2017. Yes - I should be retired, but I'm not. I like my job; I don't
>>> like being on call. I support virtual tape - specifically IBM VTS -
>>> mostly model TS7760T. You'll have to Google it to get capacities and
>>> the millions of volsers these things can hold (multiple VTS's). Mind-
>>> boggling from when I mounted my first reel-to-reel tape on an IBM
>>> 360/20 CPU (if one even called a 360/20 a CPU). Just an old guy
>>> rambling.
>>
>> Well, in the mid 70's, I started working on PDP-11s, and then moved
>> to VAX. 360's were not too bad hardware, although pretty high cost
>> for what you got. The /20 was at least a 16-bit architecture. The
>> 360/30 had an 8-bit datapath and fairly slow 8-bit memory, which
>> severely limited what I/O devices you could attach. The CPU had to
>> stop completely during selector channel operations, too.
>>
>> But, JCL shudder, shudder! Maybe just because I didn't know it real
>> well, but it seemed to be the worst thing on OS 360 systems. At
>> least some of the rest of OS 360 was pretty decent, but that JCL
>> syntax just seemed way too cryptic.
>
> I think I'm the only person in the world who _likes_ JCL.

That puts you in a club with possibly one member.

> It gives you a degree of control of the system that you don't get with
> other systems. My impression is that you might have to write a program
> on, e.g., Unix to do what you can do on MVS with a few lines of
> JCL. Of course most of such programs have already been written, but
> JCL is standard as opposed to a random collection of programs you
> might have installed on your particular system.

A degree of control no Unix system user wants:

DDSYSIN='/app/data/file.in'
DDFILEOUT='/app/newdata/file.out.%%;MAXSPACE=100000;gen=3' exec myprog

Never seen a need for external filenames, space limits, or external
control of file generations. Similar for all the other stuff JCL
"gives" you.

I especially dislike the clumsy way JCL makes you delete files that may
or may not exist. I once had to spend hours over multiple days trying
to explain NOT CATALOGED 2 to a system tester.
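
(For anyone who hasn't suffered it: the usual workaround is an IEFBR14
step with DISP=(MOD,DELETE,DELETE) -- a sketch, dataset name made up:

//DELSTEP  EXEC PGM=IEFBR14
//DOOMED   DD  DSN=APP.DATA.FILE,DISP=(MOD,DELETE,DELETE),
//             UNIT=SYSDA,SPACE=(TRK,(1,1))

MOD allocates the dataset if it doesn't already exist -- hence the
UNIT and SPACE -- and the DELETE disposition then throws it away
either way.)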

--
Dan Espen

Scott Lurndal

May 24, 2017

Dan Espen <dan1...@gmail.com> writes:
>Peter Flass <peter...@yahoo.com> writes:

>>
>> I think I'm the only person in the world who _likes_ JCL.
>
>That puts you in a club with possibly one member.
>
>> It gives you a degree of control of the system that you don't get with
>> other systems. My impression is that you might have to write a program
>> on, e.g., Unix to do what you can do on MVS with a few lines of
>> JCL. Of course most of such programs have already been written, but
>> JCL is standard as opposed to a random collection of programs you
>> might have installed on your particular system.
>
>A degree of control no Unix system user wants:
>
> DDSYSIN='/app/data/file.in'
> DDFILEOUT='/app/newdata/file.out.%%;MAXSPACE=100000;gen=3' exec myprog
>
>Never seen a need for external filenames, space limits, or external
>control of file generations. Similar for all the other stuff JCL
>"gives" you.
>
>I especially dislike the clumsy way JCL makes you delete files that may
>or may not exist. I once had to spend hours over multiple days trying
>to explain NOT CATALOGED 2 to a system tester.

Or using IEFBR14?

Dan Espen

May 24, 2017

Yes, I'm talking about IEFBR14, although I somewhat prefer IDCAMS.
Both techniques are too clumsy for my tastes.
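
The IDCAMS flavor, roughly (a sketch, dataset name made up -- the SET
MAXCC is what swallows the condition code when the file doesn't exist):

//DELSTEP  EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DELETE APP.DATA.FILE1
  IF LASTCC = 8 THEN SET MAXCC = 0
/*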

--
Dan Espen

Quadibloc

May 24, 2017

I know that the MTS system I used at university, in addition to using
IBM's FORTRAN IV G and H compilers modified to run under MTS, also used
HASP, similarly modified.

Anne & Lynn Wheeler

May 24, 2017

Peter Flass <peter...@yahoo.com> writes:
> I think I'm the only person in the world who _likes_ JCL. It gives you
> a degree of control of the system that you don't get with other
> systems. My impression is that you might have to write a program on,
> e.g., Unix to do what you can do on MVS with a few lines of JCL. Of
> course most of such programs have already been written, but JCL is
> standard as opposed to a random collection of programs you might have
> installed on your particular system.

re:
http://www.garlic.com/~lynn/99.html#92 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#94 MVS vs HASP vs JES (was 2821)

back when I was responsible for production os/360 at the univ., the
first thing you did after building a new system was verify all the
production JCL continued to work. Release-to-release changes routinely
broke some production JCL.

hasp, jes2, nji, etc posts
http://www.garlic.com/~lynn/submain.html#hasp

Philosophically, online systems tended to start out assuming that the
responsible person was there in case something went wrong. Batch
systems started from the reverse assumption, that the responsible
person wasn't there ... so they increasingly had provisions to
automate the handling of issues (and helped lead the way to server,
dark-room operations).

I periodically pontificate about being brought in as a consultant to a
small client/server startup that wanted to do payment transactions on
their server; they had also invented this technology they called "SSL"
that they wanted to use. The result is now frequently called
"electronic commerce". I had absolute authority over the webserver-to-
internet gateway to the payment networks.
internet gateway to payment networks.

The payment networks trouble desk (with circuit-based infrastructure)
had standard of 5mins elapsed time for first-level problem
determination. Early pilot operation with webserver had trouble call
that was eventually closed after 3hrs with "no trouble found". I had to
do a lot of documentation and compensating diagnostic software to try
and get internet/packet-based to comparable to legacy circuit-based
infrastructure.

Until he passed away, the internet standard RFC editor would also let
me help do STD1. He also sponsored a talk I gave at ISI (including
inviting the USC graduate computer security group) on "Why Internet
Isn't Business Critical Dataprocessor".

past posts claiming it can take 4-10 times the (original) effort to
take a well designed and tested application and turn it into a
"service".
http://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS)
http://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
http://www.garlic.com/~lynn/2002n.html#11 Wanted: the SOUNDS of classic computing
http://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations
http://www.garlic.com/~lynn/2004b.html#8 Mars Rover Not Responding
http://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions
http://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
http://www.garlic.com/~lynn/2004p.html#23 Systems software versus applications software definitions
http://www.garlic.com/~lynn/2004p.html#63 Systems software versus applications software definitions
http://www.garlic.com/~lynn/2004p.html#64 Systems software versus applications software definitions
http://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
http://www.garlic.com/~lynn/2005n.html#26 Data communications over telegraph circuits
http://www.garlic.com/~lynn/2006n.html#20 The System/360 Model 20 Wasn't As Bad As All That
http://www.garlic.com/~lynn/2007f.html#37 Is computer history taught now?
http://www.garlic.com/~lynn/2007g.html#51 IBM to the PCM market(the sky is falling!!!the sky is falling!!)
http://www.garlic.com/~lynn/2007h.html#78 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007n.html#77 PSI MIPS
http://www.garlic.com/~lynn/2007o.html#23 Outsourcing loosing steam?
http://www.garlic.com/~lynn/2007p.html#54 Industry Standard Time To Analyze A Line Of Code
http://www.garlic.com/~lynn/2007v.html#53 folklore indeed
http://www.garlic.com/~lynn/2008e.html#41 IBM announced z10 ..why so fast...any problem on z 9
http://www.garlic.com/~lynn/2008e.html#50 fraying infrastructure
http://www.garlic.com/~lynn/2008e.html#53 Why Is Less Than 99.9% Uptime Acceptable?
http://www.garlic.com/~lynn/2008i.html#33 Mainframe Project management
http://www.garlic.com/~lynn/2008n.html#35 Builders V. Breakers
http://www.garlic.com/~lynn/2008p.html#48 How much knowledge should a software architect have regarding software security?
http://www.garlic.com/~lynn/2009.html#0 Is SUN going to become x86'ed ??
http://www.garlic.com/~lynn/2011i.html#27 PDCA vs. OODA
http://www.garlic.com/~lynn/2011k.html#67 Somewhat off-topic: comp-arch.net cloned, possibly hacked
http://www.garlic.com/~lynn/2012d.html#44 Faster, Better, Cheaper: Why Not Pick All Three?
http://www.garlic.com/~lynn/2014f.html#13 Before the Internet: The golden age of online services
http://www.garlic.com/~lynn/2014m.html#86 Economic Failures of HTTPS Encryption
http://www.garlic.com/~lynn/2014m.html#146 LEO
http://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable
http://www.garlic.com/~lynn/2015e.html#16 The real story of how the Internet became so vulnerable
http://www.garlic.com/~lynn/2017.html#27 History of Mainframe Cloud


--
virtualization experience starting Jan1968, online at home since Mar1970

Anne & Lynn Wheeler

May 24, 2017

re:
http://www.garlic.com/~lynn/99.html#92 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#94 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)

cp/40-cms was originally implemented on a 360/40 that had been modified
to add hardware support for virtual memory. CP/40-CMS morphed into
CP/67-CMS when the standard 360/67 with virtual memory became available.

lots of installations were sold the 360/67 for use with tss/360 ... but
because tss/360 never really panned out, lots of installations just
used it as a 360/65 with os/360 (not using virtual memory).
Other installations did their own
virtual memory support ... Univ of Mich did MTS, Stanford did
Orvyl/Wylbur, some recent posts
http://www.garlic.com/~lynn/2017.html#28 {wtf} Tymshare SuperBasic Source Code
http://www.garlic.com/~lynn/2017b.html#10 IBM 1970s
http://www.garlic.com/~lynn/2017c.html#3 The ICL 2900
http://www.garlic.com/~lynn/2017d.html#50 Univ. 709
http://www.garlic.com/~lynn/2017d.html#75 Mainframe operating systems?
http://www.garlic.com/~lynn/2017d.html#97 IBM revenue has fallen for 20 quarters -- but it used to run its business very differently
http://www.garlic.com/~lynn/2017e.html#58 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another

and the IBM science center did CP/67-CMS
http://www.garlic.com/~lynn/subtopic.html#545tech

CMS ran a lot of the OS/360 compilers, assemblers and other applications
by providing simulation of some of OS/360 system services (less than
64kbytes of code). Later, the VM370/CMS development group added much
more extensive simulation of MVS system services (the joke was that in
120kbytes of code, it did better, more cost-effective simulation than
MVS did in 8mbytes of code). However that was all lost when the POK MVS
group convinced corporate to kill the VM370 product, shut down the
Burlington, Mass development group and move all the people to POK to
work on MVS/XA (Endicott eventually managed to save the VM370 product
mission, but had to resurrect a development group from scratch).

In the early 80s, internal mainframe datacenters were running out of
space ... and they were moving a lot of online computing out into 4341s
in departmental areas. There were several large MVS-based development
applications that wouldn't run on CMS. They managed to move quite a few
of these applications to vm370/cms (on distributed 4341s) by providing
another 12kbytes of MVS system services simulation.

old 4341 email (including refs to the additional 12kbytes of simulation).

trivia: in the 67/68 time-frame, when the science center had 12 people
working on cp67 & CMS and various online applications, the TSS/360
group had around 1200 people.

Scott Lurndal

May 24, 2017

While on Burroughs mainframes, one just typed 'RM filename' on the console,
in CANDE (timesharing) or via a control card.

Dan Espen

May 24, 2017

Yes, and good luck trying to do the equivalent with JCL.

You can actually write a JCL procedure called RM and have
it delete a file, but then you would end up with:

// RM FILE1
// RM FILE2

and both file deletions would need to invoke IEFBR14 or IDCAMS
separately.

Okay, so try to write a proc that handles this:

// RM FILE1,FILE2...

Good luck. Your best bet would be to invoke TSO in the
PROC and use a CLIST or REXX.
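
Something like this, roughly (a sketch, names made up), running the
TSO terminal monitor program in batch so the TSO DELETE command can
take a list:

//RM       EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DELETE ('APP.DATA.FILE1' 'APP.DATA.FILE2')
/*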

--
Dan Espen

Jon Elson

May 24, 2017

Many of the horrors of JCL really stem from the architecture of the 360.
There's a lot of detail of data set allocation, with CKD disks and the
variable record size features of the disks. So, allocating files by having
to know, in ADVANCE, how many tracks or cylinders they will occupy, and
having to deal with what happens if the file grows. Yes, these things
really gave you a lot of flexibility to do it YOUR way, and certainly having
all files on the system perfectly contiguous at all times helped
performance. And, getting full performance out of a 360 was important,
because especially the lower models were REALLY slow, and peripherals (I'm
thinking mainly 2314 disks, here) were really slow to seek.
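
For anyone who never wrote one, the up-front guessing looked roughly
like this (a sketch, names made up): you committed to a primary of 10
cylinders, 5-cylinder secondary extents, and the exact record format,
all before the first byte was written:

//NEWFILE  DD  DSN=MY.NEW.DATASET,DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(CYL,(10,5),RLSE),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3120)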

I wrote some tricky DEC DCL commands that manipulated date/time to collect
accounting data for an entire month and assemble files. This used loops and
character string manipulation in the DCL script. Not sure how you'd do that
in JCL.


Jon

Anne & Lynn Wheeler

May 24, 2017

Dan Espen <dan1...@gmail.com> writes:
> Yes, and good luck trying to do the equivalent with JCL.
>
> You can actually write a JCL procedure called RM and have
> it delete a file but then you would end up with:
>
> // RM FILE1
> // RM FILE2
>
> and both file deletions would need to invoke IEFBR14 or IDCAMS
> separately.
>
> Okay, so try to write a proc that handles this:
>
> // RM FILE1,FILE2...
>
> Good luck. Your best bet would be to invoke TSO in the
> PROC and use a CLIST or REXX.

re:
http://www.garlic.com/~lynn/99.html#92 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#94 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#25 MVS vs HASP vs JES (was 2821)

an early CP/67-CMS customer was Union Carbide. One of the Union Carbide
people wrote an OS/360 program that ran on the operator's console and
implemented most of the CMS commands ... simulating them with OS/360
system services (giving the os/360 operator's console the look&feel of
CMS). This could be run on a real 360 ... but was frequently used
running os/360 in a virtual machine.

one of the early commercial service bureau spinoffs of the science
center was NCSS. Another involved people from the science center, the
guy that ran MIT lincoln labs, and cp67 people from lincoln labs ...
with their office in waltham, mass. The Union Carbide guy joined the
waltham, mass group (they also tried to recruit me when I graduated,
rather than have me join the science center).
http://www.garlic.com/~lynn/subtopic.html#545tech

trivia: both cp67 service bureaus quickly moved up the value chain to
providing financial information to wallstreet and other financial
institutions

when there was still a facade that TARP funds would be used to buy "Too
Big To Fail" offbook toxic assets ... it was briefly mentioned in
Jan2009 that the waltham corporation would be involved in helping value
the offbook toxic assets (they had previously bought the pricing
services division from one of the major credit rating agencies, giving
rise to some jokes that credit rating agencies didn't really need to
know the value of things they were rating).

Then there was publicity that it was too hard to rate the TBTF offbook
toxic assets. Various issues:

1) in late summer, some offbook toxic assets went for 22 cents on the
dollar; if the rest of the offbook toxic assets went for that price,
the TBTF would be declared insolvent and have to be liquidated.

2) at ye2008, just the four largest TBTF were still carrying $5.2T in
offbook toxic assets while there had only been $700B appropriated for
TARP. There wasn't even enough to cover $5.2T at 22 cents on the
dollar ($1.14T), which would also have meant a $4.06T loss.

3) paying for triple-A ratings trumped documentation, so they could
start doing no-documentation "liar loans" ... and without documentation
it would be impossible to value the offbook toxic assets.

some "too big to fail" posts
http://www.garlic.com/~lynn/submisc.html#too-big-to-fail
some (triple-A rated) toxic asset posts
http://www.garlic.com/~lynn/submisc.html#toxic.cdo

Anne & Lynn Wheeler

May 24, 2017

Jon Elson <jme...@wustl.edu> writes:
> Many of the horrors of JCL really stem from the architecture of the
> 360. There's a lot of detail of data set allocation, with CKD disks
> and the variable record size features of the disks. So, allocating
> files by having to know, in ADVANCE, how many tracks or cylinders they
> will occupy, and having to deal with what happens if the file grows.
> Yes, these things really gave you a lot of flexibility to do it YOUR
> way, and certainly having all files on the system perfectly contiguous
> at all times helped performance. And, getting full performance out of
> a 360 was important, because especially the lower models were REALLY
> slow, and peripherals (I'm thinking mainly 2314 disks, here) were
> really slow to seek.

re:
http://www.garlic.com/~lynn/99.html#92 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#94 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#25 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#27 MVS vs HASP vs JES (was 2821)

there is a recent thread on the ibm-main mailing list about how
choosing the optimal blocking factor for (CKD) disks improves
performance, taking into account track size, tracks/cyl, number of
cylinders, etc.

However real CKD disks haven't been built for decades, being simulated
on industry standard fixed-block disks
http://www.garlic.com/~lynn/2017f.html#3 SDB (system detrmined Blksize)

ckd, fba, multi-track search, etc posts
http://www.garlic.com/~lynn/submain.html#dasd

thread (archived at google)
https://groups.google.com/forum/#!topic/bit.listserv.ibm-main/FjMmn2SWS1g

a big OS360 problem with CKD was early 360 trading multi-track search
(and channel capacity) against limited real storage. By the late 70s,
the situation had inverted ... real storage was getting large enough to
cache location tables instead of constantly doing multi-track searches.
I was called into a number of severe customer VS2 performance accounts
that turned out to be doing widespread transaction activity, with each
transaction loading its application out of a large PDS library on a
(loosely-coupled, shared) 3330 with a three-cylinder directory. Avg.
depth of search was 1.5 cyls: a multi-track search of 19 tracks (19
revolutions at 60/sec), or .317 seconds elapsed I/O time, plus 9.5
tracks, 0.156 seconds elapsed I/O time ... plus the random access to
load the application ... total around .5 seconds elapsed time to load
each transaction application. This was for a national retailer that
really wanted to do tens or hundreds of transactions per second.

I had offered the MVS group FBA support (which required keeping tables
and eliminating multi-track search). However, I was told that even if I
provided a fully tested and integrated implementation, it would still
cost $26M for education and documentation ... so I needed a $200M-$300M
additional new-sales business case (and since customers were already
buying disks as fast as they could be made, it would have just meant
they would buy the same amount of FBA as they were buying CKD).

past posts getting to play disk engineer in bldgs. 14&15
http://www.garlic.com/~lynn/subtopic.html#disk

other perspective ... current processors' latency to memory, when
measured in count of machine cycles ... is comparable to the number of
60s machine cycles for latency to 60s disk.

In the 70s, I was pointing out that relative system disk throughput was
declining. In the early 80s, I was pontificating that relative system
disk throughput had declined by a factor of ten since the 60s
(processors were getting faster much faster than disks were). The disk
division executives took exception and assigned the division
performance group to refute my claims. They came back a few weeks later
and effectively said that I had slightly understated the situation.
past posts with comparison http://www.garlic.com/~lynn/93.html#31

they then respun the analysis into configuring disks to improve system
throughput ... which was presented at SHARE (session B874). some recent
posts mentioning b874
http://www.garlic.com/~lynn/2017b.html#32 Virtualization's Past Helps Explain Its Current Importance
http://www.garlic.com/~lynn/2017b.html#70 The ICL 2900
http://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory
http://www.garlic.com/~lynn/2017e.html#5 TSS/8, was A Whirlwind History of the Computer

Rich Alderson

May 24, 2017

Jon Elson <jme...@wustl.edu> writes:

> I wrote some tricky DEC DCL commands that manipulated date/time to collect
> accounting data for an entire month and assemble files. This used loops and
> character string manipulation in the DCL script. Not sure how you'd do that
> in JCL.

You don't, you use the correct tool for the job, which is a COBOL or PL/[1I]
program.

And I still have fond memories of JCL, even though I've been exclusively a
PDP-10 guy since 1984. In the stretch where I was an SVS/MVS systems
programmer as well as a TOPS-20 systems programmer (1982-1984), I wrote the
utility for UChicago users to convert their SVS JCL (with HASP and ACF2) to MVS
JCL with JES2 and RACF. Lovely character handling PL/I program, fun to write.

So make that 2 people who like JCL.

--
Rich Alderson ne...@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen

Anne & Lynn Wheeler

May 24, 2017

Anne & Lynn Wheeler <ly...@garlic.com> writes:
> Avg. depth of search was 1.5 cyls: a multi-track search of 19 tracks
> (19 revolutions at 60/sec), or .317 seconds elapsed I/O time, plus
> 9.5 tracks, 0.156 seconds elapsed I/O time ... plus the random access
> to load the application ... total around .5 seconds elapsed time to
> load each transaction application. This was for a national retailer
> that really wanted to do tens or hundreds of transactions per second.

re:
http://www.garlic.com/~lynn/99.html#92 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#94 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#25 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#27 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#28 MVS vs HASP vs JES (was 2821)

for the first one of these, I was invited into the large national
retailer's hdqtrs datacenter ... they took me to a classroom with 15-20
tables, half a dozen of them covered with piles of performance activity
printouts from multiple high-end mainframe VS2 systems.

I started leafing through the printouts; after 30 mins or so I started
to realize that the big drop-off in performance seemed to be correlated
with maximum/peak total activity (across multiple systems) on a
particular shared 3330 disk of around 7 I/Os per second (basically the
two multi-track search I/Os plus the application load for approx. two
transactions/sec ... plus part of something additional).

further investigation showed it was the disk that contained the
transaction application PDS dataset library ... for all the national
retail stores. Eventually the solution was to split it into three PDS
libraries and give each loosely-coupled system its own private copy of
the three PDS libraries, with the highest-used applications
load-balanced into the PDS dataset with the fewest members, to minimize
the multi-track search revolutions.

around the same time, there was something related at IBM San Jose
Research. At the time it had a 370/158 for VM and a 370/168 for MVS
with a shared 3330 disk pool ... but strict rules that strings were
dedicated to systems, and MVS 3330s couldn't be mounted on VM370
strings.

around 10am one morning, the datacenter started getting angry calls
from vm370 users about CMS response having radically degraded.
Investigating, it turned out an operator had accidentally mounted an
MVS 3330 pack on a vm370 string. The standard MVS multi-track search
operations were horribly interfering with CMS response, i.e. a
multi-track search ties/locks up not just the device, but also the
channel and the (shared) controller. MVS/TSO users are so used to the
problem that they don't know enough to complain ... but it came as a
horrible shock to CMS users (how badly CKD DASD multi-track searches
affect response).

demands were made to immediately move the offending MVS pack.
Operations said they wouldn't do it until 2nd shift. We placed a pack
on an MVS string for a VS1 system highly optimized for running under
VM370, brought it up in a virtual machine ... and started doing things
involving multi-track search ... which brought the MVS/168 to its knees
(even running under a loaded vm370 on the 370/158, it easily
outperformed the MVS/168), which significantly improved CMS response.
Operations then said they would immediately move the MVS pack off the
VM370 string, if we moved the VS1 pack.

past posts mentioning CKD, FBA, multi-track search, pds, vtoc, etc
http://www.garlic.com/~lynn/submain.html#dasd

Charlie Gibbs

May 24, 2017

On 2017-05-24, Huge <Hu...@nowhere.much.invalid> wrote:

> On 2017-05-24, Peter Flass <peter...@yahoo.com> wrote:
>
> [snippage]
>
>> I think I'm the only person in the world who _likes_ JCL.
>
> Yep. You are.

I would always say that the purpose of JCL was to give you
something to debug once you got your program working.

--
/~\ cgi...@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!

Peter Flass

May 24, 2017

With DOS I tuned things like using "split cylinder" allocation for the
COBOL work files - half the tracks on a cylinder for one, half for the
other. It was astonishing how much things sped up - no disk arm movement.

>
> I wrote some tricky DEC DCL commands that manipulated date/time to collect
> accounting data for an entire month and assemble files. This used loops and
> character string manipulation in the DCL script. Not sure how you'd do that
> in JCL.
>
>
> Jon
>



--
Pete

Anne & Lynn Wheeler

May 24, 2017

Peter Flass <peter...@yahoo.com> writes:
> With DOS I tuned things like using "split cylinder" allocation for the
> COBOL work files - half the tracks on a cylinder for one, half for the
> other. It was astonishing how much things sped up - no disk arm movement.

As an undergraduate in the 60s (the univ. hired me to be responsible
for their production systems), I would hand-order the output of the
stage1 sysgen so that the stage2 sysgen built system packs optimizing
dataset placement and the ordering of members in PDS datasets (and
therefore also the ordering in the PDS directory multitrack search) to
optimize disk arm motion (and multitrack search time). old post in this
thread showing part of a presentation I did at SHARE on the work (also
some results of rewriting sections of cp67 to improve os360 performance
in a virtual machine)
http://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)

one of the problems was that PTF fixes were constantly being applied,
which would delete the old PDS member and insert the replacement PDS
member at the end. After 6 months or so of PTF activity, there could be
noticeable degradation (and I might have to rebuild the system packs to
recover the optimized disk performance).

other posts in thread
http://www.garlic.com/~lynn/99.html#92 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#94 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#25 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#27 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#28 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#30 MVS vs HASP vs JES (was 2821)

hanc...@bbs.cpcn.com

May 24, 2017

On Tuesday, May 23, 2017 at 10:21:00 PM UTC-4, Jon Elson wrote:
> But, JCL shudder, shudder! Maybe just because I didn't know it real well,
> but it seemed to be the worst thing on OS 360 systems. At least some of the
> rest of OS 360 was pretty decent, but that JCL syntax just seemed way to
> cryptic.

Even its developer, Fred Brooks, acknowledged JCL was cumbersome.

The 50-year success of the product is amazing given the rush to get it
out the door and the need to redefine its purpose (from universal to
big-machine) and fix it multiple times.

hanc...@bbs.cpcn.com

May 24, 2017

On Wednesday, May 24, 2017 at 7:14:45 AM UTC-4, Peter Flass wrote:
> I think I'm the only person in the world who _likes_ JCL. It gives you a
> degree of control of the system that you don't get with other systems. My
> impression is that you might have to write a program on, e.g., Unix to do
> what you can do on MVS with a few lines of JCL. Of course most of such
> programs have already been written, but JCL is standard as opposed to a
> random collection of programs you might have installed on your particular
> system.

Good points.

In 360-DOS, we had homegrown programs to do what OS did, and they were
messy, as you said.

However, some things like proc substitution and backward referencing
could get a little tricky.


hanc...@bbs.cpcn.com

May 24, 2017

On Wednesday, May 24, 2017 at 3:03:01 PM UTC-4, Jon Elson wrote:


> Many of the horrors of JCL really stem from the architecture of the 360.
> There's a lot of detail of data set allocation, with CKD disks and the
> variable record size features of the disks. So, allocating files by having
> to know, in ADVANCE, how many tracks or cylinders they will occupy, and
> having to deal with what happens if the file grows. Yes, these things
> really gave you a lot of flexibility to do it YOUR way, and certainly having
> all files on the system perfectly contiguous at all times helped
> performance. And, getting full performance out of a 360 was important,
> because especially the lower models were REALLY slow, and peripherals (I'm
> thinking mainly 2314 disks, here) were really slow to seek.

So true.

In the 1960s, computers were extremely expensive, and it was necessary
to squeeze out as much performance as possible from limited hardware.
In those days, programmers spent a great deal of their time "squeezing",
saving memory and CPU cycles. For instance, a group of Y/N flags would
be stored as eight bits within a single byte instead of eight separate
bytes.

Disk space was expensive and limited, so precise assignments were
necessary for efficiency and to avoid waste.

Today, a lot of techniques of the past are no longer necessary since
disk space is very plentiful, and there are new tools for space
management. For instance, a long time ago one had to calculate
the optimum block size of a disk file, but some years ago (30?)
they automated that.



> I wrote some tricky DEC DCL commands that manipulated date/time to collect
> accounting data for an entire month and assemble files. This used loops and
> character string manipulation in the DCL script. Not sure how you'd do that
> in JCL.

I _think_ you'd have to capture various system log files and then write
a program to dig out what was wanted.

Dan Espen

May 24, 2017

hanc...@bbs.cpcn.com writes:

> On Wednesday, May 24, 2017 at 3:03:01 PM UTC-4, Jon Elson wrote:
>
>
>> Many of the horrors of JCL really stem from the architecture of the 360.
>> There's a lot of detail of data set allocation, with CKD disks and the
>> variable record size features of the disks. So, allocating files by having
>> to know, in ADVANCE, how many tracks or cylinders they will occupy, and
>> having to deal with what happens if the file grows. Yes, these things
>> really gave you a lot of flexibility to do it YOUR way, and certainly having
>> all files on the system perfectly contiguous at all times helped
>> performance. And, getting full performance out of a 360 was important,
>> because especially the lower models were REALLY slow, and peripherals (I'm
>> thinking mainly 2314 disks, here) were really slow to seek.
>
> So true.
>
> In the 1960s, computers were extremely expensive, and it was necessary
> to squeeze out as much performance as possible from limited hardware.
> In those days, programmers spent a great deal of their time "squeezing",
> saving memory and CPU cycles. For instance, a group of Y/N flags would
> be stored as eight bits within a single byte instead of eight separate
> bytes.

Well, starting in 1964, I was there.
On my first machine, 14xx, we didn't use bit flags because the
architecture didn't make it easy.

On S/360, if you coded COBOL, as most everyone did, you didn't
use bit flags because the language didn't allow easy bit access.

If you coded Assembler or even PL/I, bit flags were about as easy to
use as character flags. I used them when I wanted to save space.
I don't believe there is any speed difference between testing a bit
and comparing a character. The space saved is tiny. More significant
if the flags are in a record format.

The OS data structures are full of bit flags.

I once had to write screen processors with a 1K limit for each
screen. One of the screens required me to keep track of numbers
00 through 99. No duplicates were allowed and they weren't entered
all at once. So, I turned the 2 character input into a bit/byte offset.
If I recall accurately, I wrote 8 instructions to convert, test the
bit for dupe detection, and otherwise set the bit.
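
The idiom, for the curious, was the TM/OI pair -- a minimal sketch
(labels and masks made up, surrounding code omitted):

         TM    FLAGS,FLGDUP        test: seen this number already?
         BO    DUPERR              bit on -- duplicate entry
         OI    FLAGS,FLGDUP        bit off -- set it to remember
* (fall through: first time this number was entered)
FLAGS    DC    X'00'               eight Y/N flags in a single byte
FLGDUP   EQU   X'80'               mask selecting one of the eight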

--
Dan Espen

Dennis Boone

May 24, 2017

> I know that the MTS system I used at University, in addition to using
> IBM's FORTRAN IV G and H compilers, modified to run under MTS, also
> used HASP, also so modified.

UMich felt HASP was clunky, and Resource Manager was eventually
written by one of the consortium members.

De

Anne & Lynn Wheeler

unread,
May 25, 2017, 12:39:08 AM5/25/17
to
Dan Espen <dan1...@gmail.com> writes:
> The OS data structures are full of bit flags.

360/65MP (os/360 mvt) used a kernel (test&set) spin-lock implementation
... so kernel code only ran on one processor at a time ... and any given
application only ran on one processor at a time.

charlie invented compare&swap instruction (instruction name chosen
because CAS are charlie's initials) while doing fine-grain
multiprocessor locking for cp/67 (kernel code could run simultaneously
on both processors) at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
and
http://www.garlic.com/~lynn/subtopic.html#smp

initially when we attempted to get compare&swap added to the 370
architecture ... it was rejected because the pok favorite son operating
system people said that test&set was sufficient for multiprocessor
support. The 370 architecture owners said that in order to justify
compare&swap for 370, we had to come up with uses besides kernel
multiprocessing locking/serialization. Thus were born the application
multiprogramming uses (whether running on a single processor or multiple
processors) ... examples continue to be in the principles of operation
appendix.
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.2?SHELF=DZ9ZBK03&DT=20040504121320&CASE=
and
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.3?SHELF=DZ9ZBK03&DT=20040504121320&CASE=
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.4?SHELF=DZ9ZBK03&DT=20040504121320&CASE=
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.5?SHELF=DZ9ZBK03&DT=20040504121320&CASE=
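For illustration, the application-level compare&swap idiom (fetch,
compute, store only if unchanged, else retry) looks roughly like this
in C11 atomics; a hedged sketch, not the appendix code itself:

  #include <stdatomic.h>

  static _Atomic unsigned counter;

  /* Lock-free update: safe whether or not other processors (or
     interrupting tasks) are updating the same word -- unlike a plain
     fetch/modify/store sequence such as 370 OI/NI. */
  void add_to_counter(unsigned delta)
  {
      unsigned old = atomic_load(&counter);
      while (!atomic_compare_exchange_weak(&counter, &old, old + delta))
          ;  /* 'old' now holds the current value; recompute and retry */
  }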

however, later as MVS tried to move from the single multiprocessor kernel
spin-lock to finer-grain locking, the enormous number of bit flag
operations gave rise to the folklore that high-end 370 MVS machines
had to fiddle the oi&ni instructions ... since they aren't atomic like
compare&swap ... aka the processor hardware has to fetch the storage,
modify the bit and then store it back in a single instruction operation
(but involving multiple non-atomic storage operations from the memory
standpoint) ... aka they had to "fix" the multiprocessor hardware so
that oi/ni operate more like compare&swap (because mvs couldn't fix all
the problems). It is one of the reasons why they also explicitly added
this to the principles of operation appendix
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.1?SHELF=DZ9ZBK03&DT=20040504121320&CASE=

os/360 bit flag problems show up again in moving to mvs/xa 31-bit
addressing because lots of them were in the high-order byte of 24-bit
address fields.

Quadibloc

unread,
May 25, 2017, 1:10:33 AM5/25/17
to
On Wednesday, May 24, 2017 at 1:03:01 PM UTC-6, Jon Elson wrote:

> Many of the horrors of JCL really stem from the architecture of the 360.
> There's a lot of detail of data set allocation, with CKD disks and the
> variable record size features of the disks. So, allocating files by having
> to know, in ADVANCE, how many tracks or cylinders they will occupy, and
> having to deal with what happens if the file grows. Yes, these things
> really gave you a lot of flexibility to do it YOUR way, and certainly having
> all files on the system perfectly contiguous at all times helped
> performance. And, getting full performance out of a 360 was important,
> because especially the lower models were REALLY slow, and peripherals (I'm
> thinking mainly 2314 disks, here) were really slow to seek.

Of course, the hardware architecture of the 360 didn't prevent the Michigan
Terminal System from being written for it, which allowed ordinary mortals to
use one.

MTS incorporated some ideas that I would like to see available for computers
today.

Files in MTS weren't just blobs of data that had to be divided into records by characters which were part of the data.

There was a "sequential" file format which was less flexible, in which a
record consisted of a 16-bit record length followed by the contents of a
record. Those files didn't allow random access to their contents.

But the normal file format was the "line file". Originally, records could be 0-255 characters in length; this was later increased to 0-32,767 characters.

The records which contained the data in the file were memo fields of a
database record. In addition to the text and the length, both parts of a memo
field, there was a line number, which was the primary key. So an MTS line
file was a particular standardized format of an ISAM file.

In Microsoft databases, there is a "currency" datatype, consisting of 64-bit
integers scaled by being divided by 10,000. Line numbers in MTS line files
had a similar format: they were 32-bit integers scaled by being divided by
1000.
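
As a hedged illustration of that scaling (invented names; MTS's actual
internals may have differed), the arithmetic is just fixed-point:

  #include <stdio.h>

  /* An MTS-style line number: a signed 32-bit integer scaled by 1000,
     so the stored value 1234 means line 1.234. */
  typedef int line_no;

  int main(void)
  {
      line_no a = 1 * 1000;          /* line 1 */
      line_no b = 1 * 1000 + 500;    /* line 1.5, insertable between
                                        lines 1 and 2 without renumbering */
      printf("%d.%03d < %d.%03d\n", a / 1000, a % 1000,
                                    b / 1000, b % 1000);
      return 0;
  }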

They were also signed integers; negative line numbers and line number zero
had special functions, permitting annotations to be associated with a file.
Perhaps this could have been used for the "resource fork" of the original
Macintosh operating system.

John Savard

Anne & Lynn Wheeler

unread,
May 25, 2017, 1:16:33 AM5/25/17
to
d...@ihatespam.msu.edu (Dennis Boone) writes:
> UMich felt HASP was clunky, and Resource Manager was eventually
> written by one of the consortium members.

my wife was in the JES group and one of the "catchers" for ASP, turning
it into JES3. She was also co-author of JESUS, the JES Unified System, with
all the features of JES2 and JES3 that the respective customers couldn't
live w/o. However, for various reasons it didn't come to pass ... and then
she was conned into going to POK to be responsible for loosely-coupled
architecture (mainframe for cluster). While there she did peer-coupled
shared data architecture
http://www.garlic.com/~lynn/submain.html#shareddata

she didn't remain long in part because of little uptake (except for IMS
hot-standby until sysplex & parallel sysplex) and in part because of
never-ending battles with the communication group trying to force her into
using SNA/VTAM for loosely-coupled operations.

a little more mention of HASP in this old post, primarily having to do
with the justification for moving to virtual memory for all 370s
.... primarily because of the really bad MVT storage management ...
typically requiring regions to be four times larger than actually used
... resulting in only four regions on a 1mbyte 370/165. Processors were
getting faster more quickly than disks were ... so high-end 370s needed
increasing numbers of concurrent applications running simultaneously to
keep the system utilized.
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

posts mentioning hasp, jes2, nji, etc
http://www.garlic.com/~lynn/submain.html#hasp

posts in thread:
http://www.garlic.com/~lynn/99.html#92 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#94 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#25 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#27 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#28 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#30 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#31 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#32 MVS vs HASP vs JES (was 2821)

Peter Flass

unread,
May 25, 2017, 8:15:23 AM5/25/17
to
Really? I thought HASP was some of the slickest code I'd ever seen. I used
to study the source to pick up coding techniques.

--
Pete

Quadibloc

unread,
May 25, 2017, 8:31:32 AM5/25/17
to
On Wednesday, May 24, 2017 at 11:10:33 PM UTC-6, Quadibloc wrote:

> Files in MTS weren't just blobs of data that had to be divided into
> records by characters which were part of the data.

> There was a "sequential" file format which was less flexible, in which a
> record consisted of a 16-bit record length followed by the contents of a
> record. Those files didn't allow random access to their contents.

> But the normal file format was the "line file". Originally, records could
> be 0-255 characters in length; this was later increased to 0-32,767
> characters.

> The records which contained the data in the file were memo fields of a
> database record. In addition to the text and the length, both parts of a memo
> field, there was a line number, which was the primary key. So an MTS line
> file was a particular standardized format of an ISAM file.

> In Microsoft databases, there is a "currency" datatype, consisting of 64-bit
> integers scaled by being divided by 10,000. Line numbers in MTS line files
> had a similar format: they were 32-bit integers scaled by being divided by
> 1000.

Incidentally, if those two were the only file types available, that would
have interfered with setting up a database system to run under MTS that
used general ISAM files for data. So, presumably it was possible to create
and access files of a "raw" type like those on today's computers as well,
although if it was, I don't recall how it was done, and whether such access
was limited to privileged accounts.

MTS, in order to provide the functionality it offered, must have had an
equivalent to the Unix setuid among file attributes.

John Savard

Quadibloc

unread,
May 25, 2017, 8:39:15 AM5/25/17
to
On Thursday, May 25, 2017 at 6:31:32 AM UTC-6, Quadibloc wrote:

> MTS, in order to provide the functionality it offered, must have had an
> equivalent to the Unix setuid among file attributes.

They had something more finely grained, the "Program key".

John Savard

jmfbahciv

unread,
May 25, 2017, 9:53:16 AM5/25/17
to
Rich Alderson wrote:
> Jon Elson <jme...@wustl.edu> writes:
>
>> I wrote some tricky DEC DCL commands that manipulated date/time to collect
>> accounting data for an entire month and assemble files. This used loops and
>> character string manipulation in the DCL script. Not sure how you'd do that
>> in JCL.
>
> You don't, you use the correct tool for the job, which is a COBOL or PL/[1I]
> program.
>
> And I still have fond memories of JCL, even though I've been exclusively a
> PDP-10 guy since 1984. In the stretch where I was an SVS/MVS systems
> programmer as well as a TOPS-20 systems programmer (1982-1984), I wrote the
> utility for UChicago users to convert their SVS JCL (with HASP and ACF2) to MVS
> JCL with JES2 and RACF. Lovely character handling PL/I program, fun to write.
>
> So make that 2 people who like JCL.
>
Is there an equivalent of JCL on the -10s? Based on the posts, it sounds
like a combination of MIC and a BATCON which could keep counts.

/BAH

Quadibloc

unread,
May 25, 2017, 11:28:16 AM5/25/17
to
On an IBM computer running the OS/360 operating system, JCL plays the
role of what UNIX calls the shell: the program that takes commands like
"run this program, get the input from this device, and put the output in
that file".

In that sense, of course there was an equivalent on the PDP-10, or it would
have been rather hard to use.

The PDP-10 may also have had file and disk management utilities, but those
aren't an equivalent to JCL: a shell that was extremely cryptic and
forced you to allocate files explicitly. No computer maker in its
right mind would attempt to inflict such a thing on its customers;
only IBM could maybe get away with it.

John Savard



Jon Elson

unread,
May 25, 2017, 2:48:17 PM5/25/17
to
Anne & Lynn Wheeler wrote:


> big OS360 problem with CKD was early 360 using multi-track search (and
> channel capacity) trade-off for limited real storage.

Right, on a 360/30 with only one task, this worked great. On a 360/50 with
50 jobs running, letting anybody do a long search tied up the channel for
seconds at a time. Generally, you had all DASD on one selector, and the
tapes on another selector to avoid long (slow) tape operations from
impacting the disks. So, if a search was performed, all disks were
unavailable until it was completed. Not great in the multiprogramming
environment.

Jon

Jon Elson

unread,
May 25, 2017, 3:05:40 PM5/25/17
to
Well, of course, there are always tradeoffs. Remember that the 360 started
out with the /30, a 32-bit architecture emulated on an 8-bit machine with 8-
bit memory. It had a memory bandwidth of about 300K bytes/second. Memory
was capped at 64 KB, but a lot of machines were delivered with 8K or 16K.
(Note, the 360/30 did NOT run OS/360, it ran a few lower-level systems such
as TOS and DOS.)

But, anyway, they were struggling to give the best performance with the very
low-performing hardware available (or conceived of) in 1963 or so. Disks
had hydraulic actuators for the head position. On the 360/30, the original
disk drives had to recalibrate to a stop and then seek forward to the
desired cylinder; they couldn't even seek backwards one cylinder. A full-
length seek was close to one second!

So, getting the most out of such a disk system required some compromises.
If you make all files contiguous, you pick up a lot of performance, but it
makes allocation of additional records to a file a problem, if your initial
allocation was too small. Also, the decision to make disks look just like
tapes, where you could write any size record you want, may have looked like
a GREAT idea in 1963. It WAS flexible, but it made it impossible to have a
disk system with "random" allocation of sectors, like almost all systems use
today (fixed block architecture). So, they got locked into a number of
features that at least some people might think came back to bite them. I
suspect that most data disks from the CKD days had LOTS of empty space at
the end of all data sets, as people always allocated more than they thought
they'd need, "just in case".

Jon

Charles Richmond

unread,
May 25, 2017, 5:34:24 PM5/25/17
to
On 5/24/2017 6:32 PM, hanc...@bbs.cpcn.com wrote:
>
> [snip...] [snip...] [snip...]
>
> In the 1960s, computers were extremely expensive, and it was necessary
> to squeeze out as much performance as possible from limited hardware.
> In those days, programmers spent a great deal of their time "squeezing",
> saving memory and CPU cycles. For instance, a group of Y/N flags would
> be stored as eight bits within a single byte instead of eight separate
> bytes.
>
> Disk space was expensive and limited, so precise assignments were
> necessary for efficiency and to avoid waste.
>
> Today, a lot of techniques of the past are no longer necessary since
> disk space is very plentiful, and there are new tools for space
> management. For instance, a long time ago one had to calculate
> the optimum block size of a disk file, but some years ago (30?)
> they automated that.
>

Yes, modern disk drives do a lot more *for* you. And don't forget the
on-board disk cache. Modern disk drives have their own processor with
code to optimize many things. If you write a block to the cache, the
drive may wait a few milliseconds before actually writing the block to
disk. That way, if your program writes another block to the file
(likely), then both blocks can be written with *one* disk head seek. If
you read a block, the drive may read the entire track into the cache at
the same time. If you read the next block on the track, there's *no*
need to do another head seek at all. I wish I knew all the tricks that
the drives do these days to speed things along!!

--
numerist at aquaporin4 dot com


Charles Richmond

unread,
May 25, 2017, 5:51:36 PM5/25/17
to
On 5/24/2017 9:29 PM, Dan Espen wrote:
>
> [snip...] [snip...] [snip...]
>
> Well, starting in 1964, I was there.
> On my first machine, 14xx, we didn't use bit flags because the
> architecture didn't make it easy.
>
> On S/360, if you coded COBOL, as most everyone did, you didn't
> use bit flags because the language didn't allow easy bit access.
>
> If you coded, Assembler or even PL/I, bit flags were about as easy to
> use as character flags. I used them when I wanted to save space.
> I don't believe there is any speed difference between testing a bit
> and comparing a character. The space saved is tiny. More significant
> if the flags are in a record format.
>
> The OS data structures are full of bit flags.
>
> I once had to write screen processors with a 1K limit for each
> screen. One of the screens required me to keep track of numbers
> 00 through 99. No duplicates were allowed and they weren't entered
> all at once. So, I turned the 2 character input into a bit/byte offset.
> If I recall accurately, I wrote 8 instructions to convert, test the bit
> for dupe detection, otherwise set the bit if on.
>

I once wrote a program that printed BOL's (bills of lading) that worked
in a suite of programs that did the other work. The BOL's had six digit
numbers, and the numbers needed to start back at one when all had been
used. (This was all written in C and SQL on a UNIX workstation.) I had
problems from time to time issuing duplicate BOL numbers.

So I created an integer array to act as bit flags. The array was 31,250
32-bit integers all starting out as zeroes. Each time I created a BOL
number, I'd divide the number by 32 to get the index of the array
element... and take the BOL number modulo 32 to get the number of the
bit in the word. The overhead for this bit selection was small.

If the bit was set, I incremented the BOL number by one and tried again.
If the bit was zero, I'd set the bit to one and use the BOL number I
had tested.

Eventually, there would be *no* BOL numbers left, and I'd have to zero
out the entire array and start the BOL numbers over at 1.
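
A minimal C sketch of that scheme (invented names; the original also
persisted the state via SQL):

  #include <string.h>

  #define NBOL 1000000UL                /* six-digit BOL numbers  */
  static unsigned bolmap[NBOL / 32];    /* 31,250 32-bit integers */

  /* Return the next unused BOL number at or after 'start', marking it
     used; when every number has been issued, clear the map and start
     over at 1, as described above. */
  unsigned long next_bol(unsigned long start)
  {
      unsigned long n = start;
      for (unsigned long tried = 0; tried < NBOL; tried++) {
          unsigned long word = n / 32;  /* index of the array element */
          unsigned bit = n % 32;        /* bit within the word        */
          if (!(bolmap[word] & (1U << bit))) {
              bolmap[word] |= 1U << bit;
              return n;
          }
          n = (n + 1) % NBOL;           /* bit was set: try the next  */
      }
      memset(bolmap, 0, sizeof bolmap); /* all used: start over at 1  */
      bolmap[0] |= 1U << 1;
      return 1;
  }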

hanc...@bbs.cpcn.com

unread,
May 25, 2017, 6:01:32 PM5/25/17
to
On Thursday, May 25, 2017 at 3:05:40 PM UTC-4, Jon Elson wrote:

> Well, of course, there are always tradeoffs. Remember that the 360 started
> out with the /30, a 32-bit architecture emulated on an 8-bit machine with 8-
> bit memory. It had a memory bandwidth of about 300K bytes/second. Memory
> was capped at 64 KB, but a lot of machines were delivered with 8K or 16K.
> (Note, the 360/30 did NOT run OS/360, it ran a few lower-level systems such
> as TOS and DOS.)

I just want to note that the veterans told me that the S/360-30, despite
its limitations, still outperformed the 1401, the machine it replaced.
The 1311 disk was upgraded to be the 2311, and supposedly was faster
and held more data, so the 2311 offered more.

Charlie Gibbs

unread,
May 25, 2017, 6:36:42 PM5/25/17
to
On 2017-05-25, Jon Elson <jme...@wustl.edu> wrote:

> So, getting the most out of such a disk system required some compromises.
> If you make all files contiguous, you pick up a lot of performance, but it
> makes allocation of additional records to a file a problem, if your initial
> allocation was too small. Also, the decision to make disks look just like
> tapes, where you could write any size record you want, may have looked like
> a GREAT idea in 1963. It WAS flexible, but it made it impossible to have a
> disk system with "random" allocation of sectors, like almost all systems use
> today (fixed block architecture). So, they got locked into a number of
> features that at least some people might think came back to bite them. I
> suspect that most data disks from the CKD days had LOTS of empty space at
> the end of all data sets, as people always allocated more than they thought
> they'd need, "just in case".

On the Univac 9300, running out of disk space was usually a fatal error.
Even the standard sort utility would issue an error display and die if
the work files you gave it were too small. This would usually bite you
when a job that had been running successfully for months was given files
larger than usual. It was especially frustrating if the sort was the
first step of a job you left running overnight - you'd arrive in the
morning to find a mess that took a couple of hours to clean up, and
all the while users are calling up asking where their reports are.

Charlie Gibbs

unread,
May 25, 2017, 6:36:42 PM5/25/17
to
On 2017-05-25, Jon Elson <jme...@wustl.edu> wrote:

Yes, I had a lot of fun with CCWs in a single-tasking environment.
I had a CCW chain that would find the VTOC, wherever it was on the
disk, search it for a given file name, and read in the file's format
1 label. Another application was a disk copy utility I got my hands
on and speeded up. It used multi-track operations to analyze the
format of a disk and build custom CCW chains to read or write an
entire track in a single revolution. It could clone any 2316 pack
in 8 minutes (including sorting out alternate track assignments).

Bob Eager

unread,
May 25, 2017, 6:51:23 PM5/25/17
to
This is a detuned variation of the algorithm that appeared in the
Programming Pearls column by Jon Bentley, in the late 1970s or early
1980s. That also incorporated a sort.

https://goo.gl/FDD1By




--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org

Anne & Lynn Wheeler

unread,
May 25, 2017, 7:25:52 PM5/25/17
to
Jon Elson <jme...@wustl.edu> writes:
> Well, of course, there are always tradeoffs. Remember that the 360 started
> out with the /30, a 32-bit architecture emulated on an 8-bit machine with 8-
> bit memory. It had a memory bandwidth of about 300K bytes/second. Memory
> was capped at 64 KB, but a lot of machines were delivered with 8K or 16K.
> (Note, the 360/30 did NOT run OS/360, it ran a few lower-level systems such
> as TOS and DOS.)

64kbyte 360/30 ran early versions of os/360 PCP ... univ. had one up
through at least release 6 or 7. By release 9.5, it had been replaced
with a 768kbyte 360/67, run mostly as a 360/65 with os/360 MFT (moving to
MVT with release 15/16).

my 1st programming job was to reimplement 1401 MPIO that did the
tape->printer/punch and cardreader->tape as unit record front end to 709
running tape->tape IBSYS. The 360/30 had 1401 hardware emulation ... so
I guess it was purely part of effort for transitioning to 360 (709 &
360/30 then replaced with 360/67). I got to design and implement my own
monitor, device drivers, interrupt handlers, error recovery, storage
management, dispatching, etc.

eventually had a tray of 2000 cards, with an assembler option that built
either the stand-alone version or the os/360 version with DCB macros and
get/put macros. The stand-alone version assembled in about 30mins on the
360/30 under os/360 pcp release 6. The DCB version assembled in around an
hour because assembly of each DCB macro took 5-6 minutes elapsed time.

student fortran jobs with ibsys tape->tape took under a second. On the
initial move to os/360 on the 360/65, each student fortran job took around
a minute. This was reduced to around 30secs elapsed time with HASP. Then
with my careful manual reordering of stage2 sysgen (optimized arm seek and
multi-track PDS search) I got it to around 12secs ... mentioned earlier
in thread:
http://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#31 MVS vs HASP vs JES (was 2821)

it wasn't until installing watfor that student fortran jobs got back to
under a second (running faster than 709 ibsys). Student fortran jobs
started out as 3-step fortran (compile, link-edit and go/execute) ... and
effectively all the time was job-step scheduling and file open/close,
which involved a huge number of CKD disk i/os.

Watfor was a one-step "monitor" that handled multiple student jobs per
execution ... effectively 360/65&2314 step time was 4 seconds, and then
watfor processed fortran at "20,000 cards/min" (on 360/65) ... around
333 statements/sec. The average student fortran job ran around 60
statements (30-100), usually with almost instantaneous execution. The
input window would accept student jobs and place them in a card tray.
When the tray got nearly full (2000-3000 cards, 30-50 jobs; maybe only
partially full, if it had been a couple hrs since the last run), it would
be run as a single watfor job step ... 5+ jobs/sec plus 4sec startup ...
9-13secs elapsed per run.

past posts mentioning watfor
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
http://www.garlic.com/~lynn/96.html#9 cics
http://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
http://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
http://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
http://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/99.html#175 amusing source code comments (was Re: Testing job applicants)
http://www.garlic.com/~lynn/2000.html#55 OS/360 JCL: The DD statement and DCBs
http://www.garlic.com/~lynn/2000d.html#45 Charging for time-share CPU time
http://www.garlic.com/~lynn/2000d.html#46 Charging for time-share CPU time
http://www.garlic.com/~lynn/2001.html#52 Review of Steve McConnell's AFTER THE GOLD RUSH
http://www.garlic.com/~lynn/2001g.html#20 Golden Era of Compilers
http://www.garlic.com/~lynn/2001g.html#22 Golden Era of Compilers
http://www.garlic.com/~lynn/2001h.html#12 checking some myths.
http://www.garlic.com/~lynn/2001i.html#33 Waterloo Interpreters (was Re: RAX (was RE: IBM OS Timeline?))
http://www.garlic.com/~lynn/2002f.html#53 WATFOR's Silver Anniversary
http://www.garlic.com/~lynn/2002f.html#54 WATFOR's Silver Anniversary
http://www.garlic.com/~lynn/2002g.html#1 WATFOR's Silver Anniversary
http://www.garlic.com/~lynn/2002m.html#3 The problem with installable operating systems
http://www.garlic.com/~lynn/2002q.html#29 Collating on the S/360-2540 card reader?
http://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader?
http://www.garlic.com/~lynn/2003c.html#51 HASP assembly: What the heck is an MVT ABEND 422?
http://www.garlic.com/~lynn/2003j.html#26 A Dark Day
http://www.garlic.com/~lynn/2004.html#48 AMD/Linux vs Intel/Microsoft
http://www.garlic.com/~lynn/2004b.html#53 origin of the UNIX dd command
http://www.garlic.com/~lynn/2004c.html#60 IBM 360 memory
http://www.garlic.com/~lynn/2005m.html#16 CPU time and system load
http://www.garlic.com/~lynn/2005q.html#7 HASP/ASP JES/JES2/JES3
http://www.garlic.com/~lynn/2005r.html#0 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2006.html#15 S/360
http://www.garlic.com/~lynn/2006b.html#5 IBM 610 workstation computer
http://www.garlic.com/~lynn/2007o.html#70 The name "shell"
http://www.garlic.com/~lynn/2007p.html#0 The use of "script" for program
http://www.garlic.com/~lynn/2008.html#33 JCL parms
http://www.garlic.com/~lynn/2008.html#51 IBM LCS
http://www.garlic.com/~lynn/2009b.html#71 IBM tried to kill VM?
http://www.garlic.com/~lynn/2009e.html#18 Microminiaturized Modules
http://www.garlic.com/~lynn/2009f.html#24 Opinion: The top 10 operating system stinkers
http://www.garlic.com/~lynn/2009h.html#41 Book on Poughkeepsie
http://www.garlic.com/~lynn/2009o.html#37 Young Developers Get Old Mainframers' Jobs
http://www.garlic.com/~lynn/2009q.html#73 Now is time for banks to replace core system according to Accenture
http://www.garlic.com/~lynn/2009s.html#19 PDP-10s and Unix
http://www.garlic.com/~lynn/2009s.html#21 PDP-10s and Unix
http://www.garlic.com/~lynn/2010b.html#61 Source code for s/360 [PUBLIC]
http://www.garlic.com/~lynn/2010e.html#54 search engine history, was Happy DEC-10 Day
http://www.garlic.com/~lynn/2010l.html#61 Mainframe Slang terms
http://www.garlic.com/~lynn/2010n.html#66 PL/1 as first language
http://www.garlic.com/~lynn/2011g.html#44 My first mainframe experience
http://www.garlic.com/~lynn/2011g.html#50 My first mainframe experience
http://www.garlic.com/~lynn/2011h.html#17 Is the magic and romance killed by Windows (and Linux)?
http://www.garlic.com/~lynn/2011j.html#13 program coding pads
http://www.garlic.com/~lynn/2011k.html#17 Last card reader?
http://www.garlic.com/~lynn/2011o.html#34 Data Areas?
http://www.garlic.com/~lynn/2011p.html#5 Why are organizations sticking with mainframes?
http://www.garlic.com/~lynn/2012.html#36 Who originated the phrase "user-friendly"?
http://www.garlic.com/~lynn/2012.html#43 Who originated the phrase "user-friendly"?
http://www.garlic.com/~lynn/2012d.html#7 PCP - memory lane
http://www.garlic.com/~lynn/2012e.html#98 Burroughs B5000, B5500, B6500 videos
http://www.garlic.com/~lynn/2012o.html#31 Regarding Time Sharing
http://www.garlic.com/~lynn/2013.html#24 Is Microsoft becoming folklore?
http://www.garlic.com/~lynn/2013.html#31 Java Security?
http://www.garlic.com/~lynn/2013g.html#39 Old data storage or data base
http://www.garlic.com/~lynn/2013h.html#4 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013l.html#18 A Brief History of Cloud Computing
http://www.garlic.com/~lynn/2013o.html#54 Curiosity: TCB mapping macro name - why IKJTCB?
http://www.garlic.com/~lynn/2013o.html#87 The Mother of All Demos: The 1968 presentation that sparked a tech revolution
http://www.garlic.com/~lynn/2014.html#23 Scary Sysprogs and educating those 'kids'
http://www.garlic.com/~lynn/2014f.html#76 Fifty Years of BASIC, the Programming Language That Made Computers Personal
http://www.garlic.com/~lynn/2014f.html#85 Fifty Years of BASIC, the Programming Language That Made Computers Personal
http://www.garlic.com/~lynn/2014m.html#134 A System 360 question
http://www.garlic.com/~lynn/2015.html#51 IBM Data Processing Center and Pi
http://www.garlic.com/~lynn/2015b.html#15 What were the complaints of binary code programmers that not accept Assembly?
http://www.garlic.com/~lynn/2015c.html#102 End of vacuum tubes in computers?
http://www.garlic.com/~lynn/2015h.html#21 the legacy of Seymour Cray
http://www.garlic.com/~lynn/2015h.html#35 high level language idea
http://www.garlic.com/~lynn/2017c.html#29 Multitasking, together with OS operations

John Levine

unread,
May 25, 2017, 11:54:21 PM5/25/17
to
In article <MrCdnUsw2stjt7rE...@giganews.com>,
Jon Elson <jme...@wustl.edu> wrote:
>Well, of course, there are always tradeoffs. Remember that the 360 started
>out with the /30, a 32-bit architecture emulated on an 8-bit machine with 8-
>bit memory. ...

That's not correct. IBM designed the whole 360 series at once, so
they were working on what ended up as the /75 at the same time as the
/30. The hack of doing ISAM key searches in the disk controller
worked great on the /30 but I get the impression they didn't think
through the implications of doing it on larger machines. Or they
wrongly thought that people would put a lot fewer disks per controller
than they did.

>(Note, the 360/30 did NOT run OS/360, it ran a few lower-level systems such
>as TOS and DOS.)

It most often ran DOS, but I can assure you that OS PCP ran on a /30.
I used it.

I do agree that a lot of the horribleness in JCL was intended to
squeeze performance out of low-performance disks and tiny memories.

R's,
John

Anne & Lynn Wheeler

unread,
May 26, 2017, 2:20:15 AM5/26/17
to
Jon Elson <jme...@wustl.edu> writes:
> Right, on a 360/30 with only one task, this worked great. On a 360/50
> with 50 jobs running, letting anybody do a long search tied up the
> channel for seconds at a time. Generally, you had all DASD on one
> selector, and the tapes on another selector to avoid long (slow) tape
> operations from impacting the disks. So, if a search was performed,
> all disks were unavailable until it was completed. Not great in the
> multiprogramming environment.

re:
http://www.garlic.com/~lynn/2017f.html#28 MVS vs HASP vs JES (was 2821)

more recent multi-track
http://www.garlic.com/~lynn/2017f.html#30 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#31 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#36 MVS vs HASP vs JES (was 2821)

other recent multi-track
http://www.garlic.com/~lynn/2017.html#88 The ICL 2900
http://www.garlic.com/~lynn/2017b.html#28 Virtualization's Past Helps Explain Its Current Importance
http://www.garlic.com/~lynn/2017b.html#36 IBM LinuxONE Rockhopper
http://www.garlic.com/~lynn/2017f.html#5 SDB (system determined Blksize)

vtoc & PDS directory multi-track search ... find directory entry
... trading off lots of channel I/O for scarce real storage. Part of the
problem was that sequential (multi-track) search doesn't scale well as
systems grew ... from multi-track search of a couple tracks to a couple
cylinders.

IMS did complex multi-track operations along with self-modifying channel
programs ... all in one channel program ... do multi-track search for
item with location, read the location into argument for subsequent
search in the same channel program ... which can be repeated until
finally read/write the actual record.

I was involved in original sql/relational implementation (System/R)
http://www.garlic.com/~lynn/submain.html#systemr

in the late 70s, IMS was criticizing System/R as taking twice the
physical disk space as IMS and 4-5 disk I/Os for the same database. The
issue was that IMS had physical record numbers as part of the actual
data ... while relational had separate index infrastructure (doubling
required disk space) ... and processing the index took 4-5 disk I/Os.
The relational response was that IMS had several times the
manual administration effort because the physical record locations were
exposed as part of the data.

Going into the 80s, there was significant drop in disk price/mbyte
... making the relational index space less of issue. Also computer
system processor memory was significantly increasing ... and there was
increasing use of caching to offset the reduction in relative system
disk throughput ... rdbms index caching then minimized the number of
separate disk i/os to access the desired record. At the same time there
was a significant increase in DBMS use (in part because of the significant
decrease in system and dataprocessing costs). The much lower (manual)
effort and skill required meant that they would mostly be RDBMS.

when Gray was leaving research for tandem ... he was palming off a bunch
of stuff on me ... including DBMS consulting with the IMS group, old
email
http://www.garlic.com/~lynn/2007.html#email801016

after Gray's disappearance
http://www.informationweek.com/database/sailing-mystery-unsolved-court-declares-jim-gray-dead/d/d-id/1104453
http://www.informationweek.com/the-search-for-microsoft-researcher-jim-gray/d/d-id/1053601

there is celebration of Gray at UCB
http://bits.blogs.nytimes.com/2012/05/18/closure-in-disappearance-of-computer-scientist-jim-gray/
http://bits.blogs.nytimes.com/2008/05/31/a-tribute-to-jim-gray-sometimes-nice-guys-do-finish-first/

Tribute press release:
http://web.archive.org/web/20080616153833/http://www.eecs.berkeley.edu/IPRO/JimGrayTribute/pressrelease.html

podcast of the tribute:
http://web.archive.org/web/20080604010939/http://webcast.berkeley.edu/event_details.php?webcastid=23082
http://web.archive.org/web/20080604072804/http://webcast.berkeley.edu/event_details.php?webcastid=23083
http://web.archive.org/web/20080604072809/http://webcast.berkeley.edu/event_details.php?webcastid=23087
http://web.archive.org/web/20080604072815/http://webcast.berkeley.edu/event_details.php?webcastid=23088

Peter Flass

unread,
May 26, 2017, 10:12:09 AM5/26/17
to
You don't need bitmaps or block lists for a file, just a starting track and
a number of tracks. You also had the ability to control file placement,
from specifying absolute tracks to placing the file on a disk with (or away
from) other files. Today, with large disks, many installations don't have
many of them, so this isn't as important.
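
The contrast shows in how little metadata a contiguous file needs; a
hedged C sketch (invented names, not actual VTOC or FBA structures):

  /* Contiguous OS/360-style allocation: an extent is two numbers. */
  struct extent {
      unsigned start_track;
      unsigned track_count;
  };

  /* Block-list (FBA-style) allocation: a map from each logical block
     to a physical block, which can land anywhere on the volume. */
  struct blockmap {
      unsigned count;
      unsigned phys[4096];  /* physical block per logical block */
  };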

> if your initial
> allocation was too small. Also, the decision to make disks look just like
> tapes, where you could write any size record you want, may have looked like
> a GREAT idea in 1963. It WAS flexible, but it made it impossible to have a
> disk system with "random" allocation of sectors, like almost all systems use
> today (fixed block architecture).

Of course you could. CMS formatted its disks with a fixed block size, I
believe originally 300 bytes. It would also have been possible to allocate
randomly in units of tracks.

> So, they got locked into a number of
> features that at least some people might think came back to bite them. I
> suspect that most data disks from the CKD days had LOTS of empty space at
> the end of all data sets, as people always allocated more than they thought
> they'd need, "just in case".
>
> Jon
>



--
Pete

Peter Flass

unread,
May 26, 2017, 10:12:10 AM5/26/17
to
Speaking of databases, I know IMS is still in use; is anyone still using
CODASYL (network) databases such as GE's old IDMS? These would have a
similar administration problem as IMS, but I always thought they were neat,
and I believe they performed well also.
Pete

Peter Flass

unread,
May 26, 2017, 10:12:11 AM5/26/17
to
Charlie Gibbs <cgi...@kltpzyxm.invalid> wrote:
> On 2017-05-25, Jon Elson <jme...@wustl.edu> wrote:
>
>> So, getting the most out of such a disk system required some compromises.
>> If you make all files contiguous, you pick up a lot of performance, but it
>> makes allocation of additional records to a file a problem, if your initial
>> allocation was too small. Also, the decision to make disks look just like
>> tapes, where you could write any size record you want, may have looked like
>> a GREAT idea in 1963. It WAS flexible, but it made it impossible to have a
>> disk system with "random" allocation of sectors, like almost all systems use
>> today (fixed block architecture). So, they got locked into a number of
>> features that at least some people might think came back to bite them. I
>> suspect that most data disks from the CKD days had LOTS of empty space at
>> the end of all data sets, as people always allocated more than they thought
>> they'd need, "just in case".
>
> On the Univac 9300, running out of disk space was usually a fatal error.
> Even the standard sort utility would issue an error display and die if
> the work files you gave it were too small. This would usually bite you
> when a job that had been running successfully for months was given files
> larger than usual. It was especially frustrating if the sort was the
> first step of a job you left running overnight - you'd arrive in the
> morning to find a mess that took a couple of hours to clean up, and
> all the while users are calling up asking where their reports are.
>

Or something that would beep your pager or get you a call at 3 AM. i don't
miss those days.

--
Pete

Anne & Lynn Wheeler

unread,
May 26, 2017, 11:32:52 AM5/26/17
to
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> IMS did complex multi-track operations along with self-modifying channel
> programs ... all in one channel program ... do multi-track search for
> item with location, read the location into argument for subsequent
> search in the same channel program ... which can be repeated until
> finally read/write the actual record.

re:
http://www.garlic.com/~lynn/2017f.html#37 MVS vs HASP vs JES (was 2821)

finger slip ... that is ISAM (indexed sequential access method, not IMS)
doing the self-modifying channel programs ... where it searches the index,
then reads the address of the record as the argument for a subsequent
seek/search in the same channel program.

Anne & Lynn Wheeler

unread,
May 26, 2017, 12:55:14 PM5/26/17
to
Peter Flass <peter...@yahoo.com> writes:
> Of course you could. CMS formatted its disks with a fixed block size, I
> believe originally 300 bytes. It would also have been possible to allocate
> randomly in units of tracks.

original CMS (GH20-0859-0_CP67_Version_3_Users_Guide_Oct70)

CP/67 CMS formatted disks into 829-byte records, originally an optimal fit on the 2311 track size:

Files stored on disk are formatted into records 829 bytes long. This
formatting is handled internally by CMS, and is not controlled by the
user. The maximum CMS file size (assuming that the user's assigned disk
area can accommodate it) is 24.358 million bytes, or 65,533 records. If
a file consists of a source language program, a size limitation may be
imposed by the language in which that program is written, and this size
may be smaller than the 24.358 million bytes allowed by CMS. The maximum
disk file for user disks is 203 cylinders each. Although there is no
inherent limitation to the number of files a user may create, he is
limited practically by the sizes of his disk areas. When a user has
filled either of these areas, a message to this effect is typed at his
terminal. Refer to "Recovery Procedures" for steps to be taken in this
case.

.... snip ...

This was "CDF" file system and later changed to 800. In later half of
70s, "EDF" file size was introduced, which had format selection of 1k,
2k, or 4k (fixed block) record size (it also allowed increasing index
depths for much larger file size). On 512byte FBA, a 1k, 2k, 4k logic
record size was 2, 4, or 8 contiguous physical blocks.

CDF & EDF had a master file directory (MFD) ... updating the filesystem
MFD would write changed information to new block locations, and then a
single write would rewrite the master record pointing to the current MFD.

CKD DASD had a failure mode where a power failure could write zeros to a
record w/o any error indication (power dropped to memory, but enough
power remained for the channel, controller, and disk, so the channel
would generate zeros to finish the write). This affected the CDF file
system during the update of the pointer to the MFD. EDF fixed this "bug"
by having a pair of master records and alternating writes between them.
After a failure, access would read both records and select the latest
valid one (with a version number at the end of the record, an undetected
zeros-write would never be the latest record). Note that this also
affected the VTOC and other filesystems, which were never fixed. Later
fixed-block (and CKD implementations on fixed-block) supported never
starting a record disk write unless all data was available to complete it.
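
A hedged sketch of that alternating master-record scheme (invented
structure and names; EDF's actual on-disk layout differed):

  #include <stdint.h>
  #include <string.h>

  /* Two master-record slots; each update writes the slot NOT holding
     the current version, so a torn (zeroed) write can only lose the
     new copy, never the last good one. */
  struct master {
      uint8_t  payload[508];
      uint32_t version;             /* version number at record end */
  };
  static struct master slots[2];

  void write_master(const uint8_t *payload, uint32_t next_version)
  {
      int target = (slots[0].version > slots[1].version) ? 1 : 0;
      memcpy(slots[target].payload, payload,
             sizeof slots[target].payload);
      slots[target].version = next_version;   /* written last */
  }

  /* After a failure, read both and take the latest valid one; a record
     zeroed by a power failure has version 0 and always loses. */
  const struct master *read_master(void)
  {
      return (slots[0].version >= slots[1].version)
                 ? &slots[0] : &slots[1];
  }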

Note that in the early 70s, disk division started migration to fixed
block size with sector feature. 3330 had 20 surfaces, but the 20th
surface was reserved for current rotational position ... and only 19
data surfaces. The original CKD channel program could be
seek/search/read-write, all as a single uninterruptable channel
program. This quickly gave way to "stand-alone" seeks ... with the channel
left available (for other operations) during arm motion. However, the
channel was still busy during the rotation of the search operation
preceding the read/write. For fixed formatting, it was possible to
predetermine the rotational sector position for the start of each
record. The channel program then became "set sector"/search/read-write
... where the channel would disconnect until the specified sector came
under the head and then reconnect for the search operation (eliminating
enormous unnecessary channel/controller busy time).

trivia: TSS/360 had page mapped filesystem (single level store) ...
which had significant throughput/performance issues ... and something
similar was adapted for future system ... which contributed to
the enormous future system problems that eventually resulted
in its failure
http://www.garlic.com/~lynn/submain.html#futuresys

For the page-mapped filesystem that I did for CP67/CMS ... and then moved
to VM370/CMS, I would claim that I learned from TSS/360 what not to do
... and avoided its significant performance limitations. While a lot of
my stuff was picked up and released in VM370, I blame the bad rep that
page-mapped filesystems got from the Future System failure ... for my
CMS page-mapped filesystem not being released. Old email at
science center
http://www.garlic.com/~lynn/subtopic.html#545tech

about moving from CP67 to VM370
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

most of the CSC/VM features (that I would support for operation and
production at internal datacenters) got picked up and shipped in
standard vm370 product ... except for the page mapped support.

some old PAM, CDF, and EDF performance comparison
http://www.garlic.com/~lynn/2006.html#25 DCSS as SWAP disk for z/Linux

Not apparent in the above numbers: there were a lot of embedded features
that resulted in scaling up significantly better in a heavily loaded
concurrent environment.

page mapped filesystem posts
http://www.garlic.com/~lynn/submain.html#mmap

a constant ongoing problem was that page-mapped would support
address-location-independent sharing of file images concurrently in
different address spaces ... like executables. The problem was that CMS
used lots of OS/360 assemblers and compilers which had "relocation
adcons" that weren't location-independent at execution. OS/360, when
loading "relocation adcons" ... would swizzle relocatable adcons to the
loaded address before starting execution, aka after loading the file, the
storage is altered, which results in problems for both straight page
mapping as well as concurrent (read-only) sharing in different address
spaces. TSS/360 did address this issue with generated executables that
allowed direct file memory mapping and location-independent sharing.

past posts about memory mapping and all the problems I had with the
os/360 relocatable adcons paradigm
http://www.garlic.com/~lynn/submain.html#adcon

Andreas Kohlbach

unread,
May 26, 2017, 2:22:39 PM5/26/17
to
On Thu, 25 May 2017 16:35:03 -0500, Charles Richmond wrote:
>
> Modern disk drives have their own processor with code to optimize many
> things.

Yay, Commodore 1541 drive! ;-)
--
Andreas
You know you are a redneck if
you go to a tupperware party for a haircut.

Jon Elson

unread,
May 26, 2017, 2:57:13 PM5/26/17
to
Charlie Gibbs wrote:


> Yes, I had a lot of fun with CCWs in a single-tasking environment.
> I had a CCW chain that would find the VTOC, wherever it was on the
> disk, search it for a given file name, and read in the file's format
> 1 label. Another application was a disk copy utility I got my hands
> on and speeded up. It used multi-track operations to analyze the
> format of a disk and build custom CCW chains to read or write an
> entire track in a single revolution. It could clone any 2316 pack
> in 8 minutes (including sorting out alternate track assignments).
>
A guy at work had a tape copy program that ran entirely in the channel.
You could flip sense switches on the mag tape control unit to tell it what
to do. It was used to recover as much as could be from damaged or
deteriorated tapes. When it started rocking steadily, you could flip a
switch that would copy the bad block and keep going, or skip the bad block
and continue. Since it totally tied up the channel for the duration, it was
only used for recovery when things were quite bad.

Jon

Jon Elson

unread,
May 26, 2017, 3:01:54 PM5/26/17
to
hanc...@bbs.cpcn.com wrote:

> On Thursday, May 25, 2017 at 3:05:40 PM UTC-4, Jon Elson wrote:
>
>> Well, of course, there are always tradeoffs. Remember that the 360
>> started out with the /30, a 32-bit architecture emulated on an 8-bit
>> machine with 8-
>> bit memory. It had a memory bandwidth of about 300K bytes/second.
>> Memory was capped at 64 KB, but a lot of machines were delivered with 8K
>> or 16K. (Note, the 360/30 did NOT run OS/360, it ran a few lower-level
>> systems such as TOS and DOS.)
>
> I just want to note that the veterans told me that the S/360-30, despite
> its limitations, still outperformed the 1401, the machine it replaced.
> The 1311 disk was upgraded to be the 2311, and supposedly was faster
> and held more data, so the 2311 offered more.
>
He he! In FACT, the 360/30 made a really GREAT 1401 replacement, and a
number of them were used in that mode exclusively. Although the 360/30 was
a dog, by 360 standards, you have to realize the 1401, especially, was a
whole generation earlier machine, mostly limited by the speed of its core
memory. Since the 14xx were character machines, the byte-wide memory and
data paths on the 360/30 were no handicap, comparatively.

Jon

Richard Thiebaud

unread,
May 26, 2017, 3:09:52 PM5/26/17
to
If my 45-year-old memory is correct, a 360/30 in 1401 emulation mode was
2 to 3 times faster than a 1401.

Jon Elson

unread,
May 26, 2017, 3:26:03 PM5/26/17
to
John Levine wrote:

> In article <MrCdnUsw2stjt7rE...@giganews.com>,
> Jon Elson <jme...@wustl.edu> wrote:
>>Well, of course, there are always tradeoffs. Remember that the 360
>>started out with the /30, a 32-bit architecture emulated on an 8-bit
>>machine with 8- bit memory. ...
>
> That's not correct. IBM designed the whole 360 series at once, so
> they were working on what ended up as the /75 at the same time as the
> /30. The hack of doing ISAM key searches in the disk controller
> worked great on the /30 but I get the impression they didn't think
> through the implications of doing it on larger machines. Or they
> wrongly thought that people would put a lot fewer disks per controller
> than they did.
>
Yes, they did at least plan for the range of models, but that was fluid at
first. So, there were actually a couple 360/60 and 360/62 machines
delivered, before they settled on the model /65. (All fairly similar, but
some changes in the memory modules themselves as well as the interface to
memory. The /65 allowed interleaved storage, which gave a big boost to
performance.) As far as I know, the /30 was announced and delivered first,
with the /40 soon after. The /50 and /6x came, I think, more than a year
later. No surprise that the much larger machines took a while longer to get
fully into production.

And, of course, the /85 was really a 370 in sheep's clothing.

Yes, in the early days, especially when the 360 was DESIGNED, so 1962/1963,
disks were fairly rare items, and nobody was really thinking of system
residence volumes, data packs and a full 2314 bank of drives. And, while
multiprogramming was a GOAL, few people had ever seen a multiprogramming
system run, so they were not quite comprehending the voracious need for
disk access these systems would have.

>>(Note, the 360/30 did NOT run OS/360, it ran a few lower-level systems
>>such as TOS and DOS.)
>
> It most often ran DOS, but I can assure you that OS PCP ran on a /30.
> I used it.
>
I think you needed more memory for PCP than a lot of /30's had.

> I do agree that a lot of the horribleness in JCL was intended to
> squeeze performance out of low-performance disks and tiny memories.
And, it DID! While I was much more of a scientific applications programmer,
I was aware that the ability of the 360 line to do I/O was aimed at
traditional data processing applications, and they did quite well, as long
as you avoided certain pitfalls in configuration and coding.

Jon

Quadibloc

unread,
May 26, 2017, 3:58:39 PM5/26/17
to
On Thursday, May 25, 2017 at 9:54:21 PM UTC-6, John Levine wrote:

> That's not correct. IBM designed the whole 360 series at once, so
> they were working on what ended up as the /75 at the same time as the
> /30.

IBM designed the following machines in the time prior to the April 7,
1964 announcement of System/360: the models 30, 40, 50, 65, and 75. As
you correctly note, the 65 and 75 were originally the 60 and 70. There
was also to be a 62 with faster memory.

Other members of the 360 series were designed later: the 67, the 91, the
20, the 95, the 195, the 44 - so it isn't quite correct to say they
"designed the whole 360 series at once" - they designed a series of
machines, but others were designed later and added.

John Savard

Peter Flass

unread,
May 26, 2017, 6:36:54 PM5/26/17
to
They designed the whole architecture at once, modulo a few things like VF,
extended FP, etc. The individual implementations came later.

--
Pete

Anne & Lynn Wheeler

unread,
May 26, 2017, 7:46:50 PM5/26/17
to
Peter Flass <peter...@yahoo.com> writes:
> They designed the whole architecture at once, modulo a few things like VF,
> extended FP, etc. The individual implementations came later.

360 wiki
https://en.wikipedia.org/wiki/IBM_System/360
original 360 system summary
http://bitsavers.org/pdf/ibm/360/systemSummary/A22-6810-0_360sysSummary64.pdf

pg.8, "Instructions" standard set, basic processing and logic;
commercial feature set, floating-point scientific set, two instructions
for storage proection:

The universal instruction set, a standard facility on Models 50-70 and
optional on Models 30 and 40, includes the standard set, the decimal,
the floating-point, and the two instructions for storage protection.

... snip ...

360/44 seems to be stripped down (less expensive) 360/50 with subset of
standard instruction set ... and scientific ... "its performance on
problems for which it is optimized is 30 to 60 percent faster than that
of model 50"
http://bitsavers.org/pdf/ibm/360/funcChar/A22-6875-5_360-44_funcChar.pdf

A22-6821-0 360 Principles of Operation
http://bitsavers.org/pdf/ibm/360/princOps/A22-6821-0_360PrincOps.pdf

I was told folklore about the gov. IBM case ... where the seven dwarfs
testified that by the late 50s all those in the computer business realized
that the single most important criterion was a compatible machine
architecture across the line ... making it much simpler for business
customers to greatly expand their computer use over time ... and that IBM
executives were the only ones able to force individual product line
managers to toe the compatible product line. With IBM the only one in the
business able to meet that objective ... IBM would still prevail even if
they got lots of other things wrong.

bunch and/or IBM&7dwarfs
https://en.wikipedia.org/wiki/BUNCH
http://www.dvorak.org/blog/ibm-and-the-seven-dwarfs-dwarf-one-burroughs/

past posts:
http://www.garlic.com/~lynn/2004d.html#22 System/360 40th Anniversary
http://www.garlic.com/~lynn/2005k.html#4 IBM/Watson autobiography--thoughts on?
http://www.garlic.com/~lynn/2005k.html#27 IBM/Watson autobiography--thoughts on?
http://www.garlic.com/~lynn/2006q.html#60 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2007f.html#77 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007g.html#42 1960s: IBM mgmt mistrust of SLT for ICs?
http://www.garlic.com/~lynn/2010.html#45 360 programs on a z/10
http://www.garlic.com/~lynn/2010b.html#14 360 programs on a z/10
http://www.garlic.com/~lynn/2011b.html#57 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011h.html#36 Happy 100th Birthday, IBM!
http://www.garlic.com/~lynn/2011l.html#12 Selectric Typewriter--50th Anniversary
http://www.garlic.com/~lynn/2013i.html#73 Future of COBOL based on RDz policies was Re: RDz or RDzEnterprise developers
http://www.garlic.com/~lynn/2014m.html#65 Decimation of the valuation of IBM
http://www.garlic.com/~lynn/2016e.html#60 Honeywell 200

previous posts in thread
http://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#25 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#27 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#28 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#30 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#31 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#32 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#33 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#36 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#37 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#38 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2017f.html#39 MVS vs HASP vs JES (was 2821)

Jon Elson

unread,
May 26, 2017, 11:40:34 PM5/26/17
to
jmfbahciv wrote:


> Is there an equivalent of JCL on the -10s? Based on the posts, it sounds
> like a combination of MIC and a BATCON which could keep counts.
Well, I don't really know TOPS-10 or TOPS-20, but they used something
similar to DCL on the VAX under VMS. In VMS DCL, you could do all sorts of
things, it was a true programming language. You had variables, you could do
string operations and arithmetic, and convert between the two. You could do
loops and if/then/else, do things like search a directory for files that
matched some string, put that into a variable and then loop over those
filenames to create commands to execute a program on those files. Lots of
people wrote entire programs using only DCL and standard utilities.

Note that there were at least 4 major OS's used on various PDP-10 systems.

Jon

Jon Elson

unread,
May 26, 2017, 11:51:36 PM5/26/17
to
Anne & Lynn Wheeler wrote:



> 360/44 seems to be stripped down (less expensive) 360/50 with subset of
> standard instruction set ... and scientific ... "its performance on
> problems for which it is optimized is 30 to 60 percent faster than that
> of model 50"
No, actually the 44 was quite different from the /50. It was not
microcoded! The reduced instruction set was all hardwired. They left out
all decimal and character instructions. It was designed for process
control and scientific computing. It had special direct I/O interfaces,
and you could put two cartridge disks inside the CPU cabinet.

Yes, the /44 is compared to the /50, but it is in no way a derivative of the
/50 hardware architecture. There was an exception handler that would allow
generic 360 programs to run on the /44.

Jon

Anne & Lynn Wheeler

unread,
May 26, 2017, 11:52:10 PM5/26/17
to
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> when there was still facade that TARP funds would be used to buy "Too
> Big To Fail" offbook toxic assets ... the waltham corporation was
> briefly mentioned in Jan2009 that they would be involved in helping
> value the offbook toxic assets (they had previously bought the pricing
> services division from one of the major credit rating agencies, giving
> rise to some jokes that credit rating agencies didn't really need to
> know value of things they were rating).

re:
http://www.garlic.com/~lynn/2017f.html#27 MVS vs HASP vs JES (was 2821)

Are Credit Rating Agencies America's Secret Fifth Column?
http://www.counterpunch.org/2017/05/26/are-credit-rating-agencies-americas-secret-fifth-column/

Even a full-blown war with North Korea or Russia could not inflict the
damage done to this Country by Moody's, Fitch and S&P. The
rating agencies have declared war on the United States and the damage
they are inflicting will eventually destroy this Country from within.

... snip ...

Oct2008 congressional hearings looked into the (significant) role that the
credit rating agencies played in the economic mess ... testimony was that
they were selling triple-A ratings when they knew they weren't worth
triple-A.

some (triple-A rated) toxic CDO posts
http://www.garlic.com/~lynn/submisc.html#toxic.cdo

Triple-A ratings were a significant factor in over $27T being done
2001-2008, including being able to sell to entities restricted to
dealing only in safe investments (like large pension funds).

Rhetoric in Congress was that Sarbanes-Oxley would prevent future
ENRONs and guarantee that executives and auditors did jail time, but it
required the SEC to do something. Possibly because even the GAO didn't
believe the SEC was doing anything, the GAO started issuing reports
about fraudulent public-company financial reports, even showing an
increase after SOX went into effect (and nobody doing jail time). Less
well known is that SOX also required the SEC to do something about the
rating agencies ... but they appeared to do as little about the rating
agencies as they did about fraudulent financial reports.
http://www.garlic.com/~lynn/submisc.html#financial.reporting.fraud
http://www.garlic.com/~lynn/submisc.html#enron

A player in all this was #1 on Time's list of those responsible for the
economic mess
http://content.time.com/time/specials/packages/article/0,28804,1877351_1877350_1877339,00.html

and recent posts mentioning rating agencies
http://www.garlic.com/~lynn/2017.html#5 The Champions of the 401(k) Lament the Revolution They Started
http://www.garlic.com/~lynn/2017.html#8 "Too big to fail" was Malicious Cyber Activity
http://www.garlic.com/~lynn/2017.html#31 Milton Friedman's Cherished Theory Is Laid to Rest
http://www.garlic.com/~lynn/2017.html#33 Moody's Agrees to Settle Financial Crisis-Era Claims for $864 Million
http://www.garlic.com/~lynn/2017.html#36 Moody's Agrees to Settle Financial Crisis-Era Claims for $864 Million
http://www.garlic.com/~lynn/2017.html#40 The economics of corporate crime
http://www.garlic.com/~lynn/2017.html#50 Finance Is Not the Economy
http://www.garlic.com/~lynn/2017.html#65 Mnuchin Lied About His Bank's History of Robo-Signing Foreclosure Documents
http://www.garlic.com/~lynn/2017.html#76 Avaya: How we arrived at Chapter 11
http://www.garlic.com/~lynn/2017.html#92 Trump's Rollback of the Neoliberal Market State
http://www.garlic.com/~lynn/2017.html#96 Trump, Wall Street and the "banking caucus" ready to rip apart Dodd-Frank
http://www.garlic.com/~lynn/2017.html#97 Trump to sign cyber security order
http://www.garlic.com/~lynn/2017b.html#0 Trump to sign cyber security order
http://www.garlic.com/~lynn/2017b.html#7 OT: Trump Moves to Roll Back Obama-Era Financial Regulations
http://www.garlic.com/~lynn/2017b.html#11 Trump to sign cyber security order
http://www.garlic.com/~lynn/2017b.html#12 Trump to sign cyber security order
http://www.garlic.com/~lynn/2017b.html#43 when to get out???
http://www.garlic.com/~lynn/2017b.html#48 Janet Yellen debunks Trump's case for killing Dodd-Frank
http://www.garlic.com/~lynn/2017b.html#50 when to get out???
http://www.garlic.com/~lynn/2017d.html#8 Congress just obliterated Obama-era rules preventing ISPs from selling your browsing history
http://www.garlic.com/~lynn/2017d.html#67 Economists are arguing over how their profession messed up during the Great Recession. This is what happened
http://www.garlic.com/~lynn/2017e.html#38 [CM] What was your first home computer?
http://www.garlic.com/~lynn/2017e.html#99 [CM] What was your first home computer?
http://www.garlic.com/~lynn/2017f.html#7 [CM] What was your first home computer?
http://www.garlic.com/~lynn/2017f.html#24 [CM] What was your first home computer?

jmfbahciv

unread,
May 27, 2017, 11:05:29 AM5/27/17
to
There wasn't a utility which would do arithmetic; I suppose one could have
been developed.

>
> Note that there were at least 4 major OS's used on various PDP-10 systems.

TOPS-10, TOPS-20, MULTICS... which is the fourth one?

/BAH

Morten Reistad

unread,
May 27, 2017, 12:19:48 PM5/27/17
to
In article <PM0005508...@aca41230.ipt.aol.com>,
Barb, you should know.

Tops10, ITS, Tenex, Tops20. Multics never ran on pdp10s.

-- mrr

Anne & Lynn Wheeler

unread,
May 27, 2017, 12:42:23 PM5/27/17
to
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> Are Credit Rating Agencies America's Secret Fifth Column?
> http://www.counterpunch.org/2017/05/26/are-credit-rating-agencies-americas-secret-fifth-column/
>
> Even a full-blown war with North Korea or Russia could not inflict the
> damage done to this Country by Moody's, Fitch and S&P. The
> rating agencies have declared war on the United States and the damage
> they are inflicting will eventually destroy this Country from within.
>
> ... snip ...

re:
http://www.garlic.com/~lynn/2017e.html#96 [CM] What was your first home computer?
http://www.garlic.com/~lynn/2017f.html#10 [CM] What was your first home computer?
http://www.garlic.com/~lynn/2017f.html#19 [CM] What was your first home computer?
http://www.garlic.com/~lynn/2017f.html#41 [CM] What was your first home computer?
http://www.garlic.com/~lynn/2017f.html#42 MVS vs HASP vs JES (was 2821)

The Limping Middle Class
http://www.nytimes.com/2011/09/04/opinion/sunday/jobs-will-follow-a-strengthening-of-the-middle-class.html
slouching towards 3rd world country status and return of the robber barons.
http://www.nytimes.com/imagepages/2011/09/04/opinion/04reich-graphic.html?ref=sunday

inequality posts
http://www.garlic.com/~lynn/submisc.html#inequality

Third World America
http://thesovereigninvestor.com/us-economy/third-world-america/

Three interrelated factors cause "Third World-itis": lopsided
distribution of income; a government hijacked by the economic elite; and
a political focus on stasis rather than change. Together those features
form a self-reinforcing engine that moves in one direction only: toward
conflict, tyranny and eventual collapse.

... snip ...

The surging ranks of America's ultrapoor
http://www.cbsnews.com/news/the-surging-ranks-of-americas-ultrapoor/

By one dismal measure, America is joining the likes of Third World
countries.

... snip ...

USA: The World's Newest Third World Nation
http://www.truth-out.org/news/item/23535-usa-the-worlds-newest-third-world-nation
Six Ways America Is Like a Third-World Country; Our society lags behind
the rest of the developed world in education, health care, violence and
more
http://www.rollingstone.com/politics/news/six-ways-america-is-like-a-third-world-country-20140305
U.S. students' academic achievement still lags that of their peers in
many other countries
http://www.pewresearch.org/fact-tank/2017/02/15/u-s-students-internationally-math-science/
The World's Most Reputable Countries 2016: U.S. Ranks 28th - Forbes
https://www.forbes.com/sites/karstenstrauss/2016/06/24/the-worlds-most-reputable-countries-2016-u-s-a-ranks-28th/

"Why Nations Fail" ... has frequent "inequality" examples of the kind
also cited in the above articles
http://www.amazon.com/Why-Nations-Fail-Prosperity-ebook/dp/B0058Z4NR8

past posts mentioning "Why Nations Fail"
http://www.garlic.com/~lynn/2012e.html#31 PC industry is heading for more change
http://www.garlic.com/~lynn/2012e.html#34 The never-ending SCO lawsuit
http://www.garlic.com/~lynn/2012e.html#35 The Dallas Fed Is Calling For The Immediate Breakup Of Large Banks
http://www.garlic.com/~lynn/2012e.html#36 The never-ending SCO lawsuit
http://www.garlic.com/~lynn/2012e.html#57 speculation
http://www.garlic.com/~lynn/2012e.html#60 Candid Communications & Tweaking Curiosity, Tools to Consider
http://www.garlic.com/~lynn/2012e.html#70 Disruptive Thinkers: Defining the Problem
http://www.garlic.com/~lynn/2012f.html#2 Did they apply Boyd's concepts?
http://www.garlic.com/~lynn/2012f.html#32 Back to the future: convict labor returns to America
http://www.garlic.com/~lynn/2012f.html#70 The Army and Special Forces: The Fantasy Continues
http://www.garlic.com/~lynn/2012f.html#80 The Failure of Central Planning
http://www.garlic.com/~lynn/2012f.html#84 How do you feel about the fact that India has more employees than US?
http://www.garlic.com/~lynn/2012h.html#10 Monopoly/ Cartons of Punch Cards
http://www.garlic.com/~lynn/2012h.html#15 Imbecilic Constitution
http://www.garlic.com/~lynn/2012i.html#85 Naked emperors, holy cows and Libor
http://www.garlic.com/~lynn/2012j.html#81 GBP13tn: hoard hidden from taxman by global elite
http://www.garlic.com/~lynn/2012k.html#7 Is there a connection between your strategic and tactical assertions?
http://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy
http://www.garlic.com/~lynn/2012k.html#45 If all of the American earned dollars hidden in off shore accounts were uncovered and taxed do you think we would be able to close the deficit gap?
http://www.garlic.com/~lynn/2012l.html#17 Cultural attitudes towards failure
http://www.garlic.com/~lynn/2012m.html#29 Cultural attitudes towards failure
http://www.garlic.com/~lynn/2012m.html#34 General Mills computer
http://www.garlic.com/~lynn/2012m.html#39 General Mills computer
http://www.garlic.com/~lynn/2012n.html#83 Protected: R.I.P. Containment
http://www.garlic.com/~lynn/2012o.html#71 Is orientation always because what has been observed? What are your 'direct' experiences?
http://www.garlic.com/~lynn/2012p.html#44 Search Google, 1960:s-style
http://www.garlic.com/~lynn/2013k.html#39 copyright protection/Doug Englebart
http://www.garlic.com/~lynn/2013k.html#40 copyright protection/Doug Englebart
http://www.garlic.com/~lynn/2013k.html#69 What Makes a Tax System Bizarre?
http://www.garlic.com/~lynn/2013k.html#72 Versailles on the Potomac at it again
http://www.garlic.com/~lynn/2014e.html#61 Before the Internet: The golden age of online services
http://www.garlic.com/~lynn/2014m.html#84 LEO
http://www.garlic.com/~lynn/2015b.html#62 Future of support for telephone rotary dial ?
http://www.garlic.com/~lynn/2016b.html#49 Corporate malfeasance
http://www.garlic.com/~lynn/2016c.html#38 Qbasic
http://www.garlic.com/~lynn/2016e.html#123 E.R. Burroughs
http://www.garlic.com/~lynn/2017.html#32 Star Trek (was Re: TV show Mannix observations)
http://www.garlic.com/~lynn/2017f.html#10 [CM] What was your first home computer?

hanc...@bbs.cpcn.com

unread,
May 27, 2017, 2:48:42 PM5/27/17
to
On Thursday, May 25, 2017 at 11:54:21 PM UTC-4, John Levine wrote:
> Jon Elson <jmelson> wrote:
> >Well, of course, there are always tradeoffs. Remember that the 360 started
> >out with the /30, a 32-bit architecture emulated on an 8-bit machine with 8-
> >bit memory. ...
>
> That's not correct. IBM designed the whole 360 series at once, so
> they were working on what ended up as the /75 at the same time as the
> /30. The hack of doing ISAM key searches in the disk controller
> worked great on the /30 but I get the impression they didn't think
> through the implications of doing it on larger machines. Or they
> wrongly thought that people would put a lot fewer disks per controller
> than they did.

As I understand it, IBM greatly underestimated the number of disks
customers would want.



> >(Note, the 360/30 did NOT run OS/360, it ran a few lower-level systems such
> >as TOS and DOS.)
>
> It most often ran DOS, but I can assure you that OS PCP ran on a /30.
> I used it.

Everyone I heard from said PCP on a low-end machine was a disaster: it
was just too much squeezed into an underpowered machine; that's why
IBM rushed to develop DOS for its low-end machines.


> I do agree that a lot of the horribleness in JCL was intended to
> squeeze performance out of low-performance disks and tiny memories.

I believe another factor was the need to control a large number of
peripherals in a large site--multiple tapes, disks, printers, etc.
Users could request a generic device or a specific device. Old files
could be retrieved via a catalog. A lot of different options.

Along with this were options to facilitate operations--to have
tapes ready in advance, keep a tape mounted on a drive for further
use, printer setup instructions, etc. I believe the console
operator could look ahead at the job mix and release jobs per
system resource availability.


For instance, we used to print certain checks "hot", not going
through the spooler. Easy JCL notation.


hanc...@bbs.cpcn.com

unread,
May 27, 2017, 3:02:25 PM5/27/17
to
On Friday, May 26, 2017 at 7:46:50 PM UTC-4, Anne & Lynn Wheeler wrote:

> I was told folklore about gov IBM case ... where seven dwarfs testified
> that by the late 50s all those in the computer business realized that
> the single most important criteria was compatible machine architecture
> across the line ... greatly simplifying business customers to greatly
> expand their computer use over time ... and that IBM executives were the
> only ones able to force individual product line managers to toe the
> compatible product line. With IBM the only one in the business able to
> meet that objective ... IBM would still prevail even if it got lots of
> other things wrong.

As a reminder, it was consolidating _four_ types of machines--small
business, small sci/tech, large business, and large sci/tech.

IBM met a lot of internal resistance over this. The history describes
how a manager, Haanstra, insisted on building a new 1401 with new
circuitry rather than complying with the universal 360 party line. Others
were also very much opposed to S/360, preferring to go with another
design then in the works (IIRC, the 8000 series).

To provide universal compatibility, some compromises had to be made
in those days. Indeed, the S/360 design was not truly universal:
CDC had other designs for its super-computers that were better,
and IBM had to develop a wholly new design for S/3 to make it
affordable on the very low end*.

In the end we know S/360 was a huge success. But getting there was
a very tough road. (Something the business world has not learned.)

In building large things like bridges and tunnels, there is often
a count of fatalities of workers during construction (and of course
striving to make things safer). No count like that was kept for
S/360, but according to Watson's memoir, there were indirect fatalities
from overwork or family neglect.

IBM ended up paying for this later, with massive defections
to other companies by employees who felt slighted.

* This was described in a recent post here; a very good reference
on the S/3 architecture is on bitsavers and worth a read.



hanc...@bbs.cpcn.com

unread,
May 27, 2017, 3:09:06 PM5/27/17
to
On Friday, May 26, 2017 at 3:09:52 PM UTC-4, Richard Thiebaud wrote:

> >> I just want to note that the veterans told me that the S/360-30, despite
> >> its limitations, still outperformed the 1401, the machine it replaced.
> >> The 1311 disk was upgraded to be the 2311, and supposedly was faster
> >> and held more data, so the 2311 offered more.
> >>
> > He he! In FACT, the 360/30 made a really GREAT 1401 replacement, and a
> > number of them were used in that mode exclusively. Although the 360/30 was
> > a dog, by 360 standards, you have to realize the 1401, especially, was a
> > whole generation earlier machine, mostly limited by the speed of its core
> > memory. Since the 14xx were character machines, the byte-wide memory and
> > data paths on the 360/30 were no handicap, comparatively.
> >
> > Jon
> >
>
> If my 45-year-old memory is correct, a 360/30 in 1401 emulation mode was
> 2 to 3 times faster than a 1401.

Not sure, but I think the improvement was more like 1.5 to 2 times faster.
Emulation, though in hardware, was still rather inefficient.

I heard the _typical_ smallest /30 was 32k. Programming that in native
mode, with full utilization of S/360 features and the full space
on disk, would give faster performance.

The big question I can't answer is price. Imagine it's 1967. Here
we have a 1401 16k site paying rent. What would the rental for a
360-30 32k cost them? What would it give them?

Many companies were growing in those days, and had increased volume
of data to push their existing applications, so they had more need
just from that. But in addition, many companies wanted to computerize
more applications than perhaps the core accounting and payroll.

Gene Wirchenko

unread,
May 27, 2017, 7:32:43 PM5/27/17
to
On Thu, 25 May 2017 13:50:53 -0500, Jon Elson <jme...@wustl.edu>
wrote:

>Anne & Lynn Wheeler wrote:

>> big OS360 problem with CKD was early 360 using multi-track search (and
>> channel capacity) trade-off for limited real storage.
>
>Right, on a 360/30 with only one task, this worked great. On a 360/50 with
>50 jobs running, letting anybody do a long search tied up the channel for
>seconds at a time. Generally, you had all DASD on one selector, and the
>tapes on another selector to keep long (slow) tape operations from
>impacting the disks. So, if a search was performed, all disks were
>unavailable until it was completed. Not great in the multiprogramming
>environment.

I assume that this was if you used a search feature built into
the channel/drive. What if you rewrote stuff to use supposedly less
efficient methods? (Maybe having the program implement its searching
or calling a procedure that implemented the searching.) What would
have been the result?

Sincerely,

Gene Wirchenko

Anne & Lynn Wheeler

unread,
May 27, 2017, 8:21:02 PM5/27/17
to
Gene Wirchenko <ge...@telus.net> writes:
> I assume that this was if you used a search feature built into
> the channel/drive. What if you rewrote stuff to use supposedly less
> efficient methods? (Maybe having the program implement its searching
> or calling a procedure that implemented the searching.) What would
> have been the result?

re:
http://www.garlic.com/~lynn/2017f.html#28 MVS vs HASP vs JES (was 2821)

search operation tied up the device, controller, and channel ...
constantly accessing processor storage for the search argument for each
compare.
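
To put rough numbers on that, a back-of-envelope sketch in Python
(3330 figures: 3,600 rpm and 19 tracks per cylinder):

    # worst-case cost of a full-cylinder multi-track search on a 3330;
    # the device, controller, and channel are all busy the whole time
    RPM = 3600
    MS_PER_REV = 60_000 / RPM        # 16.7 ms per revolution
    TRACKS_PER_CYL = 19

    worst_case_ms = TRACKS_PER_CYL * MS_PER_REV
    print("one full-cylinder search: ~%.0f ms of channel time" % worst_case_ms)
    # ~317 ms -- and a large PDS directory could span several cylinders,
    # paying this again on every member load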

RDBMS has an index ... and as system processor memory sizes were greatly
increasing, it was possible to cache the index in memory and find items
significantly faster, with higher throughput, than doing a purely
linear sequential on-disk search every time an item needed to be loaded
(which scales badly as the size of the directory increases), i.e.
http://www.garlic.com/~lynn/2017f.html#37 MVS vs HASP vs JES (was 2821)

A partial fix came with the introduction of "sectors" on the 3330: for a
known track layout and known record location, the channel program could
include a set-sector that would disconnect from the channel until the
sector location came around, presumably just in front of the record that
matched the search argument.
http://www.garlic.com/~lynn/2017f.html#39 MVS vs HASP vs JES (was 2821)

Eventually IBM introduced the index-based PDSE ... to address the
enormous PDS directory sequential-search penalty every time
a member has to be loaded
https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zconcepts/zconcepts_166.htm

The directory can expand automatically as needed, up to the addressing
limit of 522,236 members. It also has an index, which provides a fast
search for member names. Space from deleted or moved members is
automatically reused for new members, so you do not have to compress a
PDSE to remove wasted space. Each member of a PDSE can have up to
15,728,639 records. A PDSE can have a maximum of 123 extents, but it
cannot extend beyond one volume. When a directory of a PDSE is in use,
it is kept in processor storage for fast access.

... snip ...

PDS
https://en.wikipedia.org/wiki/Data_set_(IBM_mainframe)#Partitioned_data_sets
PDSE and PDS Differences
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.idad400/d4289.htm

CKD, FBA, multi-track search, etc
http://www.garlic.com/~lynn/submain.html#dasd

Anne & Lynn Wheeler

unread,
May 27, 2017, 8:28:22 PM5/27/17
to
Jon Elson <el...@pico-systems.com> writes:
> No, actually the 44 was quite different than the /50. It was not
> microcoded!
> The reduced instruction set was all hardwired. They left out all
> decimal and character instructions. It was designed for process
> control and scientific computing. it had special direct I/O
> interfaces, and you could put two cartridge disks inside the CPU
> cabinet.
> Yes, the /44 is compared to the /50, but it is in no way a derivative
> of the /50 hardware architecture. There was an exception handler that
> would allow generic 360 programs to run on the /44.

re:
http://www.garlic.com/~lynn/2017f.html#40 MVS vs HASP vs JES (was 2821)

some past descriptions have the 360/75 as a 360/65 with hardwired
instructions but the same memory subsystem

30 funct characteristics
http://bitsavers.trailing-edge.com/pdf/ibm/360/funcChar/GA24-3231-7_360-30_funcChar.pdf
orig 2usec memory, later 1.5usec memory, 1-byte fetch/store

40 funct characteristics
http://bitsavers.informatik.uni-stuttgart.de/pdf/ibm/360/funcChar/A22-6881-2_360-40_funcChar.pdf
2.5usec memory with 2-byte fetch/store

44 funct characteristics
http://bitsavers.informatik.uni-stuttgart.de/pdf/ibm/360/funcChar/A22-6875-5_360-44_funcChar.pdf
1usec memory with 4-byte fetch/store

50 funct characteristics
http://bitsavers.informatik.uni-stuttgart.de/pdf/ibm/360/funcChar/A22-6898-1_360-50_funcChar_1967.pdf
2usec memory with 4-byte fetch/store

360/44 has 4-byte-wide data paths like the 360/50, but with 1usec memory
(instead of 2usec memory)

following doesn't have 360/44
https://en.wikipedia.org/wiki/IBM_System/360#Table_of_System.2F360_models
360/30, 29kips, 1.3mbyte/sec cpu bandwidth, 0.7mbyte/sec memory bandwidth
360/40, 75kips, 3.2mbyte/sec cpu bandwidth, 0.8mbyte/sec memory bandwidth
360/50, 168kips, 8mbyte/sec cpu bandwidth, 2.0mbyte/sec memory bandwidth

The claim of the 360/44 having 1.3-1.6 times the performance of the
360/50 for scientific work is the combination of memory twice as fast
and hardwired rather than microcoded instructions.
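
A quick sketch of that bandwidth arithmetic in Python (fetch width over
cycle time, using the figures above; it reproduces the memory-bandwidth
numbers in the Wikipedia table):

    # memory bandwidth = fetch width / cycle time
    models = {
        "360/30": (1, 1.5),   # 1-byte fetch, 1.5 usec memory
        "360/40": (2, 2.5),   # 2-byte fetch, 2.5 usec memory
        "360/44": (4, 1.0),   # 4-byte fetch, 1.0 usec memory
        "360/50": (4, 2.0),   # 4-byte fetch, 2.0 usec memory
    }
    for name, (width, cycle) in models.items():
        # bytes per usec is the same as mbytes per second
        print("%s: %.1f mbyte/sec memory bandwidth" % (name, width / cycle))
    # 360/44: 4.0 vs 360/50: 2.0 -- twice the memory bandwidth, consistent
    # with the claimed 1.3-1.6x scientific performance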

Jon Elson

unread,
May 27, 2017, 9:16:01 PM5/27/17
to
Yes, that was the beginning of database software. If you were only running
one job, then let the hardware do it all. If you were running multiple
jobs (the whole POINT of the 360, at least the larger systems), then you
can't have one job lock out the whole disk system for seconds at a time.
So, since you had more memory on the larger systems, you keep some kind of
index or hash table in memory, and then you can get the desired record on
the first or second access, instead of blindly searching the whole
database on disk for the record you need.
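
The difference shows up even in a toy sketch (Python; the member names
and counts are invented for illustration):

    # sequential scan vs. an in-memory index: count the entries touched
    # to find one member among 10,000
    members = [("MEMBER%04d" % i, i) for i in range(10_000)]  # (name, disk addr)

    def sequential_lookup(key):
        """Like a PDS directory scan: touch every entry until the hit."""
        probes = 0
        for name, addr in members:
            probes += 1
            if name == key:
                return addr, probes
        return None, probes

    index = dict(members)            # built once, then kept in memory

    addr, probes = sequential_lookup("MEMBER7500")
    print("sequential: %d entries touched" % probes)          # 7501
    print("indexed: 1 probe -> disk addr %d" % index["MEMBER7500"])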

Jon

Richard Thiebaud

unread,
May 27, 2017, 10:42:17 PM5/27/17
to
On 05/27/2017 03:09 PM, hanc...@bbs.cpcn.com wrote:
> Not sure, but I think the improvement was more like 1.5 to 2 times faster.
> Emulation, though in hardware, was still rather inefficient.

You are probably correct.

Quadibloc

unread,
May 27, 2017, 11:14:44 PM5/27/17
to
On Saturday, May 27, 2017 at 10:19:48 AM UTC-6, Morten Reistad wrote:
> Multics never ran on pdp10s.

Yes. Multics was an operating system for the GE 625 and/or 645, I believe.

John Savard

Peter Flass

unread,
May 28, 2017, 6:47:42 AM5/28/17
to
<hanc...@bbs.cpcn.com> wrote:
> On Friday, May 26, 2017 at 3:09:52 PM UTC-4, Richard Thiebaud wrote:
>
>>>> I just want to note that the veterans told me that the S/360-30, despite
>>>> its limitations, still outperformed the 1401, the machine it replaced.
>>>> The 1311 disk was upgraded to be the 2311, and supposedly was faster
>>>> and held more data, so the 2311 offered more.
>>>>
>>> He he! In FACT, the 360/30 made a really GREAT 1401 replacement, and a
>>> number of them were used in that mode exclusively. Although the 360/30 was
>>> a dog, by 360 standards, you have to realize the 1401, especially, was a
>>> whole generation earlier machine, mostly limited by the speed of its core
>>> memory. Since the 14xx were character machines, the byte-wide memory and
>>> data paths on the 360/30 were no handicap, comparatively.
>>>
>>> Jon
>>>
>>
>> If my 45-year-old memory is correct, a 360/30 in 1401 emulation mode was
>> 2 to 3 times faster than a 1401.
>
> Not sure, but I think the improvement was more like 1.5 to 2 times faster.
> Emulation, though in hardware, was still rather inefficient.
>
> I heard the _typical_ smallest /30 was 32k.

I think so; that's mostly what I saw, with a few 64K. I think they came as
small as 16K, but I can't imagine what would run on it, except maybe a
dedicated application. I believe the smallest DOS Supervisor you could
generate was 4K. I'm not even sure the assembler would run in 12K.

> Programming that in native
> mode, allowing full utilization of S/360 features and the full space
> on disk, would allow faster performance.
>
> The big question I don't know is price. Imagine it's 1967. Here
> we have a 1401 16k site paying rent. What would the rental for a
> 360-30 32k cost them? What would it give them?
>
> Many companies were growing in those days, and had increased volume
> of data to push their existing applications, so they had more need
> just from that. But in addition, many companies wanted to computerize
> more applications than perhaps the core accounting and payroll.
>

When I worked for CTG, back when it was Marx-Baer, that's mostly what we
did. The 1401 systems weren't worth porting, since they were so heavily
card-oriented and dependent on offline steps. We came in and built new
systems from scratch, back in the days when companies had the crazy idea
that their systems should fit the way they did business, rather than
changing everything to fit some purchased system.

--
Pete

Peter Flass

unread,
May 28, 2017, 6:47:42 AM5/28/17
to
You could use random files and hashing. DOS DBOMP (bill of materials
processor) implemented a database by using hard links (direct disk
addresses) between records. Core was too small to allow for anything but a
top-level index, to say nothing of a relational system that keeps
everything in memory.
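
A minimal sketch of that hashed random-file idea (Python; the bucket
count, field layout, and part numbers are invented for illustration):

    # hash a key to its "home" relative record number in a fixed-size
    # direct-access file; related records would be chained together by
    # storing direct record addresses in a link field, DBOMP-style
    NBUCKETS = 97                    # a prime spreads keys more evenly

    def home_address(key):
        """Hash a key to its home record number in the file."""
        return sum(key.encode()) % NBUCKETS

    # simulate the direct-access file as fixed slots of
    # (key, payload, link-to-next-record-or-None)
    file_slots = [None] * NBUCKETS

    def store(key, payload, link=None):
        rrn = home_address(key)      # real code would chain collisions
        file_slots[rrn] = (key, payload, link)
        return rrn

    rrn = store("PART-1234", "widget, qty 40")
    print("PART-1234 hashes to relative record %d" % rrn)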

--
Pete

jmfbahciv

unread,
May 28, 2017, 10:49:48 AM5/28/17
to
I wonder where I got this notion. That changes everything. Time to unwind
my thinking and try to remember which of my conclusions were based on this
incorrect assumption.

/BAH

jmfbahciv

unread,
May 28, 2017, 10:49:48 AM5/28/17
to
I thought it did. I equate Tenex and TOPS-20 (same sources).

/BAH

Anne & Lynn Wheeler

unread,
May 28, 2017, 1:09:29 PM5/28/17
to
Quadibloc <jsa...@ecn.ab.ca> writes:
> Yes. Multics was an operating system for the GE 625 and/or 645, I believe.

some of the CTSS people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System

went to the 5th flr to do multics
https://en.wikipedia.org/wiki/Multics

and others went to the IBM science center on the 4th flr
http://www.garlic.com/~lynn/subtopic.html#545tech

The MULTICS project wanted single-level store, a page-mapped filesystem,
and virtual memory segments. IBM bid a modified 360 and lost out to GE,
which bid a machine with those hardware features (as a "standard product")
https://en.wikipedia.org/wiki/GE-645

The 645 was a modified 635 processor that provided hardware support for
the Multics operating system developed at MIT.

... snip ...

a lot more history
http://multicians.org/index.html
and
http://multicians.org/history.html

Multics ran on specialized expensive CPU hardware that provided a
segmented, paged, ring-structured virtual memory. The supervisor
implemented symmetric multiprocessing with shared physical and virtual
memory. Standard Honeywell mainframe peripherals and memory were
used. The operating system was programmed in PL/I.

... snip ...

the science center thought it would be the focus for the MIT bid and also
the center for virtual memory expertise. A lot more history details here:
http://www.leeandmelindavarian.com/Melinda/neuvm.pdf

So, only weeks after his arrival in Cambridge, Rasmussen had to deal
with MIT's very negative reaction to System/360. Within days of the
System/360 announcement, the chief S/360 architect, Gene Amdahl, came to
Cambridge to meet with Professor Corbato and his colleagues, but that
meeting seems only to have made matters worse. As a loyal IBMer,
Rasmussen was deeply embarrassed by IBM's failure to heed the advice of
such an important customer, and he became determined to make things
right, to do whatever was necessary to make System/360 right for MIT and
other customers.

...

The machine that IBM proposed to Project MAC was a S/360 that had been
modified to include the "Blaauw Box". This machine was also bid to
Bell Labs at about the same time. It was never built, however, because
both MIT and Bell Labs chose another vendor. MIT's stated reason for
rejecting IBM's bid was that it wanted a processor that was a main-line
product, so that others could readily acquire a machine on which to run
Multics. It was generally believed, however, that displeasure with IBM's
attitude toward time-sharing was a factor in Project MAC's decision.
Losing Project MAC and Bell Labs had important consequences for IBM.

... snip ...

IBM eventually does come out with the 360/67, but too late. I've noted
that the standard TSS/360 product group for the 360/67 at one point had
something like 1200 people, at a time when the science center had 12
people doing CP/67-CMS.
http://www.garlic.com/~lynn/2017f.html#25 MVS vs HASP vs JES (was 2821)

being in same bldg only one flr apart there was some rivalry ... also
would see each other frequently for lunch ... in lunch room on 1st flr
of 545 tech sq ... or other places along main street or down in central
sq or harvard sq.

more multics history
http://multicians.org/history.html

Elliott Organick's book, The Multics System, an Examination of its
Structure, describes the system as it was in about 1968. MIT started
providing timesharing service on Multics to users in fall of 1969. GE
sold the next system to the US Air Force, and the military use of
Multics led to some of the system's security features. Honeywell sold
more systems to government, and to auto makers, universities, and
commercial data processing services.

... snip ...

Much later I posted in the Multics usenet group some old email about the
USAF coming by looking for 20 4341s, an order that then grew to 120 4341s
http://www.garlic.com/~lynn/2001m.html#email790404
in this post
http://www.garlic.com/~lynn/2001m.html#12 Multics Nostalgia

More drift: this also mentions that MULTICS shipped the first
commercial RDBMS product before IBM did
http://multicians.org/history.html

Codd was at San Jose Research, which did the first SQL/relational
implementation, System/R, on VM370 on a 370/145 ... some past posts
http://www.garlic.com/~lynn/submain.html#systemr

There was heavy opposition inside IBM to RDBMS ... from the mainstream
DBMS organization, which was all involved in implementing the next
mainstream DBMS product (code-named "EAGLE"). While the rest of IBM was
preoccupied with "EAGLE", we (by this time I had transferred to SJR)
managed to do a tech transfer ("under the radar") to Endicott, where it
was released as SQL/DS.
https://en.wikipedia.org/wiki/IBM_SQL/DS

Later when "EAGLE" implodes, there is request for how fast System/R
(SQL/DS) can be ported to MVS. It is eventually released as DB2,
initially for decision support only:
https://en.wikipedia.org/wiki/IBM_DB2

Even later, when we are working on HA/CMP cluster scaleup ...
http://www.garlic.com/~lynn/subtopic.html#hacmp
referenced here in this Jan1992 meeting with Oracle in ellison's
conference room
http://www.garlic.com/~lynn/95.html#13

The Oracle EVP in the meeting mentions that when he was at (IBM) STL he
did the SQL/DS technology transfer from Endicott back to STL for DB2.
However, within a few weeks of the Ellison meeting, cluster scaleup was
transferred to Kingston, announced as IBM supercomputer (for technical
and scientific *ONLY*), and we were told we couldn't work on anything
with more than four processors. Possibly contributing to the decision
was that the (mainframe) DB2 group was claiming that if we (HA/CMP) were
allowed to proceed, we would be at least five years ahead of them.
related old email
http://www.garlic.com/~lynn/lhwemail.html#medusa

Quadibloc

unread,
May 29, 2017, 9:39:50 AM5/29/17
to
On Sunday, May 28, 2017 at 11:09:29 AM UTC-6, Anne & Lynn Wheeler wrote:

> IBM eventually does come out with 360/67 but too late.

Too late for Multics, but at least it did some good; it led to MTS.

John Savard

Rich Alderson

unread,
May 31, 2017, 7:00:56 PM5/31/17
to
645, which was a 635 with some additional hardware and a change in instruction
size from 12 to 13 bits (IIRC).

After the sale to Honeywell, the GE 600s became Honeywell 6000s. The top of
the line GCOS (< GECOS) system was the 6170; the Multics variant was the 6180.

The final hardware was the DPS-8 (GCOS) vs DPS-8/M (Multics).

--
Rich Alderson ne...@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen

Rich Alderson

unread,
May 31, 2017, 7:02:50 PM5/31/17
to
jmfbahciv <See....@aol.com> writes:

> Quadibloc wrote:

>> Yes. Multics was an operating system for the GE 625 and/or 645, I believe.

> I wonder where this notion. That changes everything..Time to unwind my
> thinking and try to remember my conclusions based on this incorrect
> assumption.

Probably because all the GE/Honeywell large systems were a 36-bit architecture,
although very unlike the PDP-10. Cf. the Univac 1100/2200 family and the IBM
704x/709x family.

Rich Alderson

unread,
May 31, 2017, 7:07:54 PM5/31/17
to
jmfbahciv <See....@aol.com> writes:

> Morten Reistad wrote:

>> Tops10, ITS, Tenex, Tops20. Multics never ran on pdp10s.

> I thought it did. I equate Tenex and TOPS-20 (same sources).

Well, sort of. TENEX was the beginning of TOPS-20, but DEC added a lot of
things, took out some things, and changed some things incompatibly.

Most important add was the COMND% JSYS. Most important drop was user-mode
JSYS (that is, JSYS with a non-zero AC field).

These are my opinions.

Rich Alderson

unread,
May 31, 2017, 7:13:28 PM5/31/17
to
Peter Flass <peter...@yahoo.com> writes:

> <hanc...@bbs.cpcn.com> wrote:

>> I heard the _typical_ smallest /30 was 32k.

> I think so, that's mostly what I saw, with a few 64K. I think they came as
> small as 16K, but I can't imagine what would run on it, except maybe a
> dedicated application. I believe the smallest DOS Supervisor you could
> generate was 4K. I'm not even sure the assembler would run in 12K..

BOS ("Basic Operating System"), which was cut down with respect to TOS/DOS.
My mother learned FORTRAN on BOS while working at Ekco in Wheeling, Illinois,
at her boss's insistence. (Wish I had known, since I was learning the same
language on a 1401 in school!)

jmfbahciv

unread,
Jun 1, 2017, 10:28:18 AM6/1/17
to
Rich Alderson wrote:
> jmfbahciv <See....@aol.com> writes:
>
>> Quadibloc wrote:
>
>>> Yes. Multics was an operating system for the GE 625 and/or 645, I believe.
>
>> I wonder where this notion. That changes everything..Time to unwind my
>> thinking and try to remember my conclusions based on this incorrect
>> assumption.
>
> Probably because all the GE/Honeywell large systems were a 36-bit
> architecture, although very unlike the PDP-10. Cf. the Univac 1100/2200
> family and the IBM 704x/709x family.
>

Maybe that was it. Which OS ran the multi-CPU system which had mixed
KAs and KIs? Was that ITS?


/BAH

Rich Alderson

unread,
Jun 2, 2017, 9:23:39 PM6/2/17
to
I think you're thinking of WAITS, which ran on the system(s) at the Stanford
Artificial Intelligence Laboratory. There wasn't a KI in the mix. They
started with a PDP-6, added a 10/50 which became the master, and eventually a
1080. The PDP-6 disappeared after SAIL donated it to DEC for the Computer
Museum (with a stop in Anaheim for the 1984 DECUS Fall Symposia), and the KA-10
went away in 1989. The system was retired in 1991.

As it happens, just today I got telnet working on WAITS on a 1095 here at the
museum, after 3 years of (sometimes long interrupted) work getting an OS with
no distribution media installed and running. Have a look at my web site:

http://www.panix.com/~alderson/