
Familiar


Scott Ford

Jun 14, 2012, 2:03:08 PM
All:
 
Check this out, boy, this looks vaguely familiar, like CICS or DB2 ..
 
http://en.wikipedia.org/wiki/File:Hadoop_1.png

Scott J Ford
Software Engineer
http://www.identityforge.com

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to list...@listserv.ua.edu with the message: INFO IBM-MAIN

Rob Schramm

Jun 15, 2012, 3:09:30 PM
DB2 is more comparable.

As far as I can tell, Hadoop is more targeted at data large enough (like
a data warehouse) to start causing a traditional database "issues". There
is some information about using Hadoop with COBOL, and there are some
folks using it in conjunction with z today.
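
For anyone who hasn't looked at it, the Hadoop programming model is just
a map step and a reduce step run in parallel over blocks of a very large
file. Here is a minimal sketch using Hadoop Streaming, where the map and
reduce steps are ordinary stdin/stdout scripts; the word-count task, the
file paths, and the jar name are invented for illustration, not anything
from this thread:

#!/usr/bin/env python
# mapper.py - emit "word<TAB>1" for every word read from stdin
import sys
for line in sys.stdin:
    for word in line.split():
        print("%s\t%d" % (word, 1))

#!/usr/bin/env python
# reducer.py - input arrives sorted by key, so counts for a word are adjacent
import sys
current, count = None, 0
for line in sys.stdin:
    word, n = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print("%s\t%d" % (current, count))
        current, count = word, 0
    count += int(n)
if current is not None:
    print("%s\t%d" % (current, count))

Run it something like this (paths hypothetical; the streaming jar name
varies by install):

hadoop jar hadoop-streaming.jar -input /big/logs -output /big/counts \
    -mapper mapper.py -reducer reducer.py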

Rob Schramm
Senior Systems Consultant
Imperium Group

Scott Ford

Jun 16, 2012, 1:15:57 PM
Rob,

Based on what I saw and my limited knowledge, I agree. Interesting how other apps mimic z/OS.

Scott ford
www.identityforge.com

Anne & Lynn Wheeler

Jun 16, 2012, 2:09:31 PM
scott_...@YAHOO.COM (Scott Ford) writes:
> Check this out, boy this looks vaguely familiar like CICS or DB2 ..
>  
> http://en.wikipedia.org/wiki/File:Hadoop_1.png

note that CICS was originally designed to use as few os/360 resources
as possible ... because os/360 processing was horrendously heavyweight
and bloated. CICS had originally been developed at a customer site
before being selected for release as a product.

disclaimer: the univ. library got an ONR grant to do an online catalog ...
part of the money was used to get a 2321 datacell. The project was also
selected to betatest the original CICS product. As an undergraduate, in
the 60s, I got tasked with debugging and supporting CICS in the
betatest. misc. past posts mentioning CICS and/or BDAM
http://www.garlic.com/~lynn/submain.html#cics

the original relational work was Codd's in bldg. 28, and the original
relational/sql implementation was system/r on vm370 on a 370/145 in
bldg. 28. misc. past
posts mentioning system/r
http://www.garlic.com/~lynn/submain.html#systemr

However, nearly everybody else in the industry released an RDBMS product
before IBM got around to it. In part because the company was so focused
on the new flagship DBMS product, EAGLE, under the radar (so to speak) we
were able to do a technology transfer from bldg. 28 to Endicott for
release as SQL/DS. Later, after EAGLE imploded, there was a request about
how fast a port could be done to MVS (which eventually became DB2).

In this reference about a meeting on HA/CMP cluster scaleup, early Jan1992,
in Ellison's conference room
http://www.garlic.com/~lynn/95.html#13

we were working with Oracle on RDBMS cluster scaleup for HA/CMP ...
misc. old email mentioning cluster scaleup for both commercial and
scientific/numeric-intensive
http://www.garlic.com/~lynn/lhwemail.html#medusa

the issue was that Oracle's Unix product had clustering built in, since it
also ran on vax/cluster. I had to provide an HA/CMP API that had some of
the look&feel of vax/cluster ... but there was also a list of things that
vax/cluster had done "wrong" ... that I got to correct (plus drawing on
the mainframe experience). The IBM non-mainframe "DB2" of the period was
in the process of being developed for OS2 and had none of the high-end,
high-throughput features that were required. However, the mainframe
people did complain that if I was allowed to continue ... it would be a
minimum of five years ahead of where mainframe DB2 was.

In any case, end of Jan1992, possibly only hrs after the last email
mentioned above ... the cluster scaleup is transferred and we are told
we can't work on anything with more than four processors. A couple
weeks later there is this for numeric-intensive *ONLY* (17Feb1992)
http://www.garlic.com/~lynn/2001n.html#6000clusters1
and later in the spring reference to it coming as complete *SURPRISE*
to the company (11May1992)
http://www.garlic.com/~lynn/2001n.html#6000clusters2

and then much more recently for commercial/DBMS
http://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time

referencing (not quite 20yrs later):

DB2 announces technology that trumps Oracle RAC and Exadata
http://freedb2.com/2009/10/10/for-databases-size-does-matter/
IBM pureScale Technology Redefines Transaction Processing Economics.
New DB2 Feature Sets the Bar for System Performance on More than 100 IBM
Power Systems
http://www-03.ibm.com/press/us/en/pressrelease/28593.wss

--
virtualization experience starting Jan1968, online at home since Mar1970

Anne & Lynn Wheeler

Jun 18, 2012, 9:30:42 AM
ly...@GARLIC.COM (Anne & Lynn Wheeler) writes:
> In any case, end of Jan1992, possibly only hrs after the last email
> mentioned above ... the cluster scaleup is transferred and we are told
> we can't work on anything with more than four processors. A couple
> weeks later there is this for numeric-intensive *ONLY* (17Feb1992)
> http://www.garlic.com/~lynn/2001n.html#6000clusters1
> and later in the spring reference to it coming as complete *SURPRISE*
> to the company (11May1992)
> http://www.garlic.com/~lynn/2001n.html#6000clusters2

20yrs later

IBM wins top spot on supercomputer list
http://www.tgdaily.com/hardware-features/64103-ibm-wins-top-spot-on-supercomputer-list

from above:

The IBM BlueGene/Q system, named Sequoia, is installed at the Department of Energy's Lawrence Livermore National Laboratory and hit 16.32 petaflop/s on the Linpack benchmark using 1,572,864 cores. It's also one of the most energy efficient systems on the list.

... snip ...

it has more than two times the number of cores of #2 on the list while getting slightly more than 1.5 times the petaflops

by coincidence the last email
http://www.garlic.com/~lynn/2006x.html#email920129
in the referenced list
http://www.garlic.com/~lynn/lhwemail.html#medusa

includes discussion of a meeting at LLNL earlier in the week that I wasn't able to attend ... but a couple of people at the meeting (from other vendors) came by to fill me in on what happened.

Shmuel Metz , Seymour J.

Jun 18, 2012, 3:41:09 PM
In <m3pq8zm...@garlic.com>, on 06/16/2012
at 02:07 PM, Anne & Lynn Wheeler <ly...@GARLIC.COM> said:

>note that CICS was originally designed to use as few os/360
>resources as possible ... because os/360 processing was
>horrendously heavyweight and bloated.

Was that the reason, or was it because PCP and MFT[1] did not have
ATTACH?

[1] MFT eventually got ATTACH, but that came later.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT
Atid/2 <http://patriot.net/~shmuel>
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)

Anne & Lynn Wheeler

Jun 18, 2012, 4:56:37 PM
shmue...@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
> Was that the reason, or was it because PCP and MFT[1] did not have
> ATTACH?
>
> [1] MFT eventually got ATTACH, but that came later.

re:
http://www.garlic.com/~lynn/2012h.html#78 Familiar

possibly before I saw it; I don't really know.

the betatest for the product was 1969 ... not sure what they were doing
(at the site where it was developed) prior to that ... so it would have
been at least Release 18 (and the university had moved from MFT to MVT at
release 15/16; aka release 15 had slipped so far ... that it eventually
shipped from IBM as a double release).

however, the major heavy weight (besides avoiding TCB tasking) was
OPEN/CLOSE per task. CICS did a batch open at startup ... the disk
accesses and pathlength for OPEN/CLOSE would have swamped typical task
disk accesses and execution time. The first bug I shot was in OPEN ...
the implementation stuffed some bits into DCB fields for specific BDAM
options. The library was using a different set of BDAM options, so the
OPEN would fail and CICS couldn't get started. I had to zap some
instructions to stop the DCB field fiddling in the CICS code.
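
Nothing to do with actual CICS internals; just a toy sketch (in python,
with invented file names and a sleep standing in for OPEN's cost) of the
pattern described above: pay the open cost once at startup and hand the
already-open handles to every transaction, instead of OPEN/CLOSE per
transaction.

#!/usr/bin/env python
# Toy illustration of "batch open at startup" vs. open/close per transaction.
# Not CICS code; file names and the simulated OPEN cost are invented.
import time

FILES = ["accounts.dat", "inventory.dat", "history.dat"]  # hypothetical datasets

def slow_open(name):
    time.sleep(0.01)             # stand-in for OPEN's disk accesses and pathlength
    return open(name, "a+b")

def txn_open_per_transaction(name):
    f = slow_open(name)          # every transaction pays the OPEN/CLOSE cost
    f.seek(0, 2)                 # the actual "work" is trivial by comparison
    f.close()

# CICS-style: open everything once at startup and keep the handles for the
# life of the region; each transaction just reuses them.
handles = {name: slow_open(name) for name in FILES}

def txn_preopened(name):
    handles[name].seek(0, 2)     # no OPEN/CLOSE in the per-transaction path

t0 = time.time()
for _ in range(200):
    txn_open_per_transaction("accounts.dat")
t1 = time.time()
for _ in range(200):
    txn_preopened("accounts.dat")
t2 = time.time()
print("open per transaction: %.2fs   open once at startup: %.2fs" % (t1 - t0, t2 - t1))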

misc. past posts mentioning BDAM &/or CICS:
http://www.garlic.com/~lynn/submain.html#cics

--
virtualization experience starting Jan1968, online at home since Mar1970

Paul Gilmartin

Jun 18, 2012, 5:42:54 PM
On Mon, 18 Jun 2012 09:34:27 -0400, Shmuel Metz (Seymour J.) wrote:
>
>[1] MFT eventually got ATTACH, but that came later.
>
??? How could that possibly work?

Doesn't "MFT" stand for "Multiprogramming with a Fixed number of Tasks"?

The instant one does an ATTACH, doesn't the number of tasks change?

(Or did MFT simulate ATTACH by dispatching an idle member of the fixed
task pool?)

--gil

Tony Harminc

Jun 18, 2012, 5:57:37 PM
On 18 June 2012 09:34, Shmuel Metz (Seymour J.) <shmue...@patriot.net> wrote:
> In <m3pq8zm...@garlic.com>, on 06/16/2012
>   at 02:07 PM, Anne & Lynn Wheeler <ly...@GARLIC.COM> said:
>
>>note that CICS was originally designed to use as few os/360
>>resources as possible ... because os/360 processing was
>>horrendously heavyweight and bloated.
>
> Was that the reason, or was it because PCP and MFT[1] did not have ATTACH?

I don't know CICS, but HASP also used few of the available OS
services. Here's an excerpt from a little 1970-ish course handbook
(SR23-3697-0) that explains why, with some not-so-gentle jabs at the
performance of the OS services. I'm sure many of the CICS reasons were
similar:

<quote>
HASP STRUCTURE
The primary goal in the design of any execution support system such as
HASP must be the efficient manipulation of the various resources
required for processing. The first design steps must then include the
determination of what resources will be required and the careful
application of sound programming design techniques to achieve an
efficient and consistent solution to the allocation of these resources.

A study would reveal that HASP requires the following resources:
1. Main Storage
2. Direct-Access Space
3. Input/Output Units
4. Central Processing Unit Time
5. Input/Output Channel and Unit Time
6. Programs
7. Jobs
8. Interval Timer

Since these resources are essentially the basic facilities provided by
the Operating System, it would at first seem that these facilities would
be sufficient to meet the requirements of HASP. Further studies show,
however, that the philosophies of the Operating System's services are not
always consistent with the design requirements of a system such as HASP.

For instance, the main storage services provided by the Operating
System are very flexible and comprehensive but fail to meet the
requirements of HASP in the following areas:

• As requests for main storage are serviced, memory becomes fragmented
in such a way that eventually a request for storage cannot be serviced
for lack of contiguous memory even though the total amount of storage
available far exceeds the requested quantity.
• As the amount of available storage decreases, the requestor becomes
more susceptible to being placed in an OS WAIT state or being ABENDed.
These conditions are both intolerable to HASP.
• The primary use of main storage in HASP is for buffering space for
input/output purposes. These input/output purposes require that an
Input/Output Block be associated with each segment of main storage
which the Operating System Main Storage Supervisor, only naturally,
does not provide. This means that HASP would have to construct such a
block for each main storage segment it required.

In a similar fashion the Direct-Access Device Space Manager (DADSM)
provides flexible and comprehensive services for normal job processing
requirements but fails to meet the requirements of HASP in the
following areas:

• Because of the data set concept employed by DADSM, the "hashing" or
"fragmentation" problem described above also impacts the allocation of
direct-access space.
• The data set concept complicates the simultaneous allocation of
storage across many volumes (for selector channel overlap).
• The DADSM limit of extents per volume tends to cause volume
switching, and the associated time delays are intolerable to HASP.
• DADSM consists of non-resident routines which must be loaded for
each direct-access space allocation service. Because of the frequent
allocation requirements, the associated overhead involved in the
loading of these routines would degrade the performance of HASP to a
certain extent.

Since the unit-record Input/output units which the scheduler allocates
to the jobs being processed in other partitions must be available for
use by HASP, HASP must be responsible for the allocation of its own
input/output units.

The Operating System Task Supervisor is responsible for the allocation
of Central Processing Unit (CPU) time to all tasks in the system. The
different functions of HASP (reading cards, printing, punching, etc.)
could be defined as individual OS tasks except for the following
considerations:

• Defining each function as a separate task would prohibit HASP from
being used with anything other than a variable-task system.
• Inter-task communication and synchronization is many times more
complex than intra-task communication and synchronization.

The Operating System Input/Output Supervisor is responsible for the
allocation of all input/output channel and unit time. It completely
meets all requirements and is used by HASP for all input/output
scheduling.

The Operating System Interval Timer Supervisor provides complete
interval timer management services but limits these services to one
user per task. Since HASP has many functions which have simultaneous
interval timer requirements, an interface must be provided which will
grant unlimited access to the OS Interval Timer Supervisor.
</quote>

Tony H.

Rich Greenberg

Jun 18, 2012, 6:28:48 PM
In article <5581573438612873.WA...@listserv.ua.edu> you write:
>On Mon, 18 Jun 2012 09:34:27 -0400, Shmuel Metz (Seymour J.) wrote:
>>
>>[1] MFT eventually got ATTACH, but that came later.
>>
>??? How could that possibly work?
>
>Doesn't "MFT" stand for "Multiprogramming with a Fixed number of Tasks"?
>
>The instant one does an ATTACH, doesn't the number of tasks change?
>
>(Or did MFT simulate ATTACH by dispatching an idle member of the fixed
>task pool?)

Gil et al,
"Fixed number of Tasks" was really a misuse of the term "tasks".
It was really a fixed number of partitions. Each partition could run a
"job" or a "started task" (this may not be the correct terminology: a
card reader reading cards into spool, or a writer driving spool files to
a printer or punch, etc.)

Each job or task could multiprogram within itself, and once ATTACH was
added, multithread.

As a historical note, early MFT versions only had one initiator. All but
the last were never-ending jobs, HASP being a typical one. You IPL'd
and started a job in P1. It did a WAITR macro and the initiator moved
to P2, etc.

You could move the initiator back down with an operator command if the
never-ending job ended, and you could restart it or another, again with
operator commands.

--
Rich Greenberg Sarasota, FL, USA richgr atsign panix.com + 1 941 378 2097
Eastern time. N6LRT I speak for myself & my dogs only. VM'er since CP-67
Canines: Val,Red,Shasta,Zero,Casey & Cinnar (At the bridge) Owner:Chinook-L
Canines: Red & Max (Siberians) Retired at the beach Asst Owner:Sibernet-L

Anne & Lynn Wheeler

Jun 19, 2012, 8:14:02 AM

ly...@GARLIC.COM (Anne & Lynn Wheeler) writes:
> however, the major heavy weight (besides avoiding TCB tasking) was
> OPEN/CLOSE per task. CICS did a batch open at startup ... the disk
> accesses and pathlength for OPEN/CLOSE would have swamped typical task
> disk accesses and execution time. The first bug I shot was in OPEN ...
> the implementation stuffed some bits into DCB fields for specific BDAM
> options. The library was using a different set of BDAM options, so the
> OPEN would fail and CICS couldn't get started. I had to zap some
> instructions to stop the DCB field fiddling in the CICS code.

re:
http://www.garlic.com/~lynn/2012h.html#78 Familiar
http://www.garlic.com/~lynn/2012i.html#7 Familiar

one of the things i started doing around release 11 was totally taking
apart the stage2 output from the stage1 sysgen and reordering it so the
execution sequence would carefully place files and pds members on disk
for optimal disk arm seeking. for the univ. student workload, this
gained an approx. three-times increase in throughput. one of the major
wins was ordering the multitude of svclib open/close pds members.
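
A toy model of why the ordering mattered (invented member names, reference
counts, and geometry; not the actual sysgen procedure): cluster the most
heavily referenced members together and the arm travels far less for the
same reference stream.

#!/usr/bin/env python
# Toy seek-distance model: one reference stream, two member placements.
# Member names, reference counts, and geometry are invented for illustration.
import random
random.seed(1)

members = ["MOD%03d" % i for i in range(50)]               # hypothetical PDS members
freq = {m: random.randint(1, 100) for m in members}        # how often each gets fetched

# reference stream drawn according to those frequencies
stream = random.choices(members, weights=[freq[m] for m in members], k=5000)

def total_seek(placement):
    """Arm travel (in 'cylinders') to serve the stream, one member per cylinder."""
    cyl = {m: i for i, m in enumerate(placement)}
    pos, travel = 0, 0
    for m in stream:
        travel += abs(cyl[m] - pos)
        pos = cyl[m]
    return travel

alphabetical = sorted(members)                             # naive placement
by_frequency = sorted(members, key=lambda m: -freq[m])     # hot members clustered together

print("alphabetical placement:", total_seek(alphabetical))
print("frequency-ordered     :", total_seek(by_frequency))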

long ago and far away, part of presentation I made at
fall '68 SHARE in Atlanta
http://www.garlic.com/~lynn/94.html#18

That spring and summer, I had significantly rewritten sections of cp67
code (in this benchmark with mft14, reducing cp67 processing time from
534 cpu seconds to 113 cpu seconds).

CP67 never did quite make it to production at the univ.; I just got to
play with it on weekends. Mostly the 360/67 ran in 360/65 mode with
os/360. One of the aggravations with careful PDS member ordering was
PTFs replacing PDS members and destroying the ordering. Lots of PTF
activity over time could cut throughput in half ... and if a new
release build wasn't imminent, I would have to rebuild the current
release to get throughput back.

in any case, even with careful ordering ... the open/close process still
involved fetching a large number of PDS members ... doing it on a
per-transaction basis would have totally destroyed CICS throughput.

Shmuel Metz , Seymour J.

Jun 19, 2012, 7:45:14 PM
In <m3mx40b...@garlic.com>, on 06/18/2012
at 04:54 PM, Anne & Lynn Wheeler <ly...@GARLIC.COM> said:

>betatest for product was 1969

When did design start?

--
Shmuel (Seymour J.) Metz, SysProg and JOAT
Atid/2 <http://patriot.net/~shmuel>
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)

Shmuel Metz , Seymour J.

Jun 19, 2012, 7:45:32 PM
In <201206182227...@panix5.panix.com>, on 06/18/2012
at 06:27 PM, Rich Greenberg <ric...@PANIX.COM> said:

>Gil et al, "Fixed number of Tasks" was really a misuse of the term
>"tasks". It was really a fixed number of partitions.

Unless the operator knew about the DEFINE command.

>As a historical note, early MFT versions only had 1 initiator.

No. In fact, your subsequent text shows that you know it's not true,
since you mention SHIFT and WAITR. Fortunately MFT II did away with
the PRSCB.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT
Atid/2 <http://patriot.net/~shmuel>
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)

Shmuel Metz , Seymour J.

Jun 19, 2012, 7:46:05 PM
In <CAArMM9QZHn7172T7zVTkpApp...@mail.gmail.com>, on 06/18/2012
at 05:56 PM, Tony Harminc <to...@HARMINC.NET> said:

>I don't know CICS, but HASP also used few of the available OS
>services. Here's an excerpt from a little 1970-ish course handbook
>(SR23-3697-0) that explains why, with some not so gentle jabs at the
>performance of the OS services. I'm sure many of the CICS reasons
>were similar:

Some of the alleged reasons are bogus, and the excerpt leaves out an
important legitimate reason ;-)

> The primary use of main storage in HASP is for buffering space for
>input/output purposes. These input/output purposes require that an
>Input/Output Block be associated with each segment of main storage

No.

>In a similar fashion the Direct-Access Device Space Manager (DADSM)
>provides flexible and comprehensive services for normal job
>processing requirements but fails to meet the requirements of HASP
>in the following areas:

The excerpt doesn't mention the DADSM overhead; HASP can allocate
space in SPOOL much more quickly, even if the DADSM modules are
resident.

>Since the unit-record Input/output units which the scheduler
>allocates to the jobs being processed in other partitions must be
>available for use by HASP, HASP must be responsible for the
>allocation of its own input/output units.

No. The scheduler would not normally allocate unit record equipment to
a user job except for PCP, and when it does, every other job must keep
its hands off of it. Without HASP, unit record equipment is normally
allocated to reader and writer tasks, not to user jobs.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT
Atid/2 <http://patriot.net/~shmuel>
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)

Shmuel Metz , Seymour J.

Jun 19, 2012, 7:46:46 PM
In <5581573438612873.WA...@listserv.ua.edu>, on 06/18/2012
at 04:42 PM, Paul Gilmartin <PaulGB...@AIM.COM> said:

>Doesn't "MFT" stand for "Multiprogramming with a Fixed number of
>Tasks"?

Don't confuse etymology with semantics. Not only did more recent
releases of OS/360 support ATTACH for MFT, OS/360 also supported
operator definition and deletion of partitions, so all that was fixed
was the upper limit.

>The instant one does an ATTACH, doesn't the number of tasks change?

Yes, just as many lefties are adroit and not sinister.

>(Or did MFT simulate ATTACH by dispatching an idle member of the
>fixed task pool?)

I've never heard of a fixed task pool.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT
Atid/2 <http://patriot.net/~shmuel>
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)
