
IBM Programmer Aptitude Test


Paul A Palmer

Jan 2, 1994, 4:32:52 PM
Reading an old (1971) journal article, I saw a reference to the IBM Programmer
Aptitude Test. The reference in the article was:

McNamara, W. J., & Hughes, J. L.
Manual for the revised programmer aptitude test.
White Plains, New York: IBM, 1969.

From the (brief) description in the article, the test apparently has 3 parts:
completion of number sequences, geometric paired comparisons, and word
problems similar to those in junior high school mathematics.

My questions are:

1. Has anyone out there actually taken this test?

2. How would I go about getting a copy?

Thanks in advance.

--
Paul Palmer
Department of Mathematics E-mail: pal...@math.orst.edu
Kidder Hall 368
Oregon State University, Corvallis, Oregon 97331-4605

Kelly Bert Manning

Jan 3, 1994, 3:19:30 AM

I've never seen this but I have a vague recollection of seeing it
mentioned in a Communications of the ACM article from about 10 years
ago which discussed a number of supposed predictors of success in
programming. My recollection is that at that time the PAT didn't have
a particularly good correlation with success at first year C.Sc., but
that may not mean much.

--

James R Ebright

Jan 3, 1994, 9:38:18 AM
In article <CJ1oG...@suncad.camosun.bc.ca> ua...@freenet.Victoria.BC.CA writes:
...

>My recollection is that at that time the PAT didn't have
>a particularly good correlation with success at first year C.Sc., but
>that may not mean much.

What it means is the PAT didn't measure how well you jump through the
professor's hoops ;)

Of course, grades don't have a good correlation with success in the real
world :) (Academics just *hate* it when I point that out.)

The last time I saw a PAT was 1972. It reminded me a lot of the math
part of an IQ test.

--
A/~~\A 'moo2u from osu' Jim Ebright e-mail: jr...@osu.edu
((0 0))_______ "Education ought to foster the wish for truth,
\ / the \ not the conviction that some particular creed
(--)\ OSU | is the truth." -- Bertrand Russell

Jay Maynard

Jan 3, 1994, 10:15:10 AM
In article <2g9akq$8...@charm.magnus.acs.ohio-state.edu>,

James R Ebright <jebr...@magnus.acs.ohio-state.edu> wrote:
>Of course, grades don't have a good correlation with success in the real
>world :) (Academics just *hate* it when I point that out.)

Not to mention the fact that there has been zero correlation in the folks I've
worked with over the years between degree status/major/GPA and real-world
ability. Some of the best programmers I've had the pleasure of working with
have been non-degreed or held degrees in non-computing fields, and some of the
worst have been CS grads with 4.0 GPAs. (I must admit to some bias here, as
well; I am non-degreed, and have an extremely low tolerance for the kind of
bullshit required to survive four years of college.)

>The last time I saw a PAT was 1972. It reminded me a lot of the math
>part of an IQ test.

I had to take one in 1985; my headhunter told me I aced it, and that that got
me the job. Unfortunately, that one only lasted 11 months before the oil bust
got me...
--
Jay Maynard, EMT-P, K5ZC, PP-ASEL | Never ascribe to malice that which can
jmay...@oac.hsc.uth.tmc.edu | adequately be explained by stupidity.
"A good flame is fuel to warm the soul." -- Karl Denninger

David D. Miller

Jan 4, 1994, 1:12:31 PM

I took this test just last year - oops, make that in 1992. The test does have
the 3 parts you mention, and IBM requires a prospective employee to take the
test as a pre-condition of employment (at least for those with CS-related
(as opposed to, say, manufacturing) jobs). The test wasn't terribly difficult
- about junior-high level (that's about the 7th year of education for our
non-US readers). The folks at the employment office said that "they" (meaning
IBM corporate) required it, and it was just a formality. I never saw my
score, or heard anything more about the test after the day I took it.
I don't know how to get a copy, or if that's even possible.
--
David D. Miller | "Nothing sucks like a Vax."
AIX Information Development |
ddmi...@austin.ibm.com | - British vacuum cleaner advertisement
Not IBM's opinions | (circa 1987).

Michael Covington

Jan 6, 1994, 12:33:25 AM
In article <2g9akq$8...@charm.magnus.acs.ohio-state.edu> jebr...@magnus.acs.ohio-state.edu (James R Ebright) writes:

>Of course, grades don't have a good correlation with success in the real
>world :) (Academics just *hate* it when I point that out.)

We are acutely aware of it and can't do much about it. We take flak from
2 kinds of people:
(a) Student who just wants to be trained for his first job and never
learn anything that will be applicable more than 6 months from now;
(b) Student who only wants to learn the theory and never do any
practical work.

But perhaps our biggest enemy is the notion -- ingrained in academia and
utterly antithetical to the workplace -- that getting something 70% right
is good enough.

I've taught classes of people who track that 70% level the way a London
taxi driver tracks the preceding car's rear bumper. Some people will
learn 70% of anything, but never 71%.

--
< Michael A. Covington, Assc Rsch Scientist, Artificial Intelligence Programs >
< The University of Georgia, Athens, GA 30606-7415 USA mcov...@ai.uga.edu >
<>< ----------------------------------------------------------------------- ><>
< For info about U.Ga. degree programs, email GRA...@UGA.CC.UGA.EDU (not me) >

Doug Burger

Jan 6, 1994, 6:14:00 PM

In article <2gg7r5$ku3#hobbes.cc.uga.edu> mcov...@aisun3.ai.uga.edu
(Michael Covington) writes:

MC> But perhaps our biggest enemy is the notion -- ingrained in academia and
MC> utterly antithetical to the workplace -- that getting something 70% right
MC> is good enough.

Hm, maybe that explains why my company thinks 70% retention rate
of its customers is perfectly normal.

It may be "antithetical to the workplace", but it's a crying
shame how many workplaces work that way.

doug

P.S. Hello Dr., from a former student of yours...

---
. OLX 2.1 TD . Don't question authority; it doesn't know either.

fra...@dfwair.net

Jul 19, 2014, 1:31:29 PM
I took this test in 1962 when I was a senior in high school. I did well enough to get two summer job offers - one from IBM and one from the Ford Scientific Research Laboratory. I took the job with Ford because it involved programming for real applications (the IBM job involved being an assistant to a man who repaired accounting machines). The test evaluated my ability to think in a logical manner and solve puzzles. While certainly not comprehensive by today's standards, it did work fairly well from my perspective. I ended up with a 40+ year career in software development.

Anne & Lynn Wheeler

Jul 19, 2014, 3:09:48 PM

fra...@dfwair.net writes:
> I took this test in 1962 when I was a senior in high school. I did
> well enough to get two summer job offers - one from IBM and one from
> the Ford Scientific Research Laboratory. I took the job with Ford
> because it involved programming for real applications (the IBM job
> involved being an assistant to a man who repaired accounting
> machines). The test evaluated my ability to think in a logical manner
> and solve puzzles. While certainly not comprehensive by today's
> standards, it did work fairly well from my perspective. I ended up
> with a 40+ year career in software development.

I went to recruitment day and took the IBM programmer aptitude test just
before I graduated. The IBMer said that I didn't get a high enough score to
be offered a job. I then explained that I had already been working as
primary IBM operating system support at the univ, had been brought in to
help set up Boeing Computer Services as a full-time employee ... and had to
choose between staying with Boeing or accepting a job offer with the IBM
Cambridge Science Center (at "staff" level ... skipping beginning,
associate and the other lower levels). He couldn't reconcile my
score and the job offer from the science center (of course it didn't make
any difference). posts mentioning science center
http://www.garlic.com/~lynn/subtopic.html#545tech

Recent posts that major IBM products had been originally developed at
customer or internal datacenters and then moved to a (software)
"development group" for support and maintenance ... the transition to
"object code only" in the 80s ... greatly curtailed much of that
innovation:
http://www.garlic.com/~lynn/2014.html#19 the suckage of MS-DOS, was Re: 'Free Unix!
http://www.garlic.com/~lynn/2014h.html#74 The Tragedy of Rapid Evolution?
http://www.garlic.com/~lynn/2014h.html#79 EBFAS
http://www.garlic.com/~lynn/2014h.html#80 The Tragedy of Rapid Evolution?
http://www.garlic.com/~lynn/2014h.html#99 TSO Test does not support 65-bit debugging?
http://www.garlic.com/~lynn/2014i.html#5 "F[R]eebie" software
http://www.garlic.com/~lynn/2014i.html#6 TSO Test does not support 65-bit debugging?
http://www.garlic.com/~lynn/2014i.html#7 You can make your workplace 'happy'

other recent refs:
http://www.garlic.com/~lynn/2014c.html#31 How many EBCDIC machines are still around?
http://www.garlic.com/~lynn/2014e.html#19 The IBM Strategy
http://www.garlic.com/~lynn/2014e.html#23 Is there any MF shop using AWS service?
http://www.garlic.com/~lynn/2014e.html#69 Before the Internet: The golden age of online services
http://www.garlic.com/~lynn/2014f.html#36 IBM Historic computing
http://www.garlic.com/~lynn/2014f.html#73 Is end of mainframe near ?
http://www.garlic.com/~lynn/2014g.html#62 Interesting and somewhat disturbing article about IBM in BusinessWeek. What is your opinion?
http://www.garlic.com/~lynn/2014g.html#63 Costs of core
http://www.garlic.com/~lynn/2014i.html#13 IBM & Boyd
http://www.garlic.com/~lynn/2014i.html#31 Speed of computers--wave equation for the copper atom? (curiosity)

--
virtualization experience starting Jan1968, online at home since Mar1970

Rod Speed

Jul 19, 2014, 4:04:27 PM
That is a 20-year-old post you replied to.

<fra...@dfwair.net> wrote in message
news:b6365909-73d6-4b2f...@googlegroups.com...

jmfbahciv

Jul 20, 2014, 9:07:54 AM
Anne & Lynn Wheeler wrote:
>
> fra...@dfwair.net writes:
>> I took this test in 1962 when I was a senior in high school. I did
>> well enough to get two summer job offers - one from IBM and one from
>> the Ford Scientific Research Laboratory. I took the job with Ford
>> because it involved programming for real applications (the IBM job
>> involved being an assistant to a man who repaired accounting
>> machines). The test evaluated my ability to think in a logical manner
>> and solve puzzles. While certainly not comprehensive by today's
>> standards, it did work fairly well from my perspective. I ended up
>> with a 40+ year career in software development.
>
> I went to recruitment day and took the IBM programmer aptitude test just
> before I graduated. The IBMer said that I didn't get a high enough score to
> be offered a job. I then explained that I had already been working as
> primary IBM operating system support at the univ, had been brought in to
> help set up Boeing Computer Services as a full-time employee ... and had to
> choose between staying with Boeing or accepting a job offer with the IBM
> Cambridge Science Center (at "staff" level ... skipping beginning,
> associate and the other lower levels). He couldn't reconcile my
> score and the job offer from the science center (course it didn't make
> any difference). posts mentioning science center
> http://www.garlic.com/~lynn/subtopic.html#545tech

Did you ever find out what kinds of questions caused the low score?

<snip>

/BAH

Anne & Lynn Wheeler

Jul 20, 2014, 9:52:25 AM
jmfbahciv <See....@aol.com> writes:
> Did you ever find out what kinds of questions caused the low score?

re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test

never did, but the IBMer doing the interview was incredulous when I told
him that I already had an offer from the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

it was a fairly senior position ... and not entry level. a possible conjecture
was that the test was oriented to finding those that fit the "Man Month"
profile:
http://en.wikipedia.org/wiki/The_Mythical_Man-Month

recent refs:
http://www.garlic.com/~lynn/2014g.html#99 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
http://www.garlic.com/~lynn/2014h.html#103 TSO Test does not support 65-bit debugging?
http://www.garlic.com/~lynn/2014i.html#41 How Comp-Sci went from passing fad to must have major

I have mentioned in the past being blamed for online computer
conferencing on the internal network in the late 70s & early 80s
(folklore is that when the executive committee was told about online
computer conferencing & the internal network, 5 of 6 wanted to fire me).
internal network posts
http://www.garlic.com/~lynn/subnetwork.html#internalnet

I've also mentioned that, somewhat as a result of online computer
conferencing, a researcher was assigned to study how I communicated.
They sat in the back of my office for 9 months, took notes on how I
communicated face-to-face and by telephone, went with me to meetings, and got
copies of all my incoming & outgoing email as well as logs of all my
instant messages (almost tempted to reference gov. eavesdropping). The
result was a number of papers and at least one book, as well as a Stanford
PhD (joint with language and computer AI; Winograd was advisor on the
computer AI side). some past posts
http://www.garlic.com/~lynn/subnetwork.html#cmc

the researcher had previously spent some time as an English as a Second
Language instructor, and once commented that my use of English was
characteristic of a non-native speaker.

Dan Espen

Jul 20, 2014, 11:28:50 AM
Anne & Lynn Wheeler <ly...@garlic.com> writes:

> fra...@dfwair.net writes:
>> I took this test in 1962 when I was a senior in high school. I did
>> well enough to get two summer job offers - one from IBM and one from
>> the Ford Scientific Research Laboratory. I took the job with Ford
>> because it involved programming for real applications (the IBM job
>> involved being an assistant to a man who repaired accounting
>> machines). The test evaluated my ability to think in a logical manner
>> and solve puzzles. While certainly not comprehensive by today's
>> standards, it did work fairly well from my perspective. I ended up
>> with a 40+ year career in software development.
>
> I went to recruitment day and took the IBM programmer aptitude test just
> before I graduated. The IBMer said that I didn't get a high enough score to
> be offered a job.

I went to a technical school to learn programming.
I did well in the school, earning an A and acing all the tests.

Then my employer at the time sent me to HR to take the aptitude
test. They told me I didn't pass but asked me to take it again.
Then they told me I didn't pass again, but offered me a programming
position anyway.

The thing is, I usually do well on those types of tests.

Years later it occurred to me, maybe they were lying.
Maybe I did do well on the test. Maybe too well, leading
them to the second test.

I'll never know. Either the test completely failed to
measure my ability, or maybe they were just having their way
with me.

--
Dan Espen

Anne & Lynn Wheeler

Jul 20, 2014, 11:47:56 AM
Dan Espen <des...@verizon.net> writes:
> I went to a technical school to learn programming.
> I did well in the school, earning an A and acing all the tests.
>
> Then my employer at the time sent me to HR to take the aptitude
> test. They told me I didn't pass but asked me to take it again.
> Then they told me I didn't pass again, but offered me a programming
> position anyway.
>
> The thing is, I usually do well on those types of tests.
>
> Years later it occurred to me, maybe they were lying.
> Maybe I did do well on the test. Maybe too well, leading
> them to the second test.
>
> I'll never know. Either the test completely failed to
> measure my ability, or maybe they were just having their way
> with me.

re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test

shortly after joining IBM ... I guess I started to be a problem ...
during the "Future System" period, I refused to work on FS, continued
to work on 370 and would even periodically ridicule FS
http://www.garlic.com/~lynn/submain.html#futuresys

there were a few similar instances even before getting blamed for online
computer conferencing. about the same time as the online computer
conferencing flap ... I wrote an open door claiming that I was vastly
underpaid, even including references. I got back a written response from
the head of HR that my complete employment history had been reviewed and I
was making exactly what I was supposed to. I then took my original and
their response and wrote a response that I had been asked to interview
new hires for a new group that would work under my technical direction
and HR was making the new hires offers that were 30% more than I was
currently making. I never got a response from HR ... but within a few
weeks, I got a 30% raise ... aka it wasn't a 30% raise to put me at my
correct salary level, it was a 30% raise to bring me up level with what
they were offering the new hires. past refs:
http://www.garlic.com/~lynn/2009h.html#74 My Vintage Dream PC
http://www.garlic.com/~lynn/2010c.html#82 search engine history, was Happy DEC-10 Day
http://www.garlic.com/~lynn/2010f.html#79 The 2010 Census
http://www.garlic.com/~lynn/2010m.html#66 Win 3.11 on Broadband
http://www.garlic.com/~lynn/2011f.html#0 coax (3174) throughput
http://www.garlic.com/~lynn/2011g.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?
http://www.garlic.com/~lynn/2011g.html#12 Clone Processors
http://www.garlic.com/~lynn/2012k.html#28 How to Stuff a Wild Duck
http://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy
http://www.garlic.com/~lynn/2014c.html#65 IBM layoffs strike first in India; workers describe cuts as 'slaughter' and 'massive'
http://www.garlic.com/~lynn/2014h.html#81 The Tragedy of Rapid Evolution?

periodically during my career, people would remind me that "business
ethics" was an *oxymoron*.

other past posts referencing being told that "business ethics" is an
*oxymoron*
http://www.garlic.com/~lynn/2007j.html#72 IBM Unionization
http://www.garlic.com/~lynn/2009.html#53 CROOKS and NANNIES: what would Boyd do?
http://www.garlic.com/~lynn/2009e.html#37 How do you see ethics playing a role in your organizations current or past?
http://www.garlic.com/~lynn/2009o.html#36 U.S. students behind in math, science, analysis says
http://www.garlic.com/~lynn/2009o.html#52 Revisiting CHARACTER and BUSINESS ETHICS
http://www.garlic.com/~lynn/2009o.html#57 U.S. begins inquiry of IBM in mainframe market
http://www.garlic.com/~lynn/2009r.html#50 "Portable" data centers
http://www.garlic.com/~lynn/2010b.html#38 Happy DEC-10 Day
http://www.garlic.com/~lynn/2010f.html#20 Would you fight?
http://www.garlic.com/~lynn/2010g.html#0 16:32 far pointers in OpenWatcom C/C++
http://www.garlic.com/~lynn/2010g.html#44 16:32 far pointers in OpenWatcom C/C++
http://www.garlic.com/~lynn/2011b.html#59 Productivity And Bubbles
http://www.garlic.com/~lynn/2012k.html#28 How to Stuff a Wild Duck
http://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy

Walter Bushell

Jul 20, 2014, 12:15:51 PM
In article <m361ist...@garlic.com>,
Anne & Lynn Wheeler <ly...@garlic.com> wrote:

> there were a few similar instances even before getting blamed for online
> computer conferencing. about the same time as the online computer
> conferencing flap ... I wrote an open door claiming that I was vastly
> underpaid, even including references. I got back written response from
> head of HR that my complete employment history had been reviewed and I
> was making exactly what I was supposed to. I then took my original and
> their response and wrote a response that I had been asked to interview
> new hires for a new group that would work under my technical direction
> and HR was making the new hires offers that were 30% more than I was
> currently making. I never got a response from HR ... but within a few
> weeks, I got a 30% raise ... aka it wasn't a 30% raise to put me at my
> correct salary level, it was a 30% raise to bring me up level with what
> they were offering the new hires. past refs:

This happens all the time. In fields where pay is going up rapidly or
periods of high inflation, companies will frequently pay new hires
more than the people hired last year or the year before etcetera.

They figure they can (usually) get away with small raises for their
current employees, but have to meet the market for newbies.

WHAT? You were expecting justice? From a corporation?

Anne & Lynn Wheeler

Jul 20, 2014, 12:27:38 PM

Walter Bushell <pr...@panix.com> writes:
> This happens all the time. In fields where pay is going up rapidly or
> periods of high inflation, companies will frequently pay new hires
> more than the people hired last year or the year before etcetera.
>
> They figure they can (usually) get away with small raises for their
> current employees, but have to meet the market for newbies.
>
> WHAT? You were expecting justice? From a corporation?

re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test

I had included in the original open door a copy of a (then recent) SJMN
series on pay in silicon valley ... basically job hopping played a
significant component ... if you had been with the same company more
than 2 yrs, you were underpaid ... but it didn't have a case where someone
nearly 20 yrs in the business was making 30% less than new hire offers
(much more egregious than any of the examples).

however, recently in the news there have been several silicon valley
companies convicted of salary fixing and agreements not to poach each
other's workers.

Google and Apple Settle Lawsuit Alleging Wage-Fixing
http://time.com/76655/google-apple-settle-wage-fixing-lawsuit/
Apple, Google Settle Wage-Fixing and Hiring Conspiracy Case
http://www.vanityfair.com/online/daily/2014/04/apple-google-settle-wage-fixing-hiring-case
Tech giants settle wage-fixing allegations for a reported $324M
http://nypost.com/2014/04/24/tech-giants-settle-wage-fixing-allegations-for-a-reported-324m/
Fixing a Salary Negotiation Mistake Before the Job Offer
http://www.salary.com/advice/layouthtmls/advl_display_Cat8_Ser202_Par304.html
Apple, others officially agree to $325M settlement in Silicon Valley
wage fixing case
http://appleinsider.com/articles/14/05/23/apple-others-officially-agree-to-325m-settlement-in-silicon-valley-wage-fixing-case
Pixar, LucasFilm, DreamWorks Animation In Alleged Wage-Fixing Cartel
To Boost Profit
http://nikkifinke.com/pixar-lucasfilm-dreamworks-animation-wage-fixing-conspiracy/
Tech giants lose round in wage-fixing suit
http://www.cnet.com/news/judge-denies-request-for-summary-judgment-in-tech-firm-wage-suit/

Dan Espen

Jul 20, 2014, 7:16:43 PM
Anne & Lynn Wheeler <ly...@garlic.com> writes:

> Dan Espen <des...@verizon.net> writes:
>> I went to a technical school to learn programming.
>> I did well in the school, earning an A and acing all the tests.
>>
>> Then my employer at the time sent me to HR to take the aptitude
>> test. They told me I didn't pass but asked me to take it again.
>> Then they told me I didn't pass again, but offered me a programming
>> position anyway.
>>
>> The thing is, I usually do well on those types of tests.
>>
>> Years later it occurred to me, maybe they were lying.
>> Maybe I did do well on the test. Maybe too well, leading
>> them to the second test.
>>
>> I'll never know. Either the test completely failed to
>> measure my ability, or maybe they were just having their way
>> with me.
>
> shortly after joining IBM ... I guess I started to be a problem ...
> during the "Future System" period, I refused to work on FS, continued
> to work on 370 and would even periodically ridicule FS

You weren't the problem. IBM management was the problem.
As history has proved, you were right all along.
Why you hung around was the mystery.
You had more sense than the fools that surrounded you.

> there were a few similar instances even before getting blamed for online
> computer conferencing. about the same time as the online computer
> conferencing flap

Hey, I got "blamed" for lots of things.
But I didn't quite look at it that way, especially
if whatever I did got used. So much work gets done
in spite of management, I look at it as just the way things
are done.

> ... I wrote an open door claiming that I was vastly
> underpaid, even including references. I got back written response from
> head of HR that my complete employment history had been reviewed and I
> was making exactly what I was supposed to. I then took my original and
> their response and wrote a response that I had been asked to interview
> new hires for a new group that would work under my technical direction
> and HR was making the new hires offers that were 30% more than I was
> currently making. I never got a response from HR ... but within a few
> weeks, I got a 30% raise ... aka it wasn't a 30% raise to put me at my
> correct salary level, it was a 30% raise to bring me up level with what
> they were offering the new hires. past refs:

Interesting.
Seems like they really did appreciate your skills.
But still I would have taken the raise and parlayed it into another
raise. The one that comes with a job change.

As a consultant, I once saved the client so much money that they called
up my company, asked them to come down, and told them what I'd done
and how much they appreciated it.

Real nice. Got me a 40% raise.

Unfortunately, the client's rate went up the same amount.
They held on to me, anyway.
I only left that account because I finished everything they
could think of.

A year later they finally figured out something new, so I went
back for another 6 months. One of my best assignments
on one of my favorite machines, the IBM System/34.

I and the team I worked with ran rings around the mainframe
they used in headquarters.

> periodically during my career, people would remind me that "business
> ethics" was an *oxymoron*.

Maybe so, but as an employee, and especially when I was consulting
I considered absolute honesty to be imperative.
And the consulting company I worked for backed me up every time.

--
Dan Espen

Anne & Lynn Wheeler

Jul 20, 2014, 8:43:49 PM
Dan Espen <des...@verizon.net> writes:
> Hey, I got "blamed" for lots of things.
> But I didn't quite look at it that way, especially
> if whatever I did got used. So much work gets done
> in spite of management, I look at it as just the way things
> are done.

re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test

one of the customers that I would drop in on (and got to know pretty
well, sit around and kibitz with the datacenter manager) had an enormously
huge football field of IBM mainframes ... maybe not like Renton or spook base
... but still pretty large. The local IBM branch manager had horribly
offended the customer ... and as "revenge" they were going to be the
first commercial true blue customer to install a clone processor (this
vendor had been selling into the education & scientific market ... but had
yet to break into the commercial market).

I got asked to spend 6 months on site at the customer account. The claim
was the branch manager was a good sailing buddy of the CEO ... and when
the customer is the first commercial account to install a clone processor
... it would ruin the branch manager's career. I was supposed to be there
for six months to make it look like it was a technical issue
(distracting any reflection on the branch manager) ... however I knew
from the customer that there wasn't going to be anything that stopped them
from installing the processor from the clone vendor (although it would be the
only one in a vast sea of true blue machines). I was told that if I
didn't do it, I could kiss goodbye to any career in the company.

One of the reasons I stayed was there were more toys than anywhere else
in the world. One of my hobbies was doing enhanced production operating
systems for internal datacenters ... and I could walk into almost any
corporate internal datacenter in the world and be allowed to play. I also
got to play disk engineer in bldgs. 14&15 ... or dozens of other things
... all below top executive radar.

past posts getting to play disk engineer
http://www.garlic.com/~lynn/sutopic.html#disk
one of my long time internal operating system customers was the world
wide online sales&marketing system HONE ... some past posts

FS was supposed to completely replace 370 ... and internal politics was
killing off 370 efforts ... which is credited with giving clone vendors
a market foothold
http://www.garlic.com/~lynn/submain.html#futuresys

in the wake of the FS failure, there was a mad rush to get products back into
the pipeline ... 3033 (168 remapped to 20% faster chips) and 3081 were
kicked off in parallel. A couple of us got the 3033 processor engineers
to work on a 16-way design in their spare time. Everybody in high-end
mainframe land (POK) thought it was really great until somebody told the
head of POK it could be decades before the POK favorite son operating
system had 16-way support. Then we were asked to never visit POK again
and the processor engineers were instructed to never get distracted
again. However, I could still sneak into POK and go bike riding with the
processor engineers.

recent posts mentioning the Renton datacenter ... at the time I was there
it had upwards of $300M of IBM mainframe equipment ...
http://www.garlic.com/~lynn/2014c.html#31 How many EBCDIC machines are still around?
http://www.garlic.com/~lynn/2014e.html#8 The IBM Strategy
http://www.garlic.com/~lynn/2014e.html#9 Boyd for Business & Innovation Conference
http://www.garlic.com/~lynn/2014e.html#19 The IBM Strategy
http://www.garlic.com/~lynn/2014e.html#23 Is there any MF shop using AWS service?
http://www.garlic.com/~lynn/2014f.html#80 IBM Sales Fall Again, Pressuring Rometty's Profit Goal
http://www.garlic.com/~lynn/2014g.html#57 Interesting and somewhat disturbing article about IBM in BusinessWeek. What is your opinion?
past posts mentioning the branch manager that horribly
offended one of his customers:
http://www.garlic.com/~lynn/2005f.html#63 Moving assembler programs above the line
http://www.garlic.com/~lynn/2007b.html#32 IBMLink 2000 Finding ESO levels
http://www.garlic.com/~lynn/2011c.html#19 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011c.html#28 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011d.html#12 I actually miss working at IBM
http://www.garlic.com/~lynn/2011l.html#19 Selectric Typewriter--50th Anniversary
http://www.garlic.com/~lynn/2011m.html#31 computer bootlaces
http://www.garlic.com/~lynn/2011m.html#61 JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel)
http://www.garlic.com/~lynn/2012f.html#21 Word Length
http://www.garlic.com/~lynn/2012k.html#8 International Business Marionette
http://www.garlic.com/~lynn/2013e.html#10 The Knowledge Economy Two Classes of Workers
http://www.garlic.com/~lynn/2013l.html#22 Teletypewriter Model 33

misc. recent posts mentioning 16-way
http://www.garlic.com/~lynn/2013h.html#6 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013h.html#14 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013m.html#70 architectures, was Open source software
http://www.garlic.com/~lynn/2013n.html#19 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
http://www.garlic.com/~lynn/2013n.html#59 'Free Unix!': The world-changing proclamation made30yearsagotoday
http://www.garlic.com/~lynn/2014d.html#59 Difference between MVS and z / OS systems
http://www.garlic.com/~lynn/2014e.html#11 Can the mainframe remain relevant in the cloud and mobile era?
http://www.garlic.com/~lynn/2014f.html#21 Complete 360 and 370 systems found
http://www.garlic.com/~lynn/2014h.html#6 Demonstrating Moore's law

jmfbahciv

Jul 21, 2014, 8:52:38 AM
Anne & Lynn Wheeler wrote:
> jmfbahciv <See....@aol.com> writes:
>> Did you ever find out what kinds of questions caused the low score?
>
> re:
> http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
>
> never did, but the IBMer doing interview was incredulous when I told
> him that I already had offer from science center
> http://www.garlic.com/~lynn/subtopic.html#545tech
>
> was fairly senior position ... and not entry level. possibly conjecture
> was the test was oriented to finding those that fit the "Man Month"
> profile:
> http://en.wikipedia.org/wiki/The_Mythical_Man-Month

There were psych questions in the test?

>
> recent refs:
> http://www.garlic.com/~lynn/2014g.html#99 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
> http://www.garlic.com/~lynn/2014h.html#103 TSO Test does not support 65-bit debugging?
> http://www.garlic.com/~lynn/2014i.html#5 "F[R]eebie" software
> http://www.garlic.com/~lynn/2014i.html#41 How Comp-Sci went from passing fad to must have major
>
> I have mentioned in the past being blamed for online computer
> conferencing on the internal network in the late 70s & early 80s
> (folklore is that when the executive committee was told about online
> computer conferencing & the internal network, 5of6 wanted to fire me).
> internal network posts
> http://www.garlic.com/~lynn/subnetwork.html#internalnet

Yeah. Nothing like innovation to scare them.

>
> I've also mentioned that somewhat as result of online computer
> conferencing, a researcher was assigned to study how I communicated.
> They sat in the back of my office for 9months, took notes on how I
> communicated, face-to-face, telephone, went with me to meetings and got
> copies of all my incoming & outgoing email as well as logs of all my
> instant messages (almost tempted to reference gov. eavesdropping). The
> result was a number of papers and at least one book as well as stanford
> PHD (joint with language and computer AI, winograd was advisor on
> computer AI side). some past posts
> http://www.garlic.com/~lynn/subnetwork.html#cmc
>
> the researcher had previously spent some time as an
> English-as-a-Second-Language instructor, and once commented that my
> use of English was characteristic of a non-native speaker.

Was the area where you grew up French-based?

/BAH

Anne & Lynn Wheeler

Jul 21, 2014, 9:17:05 AM

re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test

other archaeological tales

not long after graduating and joining the science center ... other
recent ref
http://www.garlic.com/~lynn/2014i.html#31 Speed of computers--wave equation for the copper atom? (curiosity)

the company hired a new CSO ... as was common in the period, commercial
CSOs came from gov. service, specializing in physical security (in this
case, head of the presidential detail). even tho I had relatively
recently started with the company, I was considered one of the most
knowledgeable on computer security and was asked to run around with the
new CSO, providing some detail about computer security (with a little
bit of physical security rubbing off on me) ... this was before the
incident involving the CEO's sailing buddy and the first install of a
clone processor in a true blue commercial account.

for other drift ... I didn't learn about these guys until later
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

for related drift ... recent post mentioning HSDT & link encryptors
http://www.garlic.com/~lynn/2014i.html#49 Sale receipt--obligatory

I really hated what I had to pay for T1 link encryptors (and it was
effectively near impossible to get anything faster) ... and got
involved in doing our own. The objective was under $100 to produce and
handling at least 3mbyte/sec (not 3mbit/sec). Initially the corporate
crypto group said it significantly reduced standard crypto strength. It
took me 3 months to figure out how to explain to them what was going on
(it significantly increased standard crypto strength). It was a hollow
victory ... I got told I could build as many as I wanted ... but they
all had to be shipped to an address in maryland (and I couldn't use
any). That was when I realized there were three kinds of crypto: 1) the
kind they don't care about, 2) the kind you can't do, 3) the kind you
can only do for them.

past posts mentioning the 3 kinds of crypto:
http://www.garlic.com/~lynn/2008h.html#87 New test attempt
http://www.garlic.com/~lynn/2008i.html#86 Own a piece of the crypto wars
http://www.garlic.com/~lynn/2009p.html#32 Getting Out Hard Drive in Real Old Computer
http://www.garlic.com/~lynn/2010i.html#27 Favourite computer history books?
http://www.garlic.com/~lynn/2010o.html#43 Internet Evolution - Part I: Encryption basics
http://www.garlic.com/~lynn/2010p.html#19 The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET
http://www.garlic.com/~lynn/2011g.html#20 TELSTAR satellite experiment
http://www.garlic.com/~lynn/2011g.html#60 Is the magic and romance killed by Windows (and Linux)?
http://www.garlic.com/~lynn/2011g.html#69 Is the magic and romance killed by Windows (and Linux)?
http://www.garlic.com/~lynn/2011h.html#0 We list every company in the world that has a mainframe computer
http://www.garlic.com/~lynn/2011n.html#63 ARPANET's coming out party: when the Internet first took center stage
http://www.garlic.com/~lynn/2011n.html#85 Key Escrow from a Safe Distance: Looking back at the Clipper Chip
http://www.garlic.com/~lynn/2012.html#63 Reject gmail
http://www.garlic.com/~lynn/2012i.html#70 Operating System, what is it?
http://www.garlic.com/~lynn/2012k.html#47 T-carrier
http://www.garlic.com/~lynn/2013d.html#1 IBM Mainframe (1980's) on You tube
http://www.garlic.com/~lynn/2013g.html#31 The Vindication of Barb
http://www.garlic.com/~lynn/2013i.html#69 The failure of cyber defence - the mindset is against it
http://www.garlic.com/~lynn/2013k.html#77 German infosec agency warns against Trusted Computing in Windows 8
http://www.garlic.com/~lynn/2013k.html#88 NSA and crytanalysis
http://www.garlic.com/~lynn/2013m.html#10 "NSA foils much internet encryption"
http://www.garlic.com/~lynn/2013o.html#50 Secret contract tied NSA and security industry pioneer
http://www.garlic.com/~lynn/2014.html#9 NSA seeks to build quantum computer that could crack most types of encryption
http://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
http://www.garlic.com/~lynn/2014e.html#25 Is there any MF shop using AWS service?
http://www.garlic.com/~lynn/2014e.html#27 TCP/IP Might Have Been Secure From the Start If Not For the NSA

Anne & Lynn Wheeler

Jul 21, 2014, 9:29:31 AM
jmfbahciv <See....@aol.com> writes:
> Was the area where you grew up French-based?

re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test

nope, but my mother says I was almost 3 before I talked

grey...@mail.com

Jul 21, 2014, 11:55:03 AM
On 2014-07-21, Anne & Lynn Wheeler <ly...@garlic.com> wrote:
> jmfbahciv <See....@aol.com> writes:
>> Was the area where you grew up French-based?
>
> re:
> http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
> http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
> http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
> http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
> http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
> http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
>
> nope, but my mother says I was almost 3 before I talked
>

And then never shut up. :)

I think of the young non-working husbands who I see in the supermarkets,
little girl standing in the basket, constant chatter, usually prefixed
by "Mammy buys that" ('That' is usually sweet, sticky, and most unlikely
to be bought by Mammy)


--
maus
.
.
...

Walter Bushell

Jul 21, 2014, 3:40:37 PM
In article <53cd2454...@news.eternal-september.org>,
gree...@gmail.com (greenaum) wrote:

> On Sun, 20 Jul 2014 09:52:25 -0400, Anne & Lynn Wheeler
> <ly...@garlic.com> sprachen:
>
> >I've also mentioned that somewhat as result of online computer
> >conferencing, a researcher was assigned to study how I communicated.
> >They sat in the back of my office for 9months, took notes on how I
> >communicated, face-to-face, telephone, went with me to meetings and got
> >copies of all my incoming & outgoing email as well as logs of all my
> >instant messages (almost tempted to reference gov. evesdropping). The
> >result was a number of papers and at least one book as well as stanford
> >PHD
>
> Yup, I remember when that came up, mentioning your unusual brain.
> Fortunately the right kind of unusual! It's believed by some that Lynn
> was born with a paper-tape reader on the back of his head.
>

I'm sure he's upgraded to a micro SD by now, or perhaps a wifi link.

Jon Elson

Jul 24, 2014, 4:52:05 PM
Dan Espen wrote:

> Anne & Lynn Wheeler <ly...@garlic.com> writes:
>
>> Dan Espen <des...@verizon.net> writes:

>> shortly after joining IBM ... I guess I started to be a problem ...
>> during the "Future System" period, I refused to work on FS, continued
>> to work on 370 and would even periodically ridicule FS
>
> You weren't the problem. IBM management was the problem.
> As history has proved, you were right all along.
The specs for FS were totally insane, for the technology available
at the time (Motorola 10K ECL or any equivalent). So, should
FS have been canceled as it could NEVER reach the goal, or kept
alive, as it would have been a very powerful machine? Was it
an all-out attempt to make a supercomputer which would sell maybe
less than a dozen units? Or, was it the basis of the next generation
of IBM mainframes?

The 370 series was a practical architecture, although the performance
of some of the lower models seems like it must have been intentionally
crippled so as not to interfere with the /15x and /16x machines.

Jon

Anne & Lynn Wheeler

Jul 24, 2014, 6:39:31 PM
Jon Elson <jme...@wustl.edu> writes:
> The specs for FS were totally insane, for the technology available
> at the time (Motorola 10K ECL or any equivalent). So, should
> FS have been canceled as it could NEVER reach the goal, or kept
> alive, as it would have been a very powerful machine? Was it
> an all-out attempt to make a supercomputer which would sell maybe
> less than a dozen units? Or, was it the basis of the next generation
> of IBM mainframes?
>
> The 370 series was a practical architecture, although the performance
> of some of the lower models seems like it must have been intentionally
> crippled to not interfere with the /15x and /16x machine.

re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test

FS specs had a lot of blue sky ideas ... for some of them there wasn't
even any idea of how they might be implemented. since it was supposed
to completely replace 370 ... internal politics during the period was
suspending and/or killing off 370 efforts. some past posts
http://www.garlic.com/~lynn/submain.html#futuresys

some other refs:

Discussion of old FS evaluation
http://www.jfsowa.com/computer/memo125.htm
FS description and discussion
http://people.cs.clemson.edu/~mark/fs.html
wiki entry
http://en.wikipedia.org/wiki/IBM_Future_Systems_project

FS design/architecture was divided into something like 13
sections/areas. My wife worked for head of one of the sections and had
some responsibility for dealing with other sections ... and was
repeatedly surprised/astounded by the lack of any substance backing up
some of their fantasies.

part of FS was a sort of object model with potentially five levels of
indirection (& storage access) ... aka a "hardware" ADD instruction
which would handle operands whether they were decimal, floating point,
integer, etc ... or not even the same type. one of the final nails in
the FS coffin was a study by the (IBM) Houston science center ... that
if an FS machine was made out of the fastest available hardware, and an
application from a 370/195 was moved over to it, it would have the
throughput of a 370/145 (about a factor of 30 slowdown).
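The factor-of-30 figure is plausible just from the storage-reference arithmetic. Here is a toy sketch (entirely hypothetical; the function name and descriptor layout are invented, not IBM's actual FS design) of resolving one operand through a descriptor chain, counting the extra storage accesses:

```python
# Each storage word is either a descriptor pointing at another word, or
# the actual data. Resolving an operand walks the chain, one storage
# reference per level of indirection.
def resolve_operand(memory, addr):
    refs = 0
    while True:
        word = memory[addr]
        refs += 1
        if word.get("indirect"):
            addr = word["target"]         # descriptor -> next descriptor
        else:
            return word["value"], refs    # the actual operand

# a 370-style direct operand costs 1 storage reference; the same operand
# behind two descriptors costs 3 -- per operand, per instruction
memory = {
    0x100: {"indirect": True, "target": 0x200},
    0x200: {"indirect": True, "target": 0x300},
    0x300: {"value": 42},
}
print(resolve_operand(memory, 0x100))     # (42, 3)
```

With up to five levels per operand and multiple operands per instruction, the microcode ends up doing many storage accesses for every one the application asked for.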

another feature was that it was to be a "single level store"
architecture ... somewhat carried over from tss/360. at the univ. I got
to play with cp67/cms on weekends and sometimes had to share the
machine with an IBM SE playing with TSS/360. At one point we did a
synthetic benchmark for Fortran edit, compile, link and execute. I got
better throughput and interactive response for 35 simulated users on
cp67/cms than he did for four simulated users on tss/360 (with the
exact same hardware). I've periodically claimed that a lot of what I
did for the cp67/cms paged-mapped filesystem in the early 70s took into
account "what not to do" from observing tss/360 (I could easily get
three times the native cp67/cms filesystem throughput). this
contributed to my periodically ridiculing the FS effort (while
continuing to work on 370 and then moving to vm370/cms during the FS
period). posts mentioning the cp67/cms paged-mapped filesystem
http://www.garlic.com/~lynn/submain.html#mmap
also part of recent discussion over in ibm-main
http://www.garlic.com/~lynn/2014i.html#66 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
http://www.garlic.com/~lynn/2014i.html#67 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
http://www.garlic.com/~lynn/2014i.html#68 z/OS physical memory usage with multiple copies of same load module at different virtual addresses

This goes into a major motivation for FS being a countermeasure to
clone controllers ... FS would have such tight integration between
processor and controllers that it would make it extremely difficult for
the clone makers to keep up (but much of the actual specification to
accomplish that was totally lacking)
that was totally lacking)
http://www.ecole.org/en/seances/CM07
other posts mentioning clone controller work
http://www.garlic.com/~lynn/subtopic.html#360pcm

A related subject is the end of ACS/360 (which also gets into tiered
processor performance)
http://people.cs.clemson.edu/~mark/acs_end.html

mentions that it was killed because management was afraid that it would
advance the state of the art too fast and they would lose control of
the market. at the end of the above, it goes into some acs/360 features
finally showing up more than 20yrs later in es/9000.

the person responsible left and started his own clone processor
company. accounts of the lack of 370 products during the FS period are
then credited with giving clone processors a market foothold. This
recent post (in this thread) mentions that it was initially with univ. &
scientific customers ... before breaking into the true blue commercial market.
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test

the folklore is that some of the FS people retreated to Rochester and
did the system/38 ... significantly simplifying a lot of the FS
features ... and not having to worry much about throughput in the
market they were selling to. For instance, one of the simplifications
was that they treated all connected disks as a common storage pool for
a single system filesystem (with any file potentially having scatter
allocation across all available disks). As a result, everything had to
be backed up as an integral whole. A common failure of the time was a
single disk failure ... but because of the common storage pool
paradigm, the one disk would be replaced ... and then a complete system
restore would be needed (could easily take 24hrs elapsed time).
http://en.wikipedia.org/wiki/IBM_System/38
and
http://www-03.ibm.com/ibm/history/exhibits/rochester/rochester_4009.html
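The failure arithmetic behind that is easy to demonstrate. A toy simulation (hypothetical numbers, not actual s/38 parameters): with blocks scattered uniformly across a pool of disks, almost every file ends up with at least one block on any given disk, so one failed drive damages essentially everything:

```python
import random

# Toy model: blocks of each file scattered uniformly across a disk pool,
# s/38-style common-storage-pool allocation.
random.seed(1)
NUM_DISKS, BLOCKS_PER_FILE, NUM_FILES = 8, 20, 50
files = {f: [random.randrange(NUM_DISKS) for _ in range(BLOCKS_PER_FILE)]
         for f in range(NUM_FILES)}

# one disk fails; any file with a block on it is damaged
failed_disk = 3
damaged = [f for f, blocks in files.items() if failed_disk in blocks]

# chance a 20-block file has *no* block on the failed disk: (7/8)**20 ~ 7%
print(f"{len(damaged)} of {NUM_FILES} files damaged")
```

Contrast with per-volume allocation, where losing one disk damages only the files on that disk and the rest can be salvaged.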

the followon was as/400 which was replacement for s/34, s/36 and s/38
(and dropped some of the s/38 FS features).
http://en.wikipedia.org/wiki/IBM_System_i

Quadibloc

Jul 24, 2014, 6:51:25 PM
On Thursday, July 24, 2014 4:39:31 PM UTC-6, Anne & Lynn Wheeler wrote:

> the folklore is that some of the FS people retreat to Rochester and do
> the system/38 ... significantly simplifying a lot of FS features ...
> and not having to worry about throughput in the market that they were
> selling to.

Well, the AS/400 and such did appear to include some of the features and philosophy associated with the Future System. So, while FS was too ambitious for its time, some of its basic ideas were sound enough to be worth keeping.

The IBM 360/85, despite performing well, thanks to cache, was a poor seller, but that didn't stop the 370/165 and the 3033 from being based on its microarchitecture.

It's wasteful to throw stuff away if it can still be used.

John Savard

Anne & Lynn Wheeler

Jul 24, 2014, 7:35:53 PM
Quadibloc <jsa...@ecn.ab.ca> writes:
> Well, the AS/400 and such did appear to include some of the features
> and philosophy associated with the Future System. So, while FS was too
> ambitious for its time, some of its basic ideas were sound enough to
> be worth keeping.
>
> The IBM 360/85, despite performing well, thanks to cache, was a poor
> seller, but that didn't stop the 370/165 and the 3033 from being based
> on its microarchitecture.
>
> It's wasteful to throw stuff away if it can still be used.

re:
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test

FS threw in nearly every idea from the computer & academic literature
of the period ... even when they had absolutely no idea what it meant
and/or how to implement it (there was little or nothing original with
FS). It is not surprising that some of it was eventually made to work
(on the other hand, lots of it would never work ... but they had little
idea how to differentiate the two; it goes way beyond "too ambitious
for its time")

165 to 168 was moving from 2mic memory to less than 1/2mic memory and
optimizing the microcode, reducing 370 instruction emulation from 2.1
machine cycles to 1.6 machine cycles per 370 instruction.
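As a back-of-envelope check on those numbers: holding the machine cycle time fixed, the microcode rework alone is worth about a 31% throughput gain (the memory speedup comes on top of that):

```python
# figures from the post above: 370 instruction emulation improved from
# 2.1 to 1.6 machine cycles per instruction
cpi_165, cpi_168 = 2.1, 1.6
speedup = cpi_165 / cpi_168
print(f"{speedup:.2f}x more instructions per second at the same cycle time")
```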

168-1 to 168-3 was doubling the cache size from 16kbytes to 32kbytes.

168-3 to 3033 started out as the 168-3 design mapped to 20% faster
chips ... some other stuff eventually got the 3033 up to 1.5 times the
168-3.

note that 3033 and 3081 were done concurrently, part of the mad rush
after the failure of Future System to get stuff back into the 370
product pipeline (using warmed-over 370 technology) ... more here (and
they compared poorly with clone processors):
http://www.jfsowa.com/computer/memo125.htm

note that all during this period the manufacturing cost for the 370/158
was at the knee of the (POK high-end) cost/performance curve. it was
one reason why the 370/158 engine was selected for the 303x channel
director.

however, the 4341, using even newer technology, came in at an even
better knee of the manufacturing cost/performance curve. while a 4341
was individually slower than a 3033, clusters of 4341s beat the 3033 on
every metric (aggregate performance, floor space, price/performance,
environmentals, etc). at one point the head of POK was so threatened by
the 4341 threat to the 3033 that he managed to get the allocation of a
critical 4341 manufacturing component cut in half.

clusters of 4341s beat the 3033 in datacenters ... as well as being the
leading edge of the distributed computing tsunami ... large
corporations were installing hundreds at a time out in departmental
areas (departmental conference rooms inside IBM became a scarce
commodity because of being taken over by 4341s).

old 4300 email
http://www.garlic.com/~lynn/lhwemail.html#43xx

i've frequently commented that John may have done 801/risc to be the
exact opposite of FS complexity ... including the FS high-level
abstraction with enormous processing required in the microcode below
the instruction interface (including a large number of storage
references to resolve each instruction operand; aka the reference above
to building an FS machine out of 370/195 technology resulting in a
factor of 30 slowdown). Now almost every production architecture is
either RISC, or CISC with a hardware-level layer that translates
instructions into RISC micro-ops for actual execution.

hanc...@bbs.cpcn.com

Jul 24, 2014, 9:48:41 PM
On Thursday, July 24, 2014 6:51:25 PM UTC-4, Quadibloc wrote:

> Well, the AS/400 and such did appear to include some of the features and philosophy associated with the Future System. So, while FS was too ambitious for its time, some of its basic ideas were sound enough to be worth keeping.

Having used the AS/400, I did not think much of the "single level store" concept. If the machine was lightly used it could work, but if the machine had heavy use performance was terrible because the single level store did not make efficient use of available resources. Kind of like the early days of virtual storage when the system would 'thrash' with too much paging to disk.

Ironically, _today_ in the Z world, we have _evolved_ to more of a single store world. This is because disk and core-memory have become so cheap that stuff that used to be put out to cheaper off line slow storage can now be affordably stored in high speed on line storage. Much of this is transparent to the application programmer, with the operating system or CICS automatically using fast resources when available.


As to "FS", the IBM System 360 history book has a lot of information on it.

On the surface, one could wonder why they didn't think more of "how" the whole thing was supposed to work with the technology available of the time; FS would require enormous overhead. But, in the early days of S/360, they weren't sure of how everything would work either, but eventually they got it all running (at a very high price in delays and sweat). So, maybe they figured FS would somehow work itself out, too.

hanc...@bbs.cpcn.com

Jul 24, 2014, 9:51:17 PM
On Thursday, July 24, 2014 4:52:05 PM UTC-4, Jon Elson wrote:

> The specs for FS were totally insane, for the technology available at the time (Motorola 10K ECL or any equivalent). So, should FS have been canceled as it could NEVER reach the goal, or kept alive, as it would have been a very powerful machine? Was it an all-out attempt to make a supercomputer which would sell maybe less than a dozen units? Or, was it the basis of the next generation of IBM mainframes?

FS was not a super-computer, but the basis for the next generation of IBM mainframes. It was to revolutionize the I.T. world as System/360 did.

Walter Bushell

Jul 25, 2014, 7:19:38 AM
In article <8c290ca3-d49b-4959...@googlegroups.com>,
Quadibloc <jsa...@ecn.ab.ca> wrote:

> Well, the AS/400 and such did appear to include some of the features and
> philosophy associated with the Future System. So, while FS was too ambitious
> for its time, some of its basic ideas were sound enough to be worth keeping.

"Parts of it were excellent."

--
Never attribute to stupidity that which can be explained by greed. Me.

Anne & Lynn Wheeler

Jul 25, 2014, 10:37:01 AM

re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#70 IBM Programmer Aptitude Test

related post this morning over in ibm-main
http://www.garlic.com/~lynn/2014i.html#71 z/OS physical memory usage with multiple copies of same load module at different virtual addresses

mentioning single-level-store (not just s/38) ... both tss/360 and this
multics reference
http://en.wikipedia.org/wiki/Multics

from above:

Multics implemented a single level store for data access, discarding the
clear distinction between files (called segments in Multics) and process
memory. The memory of a process consisted solely of segments which were
mapped into its address space. To read or write to them, the process
simply used normal CPU instructions, and the operating system took care
of making sure that all the modifications were saved to disk. In POSIX
terminology, it was as if every file was mmap()ed; however, in Multics
there was no concept of process memory, separate from the memory used to
hold mapped-in files, as Unix has. All memory in the system was part of
some segment, which appeared in the file system; this included the
temporary scratch memory of the process, its kernel stack, etc.

... snip ...
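The "as if every file was mmap()ed" analogy can be shown directly with the POSIX-style memory mapping that survives in today's systems (a minimal sketch, not Multics itself): the file is modified with a plain memory store, and the operating system gets the change back to disk:

```python
import mmap
import os
import tempfile

# create a small file to stand in for a "segment"
path = os.path.join(tempfile.mkdtemp(), "segment")
with open(path, "wb") as f:
    f.write(b"hello, segment")

# map it into the address space and update it with an ordinary memory
# store -- no explicit write() call
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as seg:
        seg[0:5] = b"HELLO"

with open(path, "rb") as f:
    print(f.read())                  # b'HELLO, segment'
```

The Multics difference, per the quote, is that *all* memory worked this way; there was no separate anonymous process memory at all.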

the s/38 common filesystem pool scaled poorly ... having to
save/restore all data as a single integral whole was barely tolerable
with a few disks ... but a large mainframe system with 300 disks would
require days for the operation.

other recent posts mentioning s/38
http://www.garlic.com/~lynn/2014b.html#11 Mac at 30: A love/hate relationship from the support front
http://www.garlic.com/~lynn/2014b.html#68 Salesmen--IBM and Coca Cola
http://www.garlic.com/~lynn/2014b.html#84 CPU time
http://www.garlic.com/~lynn/2014c.html#75 Bloat
http://www.garlic.com/~lynn/2014c.html#76 assembler
http://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
http://www.garlic.com/~lynn/2014e.html#48 Before the Internet: The golden age of online service
http://www.garlic.com/~lynn/2014e.html#50 The mainframe turns 50, or, why the IBM System/360 launch was the dawn of enterprise IT
http://www.garlic.com/~lynn/2014e.html#53 The mainframe turns 50, or, why the IBM System/360 launch was the dawn of enterprise IT
http://www.garlic.com/~lynn/2014g.html#96 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
http://www.garlic.com/~lynn/2014g.html#97 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
http://www.garlic.com/~lynn/2014i.html#9 With hindsight, what would you have done?

Dan Espen

Jul 25, 2014, 1:14:21 PM
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> the s/38 common filesystem pool scaled poorly ... just having to
> save/restore all data as single integral whole, was barely tolerable
> with a few disks ... but large mainframe system with 300 disks would
> require days for the operation.

You've said this many times.
Makes no sense to me at all.
When we backed up our data on an IBM System 34,
we backed up the application data.
It didn't matter at all what volume the data was on.

Trying to back up everything on disk would be
a huge waste of time besides being useless for an
actual restore. We needed to run our daily application
cycles to completion, then back up the application.
This would not necessarily take place all at once.
Each application got backed up.

--
Dan Espen

Shmuel Metz

Jul 25, 2014, 3:08:40 PM
In <lqu39d$fen$1...@dont-email.me>, on 07/25/2014
at 01:14 PM, Dan Espen <des...@verizon.net> said:

>Trying to back up everything on disk would be
>a huge waste of time besides being useless for an
>actual restore.

WTF? At NSF we routinely backed up everything on a regular basis, with
incremental backups in between.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to spam...@library.lspace.org

Dan Espen

Jul 25, 2014, 3:54:39 PM
Shmuel (Seymour J.) Metz <spam...@library.lspace.org.invalid> writes:

> In <lqu39d$fen$1...@dont-email.me>, on 07/25/2014
> at 01:14 PM, Dan Espen <des...@verizon.net> said:
>
>>Trying to back up everything on disk would be
>>a huge waste of time besides being useless for an
>>actual restore.
>
> WTF? At NSF we routinely backed up everything on a regular basis, with
> incremental backups in between.

Hey, I can only tell you what we did for backup on our System/34
systems.

And, it worked well.

Thinking some more, I suppose some mainframe applications
don't really have a daily cycle and backup point.

--
Dan Espen

Anne & Lynn Wheeler

Jul 25, 2014, 4:27:55 PM

Dan Espen <des...@verizon.net> writes:
> Trying to back up everything on disk would be a huge waste of time
> besides being useless for an actual restore. We needed to run our
> daily application cycles to completion, then back up the application.
> This would not necessarily take place all at once. Each application
> got backed up.

re:
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test

common failure mode was single disk failure. because of scatter
allocation ... all files could have pieces across all disks ... a
single disk failure resulted in impacting *ALL* files ... required
restoring everything from scratch just to get a running system ... all
system files and all user files (nothing could be salvaged from
non-failed disks since arbitrary file pieces would be missing).

guy that i sometimes worked with when I got to play disk engineer
over in bldgs 14/15
http://www.garlic.com/~lynn/subtopic.html#disk

filed original patent for raid in 1977
http://en.wikipedia.org/wiki/RAID

i never actually operated a s/38 ... but was told several times that
the operational restore problems for the s/38 with a single disk
failure were sufficiently traumatic that they motivated the s/38
shipping the first raid support.
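The core RAID idea being referenced is just XOR parity: keep one parity block per stripe, and any single lost block can be rebuilt from the survivors. A minimal sketch (illustration only, not the patented implementation):

```python
from functools import reduce

# three data blocks (one per "disk") plus an XOR parity block
data = [b"\x10\x20", b"\x0f\x0f", b"\xa5\x5a"]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data))

# the disk holding data[1] fails; XOR the survivors with parity to
# rebuild the lost block (x ^ x == 0, so the lost block drops out)
survivors = [data[0], data[2], parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
print(rebuilt == data[1])   # True
```

That is exactly the property the s/38 storage pool lacked: there, one failed disk meant nothing could be reconstructed and everything had to be restored.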

other posts in thread
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#70 IBM Programmer Aptitude Test

Dan Espen

unread,
Jul 25, 2014, 4:54:56 PM7/25/14
to
Anne & Lynn Wheeler <ly...@garlic.com> writes:

> Dan Espen <des...@verizon.net> writes:
>> Trying to back up everything on disk would be a huge waste of time
>> besides being useless for an actual restore. We needed to run our
>> daily application cycles to completion, then back up the application.
>> This would not necessarily take place all at once. Each application
>> got backed up.
>
> common failure mode was single disk failure. because of scatter
> allocation ... all files could have pieces across all disks ... a
> single disk failure resulted in impacting *ALL* files ... required
> restoring everything from scratch just to get a running system ... all
> system files and all user files (nothing could be salvaged from
> non-failed disks since arbitrary file pieces would be missing).

Oops, you're right.
I'm thinking too small.

I'm not sure how many hard disks there even were in the System/34
and our backup options were limited to magazines of diskettes.
I think a magazine held 10 of the 8 inch floppies.
We couldn't back up the whole system if we wanted to.
Well, we could but it would take forever and a bunch of
magazine changes.

Our CE once mentioned, we should take a look at the disk error
statistics on the system. After operating the system for years
the counters were still at zero.

He said all his systems were like that.

So, I guess we're destined to never see single level storage
on a large scale system. Even if AS/400 users manage to get by.

--
Dan Espen

Anne & Lynn Wheeler

unread,
Jul 25, 2014, 5:44:40 PM7/25/14
to

Dan Espen <des...@verizon.net> writes:
> So, I guess we're destined to never see single level storage
> on a large scale system. Even if AS/400 users manage to get by.

re:
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#73 IBM Programmer Aptitude Test

problem wasn't directly single level storage ... it was that s/38
simplified the infrastructure management by treating all disks as a
common allocation pool ... and just doing scatter allocation.

also RAID can go a long way to masking single disk failures.

vm370 spool had a similar/analogous problem ... doing scatter allocation
and treating all spool areas as common pool. this wasn't bad for early
configurations with spool on single disk ... but increasingly became a
problem as configurations scaled up. if any disk failed ... all spool
files were lost. vm370 spool had other issues, it had checkpoint for
clean shutdown ... allowing relatively fast restart. However, if it did
not have a clean shutdown, it required a warm restart ... which could require
30-60 minutes for large configuration ... and while vm370 would do
automatic restart in well under 5mins normally ... it waited on spool
being up before restart finished ... so system was unavailable during
long warm restart.

i've mentioned before that I had a throughput issue in HSDT
http://www.garlic.com/~lynn/subnetwork.html#hsdt
old hsdt email
http://www.garlic.com/~lynn/lhwemail.html#hsdt

with vm370 spool because vm370 RSCS/VNET used spool for storage. It used
a synchronous 4k (page) block read/write interface ... so was serialized
while it waiting for disk transfer. With other activity in system also
using spool system, RSCS/VNET might be limited to 5-8 4k block
transfer/sec (20k-30k/sec, something that might be ween with a couple
full-duplex 56kbit links). HSDT had multiple full-duplex T1 (and faster)
links (and while supporting TCP/IP, also ran RSCS/VNET) ... a
full-duplex T1 requires 300kbytes/sec sustained.

So for HSDT, i decided to rewrite spool to allow RSCS/VNET to get
upwards of 1mbyte/sec-3mbyte/sec spool sustained throughput. This
required asynchronous 4k block transfer interface ... with contiguous
allocation, multiple block transfers, write behind and read
aheads. Contiguous allocation had option for drive affinity (all blocks
on same disk). I also did mechanism so vm370 could be up and available
before spool file recovery was complete ... and warm start ran
enormously faster (in case of non-clean checkpoint). Also supported
moving all data off a target drive concurrent with the running live system
as part of taking the drive offline for maintenance, as well as adding
drives on the fly (somewhat akin to what was later done for some hardware
RAID subsystems as part of recovery).
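The read-ahead/write-behind idea in the paragraph above is generic. A minimal illustrative sketch (not the vm370 code, just the pattern) of a reader whose worker thread keeps the next few blocks of a contiguously allocated file in flight, so the consumer rarely stalls on "disk":

```python
import queue
import threading

class ReadAhead:
    """Toy read-ahead: a worker thread pre-fetches upcoming blocks
    so the consumer overlaps 'disk' transfer with processing."""
    def __init__(self, read_block, nblocks, depth=4):
        self.q = queue.Queue(maxsize=depth)   # depth = blocks kept in flight
        self.t = threading.Thread(
            target=self._fill, args=(read_block, nblocks), daemon=True)
        self.t.start()

    def _fill(self, read_block, nblocks):
        for i in range(nblocks):
            self.q.put(read_block(i))         # blocks when 'depth' are queued
        self.q.put(None)                      # end-of-file marker

    def __iter__(self):
        while (blk := self.q.get()) is not None:
            yield blk

blocks = [bytes([i]) * 4096 for i in range(8)]   # fake 4k spool blocks
ra = ReadAhead(lambda i: blocks[i], nblocks=len(blocks))
data = b"".join(ra)
print(len(data))  # 32768
```

Write-behind is the mirror image: the caller queues dirty blocks and continues, while the worker drains the queue to disk.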

this is old email trying to get the spool changes into the internal
network "backbone" nodes that were starting to have multiple 56kbit
links. however, at this time, the communication group was on a
misinformation campaign to convince the corporation to convert
the internal network to SNA (internal network meetings changed to
exclude technical people and only involve management)
http://www.garlic.com/~lynn/2011.html#email8703006

other old vnet/rscs email
http://www.garlic.com/~lynn/lhwemail.html#vnet

past internal network posts
http://www.garlic.com/~lynn/subnetwork.html#internalnet

I did majority of spool changes written in vs/pascal running in
virtual address spaces ... and with some sleight of hand programming
tricks ... pathlength ran faster than assembler code running as
part of kernel ... some past posts
http://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
http://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
http://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
http://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
http://www.garlic.com/~lynn/2004g.html#19 HERCULES
http://www.garlic.com/~lynn/2004p.html#3 History of C
http://www.garlic.com/~lynn/2005d.html#38 Thou shalt have no other gods before the ANSI C standard
http://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
http://www.garlic.com/~lynn/2006.html#35 Charging Time
http://www.garlic.com/~lynn/2007c.html#21 How many 36-bit Unix ports in the old days?
http://www.garlic.com/~lynn/2007g.html#45 The Complete April Fools' Day RFCs
http://www.garlic.com/~lynn/2008g.html#22 Was CMS multi-tasking?
http://www.garlic.com/~lynn/2009h.html#63 Operating Systems for Virtual Machines
http://www.garlic.com/~lynn/2009o.html#12 Calling ::routines in oorexx 4.0
http://www.garlic.com/~lynn/2010k.html#26 Was VM ever used as an exokernel?
http://www.garlic.com/~lynn/2010k.html#35 Was VM ever used as an exokernel?
http://www.garlic.com/~lynn/2011e.html#25 Multiple Virtual Memory
http://www.garlic.com/~lynn/2011e.html#29 Multiple Virtual Memory
http://www.garlic.com/~lynn/2012g.html#18 VM Workshop 2012
http://www.garlic.com/~lynn/2012g.html#23 VM Workshop 2012
http://www.garlic.com/~lynn/2012g.html#24 Co-existance of z/OS and z/VM on same DASD farm
http://www.garlic.com/~lynn/2013n.html#91 rebuild 1403 printer chain

Peter Flass

unread,
Jul 25, 2014, 6:50:45 PM7/25/14
to
I understand Multics had this problem originally, and they eventually
redesigned the filesystem to fix it.

--
Pete

Peter Flass

unread,
Jul 25, 2014, 6:50:46 PM7/25/14
to
IME it was often quicker to back up a whole pack with physical backup
rather than several datasets with logical backup.

--
Pete

Dan Espen

unread,
Jul 25, 2014, 7:44:20 PM7/25/14
to
Got me with that one:

IME - In My Experience

I'm still struggling to grasp all the implications.

Suffice it to say, my only experience with even worrying about
backup was with the System/34 system I designed/programmed/installed.

If your datasets are scattered all over multiple volumes,
you need to back up all the volumes to have something useful.
You can't very well restore one volume if anything has happened
on the other volumes.

z/OS now supports single datasets with extents on multiple
volumes. I guess you have to be careful how you do that.
That must complicate the process.

I hope I never have to deal with the issue.
Sounds like a nightmare.

--
Dan Espen

Anne & Lynn Wheeler

unread,
Jul 25, 2014, 7:51:40 PM7/25/14
to
Peter Flass <peter...@yahoo.com> writes:
> I understand Multics had this problem originally, and they eventually
> redesigned the filesystem to fix it.

re:
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#73 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#74 IBM Programmer Aptitude Test

reference to cp67/cms crashing & restarting 27 times in single day ...
because of the crash and auto system restart (tech sq ... but across the
courtyard from 545):
http://www.multicians.org/thvv/360-67.html

(It is a tribute to the CP/CMS recovery system that we could get 27
crashes in in a single day; recovery was fast and automatic, on the
order of 4-5 minutes. Multics was also crashing quite often at that
time, but each crash took an hour to recover because we salvaged the
entire file system. This unfavorable comparison was one reason that the
Multics team began development of the New Storage System.)

... snip ...

i had done ascii/tty terminal support as undergraduate in the 60s which
was picked up and distributed as part of standard release. I had done a
one byte arithmetic hack (since no terminals supported more than 255
length). Down the road, harvard got some kind of new tty device (i think
plotter) that supported line lengths longer than 255 ... USL did quick
hack to make the length something like 1200 (or more?) ... but didn't
fix the one byte arithmetic ... so lengths were incorrectly calculated
resulting in the crashes.
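The one-byte arithmetic problem above is easy to demonstrate: a length held in a single byte silently wraps modulo 256, so the hack was fine for terminals of up to 255 columns but mis-measures the longer lines (illustrative sketch only):

```python
# A line length stored in one byte keeps only the low 8 bits.
def one_byte_length(n):
    return n & 0xFF

print(one_byte_length(132))   # 132 -- correct for terminals up to 255 columns
print(one_byte_length(1200))  # 176 -- a 1200-byte line is silently truncated
```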


Multics had problem with both salvaging filesystem after crash
(something like unix fsck or vm370 spool warm start w/o checkpoint
start) ... as well as scatter allocation
http://www.multicians.org/nss.html

In the initial design of the Multics file system, disk addresses were
assigned in increasing order, as if all the drives of a given device
type made up one big disk. We didn't think a lot about this approach, it
was just the easiest. One consequence of this address policy was that
files tended to have their pages stored on multiple disk drives, and all
drives were utilized about equally on average.

... snip ...

Peter Flass

unread,
Jul 26, 2014, 7:23:50 AM7/26/14
to
Dan Espen <des...@verizon.net> wrote:
> Peter Flass <peter...@yahoo.com> writes:
>
>> Dan Espen <des...@verizon.net> wrote:
>>> Anne & Lynn Wheeler <ly...@garlic.com> writes:
>>>> the s/38 common filesystem pool scaled poorly ... just having to
>>>> save/restore all data as single integral whole, was barely tolerable
>>>> with a few disks ... but large mainframe system with 300 disks would
>>>> require days for the operation.
>>>
>>> You've said this many times.
>>> Makes no sense to me at all.
>>> When we backed up our data on an IBM System 34,
>>> we backed up the application data.
>>> It didn't matter at all what volume the data was on.
>>>
>>> Trying to back up everything on disk would be
>>> a huge waste of time besides being useless for an
>>> actual restore. We needed to run our daily application
>>> cycles to completion, then back up the application.
>>> This would not necessarily take place all at once.
>>> Each application got backed up.
>>
>> IME it was often quicker to back up a whole pack with physical backup
>> rather than several datasets with logical backup.
>
> Got me with that one:
>
> IME - In My Experience
>
> I'm still struggling to grasp all the implications.

Physical copy copied a whole cylinder at a time, minimized seek time, and
wrote a single file to tape. Logical copy had to do a VTOC lookup for each
dataset to be copied, seek to the start, and then read each extent in order
with seeks for each; it usually wrote a separate tape file for each dataset
backed up.

>
> Suffice it to say, my only experience with even worrying about
> backup was with the System/34 system I designed/programmed/installed.
>
> If you're datasets are scattered all over multiple volumes,
> you need to back up all the volumes to have something useful.
> You can't very well restore one volume if anything has happened
> on the other volumes.
>
> z/OS now supports single datasets with extents on multiple
> volumes. I guess you have to be careful how you do that.
> That must complicate the process.
>
> I hope I never have to deal with the issue.
> Sounds like a nightmare.

Normally you'd define a separate volume group (forget the correct term) for
various groups of datasets, in some logical organization, so you wouldn't
have a single dataset spread all over your DASD farm but maybe over two or
three packs.

--
Pete

jmfbahciv

unread,
Jul 26, 2014, 8:39:06 AM7/26/14
to
There are two kinds of backups: file-based backups and physical
disk backups. The one you designed was file-based and specific
to a "user". Operations, which needed to babysit an entire
system, had to have some way of backing up the system in case
the fit hit the shan. Note that a file system rarely crashed
but one physical disk did.

/BAH

Anne & Lynn Wheeler

unread,
Jul 26, 2014, 10:18:53 AM7/26/14
to
Dan Espen <des...@verizon.net> writes:
> z/OS now supports single datasets with extents on multiple
> volumes. I guess you have to be careful how you do that.
> That must complicate the process.

z/OS has an enormous list of issues.

it still only supports ckd disks ... which haven't been manufactured for
decades ... all being simulated on large (fixed-block) disk subsystems
that make extensive use of virtual volumes and raid (with hardware raid
responsible for masking single disk failures). the virtual simulated
3390 data organization may have little to do with the actual physical
layout on real disks.

the ckd disks simulated are some flavor of 3390 with some sleight of hand
that supports max. size that tends to be small multiples of real 3390s
(but enormously smaller than the real disks being used) ... 3390
3gb, 9gb, 27gb, and 54gb.

recent 3390 "model A" ... DS8000 release 4 LIC, configuration supports
3390 devices between 1 to 268,434,453 (simulated) 3390 cylinders (max
225tb), z/os v1r10 & v1r11 only supports up to 262,668 max 3390
(simulated) cylinders (223gb).

ds8870 ref
http://www-03.ibm.com/systems/storage/disk/ds8000/specifications.html

risc power7 processors, max. 1tbyte memory, up to 3,072TB disk
(supporting a variety of real industry standard disks).

I've posted recently about z196 max i/o benchmark that used 104 FICONS
to some number of (presumably ds8000) disk subsystems that got 2M
IOPS. FICON is a heavy-weight mainframe channel emulation layer built on
industry standard fibre channel that enormously reduces the throughput
of native FCS. About the same time as the z196 benchmark
there was announcement of FCS for e5-2600 claiming over million IOPS
(two such FCS would have higher throughput than 104 FICON). z196 has
other issues, the claim is that max. i/o instruction SSCH/sec is 2.2M
with all system support processors (SSPs) running 100% cpu utilization
... but the recommendation for normal operation that SSPs utilization be
kept to 70% or less (1.5M SSCH/sec). posts mentioning FICON
http://www.garlic.com/~lynn/submisc.html#ficon

I haven't seen any published benchmarks for the current ec12 ... but
ec12 announcement material said it would have only 30% higher i/o
throughput than z196.

posts in thread:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#70 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#73 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#74 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#76 IBM Programmer Aptitude Test

Anne & Lynn Wheeler

unread,
Jul 26, 2014, 11:23:17 AM7/26/14
to
Peter Flass <peter...@yahoo.com> writes:
> IME it was often quicker to back up a whole pack with physical backup
> rather than several datasets with logical backup.

recent reference to having done cmsback in the late 70s
for internal installations
http://www.garlic.com/~lynn/2014i.html#58 How Comp-Sci went from passing fad to must have major
some old cmsback email
http://www.garlic.com/~lynn/lhwemail.html#cmsback
and some past posts
http://www.garlic.com/~lynn/submain.html#backup

it went through a couple internal releases and then had support for
client platforms ... and released as workstation datasave facility
(WDSF).

it did incremental new/changed file backup. internally it started out
being used for people that had accidentally erased/corrupted a file or
wanted an earlier version of a file. it then started being used to
reduce nightly full pack/drive backups to once a week. recovery from a
single disk failure would restore the most recent full pack/drive backup
and then restore the more recent incremental new/changed files for the
same disk.
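The new/changed-file selection behind that incremental scheme is simple to sketch (a modern, purely illustrative analogue -- not the CMSBACK code; names invented):

```python
import os
import tempfile
import time

def incremental_backup(root, last_backup_time):
    """Select files under root modified since the previous backup pass --
    the new/changed-file criterion an incremental backup tool applies."""
    changed = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_backup_time:
                changed.append(path)
    return sorted(changed)

# demo: one stale file (predates the last backup), one fresh file
root = tempfile.mkdtemp()
stale = os.path.join(root, "stale.dat")
fresh = os.path.join(root, "fresh.dat")
for p in (stale, fresh):
    with open(p, "w") as f:
        f.write("x")
os.utime(stale, (0, 0))           # pretend it hasn't changed since 1970
cutoff = time.time() - 3600       # time of the previous backup pass
result = incremental_backup(root, cutoff)
print(result)                     # only fresh.dat is selected
```

Restore then replays the latest full backup plus the incrementals taken since, as the post describes.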

it then morphed into ADSM (adstar storage manager) during period where
the disk division was reorganized and rebranded in preparation for
spinning off into separate company. gerstner was then brought in ... and
he reversed the breakup
http://www.garlic.com/~lynn/submisc.html#gerstner

but then later sold-off the disk division anyway ... at which time some
amount of the disk division software was kept and moved into different
organization ... ADSM morphing into TSM

posts in this thread:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#70 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#73 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#74 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#76 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#78 IBM Programmer Aptitude Test

jmfbahciv

unread,
Jul 27, 2014, 9:19:01 AM7/27/14
to
Morten Reistad wrote:
> In article <PM0004FF1...@aca32507.ipt.aol.com>,
> There is a third level. The i-node level, introduced by unix(?[1])
> and embedded in posix and NFS.
>
> Every file, directory etc. is an i-node. They have a number on
> the device they reside on, and have some kind of type and may
> have data content (or it may not, e.g. a soft link).
>
> dump & restore handles these kinds of backups. They are an intermediate
> level between a file system (tar, cpio etc) backup and a physical
> level backup. They have advantages of both. Individual files can
> be restored, and it restores the whole file system as it was, with
> all the magic hard and soft links exactly as they were. (Whereas
> tar/cpio etc. will restore two hard links to the same file as
> two files, and will mangle some device nodes. Some versions also
> save symbolic (not absolute, numeric) user&group names, which
> may differ on a restored system.)
>
> dump also does not save unused blocks, nor does restore need
> to load them. The save is by i-node sequence, which is generally
> quite linear on the disk, at least sufficiently to matter
> substantially on the backup, and especially the restore times.
>
> You will have to build the file system first, and the restored
> file system must be upwardly compatible with the saved one.
> I.e. ext2->ext4 will work.
>
> The i-node file system design is kind of hard to wrap your head
> around, but once you have you wonder why the other ones don't do
> this.

What other OSes had soft links between files?
>
> -- mrr
>
> [1] was this really introduced by unix? It is the first instance
> I can find, but there may be others, pre ca 1972 instances.

The inode is almost equivalent to the files which were the xxx.UFD,
xxx.MFD or xxx. SFD functionality on TOPS-10.

/BAH

jmfbahciv

unread,
Jul 28, 2014, 8:42:12 AM7/28/14
to
Morten Reistad wrote:
> In article <PM0004FF2...@ac8107c9.ipt.aol.com>,
> jmfbahciv <See....@aol.com> wrote:
>>Morten Reistad wrote:
>>> In article <PM0004FF1...@aca32507.ipt.aol.com>,
>>> jmfbahciv <See....@aol.com> wrote:
>>>>Dan Espen wrote:
>>>>> Peter Flass <peter...@yahoo.com> writes:
>>>>>
>>>>>> Dan Espen <des...@verizon.net> wrote:
>
>>> The i-node file system design is kind of hard to wrap your head
>>> around, but once you have you wonder why the other ones don't do
>>> this.
>>
>>What other OSes had soft links between files?
>
> In 1972? I don't know. (btw, the soft link was a later add-on, the
> first i-node system only had the hard link)

The only things I can think of which had soft-links between files
is something like ISAM or .HGH and .LOW file pairs. GETSEGs were
set up in the code on TOPS-10; the info wasn't in the file directory
entry block.
>
>>> -- mrr
>>>
>>> [1] was this really introduced by unix? It is the first instance
>>> I can find, but there may be others, pre ca 1972 instances.
>>
>>The inode is almost equivalent to the files which were the xxx.UFD,
>>xxx.MFD or xxx. SFD functionality on TOPS-10.
>
> Which tells me that you haven't understood the i-node design. It
> is about layering the formatted storage and the file system as
> distinct entities on top of each other. The tops10 file system
> never had any such layering. Nor did tops20, multics[1], or
> any of the early OSes. I don't know about the classics like ctss.


I was thinking about where the data to soft link files would have
to be stored and saved. Have you ever opened a xxx.?FD file?
The data in that "file" would also have to be saved during a
BACKUP which didn't mirror the physical disk. I'm NOT talking
about the subsequent functionality which the OS and users use.


> [1] unless you count the segments and segdirs as a middle layer.
>

hanc...@bbs.cpcn.com

unread,
Jul 28, 2014, 10:34:40 AM7/28/14
to
On Thursday, July 24, 2014 9:48:41 PM UTC-4, hanc...@bbs.cpcn.com wrote:
> As to "FS", the IBM System 360 history book has a lot of information on it.

I checked the book last night and I strongly recommend it to anyone interested in FS. It describes the technical and marketing environment that inspired the initial FS research and then the large investment. It also describes in detail the layered approach of the FS architecture, something not well understood by the players and often changing, and the reasons for termination. There is too much detail to summarize here.

FS was killed because, in essence, they recognized (1) the advances in technology--cheap memory and powerful CPUs--were not advancing fast enough to make FS practical; (2) demand for S/370 products was stronger than expected, (3) 360-370 became the de-facto standard architecture for the industry, and (4) FS was extremely complicated and completion was seen too far away.




Anne & Lynn Wheeler

unread,
Jul 28, 2014, 11:50:22 AM7/28/14
to
hanc...@bbs.cpcn.com writes:
> FS was killed because, in essence, they recognized (1) the advances in
> technology--cheap memory and powerful CPUs--were not advancing fast
> enough to make FS practical; (2) demand for S/370 products was
> stronger than expected, (3) 360-370 became the de-facto standard
> architecture for the industry, and (4) FS was extremely complicated
> and completion was seen too far away.

re:
http://www.garlic.com/~lynn/submain.html#futuresys

possibly spun as favorably as possible ... modulo:

1) some amount of it hadn't even been specified ... just some high-level
ideas and then "where's the beef" ... many areas were possibly years
away from finding whether they were even practical (as opposed to simply
lacking sufficiently advanced technology)

2) (ibm houston science center) simulation that showed a 370/195
application run on a FS machine made out of the same technology as
370/195, would have throughput of 370/145 (30 times slow-down). could
only be marketed to much less throughput sensitive market ... like s/38
(which wasn't even a 370 market).

3) FS internal politics were killing off 370 product activity, then the
lack of 370 products gave the 370 clone vendors a market foothold
(killing off internal competition left the market wide-open to external
competition)

4) acs-end describes executives killing off acs/360 because it would
advance the computer state of the art too fast and IBM would loose
control of the market (also mentions features from acs/360 not showing
up until more than 20yrs later in es/9000)
http://people.cs.clemson.edu/~mark/acs_end.html

combination would imply that they wanted enormous advances in cheap
memory and powerful CPUs ... but not necessarily available to users.

some more here:
http://www.jfsowa.com/computer/memo125.htm

including some discussion of Brooks "Mythical Man Month" ... and 3081
(370) in the 80s being made out of warmed over FS technology ... only
three times faster than 168 ... but required so much hardware that 16
168s could have been built (could build sixteen 168s for the same cost
of 3081 ... and have five times the throughput).

there was something similar earlier in the late 70s with 3033 and 4341
... 3033 also using warmed over FS technology. multiple 4341s had
aggregate higher throughput, lower cost, better price/performance,
smaller sq ft and smaller environmental footprint (in the
datacenters). they were also the leading edge of the distributed
computing tsunami ... large corporations buying hundreds at a time and
putting out in departmental areas.

Shmuel Metz

unread,
Jul 28, 2014, 7:51:07 AM7/28/14
to
In <lquq4n$c81$1...@dont-email.me>, on 07/25/2014
at 07:44 PM, Dan Espen <des...@verizon.net> said:

>z/OS now supports single datasets with extents on multiple volumes.

Now? Multivolume data sets have been around since OS/360. Are you
perhaps thinking of striped data sets?

>That must complicate the process.

To some extent.

>Sounds like a nightmare.

Not nearly as much as using floppies for backup. What's the emoticon
for runs away shrieking in disgust and terror?

Shmuel Metz

unread,
Jul 27, 2014, 10:11:59 PM7/27/14
to
In <lqug70$dpn$1...@dont-email.me>, on 07/25/2014
at 04:54 PM, Dan Espen <des...@verizon.net> said:

>our backup options were limited to magazines of diskettes.

That explains a lot; mainframe backups were on tape, and one reel[1]
held as much as hundreds of 8" floppies. If we had to use floppies
then backup would have been a nightmare.

>So, I guess we're destined to never see single level storage on a
>large scale system.

Perhaps not, but I don't see backups as being an obstacle.

[1] The later cartridges, of course, held more.

Dan Espen

unread,
Jul 29, 2014, 12:18:40 AM7/29/14
to
Shmuel (Seymour J.) Metz <spam...@library.lspace.org.invalid> writes:

> In <lquq4n$c81$1...@dont-email.me>, on 07/25/2014
> at 07:44 PM, Dan Espen <des...@verizon.net> said:
>
>>z/OS now supports single datasets with extents on multiple volumes.
>
> Now? Multivolume data sets have been around since OS/360. Are you
> perhaps thinking of striped data sets?

Yes I am.
Sorry again.
I guess I should look things up again before I post.

Extended format datasets are always multi-volume,
so I'm at least near the ballpark.

>>That must complicate the process.
>
> To some extent.

I sure wouldn't want to deal with it.

Just took a look in ISMF.
I see a storage class called striped but
I don't see the link to a volume group.

How many volume groups would a site need?

>>Sounds like a nightmare.
>
> Not nearly as much as using floppies for backup.

Those magazines were a thing of beauty.
To be fair, we never had an I/O problem with the magazines.
You slide in your diskettes and leave them in so you don't
have to handle the floppies.

At our main site we had to back up all the application data
for 3 other System/34 plants running the same application.

We needed the space on all 10 diskettes at the main site.

> What's the emoticon
> for runs away shrieking in disgust and terror?

:( --!--?--*-->

--
Dan Espen

Dan Espen

unread,
Jul 29, 2014, 12:33:52 AM7/29/14
to
Shmuel (Seymour J.) Metz <spam...@library.lspace.org.invalid> writes:

> In <lqug70$dpn$1...@dont-email.me>, on 07/25/2014
> at 04:54 PM, Dan Espen <des...@verizon.net> said:
>
>>our backup options were limited to magazines of diskettes.
>
> That explains a lot; mainframe backups were on tape, and one reel[1]
> held as much as hundreds of 8" floppies. If we had to use floppies
> then backup would have been a nightmare.

All a question of how much data you need to back up.
I remember now, the Sys/34 had 2 magazine slots.
So we put 20 diskettes into the machine, by loading
2 magazines.

You could allocate with the control language using an ID like
M1.03 (magazine 1, diskette 3). When I left, we had all 20
diskettes in use, but not more than 50% full.

--
Dan Espen

Peter Flass

unread,
Jul 29, 2014, 6:09:52 AM7/29/14
to
Shmuel (Seymour J.) Metz <spam...@library.lspace.org.invalid> wrote:
> In <lquq4n$c81$1...@dont-email.me>, on 07/25/2014
> at 07:44 PM, Dan Espen <des...@verizon.net> said:
>
>> z/OS now supports single datasets with extents on multiple volumes.
>
> Now? Multivolume data sets have been around since OS/360. Are you
> perhaps thinking of striped data sets?
>
>> That must complicate the process.
>
> To some extent.
>
>> Sounds like a nightmare.
>
> Not nearly as much as using floppies for backup. What's the emoticon
> for runs away shrieking in disgust and terror?

I forget what machine we were talking about, presumably not the AS/400, but
it would seem like there would have been a tape drive available that
someone didn't want to pay for.

--
Pete

Anne & Lynn Wheeler

unread,
Jul 29, 2014, 8:41:43 AM7/29/14
to
Dan Espen <des...@verizon.net> writes:
> All a question of how much data you need to back up.
> I remember now, the Sys/34 had 2 magazine slots.
> So we put 20 diskettes into the machine, by loading
> 2 magazines.
>
> You could allocate with the control language using an ID like
> M1.03 (magazine 1, diskette 3). When I left, we had all 20
> diskettes in use, but not more than 50% full.

ibm invented 8in floppies to use for loading microcode for 3830 disk
control unit ... then were used in various 370 models for loading
microcode for processors.
http://en.wikipedia.org/wiki/History_of_the_floppy_disk

a string of eight 3330 drives, eight removable 100mbyte disks, upgraded to
double capacity 200mbytes/disk (808 tracks, up from 404 tracks)
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3330.html

a string of eight 3330s could connect directly to a 3830 controller or
to a string switch ... and a string switch could connect to two
different 3830s controllers.

3830 had two channel interface, allowing connecting to two different
370s concurrently.

using string switch, it was possible to access a 3330 from up to four
different 370s. it was also possible to add a 2nd two channel interface
to the 3830, allowing connection to up to eight 370s
http://bitsavers.trailing-edge.com/pdf/ibm/dasd/3330/GA26-1592-5_Reference_Manual_for_IBM_3830_Storage_Control_Model_1_and_IBM_3330_Disk_Storage_Nov76.pdf
and over on wayback machine
https://archive.org/details/bitsavers_ibm38xx383efApr72_6929160

these were removable disks ... so an installation might have a much larger
number of (200mbyte) disks than there were drives.

IBM also did the 3850, which had some number of 3330 drives connected to an
automated cartridge library that could move data back and forth between
cartridge and disk
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3850.html
which could have up to 4720 tape cartridges
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3850b.html


there were a number of complexes like lockheed dialog ... an early
online system circa 1980 that had 300 drives that were connected to two
different 370 processors at a datacenter in silicon valley
http://www.historyofinformation.com/expanded.php?id=1069
http://en.wikipedia.org/wiki/Roger_K._Summit
http://en.wikipedia.org/wiki/Dialog_%28online_database%29

online before the internet
http://www.infotoday.com/searcher/jun03/ardito_bjorner.shtml
Roger Summit
http://www.infotoday.com/searcher/oct03/SummitWeb.shtml

also (dialog sold to proquest and old URLs gone 404)
https://web.archive.org/web/20140327061241/http://dialog.com/about/history/
and
http://web.archive.org/web/20121011155818/http://support.dialog.com/publications/chronolog/200206/1020628.shtml


past posts mentioning dialog
http://www.garlic.com/~lynn/99.html#150 Q: S/390 on PowerPC?
http://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
http://www.garlic.com/~lynn/2001m.html#51 Author seeks help - net in 1981
http://www.garlic.com/~lynn/2002g.html#3 Why are Mainframe Computers really still in use at all?
http://www.garlic.com/~lynn/2002h.html#0 Search for Joseph A. Fisher VLSI Publication (1981)
http://www.garlic.com/~lynn/2002l.html#53 10 choices that were critical to the Net's success
http://www.garlic.com/~lynn/2002l.html#61 10 choices that were critical to the Net's success
http://www.garlic.com/~lynn/2002m.html#52 Microsoft's innovations [was:the rtf format]
http://www.garlic.com/~lynn/2002n.html#67 Mainframe Spreadsheets - 1980's History
http://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
http://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode
http://www.garlic.com/~lynn/2006n.html#55 The very first text editor
http://www.garlic.com/~lynn/2007k.html#60 3350 failures
http://www.garlic.com/~lynn/2009m.html#88 Continous Systems Modelling Package
http://www.garlic.com/~lynn/2009q.html#24 Old datasearches
http://www.garlic.com/~lynn/2009q.html#44 Old datasearches
http://www.garlic.com/~lynn/2009q.html#46 Old datasearches
http://www.garlic.com/~lynn/2009r.html#34 70 Years of ATM Innovation
http://www.garlic.com/~lynn/2010j.html#55 Article says mainframe most cost-efficient platform
http://www.garlic.com/~lynn/2011j.html#47 Graph of total world disk space over time?
http://www.garlic.com/~lynn/2014e.html#39 Before the Internet: The golden age of online services

Dan Espen

unread,
Jul 29, 2014, 9:08:01 AM7/29/14
to
IBM System/34.

Don't think so. I don't remember having that option.
This page doesn't mention tape:

http://en.wikipedia.org/wiki/IBM_System/34

--
Dan Espen

Anne & Lynn Wheeler

unread,
Jul 29, 2014, 9:30:15 AM7/29/14
to
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> ibm invented 8in floppies to use for loading microcode for 3830 disk
> control unit ... then were used in various 370 models for loading
> microcode for processors.
> http://en.wikipedia.org/wiki/History_of_the_floppy_disk

re:
http://www.garlic.com/~lynn/2014i.html#90 IBM Programmer Aptitude Test

development for the 3880 disk controller had some problems. the
microcode development system was this large application running on MVS
system with limited turn-around. the idea was to get the microcode
development system ported off MVS to vm370/cms and moved out to 4341s in
the departmental areas ... eliminating the datacenter bottleneck.

the other bottleneck was that there was a limited number of floppy disk
writers. the floppy disk drives in disk controllers were purely
read/only ... the solution was to get some number of floppy r/w drives
to go along with the port of the development system to vm370/cms to
significantly improve development turn-around and productivity.

old email referencing moving MDB/MDS from MVS to vm370/cms and getting
r/w floppy drives
http://www.garlic.com/~lynn/2006v.html#email791010c
in this post
http://www.garlic.com/~lynn/2006v.html#17 Ranking of non-IBM mainframe builders

and this email about getting MDB/MDS moved to vm370/cms
http://www.garlic.com/~lynn/2006p.html#email810128
in this post
http://www.garlic.com/~lynn/2006p.html#40 25th Anniversary of the Personal Computer

other old 4341 email
http://www.garlic.com/~lynn/lhwemail.html#4341

when I transferred to San Jose Research, they let me wander around other
locations in the San Jose area, one of the places was the disk
engineering lab. at the time they were doing development testing using
dedicated, stand-alone mainframe processing time, prescheduled 7x24
around the clock. At one time they had attempted to use MVS for
concurrent testing, however in that environment MVS had 15min
mean-time-between-failure ... requiring manual restart of MVS. I offered
to redo i/o supervisor to make it bullet proof and never fail
... allowing any number of on-demand, concurrent testing (greatly
improving productivity). after that they would periodically drag me in
to look at other issues. past posts getting to play disk engineer in
bldgs. 14&15
http://www.garlic.com/~lynn/subtopic.html#disk

later as getting close to ship 3880 disk controllers ... field engineering
had regression testing of 57 typically expected tests ... old email ref
http://www.garlic.com/~lynn/2007.html#email801015
in this post
http://www.garlic.com/~lynn/2007.html#2 The Elements of Programming Style

MVS was still failing (requiring manual restart) for all 57 cases and in
2/3rds of cases, after restart there was no indication of what caused
the failure.

other posts in this thread:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#70 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#73 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#74 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#76 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#78 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#79 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#87 IBM Programmer Aptitude Test

Peter Flass

unread,
Jul 29, 2014, 12:09:33 PM7/29/14
to

Dan Espen

unread,
Jul 29, 2014, 12:30:11 PM7/29/14
to
Interesting.

Reading the fine print, it attached to the communications port.
Probably used Bi-Sync. (All our systems communicated over
Bi-Sync.) You'd have to write your own backup programs but
I'd guess Mitron had some software.
We used RPG for Bi-Sync stuff. Pretty simple.

It wasn't until System/36 that tapes showed up according to
Wikipedia.

--
Dan Espen

Shmuel Metz

unread,
Jul 29, 2014, 10:36:17 AM7/29/14
to
In <m3d2cop...@garlic.com>, on 07/29/2014
at 08:41 AM, Anne & Lynn Wheeler <ly...@garlic.com> said:

>ibm invented 8in floppies to use for loading microcode for 3830 disk
>control unit ... then were used in various 370 models for loading
>microcode for processors.

Also used for running diagnostics, at least on the 3155.

Shmuel Metz

unread,
Jul 29, 2014, 10:31:30 AM7/29/14
to
In
<1063584717428320977.257...@news.eternal-september.org>,
on 07/29/2014
at 10:09 AM, Peter Flass <peter...@yahoo.com> said:

>I forget what machine we were talking about,

S/34, which I believe is a successor to the S/3. Certainly on the
small side, but the idea of relying on floppies for backup leaves me
glad that it's not my dog.

Shmuel Metz

unread,
Jul 29, 2014, 10:28:22 AM7/29/14
to
In <lr77b2$tej$1...@dont-email.me>, on 07/29/2014
at 12:18 AM, Dan Espen <des...@verizon.net> said:

>I see where a storage class called striped

Isn't that in the data class?

>How many volume groups would a site need?

There is no "one size fits all."

Dan Espen

unread,
Jul 29, 2014, 8:24:34 PM7/29/14
to
Shmuel (Seymour J.) Metz <spam...@library.lspace.org.invalid> writes:

> In
> <1063584717428320977.257...@news.eternal-september.org>,
> on 07/29/2014
> at 10:09 AM, Peter Flass <peter...@yahoo.com> said:
>
>>I forget what machine we were talking about,
>
> S/34, which I believe is a successor to the S/3. Certainly on the
> small side, but the idea of relying on floppies for backup leaves me
> glad that it's not my dog.

Yep, S/3 then S/32 then S/34.

In fact the application I developed was started by someone else
on a S/32. The machine was totally inadequate to the task and
was replaced before I started. An S/32 looks like a desk with
a tiny little screen on it.

--
Dan Espen

Dan Espen

unread,
Jul 29, 2014, 8:29:13 PM7/29/14
to
Shmuel (Seymour J.) Metz <spam...@library.lspace.org.invalid> writes:

> In <lr77b2$tej$1...@dont-email.me>, on 07/29/2014
> at 12:18 AM, Dan Espen <des...@verizon.net> said:
>
>>I see where a storage class called striped
>
> Isn't that in the data class?

No, I found it in "storage" class in ISMF.
I'm not actually sure our site uses any striped datasets.
We're a software development shop. We only set
up what our customers use, and sometimes not even then.
We don't run any "production".

>>How many volume groups would a site need?
>
> There is no "one size fits all."

Sure, but if I'm trying to squeeze every ounce of throughput
out of a system, I'd want my striped datasets on as many volumes
as possible, but I wouldn't want volume copy to get overly
complicated.

--
Dan Espen

Charlie Gibbs

unread,
Jul 31, 2014, 10:27:56 AM7/31/14
to
On 2014-07-28, Shmuel Metz <spam...@library.lspace.org.invalid> wrote:

> In <lqug70$dpn$1...@dont-email.me>, on 07/25/2014
> at 04:54 PM, Dan Espen <des...@verizon.net> said:
>
>> our backup options were limited to magazines of diskettes.
>
> That explains a lot; mainframe backups were on tape, and one reel[1]
> held as much as hundreds of 8" floppies. If we had to use floppies
> then backup would have been a nightmare.

Sperry pushed for floppy-based software distribution on their
OS/3-based System 80 line. Yes, it was a nightmare, stuffing
45 floppies into the drive (even if we had the auto-feeding
drive, which we lovingly referred to as the "autocruncher").
It was even worse if you had a read error on disk 26, when
the update procedure offered no retry option. Not that that
would help if the error was, as you found out a couple of
hours later, unrecoverable. Once or twice we took a copy
of the bad floppy from another customer's set, even though
that was theoretically a firing offence.

--
/~\ cgi...@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!

hanc...@bbs.cpcn.com

unread,
Jul 31, 2014, 10:47:53 AM7/31/14
to
On Thursday, July 31, 2014 10:27:56 AM UTC-4, Charlie Gibbs wrote:

> Once or twice we took a copy
> of the bad floppy from another customer's set, even though
> that was theoretically a firing offence.

Why was that an offense? That was standard procedure in industry. It wasn't stealing a license or anything, just making a copy of something legitimate for a legitimate purpose.


Charlie Gibbs

unread,
Jul 31, 2014, 1:20:52 PM7/31/14
to
True, but this was at the time when people were first getting really
weird about software piracy. If we were to toe the line from HQ,
we'd be telling customers they'd be dead in the water for several
days while a new official copy was cut and delivered. This was
clearly unacceptable, so "don't ask, don't tell" became the
watchword in the trenches.

Osmium

unread,
Aug 3, 2014, 11:50:47 AM8/3/14
to
"Osmium" wrote:

> I seriously doubt that anyone other than a "futurist" ever thought about
> "tracks per inch" very much, it's a kind of stupid metric. In short,
> speculation does not make it so.

PS. I left out a very important point. One does not need any credentials
to make a web page or to be cited on the web.


Peter Flass

unread,
Aug 3, 2014, 3:39:26 PM8/3/14
to
That's both positive and negative. On the one hand anyone with something
worthwhile to say can get "published" without going thru an entrenched
bureaucracy and/or censorship. On the other hand, any crackpot can have
his ideas put on a par with recognized experts.

--
Pete

Shmuel Metz

unread,
Aug 3, 2014, 3:50:37 PM8/3/14
to
In <lrlibh$vpc$1...@dont-email.me>, on 08/03/2014
at 02:52 PM, c...@bobby.df.lth.se (Christian Brunschen) said:

>Tape track density appears to be measured in TPI, Tracks per Inch;

Water is wet; the open-reel tape drives had track densities on the
order of 18/inch, not 1600. 1600 was a common bit density for a few
years, until 6250 BPI came along.

>Tape track density appears to be measured in TPI, Tracks per Inch;
>bit density in BPI, Bits per Inch. Have a look for instance at
><http://www.wtec.org/loyola/hdmem/final/ch4.pdf> .

That document shows a track density of 750 tpi, quite short of
the claimed 1600 TPI. To say nothing of the fact that it is three
decades later than the tapes in question.

>Also for instance this description of a "1600 TPI" tape reader,
><http://www.computinghistory.org.uk/det/16968/Digi-Data-1600-tpi-tape-reader/>

It's common for promotional literature to have substantial errors;
consult wikipedia (<http://en.wikipedia.org/wiki/9_track_tape>) or the
manual for any 9-track PE tape drive, e.g.,
<http://bitsavers.org/pdf/ibm/28xx/2803_2804/A22-6866-4_2400_Tape_Unit_2803_2804_Tape_Controls_Component_Description_Sep68.pdf>,
and see for yourself.

Osmium

unread,
Aug 3, 2014, 4:32:00 PM8/3/14
to
"Christian Brunschen" wrote:


> And while it may not be particularly correct, the term "tpi" has been
> seen in use (perhaps used somewhat interchangeably with "bpi")
> elsewhere as well (see for instance
> <http://www.sunhelp.org/pipermail/rescue/2004-April/101947.html>).
>
> I'm not saying it's a correct term; just that a cursory internet
> search indicates that the term seems to have been in use in the same
> way that Morten was using it.

OK, that was short enough to read. I think you are confusing typing errors
and brain farts with what is going on. This guy clearly speaks of
nine-track and 6250 tpi. Now I happen to know that there was, in fact, a
common bit density of 6250 bpi used with nine-track tapes. Wouldn't it be
an odd coincidence if there were also a 6250 TPI nine-track tape?
Especially since 6250 is so much greater than 9? Most people, when pressed,
would probably say the nine-track tape was 18 TPI (tracks per inch).

Anyone who has not had a brain fart before hitting send is, almost by
definition, a newbie. If I can get down to five a day or so, it has been a
very good day.


Anne & Lynn Wheeler

unread,
Aug 3, 2014, 11:18:34 PM8/3/14
to
Shmuel (Seymour J.) Metz <spam...@library.lspace.org.invalid> writes:
> Except that they weren't contemporaneous. A fair comparison is 2014
> nonrotating storage to 2014 tapes.

recent thread about SONY new tape over in ibm-main
http://www.garlic.com/~lynn/2014f.html#64 non-IBM: SONY new tape storage - 185 Terabytes on a tape
http://www.garlic.com/~lynn/2014f.html#65 non-IBM: SONY new tape storage - 185 Terabytes on a tape
http://www.garlic.com/~lynn/2014g.html#16 non-IBM: SONY new tape storage - 185 Terabytes on a tape
http://www.garlic.com/~lynn/2014g.html#75 non-IBM: SONY new tape storage - 185 Terabytes on a tape
http://www.garlic.com/~lynn/2014g.html#79 non-IBM: SONY new tape storage - 185 Terabytes on a tape

part of the issue in the thread was the transfer rates of the new tape
generation, which was not announced for ibm mainframe (potentially because
the transfer rates were too high).

news items
http://www.extremetech.com/computing/181560-sony-develops-tech-for-185tb-tapes-3700-times-more-storage-than-a-blu-ray-disc
http://www.gizmag.com/sony-185-tb-magnetic-tape-storage/31910/
http://www.latimes.com/business/technology/la-fi-tn-sony-185-tb-cassette-tape-storage-record-20140505-story.html

posts in this thread:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#70 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#73 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#74 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#76 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#78 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#79 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#87 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#90 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#91 IBM Programmer Aptitude Test

--
virtualization experience starting Jan1968, online at home since Mar1970

stoat

unread,
Aug 4, 2014, 1:56:25 AM8/4/14
to
On 4/08/14 8:32 am, Osmium wrote:
> "Christian Brunschen" wrote:
>
>
>> And while it may not be particularly correct, the term "tpi" has been
>> seen in use (perhaps used somewhat interchangeably with "bpi")
>> elsewhere as well (see for instance
>> <http://www.sunhelp.org/pipermail/rescue/2004-April/101947.html>).
>>
>> I'm not saying it's a correct term; just that a cursory internet
>> search indicates that the term seems to have been in use in the same
>> way that Morten was using it.

I suspect that tpi actually stands for transitions per inch, referring to
the encoding, and would more-or-less be equivalent to bits per inch.
>
> OK, that was short enough to read. I think you are confusing typing errors
> and brain farts with what is going on. This guy clearly speaks of
> nine-track and 6250 tpi. Now I happen to know that there was, in fact, a
> common bit density of 6250 bpi used with nine-track tapes. Wouldn't it be
> an odd coincidence if there were also a 6250 TPI nine-track tape?
> Especially since 6250 is so much greater than 9? Most people, when pressed,
> would probably say the nine-track tape was 18 TPI (tracks per inch).
>
> Anyone who has not had a brain fart before hitting send is, almost by
> definition, a newbie. If I can get down to five day or so, it has been a
> very good day.
>
>

--brian

--
Wellington
New Zealand

Ahem A Rivet's Shot

unread,
Aug 4, 2014, 4:13:54 AM8/4/14
to
On Sun, 03 Aug 2014 16:02:08 -0400
Shmuel (Seymour J.) Metz <spam...@library.lspace.org.invalid> wrote:

> I don't have one. USB data keys are much more convenient. But my 8 GB
> and 16 GB data keys hold a lot less than a 180 GB tape.

Those are old data keys; the biggest USB key I've seen to date is a
1TB USB 3.0 device. It's expensive, though, at over €800. OTOH 128GB keys can
be had down to €35 (there's an enormous variation in price - some go for
over €100 for 128GB). I've started to see 128GB micro SD cards too, which I
think are the highest density data storage currently available, much higher
density than 160GB tapes (although the tapes do have the edge in price).

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

Rob Warnock

unread,
Aug 4, 2014, 9:48:49 AM8/4/14
to
stoat <fa...@fake.org> wrote:
+---------------
| I suspect that tpi actually stands for transitions per inch, referring to
| the encoding, and would more-or-less be equivalent to bits per inch.
+---------------

Indeed. ISTR that early 7- & 9-track tapes were "556 BPI" and "800 BPI"
NRZI (Non-Return to Zero, Inverting), but when density on 9-track tapes
started going up vendors used fancier encodings -- PE (Phase Encoding),
GCR (Group Code Recording), etc. -- and the terminology changed from using
"BPI" to "TPI" (as you suggest, meaning [magnetic flux] Transitions Per
Inch) and "FCI" (Flux Changes per Inch). IIRC, 1600 TPI used PE and
6250 FCI used GCR.

By using proper encoding (PE, RLL, GCR) that did not mandate flux changes
or transitions to occur on regular bit boundaries, it was possible to
encode a higher effective "user data bits per inch" than a naive count
of flux changes or transitions per inch would suggest, which may be why
they dropped the "BPI" usage.
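The relationship Rob describes between flux-change density and user-data density can be sketched roughly. The code rates below are the commonly cited ones (NRZI ~1/1, PE worst case 1/2, GCR 4/5); real formats such as 6250 GCR add ECC and resync groups that push the actual flux rate higher still, so treat this as an approximation:

```python
# Approximate user-data density from flux-change density for the
# encodings mentioned above, assuming (worst case) one flux change
# per code bit and ignoring ECC/resync overhead.

def user_bpi(flux_changes_per_inch, data_bits, code_bits):
    """User data bits per inch for a rate data_bits/code_bits code."""
    return flux_changes_per_inch * data_bits / code_bits

# NRZI: at most one flux change per data bit -> effectively rate 1/1
print(user_bpi(800, 1, 1))     # 800 bpi NRZI

# PE: up to two transitions per data bit -> rate 1/2 worst case
print(user_bpi(3200, 1, 2))    # 1600 bpi PE

# GCR: 4 data bits encoded in 5 code bits -> rate 4/5
print(user_bpi(7812.5, 4, 5))  # 6250 bpi GCR (before ECC overhead)
```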


-Rob

-----
Rob Warnock <rp...@rpw3.org>
627 26th Avenue <http://rpw3.org/>
San Mateo, CA 94403

Shmuel Metz

unread,
Aug 4, 2014, 11:39:25 AM8/4/14
to
In <20140804091354.9545...@eircom.net>, on 08/04/2014
at 09:13 AM, Ahem A Rivet's Shot <ste...@eircom.net> said:

> Those are old data keys,

Yes; as I wrote elsewhere, my local Microcenter is giving them out
free. I wouldn't buy one with less than (nominal) 32 GB, and it won't
be long before the price drops enough that I won't buy anything
smal;le than 64.

>the biggest USB key I've seen to date is a
>1TB USB 3.0 device it's expensive though at over €800,

A lot more expensive than a 4 TB tape.

Anne & Lynn Wheeler

unread,
Aug 4, 2014, 1:48:19 PM8/4/14
to
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> the s/38 common filesystem pool scaled poorly ... just having to
> save/restore all data as single integral whole, was barely tolerable
> with a few disks ... but large mainframe system with 300 disks would
> require days for the operation.

re:
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test

one of the issues with IBM channels was big disk farms. total channel
run lengths were restricted to 200ft to support 3330 800kbyte/sec transfer
(which included a hand-shake for every byte transferred).

to move to the 3880 disk controller, they went to "data streaming", which
supports multiple byte transfers per hand-shake ... this allowed extending
the maximum channel length to 400ft and 3mbyte/sec transfer rates (however,
the slower processor in the 3880 significantly increased latency for
command & control processing operations compared to the 3830 controller).

some of the big datacenters would have the processor in the middle of the
room with a 200ft channel radius out in every direction ... about 125k sq ft
of area for the disk farm. some datacenters were constrained enough that they
started doing devices on multiple floors arrayed around the processor.

big datacenters also tended to have multiple processors in
"loosely-coupled" configuration ... 3330 disks could connect to two
different 3830 controllers with string switch and each 3830 controller
could have four channel interfaces (allowing disk to be access by eight
different channels/processors). center of disk farm would then be a
circle (rather than point) ... with overlapping radius ... limiting
max. disk farm physical area for connectivity to all processors.

3880 & datastreaming channel then extended the radius to 400ft (channel
run) or about 502k sq ft. area (twice the radius, four times the area)
containing disk farm. however, disk data density also went way up
... enormously increasing the total amount of data in some of these old
mainframe datacenters.
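The area figures above follow directly from the channel-run radius; a quick back-of-the-envelope check:

```python
import math

def farm_area_sqft(channel_run_ft):
    """Upper bound on disk-farm floor area with the processor at the
    center and devices anywhere within one channel run."""
    return math.pi * channel_run_ft ** 2

print(round(farm_area_sqft(200)))  # ~125,664 sq ft (the "about 125k")
print(round(farm_area_sqft(400)))  # ~502,655 sq ft (twice the radius,
                                   #  four times the area)
```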

one of issues I've periodically mentioned doing channel extender and
fiber channel standard ... was moving the i/o program out to the remote
end to eliminate the end-to-end latency operations ... everything could
be continuously streamed, concurrently in both directions .... getting
aggregate, sustained data transfer much closer to media transfer rate.

recent posts mentioning 3830/3880 disk controllers:
http://www.garlic.com/~lynn/2014c.html#88 Optimization, CPU time, and related issues
http://www.garlic.com/~lynn/2014d.html#90 Enterprise Cobol 5.1
http://www.garlic.com/~lynn/2014i.html#68 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
http://www.garlic.com/~lynn/2014i.html#90 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#91 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#96 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
http://www.garlic.com/~lynn/2014i.html#97 The SDS 92, its place in history?
http://www.garlic.com/~lynn/2014j.html#17 The SDS 92, its place in history?

posts mentioning FICON
http://www.garlic.com/~lynn/submisc.html#ficon

posts mentioning channel extender
http://www.garlic.com/~lynn/submisc.html#channel.extender

Charlie Gibbs

unread,
Aug 4, 2014, 4:03:35 PM8/4/14
to
On 2014-08-04, stoat <fa...@fake.org> wrote:

> On 4/08/14 8:32 am, Osmium wrote:
>
>> "Christian Brunschen" wrote:
>>
>>> And while it may not be particularly correct, the term "tpi" has been
>>> seen in use (perhaps used somewhat interchangeably with "bpi")
>>> elsewhere as well (see for instance
>>> <http://www.sunhelp.org/pipermail/rescue/2004-April/101947.html>).
>>>
>>> I'm not saying it's a correct term; just that a cursory internet
>>> search indicates that the term seems to have been in use in the same
>>> way that Morten was using it.
>
> I suspect that tpi actually stands for transitions per inch,referring to
> the encoding, and would more-or-less be equivalent to bits per inch.

That makes more sense than many of the things that have appeared in
this thread so far. On the other hand, it could be another of those
erroneous expressions that have unfortunately gained traction, like
"9600-baud modem" or "DB-9 serial connector".

Alan Bowler

unread,
Oct 23, 2014, 5:32:42 PM10/23/14
to
On 2014-07-25 6:50 PM, Peter Flass wrote:
>
> IME it was often quicker to back up a whole pack with physical backup
> rather than several datasets with logical backup.

True, if you're backing up everything, AND the pack is near full,
then physical (image) backups of the pack are faster than
a full logical (by file) backup.

Some drawbacks:

1) it takes even less time to do occasional full backups, with
more frequent incremental backups of what has changed.
2) Restoring selected files is difficult to impossible from a tape
image of a full disk. In our experience, at Waterloo,
individual files needed to be restored because a user
fumble fingered something far more often than because
of disk failure
3) Hardware and software failures can and do cause damage
to file system structures that proceed to propagate, and
cause more damage (e.g. dual allocations).
If your image backup was done after the initial damage,
but before it was noticed, the restored file system
will again fall apart.
4) Image backups generally require that the pack(s),
and usually the system, be offline during the backup
so you don't grab an image of file system
structures that are inconsistent (partially updated).
Logical backups can be done on a running system.
5) Logical backups can be restored to other disk
configurations.

Peter Flass

unread,
Oct 24, 2014, 7:29:57 AM10/24/14
to
Alan Bowler <atbo...@thinkage.ca> wrote:
> On 2014-07-25 6:50 PM, Peter Flass wrote:
>>
>> IME it was often quicker to back up a whole pack with physical backup
>> rather than several datasets with logical backup.
>
> True, if you're backing up everything, AND the pack is near full,
> then physical (image) backups of the pack are faster than
> a full logical (by file) backup.
>
> Some drawbacks:
>
> 1) it takes even less time to do occasional full backups, with
> more frequent incremental backups of what has changed.

This takes longer to restore a whole pack, as you have to process a number
of tapes, although as you say in point 2 this is less frequent than
restoring individual files.

> 2) Restoring selected files is difficult to impossible from a tape
> image of a full disk. In our experience, at Waterloo,
> individual files needed to be restored because a user
> fumble fingered something far more often than because
> of disk failure

> 3) Hardware and software failures can and do cause damage
> to file system structures that proceed to propagate, and
> cause more damage (e.g. dual allocations).
> If your image backup was done after the initial damage,
> but before it was noticed, the restored file system
> will again fall apart.

I never observed this on z/OS, because "file structures" are much simpler.

> 4) Image backups generally require that the pack(s),
> and usually the system, be offline during the backup
> so you don't grab an image of file system
> structures that are inconsistent (partially updated).
> Logical backups can be done on a running system.

IBM fixed this with "snapshot." The file is marked for backup and
subsequent writes go to alternate locations. You can then back up the
original file whenever you want, then release the hold and flush the
old versions of the tracks that were updated.
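The copy-on-write idea described above can be sketched as a toy model; this is purely illustrative and not how DFSMS actually implements snapshot:

```python
# Toy copy-on-write "snapshot": while a snapshot is held, writes go to
# a side map and the backup reads the frozen original; releasing the
# snapshot folds the updated tracks back in. Illustrative only.

class SnapshotVolume:
    def __init__(self, tracks):
        self.tracks = dict(tracks)   # original data
        self.pending = None          # side map while a snapshot is held

    def begin_snapshot(self):
        self.pending = {}

    def write(self, track, data):
        target = self.pending if self.pending is not None else self.tracks
        target[track] = data

    def read_snapshot(self, track):
        return self.tracks[track]    # backup sees pre-snapshot contents

    def release_snapshot(self):
        self.tracks.update(self.pending)  # flush the updated tracks
        self.pending = None

vol = SnapshotVolume({1: "old"})
vol.begin_snapshot()
vol.write(1, "new")
print(vol.read_snapshot(1))   # old  (backup still sees the original)
vol.release_snapshot()
print(vol.tracks[1])          # new
```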

> 5) Logical backups can be restored to other disk
> configurations.


--
Pete

Charles Richmond

unread,
Oct 24, 2014, 4:37:20 PM10/24/14
to
"Peter Flass" <peter...@yahoo.com> wrote in message
news:181839362435841638.6721...@news.eternal-september.org...
> Alan Bowler <atbo...@thinkage.ca> wrote:
>> On 2014-07-25 6:50 PM, Peter Flass wrote:
>>>
>>> IME it was often quicker to back up a whole pack with physical backup
>>> rather than several datasets with logical backup.
>>
>> True, if your backing up everything, AND is the pack is near full,
>> then physical (image) backups of the pack are faster than
>> a full logical (by file) backup.
>>
>> Some drawbacks:
>>
>> 1) it is even less time to do occasional full backups, with
>> more frequent incremental backups of what has changed.
>
> This takes longer to restore a whole pack, as you have to process a number
> of tapes, although as you say in point 2 this is less frequent than
> restoring individual files.
>

Once at a PPoE, I accidentally deleted the source file of my FORTRAN77
program. Unfortunately, I had typed the whole thing in earlier in the day,
so there was *no* tape backup of the file. Fortunately, I had saved the
compiler listing file. So I just wrote a little program that would edit the
crap off the compiler listing and re-create my source file. That beat the
heck out of typing the whole source again!!!
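The kind of one-off filter described above might look like this; the 8-column line-number prefix is a guess at one common listing layout, not the actual compiler's format:

```python
# Strip the line-number column a compiler listing adds, keeping only
# lines that look like numbered source lines. The prefix width is an
# assumed listing layout, not the actual PPoE compiler's format.

def listing_to_source(listing_lines, prefix_width=8):
    source = []
    for line in listing_lines:
        if line[:prefix_width].strip().isdigit():
            source.append(line[prefix_width:].rstrip())
    return source

listing = [
    "FORTRAN77 COMPILER  PAGE 1",
    "       1      PROGRAM HELLO",
    "       2      PRINT *, 'HELLO'",
    "       3      END",
]
print("\n".join(listing_to_source(listing)))
```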

--

numerist at aquaporin4 dot com

Peter Flass

unread,
Oct 25, 2014, 8:05:57 AM10/25/14
to
BTDT.

--
Pete

linl...@gmail.com

unread,
May 5, 2019, 11:49:43 PM5/5/19
to
I took what was likely a similar test, as part of a COBOL class I was taking at a Junior College, and was told that I did fairly well.
I took the IBM Aptitude Test twice, in 1969, for 2 different companies, and got virtually the same score the 2nd time as I did the first time.
The description is correct, 75 questions, 3 parts, math, word problems, and picture series.
I believe my score was 69, which, I'm told, was very good (but I've nothing to compare it to).
As far as I know, one has to go to an IBM site to have the test administered, and it's not available anywhere else.

Peter Flass

unread,
May 6, 2019, 2:44:02 PM5/6/19
to
Do they still offer it? Programmer aptitude tests were all the rage for a
while in the late 60s and early 70s, but then it was decided they weren’t
that predictive of success in the real world. I can’t recall now if I ever
took one.

--
Pete