If IBM Hadn't Bet the Company

Quadibloc

Feb 5, 2011, 7:02:26 PM
The original IBM 8000 design is on Bitsavers, and has been for some
time.

I took a look at it. The floating-point add-on used a floating-point
format similar to that of the STRETCH, and it had its own instruction
set, different from that of the main computer.

Since the IBM 1401 was IBM's largest-selling computer, presumably the
way for IBM to avoid "betting the company" would have been to devise
an upgraded 1401 that would have replaced all of IBM's _other_ product
lines, except for the 1401. Could that even be conceivable?

In taking another look at the 1401 architecture, I found out something
I hadn't noticed. The 1410 as well as the 7010 could run 1401
programs, but they had to be in a compatibility mode to do so.

Normally, while the 1401 used three characters for the address fields
in an instruction, the 1410 and the 7010 used five characters for the
address fields. The 1401 used the zone bits in those characters to
address more than 1,000 characters of memory; the 1410 and 7010 used
zone bits in two of the five characters of an address to indicate the
use of an index register.
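
A quick Python sketch of the scheme being described. The digit bits of
the three characters give 0-999; the zone bits supply the rest. The
zone-to-thousands assignment below is illustrative only, not the exact
table from the 1401 manual:

def decode_1401_address(chars):
    # chars: (digit, zone) pairs for the hundreds, tens and units
    # positions; zone is 0..3 (no zone, A, B, B+A)
    (h_digit, h_zone), (t_digit, t_zone), (u_digit, u_zone) = chars
    base = h_digit * 100 + t_digit * 10 + u_digit   # plain 0..999 part
    # Assumed: hundreds-position zones add thousands 0..3, units-position
    # zones add +0/+4/+8/+12 thousand.  (On a real 1401 the tens-position
    # zones select an index register; they are ignored here.)
    return base + (h_zone + 4 * u_zone) * 1000      # 0..15,999

print(decode_1401_address([(9, 3), (9, 0), (9, 3)]))  # 15999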

Since Chen-Ho encoding hadn't been invented back then, the fact that 5
times 7 is 35, and so putting five 1401 characters in a single 7090
word wouldn't be too inefficient, doesn't seem to lend itself to
convenient exploitation.

Of course, *two* isn't that large a number. If IBM simply produced
upwards-compatible extensions of the 1401/1410/7010 lineups, and the
704/709/7040/7044/7090/7094 lineups, and dropped the 705/7080 and
7070/7074 lineups, wouldn't that have also worked?

For the 1410/7010 architecture, the zone bits in the three remaining
digits of the address could be used to change maximum storage from
100,000 characters to 6,400,000 characters.
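
Spelled out in Python, that is just the factor-of-64 arithmetic:

# five decimal characters: 10**5 = 100,000 positions; two zone bits over
# each of the three remaining characters add 2*3 = 6 bits, a factor of 64
print(10**5 * 2**(2 * 3))   # 6400000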

For the 7094 architecture, the 15-bit address field could have been
changed to contain a 3-bit base register field and a 12-bit
displacement, giving addressing capabilities not unlike those of the
IBM 360.
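
A minimal sketch of that split (Python; the register numbers and
contents here are hypothetical, just to show the mechanics):

def effective_address(field15, base_regs):
    # interpret a 15-bit field as 3-bit base + 12-bit displacement
    base = (field15 >> 12) & 0x7    # high 3 bits pick one of 8 base registers
    disp = field15 & 0xFFF          # low 12 bits, 0..4095, as on the 360
    return base_regs[base] + disp

regs = [0, 0x1000, 0, 0, 0, 0, 0, 0]                    # hypothetical contents
print(hex(effective_address((1 << 12) | 0x1FF, regs)))  # 0x11ff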

The IBM 7070 had a word length of 10 decimal digits plus sign, and it
was much less popular than the 1401, so combining the 7070 and the
7090 into a single machine wouldn't have been an option.

John Savard

des...@verizon.net

Feb 5, 2011, 10:03:36 PM
Quadibloc <jsa...@ecn.ab.ca> writes:

> Since the IBM 1401 was IBM's largest-selling computer, presumably the
> way for IBM to avoid "betting the company" would have been to devise
> an upgraded 1401 that would have replaced all of IBM's _other_ product
> lines, except for the 1401. Could that even be conceivable?

I always thought the IBM 1410 was IBM's attempt to build a follow-on
to the 1401, but the Wikipedia timeline shows the sequence as
1401, 1410, 1440, 1460.

(Well those are the ones I know because I worked on them.)

> In taking another look at the 1401 architecture, I found out something
> I hadn't noticed. The 1410 as well as the 7010 could run 1401
> programs, but they had to be in a compatibility mode to do so.
>
> Normally, while the 1401 used three characters for the address fields
> in an instruction, the 1410 and the 7010 used five characters for the
> address fields. The 1401 used the zone bits in those characters to
> address more than 1,000 characters of memory; the 1410 and 7010 used
> zone bits in two of the five characters of an address to indicate the
> use of an index register.
>
> Since Chen-Ho encoding hadn't been invented back then, the fact that 5
> times 7 is 35, and so putting five 1401 characters in a single 7090
> word wouldn't be too inefficient, doesn't seem to lend itself to
> convenient exploitation.

One of the nicest things about working on a 14xx was that arithmetic
and addressing were base 10. Chen-Ho would have been a disaster.
The design was programmer friendly.

> Of course, *two* isn't that large a number. If IBM simply produced
> upwards-compatible extensions of the 1401/1410/7010 lineups, and the
> 704/709/7040/7044/7090/7094 lineups, and dropped the 705/7080 and
> 7070/7074 lineups, wouldn't that have also worked?

Yes, for some values of "worked".

> For the 1410/7010 architecture, the zone bits in the three remaining
> digits of the address could be used to change maximum storage from
> 100,000 characters to 6,400,000 characters.

Or they could have continued on and gone from 3 character to 5 character
to 7 character addressing.

None of this addresses lower case, but in the time frame of System 360,
few peripherals were lower case ready. The printers were slow and the
display tubes were uppercase.

What was most significant about System 360 to me was that the
architecture had a level of complexity that pretty much required
a business to use IBM's system software.

I don't know if that was intentional or not. Unit record and tape
were simple enough, but the CKD disk and 3270 were both bears to
write system software for.

Prior to System 360, IBM provided a DISK IOCS for their disk drives
that most shops had the sense to avoid like the plague.

As a programmer, I much preferred working in base 10 and even
setting word marks. You could squeeze a lot of function into
a small amount of memory. I'd estimate that for something you
could do on 14xx in 8K in Autocoder you'd need a minimum of 32K
with Assembler.

Quadibloc

Feb 5, 2011, 10:41:04 PM
On Feb 5, 8:03 pm, des...@verizon.net wrote:

> One of the nicest things about working on a 14xx was that arithmetic
> and addressing were base 10.  Chen-Ho would have been a disaster.
> The design was programmer friendly.

I was thinking about using it in a place where it would not have been
visible to the programmer, so that 16,000 characters could use up only
16,384 characters in a memory that was also binary addressable. Thus
allowing a decimal architecture and a binary architecture to
efficiently use the same core bank.
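
One way to realize that figure, sketched in Python. The block
arithmetic is the point; a Chen-Ho-like encoding is what would let the
hardware form the low ten bits from three decimal digits cheaply:

def decimal_to_binary_address(dec_addr):
    # each block of 1,000 decimal positions occupies 1,024 binary slots,
    # so 16 blocks hold 16,000 characters in 16,384 slots (2.4% waste)
    block, offset = divmod(dec_addr, 1000)   # offset 0..999 fits in 10 bits
    return block * 1024 + offset

print(decimal_to_binary_address(15999))      # 15*1024 + 999 = 16359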

John Savard

des...@verizon.net

Feb 5, 2011, 10:59:19 PM
Quadibloc <jsa...@ecn.ab.ca> writes:

Not sure I follow.

Both the 360/30 and 360/40 allowed for binary and decimal addressing,
since both supported 14xx through emulation.

If I recall correctly, the 360/30 used 16K of 8 bit memory to represent
16K of 14xx memory. No memory was wasted. The 360/40 left gaps of
unused storage, I suppose to remove the need for base-10 addressing.
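
A guess at the kind of gap scheme that would do it (Python): keep each
decimal digit of the 14xx address as its own 4-bit binary field, so no
base-10 address arithmetic is needed, at the price of holes wherever
digit values 10-15 would fall:

def gapped_address(dec_addr):
    # BCD-style mapping: decimal address 15999 lands at binary 0x15999
    out, shift = 0, 0
    while True:
        dec_addr, digit = divmod(dec_addr, 10)
        out |= digit << shift
        shift += 4
        if dec_addr == 0:
            return out

print(hex(gapped_address(15999)))   # 0x15999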

IBM could have made 14xx emulation a standard feature on all
System/360s. The lack of word marks really made System/360 a different
machine for programmers.

Joe Morris

Feb 6, 2011, 8:12:05 AM
<des...@verizon.net> wrote:

> IBM could have made 14xx emulation a standard feature on all
> System/360s. The lack of word marks really made System/360 a different
> machine for programmers.

IBM could have done a lot of things with the S/360 design, but I'll have to
disagree with the idea that it would have been better had 14xx emulation
been part of the basic architecture. Looking at the /360 line as you go up
the model list you find less and less microprogramming; adding emulation
features to the standard architecture definition would have been extremely
expensive for the high-end boxes, which typically would be found in shops
where it would serve no purpose whatever. Note that 14xx emulation was
available in the low-end machines (which often did replace 14xx boxes) and
7090 emulation was available on the /65. (I don't recall what, if any,
emulation was available on the /50, and I don't recall any emulation
features on the /75 and above. Lynn?)

And for my purposes at the PPOE where we had a /40, I could not have cared
less about word marks: we had users on a 7040 who moved their applications
to the /40: what bothered these users was the difference in
standard-precision floating point. The only 14xx application that was
brought over to the /40 was the SYSIN/SYSOUT spooling utility (a total
rewrite of the IBM-provided IOUP utility that I've mentioned in a recent
posting).

And even with emulation as an extra-cost option you had the problem of shops
that upgraded their hardware from a 14xx to a low-end S/360, then continued
to run their 14xx programs for years as if nothing had changed. (One shop
near my PPOE was still doing that in the 1970s...with the added silliness of
punching out cartons of cards on a /40 emulating a 1401, manually sorting
them, reading them back in, and discarding the cards...) Making emulation
an extra-cost item encouraged the bean counters to look for ways to move
away from emulation, at least on leased machines.

Joe Morris


Anne & Lynn Wheeler

Feb 6, 2011, 10:40:58 AM

the claim is that the 75 was a "hard-wired" 65. Originally the models
were 360/60 & 360/70 ... with 1 microsecond memory. some enhancement
resulted in 750ns memory and the models were renamed 360/65 & 360/75 (to
reflect the faster memory) and the designation 360/60 & 360/70 were
dropped (I don't believe any 60s & 70s were actually shipped). The
360/67 was 360/65 with virtual memory hardware added.

then you get into 91, 95, & 195.

I never used 360/50 and don't have any recollection of any statement
about emulation features on 50. I do know that there was special
microcode assist for CPS (conversational programming system, ran under
os/360). Also, science center had wanted a 360/50 to build prototype
virtual memory (pending availability of 360/67) but had to settle for
360/40 (doing cp/40 before morphing to cp/67 when 360/67 became
available) because all the available 360/50s were going to FAA ATC.
There would be a statement in the 360/50 Functional Characteristics
manual ... but there isn't one up at bitsavers.

there is some folklore that the big cutover from lease to purchase in
the early 70s was driven by some executive about ready to retire who
wanted to really boost his bonus as he was going out the door.

long ago and far away I was told that in the gov. hearings into ibm
... somebody from RCA(?) testified that all the computer vendors
realized by the late 50s that the single most important feature was to
have a single compatible architecture across the complete machine line
(businesses were in period of large growth ... start with smaller
machine and then having to move up to larger machines ... really big
inhibitor to market was software application development, being able to
re-use applications significantly helped selling larger machines). The
statement was that IBM had the only upper management that were able to
force the individual plant managers (responsible for different product
lines) to conform to common architecture. While IBM may have lost out in
some competition for specific machines in niche markets ... being the
only vendor selling the single, most important feature ... common
architecture ... allowed IBM to dominate the market (only vendor selling
the single, most important feature ... would have allowed them to get
nearly everything else wrong and still be able to dominate the
competition).

--
virtualization experience starting Jan1968, online at home since Mar1970

des...@verizon.net

Feb 6, 2011, 2:53:17 PM
"Joe Morris" <j.c.m...@verizon.net> writes:

> <des...@verizon.net> wrote:
>
>> IBM could have made 14xx emulation a standard feature on all
>> System/360s. The lack of word marks really made System/360 a different
>> machine for programmers.
>
> IBM could have done a lot of things with the S/360 design, but I'll have to
> disagree with the idea that it would have been better had 14xx emulation
> been part of the basic architecture.

Just to be clear, IBM could have made emulation a standard feature.
Could have, not should have.

> Looking at the /360 line as you go up
> the model list you find less and less microprogramming; adding emulation
> features to the standard architecture definition would have been extremely
> expensive for the high-end boxes, which typically would be found in shops
> where it would serve no purpose whatever.

Only true assuming that S/360 had to be nothing like 14xx.

At the time I had no problem envisioning a future where all printing and
display is uppercase. Case really isn't necessary to communications.
There are other ways to indicate proper nouns and the beginning of
sentences. Upper case is easy to read and display.

Word marks are harder to justify. They sure did allow for a lot of
fancy tricks that saved huge amounts of storage. Calculating with
100s of digits is no problem. But they were a bit quirky. Forget to
set one and all kinds of strange things could happen.

The biggest issue for me was that the 1311s (14xx Disks) were FBA with
easy-to-work-with 100-character sectors, and the sectors were addressable
with sequential addresses. Writing your own disk access was trivially
easy. The follow-on 2311, 2314, the drums, and the noodle pickers were all
different. A huge compatibility mess which was 100% unnecessary.
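
The contrast, sketched in Python (the geometry constants are
illustrative, not the real 1311 or 2311 figures):

SECTORS_PER_TRACK = 20
TRACKS_PER_CYLINDER = 10

def fba_address(record_no):
    # 1311-style: records live at sequential sector numbers; that's it
    return record_no

def ckd_address(record_no):
    # 2311-style: the program must name cylinder, head and record
    cylinder, rest = divmod(record_no, SECTORS_PER_TRACK * TRACKS_PER_CYLINDER)
    head, record = divmod(rest, SECTORS_PER_TRACK)
    return (cylinder, head, record + 1)   # CCHHR-style; records count from 1

print(fba_address(4321))   # 4321
print(ckd_address(4321))   # (21, 6, 2)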

On the display side the 2260 was reasonably easy to use, if primitive,
but the 3270 was nuts. It was like it was designed badly on purpose.

> And for my purposes at the PPOE where we had a /40, I could not have cared
> less about word marks: we had users on a 7040 who moved their applications
> to the /40: what bothered these users was the difference in
> standard-precision floating point. The only 14xx application that was
> brought over to the /40 was the SYSIN/SYSOUT spooling utility (a total
> rewrite of the IBM-provided IOUP utility that I've mentioned in a recent
> posting).
>
> And even with emulation as an extra-cost option you had the problem of shops
> that upgraded their hardware from a 14xx to a low-end S/360, then continued
> to run their 14xx programs for years as if nothing had changed. (One shop
> near my PPOE was still doing that in the 1970s...with the added silliness of
> punching out cartons of cards on a /40 emulating a 1401, manually sorting
> them, reading them back in, and discarding the cards...) Making emulation
> an extra-cost item encouraged the bean counters to look for shops to move
> away from emulation, at least on leased machines.

I can't really remember the last time I worked on a 14xx emulation
system but it had to be at least 10 years after the S/360 announcement.
Those old systems took a LONG time to go away. Some of them were card
punching nightmares but I left behind some pretty sophisticated 1440
disk stuff that I'm sure my successors were struggling to fit in the
360/25 they got.

John Levine

Feb 6, 2011, 4:44:12 PM
>Just to be clear, IBM could have made emulation a standard feature.

Looking at my IBM System/360 System Summary (GA22-6810-12), it says:

Model 22: no emulators (was a crippled /30, as I recall)

Model 25: 1401/1460, 1401/1440/1460 DOS, and 360/20. No 14xx
emulators if it has the scientific (floating point) instruction set or
integrated communications

Model 30: 1401/1460, 1440, 1401/1440/1460 DOS (needs 24K RAM) or 1620

Model 40: 1401/1460, 1401/1440/1460 DOS, 1410/7010

Model 50: 1410/7010, 7070/7074 (needs 256K RAM)

Model 65: 7070/7074, 7080, 709/7040/7044/7094/7094 II (needs 512K RAM)

Model 67: 709/7040/7044/7094/7094 II

Model 75 and up: no emulators

In each case, the emulator was supposed to run faster than the machine
it was emulating, which explains why no 7080 or 709x on the Model 50.

Remember that part of the goal of the 360 was to unify the software so
there would be one operating system. That failed for a variety of
reasons (see "The Mythical Man Month") but the last thing they wanted
to do was to keep supporting the software on all of the older machines
forever.

Regards,
John Levine, jo...@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. http://jl.ly

Quadibloc

Feb 6, 2011, 11:15:33 PM
On Feb 6, 8:40 am, Anne & Lynn Wheeler <l...@garlic.com> wrote:

> the claim is that the 75 was a "hard-wired" 65. Originally the models
> were 360/60 & 360/70 ... with 1 microsecond memory. some enhancement
> resulted in 750ns memory and the models were renamed 360/65 & 360/75 (to
> reflect the faster memory) and the designation 360/60 & 360/70 were
> dropped (I don't believe any 60s & 70s were actually shipped). The
> 360/67 was 360/65 with virtual memory hardware added.

While most of this is quite true, I would be very surprised if the 75
was a "hard wired 65". The two machines may have used the same type of
core memory, but they were designed by separate design teams. (It was
originally hoped to use microprogrammed control for the Model 70 as
well, but none could be found that was fast enough.)

John Savard

Roland Hutchinson

Feb 7, 2011, 12:31:03 AM
On Sun, 06 Feb 2011 14:53:17 -0500, despen wrote:

> At the time I had no problem envisioning a future where all printing and
> display is uppercase. Case really isn't necessary to communications.
> There are other ways to indicate proper nouns and the beginning of
> sentences. Upper case is easy to read and display.

i think research shows that all lower case is far easier to read than all
upper case. a persistent legend has it that telegrams were printed in
upper case rather than lower only because samuel morse could not
countenance the thought of printing the word "god" (in reference to the
deity of the monotheistic faiths) without an uppercase g, and his decision
stuck us all in upper case for a century or so.

then along came UNIX(tm) and we started writing _almost_ everything in
lower case.


--
Roland Hutchinson

He calls himself "the Garden State's leading violist da gamba,"
... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger ( http://tinyurl.com/RolandIsNJ )

Frank Pajerski

Feb 7, 2011, 2:31:30 AM

"Joe Morris" <j.c.m...@verizon.net> wrote in message
news:iim6n...@news6.newsguy.com...
> <des...@verizon.net> wrote:
>
>[snip]
> IBM could have done a lot of things with the S/360 design ....
>[snip]
>
> Joe Morris

In case anyone missed it, I would like to note a most interesting
IBM-historical document pointed out by Peter Capek in an AFC posting dated
24Aug2010:


Bob Evans of IBM (whose death in 2004 certainly was noted here) wrote a
memoir of about 250 pages, but didn't publish it on the advice of friends
because it contained too much sensitive material. Will Spruth, retired from
IBM, has a copy of the full manuscript. He has produced an excerpt of about
35 pages covering Evans' experience from the early 50s through the early
80s with System/360 and System/370, some other systems, and his
experience running the IBM Federal Systems Division. It is available at

http://www.informatik.uni-leipzig.de/cs/Literature/History/boevans.pdf

and is called The Genesis of the Mainframe. Note that there are a few
errors in it (mostly misspelled names), but one is worth mentioning: IBM did
NOT build the Harvard Mark II; it did build the Mark I.

Peter Capek

Peter Flass

Feb 7, 2011, 8:26:43 AM
On 2/6/2011 2:53 PM, des...@verizon.net wrote:
...

> Just to be clear, IBM could have made emulation a standard feature.
> Could have, not should have.

From my limited perspective, I'm not sure that it was clear at the time
how big the conversion effort would be. This (again AFAIK) was the
first time a large number of users had to convert such a relatively
large base of software between such incompatible architectures. Having
done it once, no one wants to go through it again, which is one reason
there are such large numbers of IBM mainframes, and why the awful Intel
architecture continues to lay its dead hand on PC development.
...


>
> The biggest issue for me was that the 1311s (14xx Disks) were FBA with
> easy-to-work-with 100-character sectors, and the sectors were addressable
> with sequential addresses. Writing your own disk access was trivially
> easy. The follow-on 2311, 2314, the drums, and the noodle pickers were all
> different. A huge compatibility mess which was 100% unnecessary.

That's an interesting observation. Does anyone know why IBM decided to
go with CKD disks rather than FBA for the 360? In retrospect it turns
out to have been a terrible decision.


>
>> On the display side the 2260 was reasonably easy to use, if primitive,
>> but the 3270 was nuts. It was like it was designed badly on purpose.

I used both, I think the 3270 is a much better architecture and not
especially complicated, but of course YMMV.

jmfbahciv

Feb 7, 2011, 9:29:27 AM
Roland Hutchinson wrote:
> On Sun, 06 Feb 2011 14:53:17 -0500, despen wrote:
>
>> At the time I had no problem envisioning a future where all printing and
>> display is uppercase. Case really isn't necessary to communications.
>> There are other ways to indicate proper nouns and the beginning of
>> sentences. Upper case is easy to read and display.
>
> i think research shows that all lower case is far easier to read than all
> upper case.

Really?!!! I don't find that to be true but then my eyes are biased
because of years of experience.

> a persistent legend has it that telegrams were printed in
> upper case rather than lower only because samuel morse could not
> countenance the thought of printing the word "god" (in reference to the
> deity of the monotheistic faiths) without an uppercase g, and his decision
> stuck us all in upper case for a century or so.
>
> then along came UNIX(tm) and we started writing _almost_ everything in
> lower case.
>
>

Which I find very annoying. I still cannot see the difference between
an m and an rn, especially in newspapers.

People's handwriting is easier to read if they use uppercase, too.

/BAH

Anne & Lynn Wheeler

Feb 7, 2011, 9:31:44 AM

Peter Flass <Peter...@Yahoo.com> writes:
> That's an interesting observation. Does anyone know why IBM decided
> to go with CKD disks rather than FBA for the 360? In retrospect it
> turns out to have been a terrible decision.

one of the things was that a lot of file system structure could be kept
on disk (rather than in memory) ... and use search channel programs to
find items on disk ... trading off relatively abundant I/O resources
against very scarce real storage resource.
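
What that looked like in practice, as a schematic sketch (Python; the
CCW command codes are the standard CKD ones, the rest is illustrative).
The channel, not the CPU, walks the disk comparing record IDs, so the
file-system index can live on disk instead of in scarce core:

channel_program = [
    ("SEEK",            0x07, "position to the cylinder/head of the index"),
    ("SEARCH ID EQUAL", 0x31, "device compares each record ID as it spins by"),
    ("TIC *-8",         0x08, "no match: branch back and search the next record"),
    ("READ DATA",       0x06, "match: transfer just that record into memory"),
]
for name, opcode, what in channel_program:
    print(f"{opcode:#04x}  {name:16} {what}")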

by at least the mid-70s, the resource trade-off had inverted and FBA
devices were being produced ... however the POK (batch)
favorite son operating system didn't/wouldn't support them. I was told
that even giving them fully tested and integrated FBA support, I would
still have to come up with a $26M business case to cover the costs of
documentation and education. I wasn't allowed to use life-cycle savings
or other items ... but had to be purely based on incremental new disk
sales (new gross on the order of $200m-$300m new disk sales). The
argument was then that existing customers would just buy the same amount
of FBA as they were buying CKD. misc. past posts mentioning FBA, CKD,
multi-track search, etc
http://www.garlic.com/~lynn/submain.html#dasd

There was a similar/analogous shift in real resources with uptake in
RDBMS in the 80s. In the 70s, there were some skirmishes between the
'60s IMS physical database group in STL and the RDBMS System/R group
in bldg. 28 ... some old posts mentioning original sql/relational
http://www.garlic.com/~lynn/submain.html#systemr

IMS group claiming RDBMS doubled the physical disk requirements for the
implicit, on-disk index, index processing also significantly increasing
the number of disk i/os. Relational retort was that exposing explicit
record pointers in application significantly increased administrative
overhead and human effort in manual maintenance. In the 80s, significant
drop in disk price/mbyte minimized the additional disk space argument,
and further increases in available of real storage allowed caching of
index structure ... mitigating the number of physical disk I/Os (while
human experience/skills to manage IMS were becoming scarce/expensive).

The early 80s also saw big explosion in mid-range business ... 43xx
machines (also saw similar explosion in vax/vms). The high-end disk was
CKD 3380 ... but the only mid-range disk was FBA 3370. This made it
difficult for the POK favorite son operating system to play in the
mid-range with 4341 ... since there wasn't a mid-range CKD product
(there was some POK favorite son operating system on 4341 for some
installations that upgraded from older 370s to 4341 and retaining
existing legacy CKD). Eventually attempting to address the exploding
mid-range opportunity (for POK favorite son operating system), the CKD
3375 was produced, which was CKD simulation on FBA 3370.

Of course, today ... all CKD devices are simulated on FBA disks.

misc. recent posts mentioning the $26M business case
http://www.garlic.com/~lynn/2011.html#23 zLinux OR Linux on zEnterprise Blade Extension???
http://www.garlic.com/~lynn/2011.html#35 CKD DASD
http://www.garlic.com/~lynn/2011b.html#47 A brief history of CMS/XA, part 1

recent old email item mentioning 4341 & 3081
http://www.garlic.com/~lynn/2011b.html#email841012

in this (linkedin) discussion about 3081
http://www.garlic.com/~lynn/2011b.html#49 vm/370 3081
http://www.garlic.com/~lynn/2011b.html#62 vm/370 3081

des...@verizon.net

Feb 7, 2011, 10:53:33 AM
Roland Hutchinson <my.sp...@verizon.net> writes:

> On Sun, 06 Feb 2011 14:53:17 -0500, despen wrote:
>
>> At the time I had no problem envisioning a future where all printing and
>> display is uppercase. Case really isn't necessary to communications.
>> There are other ways to indicate proper nouns and the beginning of
>> sentences. Upper case is easy to read and display.
>
> i think research shows that all lower case is far easier to read than all
> upper case. a persistent legend has it that telegrams were printed in
> upper case rather than lower only because samuel morse could not
> countenance the thought of printing the word "god" (in reference to the
> deity of the monotheistic faiths) without an uppercase g, and his decision
> stuck us all in upper case for a century or so.

I've read that but can't see how that can be true.
There's probably some bias in the study based on what people are
used to.

Uppercase fills the rectangle available to make the letter distinctive
making the overall letter much easier to recognize.

> then along came UNIX(tm) and we started writing _almost_ everything in
> lower case.

Sure, blame UNIX.

My first encounter with lower case on computers was the Daily Racing
Form project. We used Bunker Ramo CRTs on an IBM mainframe since no
3270 at the time supported lower case.

des...@verizon.net

Feb 7, 2011, 11:04:06 AM
Peter Flass <Peter...@Yahoo.com> writes:

> On 2/6/2011 2:53 PM, des...@verizon.net wrote:
> ...

>> On the display side the 2260 was reasonably easy to use, if primitive,
>> but the 3270 was nuts. It was like it was designed badly on purpose.
>
> I used both, I think the 3270 is a much better architecture and not
> especially complicated, but of course YMMV.

I think that depends on how much of the details of the architecture
are hidden. If you don't know how screen locations are addressed,
try this:

http://www.prycroft6.com.au/misc/3270.html

It just goes from bad to worse.
12 bit addressing wasn't enough so they came up with 14 bit addressing.
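
For anyone not following the link, a sketch of the two schemes in
Python. The 64-entry table is the standard buffer-address translation
table from the 3270 documentation: in 12-bit mode each 6-bit half of
the address must be turned into a "graphic" byte, while 14-bit mode
sends the bits nearly raw:

CODE = [
    0x40, 0xC1, 0xC2, 0xC3, 0xC4, 0xC5, 0xC6, 0xC7,
    0xC8, 0xC9, 0x4A, 0x4B, 0x4C, 0x4D, 0x4E, 0x4F,
    0x50, 0xD1, 0xD2, 0xD3, 0xD4, 0xD5, 0xD6, 0xD7,
    0xD8, 0xD9, 0x5A, 0x5B, 0x5C, 0x5D, 0x5E, 0x5F,
    0x60, 0x61, 0xE2, 0xE3, 0xE4, 0xE5, 0xE6, 0xE7,
    0xE8, 0xE9, 0x6A, 0x6B, 0x6C, 0x6D, 0x6E, 0x6F,
    0xF0, 0xF1, 0xF2, 0xF3, 0xF4, 0xF5, 0xF6, 0xF7,
    0xF8, 0xF9, 0x7A, 0x7B, 0x7C, 0x7D, 0x7E, 0x7F,
]

def buffer_address_12bit(addr):   # addr 0..4095
    return bytes([CODE[(addr >> 6) & 0x3F], CODE[addr & 0x3F]])

def buffer_address_14bit(addr):   # addr 0..16383
    return bytes([(addr >> 8) & 0x3F, addr & 0xFF])

print(buffer_address_12bit(24 * 80 - 1).hex())   # 5d7f, last cell of a 24x80 screen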

Graphics? A nightmare you don't want to see.
Even hardware wise, IBM engineers must have been trying to cover up
how slow graphics were by treating the viewer to a few seconds of
scrambled screen for every graphics display.

Of course I use 3270s every day, but look inside.
It's a nightmare.

Walter Bushell

Feb 7, 2011, 11:30:50 AM
In article <PM00049BB...@aca2e2ea.ipt.aol.com>,
jmfbahciv <See....@aol.com> wrote:

I could see not being able to see the difference between "rn" and "m"
perhaps. Have you tried increasing the size of the text?

--
The Chinese pretend their goods are good and we pretend our money
is good, or is it the reverse?

des...@verizon.net

Feb 7, 2011, 11:40:26 AM
Walter Bushell <pr...@panix.com> writes:

I think the point is, there is already a way to increase the size of the
text. Like this:

rn m <== almost too small
RN M <== smacks me in the face large enough

Ahem A Rivet's Shot

Feb 7, 2011, 11:43:42 AM
On Mon, 07 Feb 2011 10:53:33 -0500
des...@verizon.net wrote:

> Roland Hutchinson <my.sp...@verizon.net> writes:
>
> > i think research shows that all lower case is far easier to read than
> > all upper case. a persistent legend has it that telegrams were printed
> > in upper case rather than lower only because samuel morse could not
> > countenance the thought of printing the word "god" (in reference to the
> > deity of the monotheistic faiths) without an uppercase g, and his
> > decision stuck us all in upper case for a century or so.
>
> I've read that but can't see how that can be true.
> There's probably some bias in the study based on what people are
> used to.

Running text is easier to read as words in lower case.

> Uppercase fills the rectangle available to make the letter distinctive
> making the overall letter much easier to recognize.

Individual letters are easier to identify accurately in upper case.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

Charlie Gibbs

Feb 7, 2011, 12:02:39 PM
In article <icr5bje...@verizon.net>, des...@verizon.net (despen)
writes:

> Peter Flass <Peter...@Yahoo.com> writes:
>
>> On 2/6/2011 2:53 PM, des...@verizon.net wrote:
>> ...
>>> On the display side the 2260 was reasonably easy to use, if
>>> primitive, but the 3270 was nuts. It was like it was designed
>>> badly on purpose.
>>
>> I used both, I think the 3270 is a much better architecture and not
>> especially complicated, but of course YMMV.
>
> I think that depends on how much of the details of the architecture
> are hidden. If you don't know how screen locations are addressed,
> try this:
>
> http://www.prycroft6.com.au/misc/3270.html

Omigod. And I thought Univac's Uniscope protocol was bad.

I'm sure glad dumb terminals won out.

--
/~\ cgi...@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!

Joe Pfeiffer

Feb 7, 2011, 12:12:50 PM
des...@verizon.net writes:

> Roland Hutchinson <my.sp...@verizon.net> writes:
>
>> On Sun, 06 Feb 2011 14:53:17 -0500, despen wrote:
>>
>>> At the time I had no problem envisioning a future where all printing and
>>> display is uppercase. Case really isn't necessary to communications.
>>> There are other ways to indicate proper nouns and the beginning of
>>> sentences. Upper case is easy to read and display.
>>
>> i think research shows that all lower case is far easier to read than all
>> upper case. a persistent legend has it that telegrams were printed in
>> upper case rather than lower only because samuel morse could not
>> countenance the thought of printing the word "god" (in reference to the
>> deity of the monotheistic faiths) without an uppercase g, and his decision
>> stuck us all in upper case for a century or so.
>
> I've read that but can't see how that can be true.
> There's probably some bias in the study based on what people are
> used to.
>
> Uppercase fills the rectangle available to make the letter distinctive
> making the overall letter much easier to recognize.

Ironically, that's actually the reason lower case is easier: since
letters vary in height and some have descenders, you have a greater
variation in shape so the pattern recognition is easier.

Here's a paper: http://www.microsoft.com/typography/ctfonts/WordRecognition.aspx

--
This sig block for rent

Joe Pfeiffer

Feb 7, 2011, 12:20:15 PM
des...@verizon.net writes:

Likewise B8 is much easier to read than b8, and Z2 is much easier than
z2. Well, maybe not....

des...@verizon.net

Feb 7, 2011, 12:26:56 PM
Joe Pfeiffer <pfei...@cs.nmsu.edu> writes:

Aggh, Microsoft and I can't really find any faults with their
reasoning. Ruin my day.

Seriously, it does make sense. Still, there remains a lot of lower case
that's hard to make out on a display screen. We could have gotten
by without lower case.

Anne & Lynn Wheeler

Feb 7, 2011, 12:30:41 PM

des...@verizon.net writes:
> I think that depends on how much of the details of the architecture
> are hidden. If you don't know how screen locations are addressed,
> try this:
>
> http://www.prycroft6.com.au/misc/3270.html
>
> It just goes from bad to worse.
> 12 bit addressing wasn't enough so they came up with 14 bit addressing.
>
> Graphics? A nightmare you don't want to see.
> Even hardware wise, IBM engineers must have been trying to cover up
> how slow graphics were by treating the viewer to a few seconds of
> scrambled screen for every graphics display.
>
> Of course I use 3270s every day, but look inside.
> It's a nightmare.

3277/3272 (ANR) had a lot more electronics in the head ... and so the
transmission was a lot more efficient.

3278/3274 (DCA) move a lot of the electronics back into the controller
... which resulted in significantly reducing the transmission
efficiency.

This showed up in things like significant response time differences
between ANR and DCA (if you worried about such things, when we tried to
escalate ... the response was 3278/3274 was designed for interactive
computing ... but for data-entry ... i.e. essentially computerized
keypunch). An old comparison
http://www.garlic.com/~lynn/2001m.html#19 3270 protocol

It later shows up in upload/download throughput with PC terminal
emulation ... whether it was a 3277 emulation card with ANR or 3278
emulation card with DCA.

All 327x was designed with star-wired coax ... with individual coax
cable from the datacenter out to every 327x terminal. Some buildings
were running into loading limits because of the enormous aggregate
weight of all those 327x cables. Token-ring was somewhat then positioned
to address the enormous problem with the weight of 327x cables. Run CAT5
out to local MAU box in departmental closet ... then star-wired CAT5
from departmental MAU box to individual terminals. Communication
division was having token-ring cards (first 4mbit/sec and later
16mbit/sec) designed with throughput oriented towards "terminal
emulation" paradigm and hundreds of such stations sharing common
LAN bandwidth. misc. past posts mentioning "terminal emulation"
http://www.garlic.com/~lynn/subnetwork.html#emulation

above posts also periodically mentioning that the hard stance on
preserving the "terminal emulation" paradigm (and install base) was
starting to isolate the datacenter from the growing distributed
computing activity. In the late 80s, a senior disk engineer got a talk
scheduled at the internal, world-wide, annual communication group
conference ... and opened the talk with the statement that the
communication group was going to be responsible for the demise of the
disk division. While the "terminal emulation" paradigm helped with early
uptake of PCs (a business could get a PC with 3270 emulation for about the
same price as a real 3270 ... and in the same desktop footprint get both a
business terminal and some local computing; since the 3270 terminals
were already business justified, it was a no-brainer business
justification to switch from real terminal to PC), the rigid stance on
preserving "terminal emulation" was starting to result in large amounts
of data leaking out of the datacenter to more "distributed computing"
friendly platforms (the leading edge of this wave was showing up in
decline in disk sales).

Because the RS6000 had microchannel ... it was being forced into
using cards designed for the PS2 market. The workstation group had done
their own 4mbit token-ring card for the PC/RT (16bit AT bus). It turns
out that the 16mbit PS2 microchannel card (designed for the terminal
emulation market) had lower per-card throughput than the PC/RT 4mbit
token-ring card. RS6000 had similar issues with the PS2 scsi controller
cards and the PS2 display adapter cards (none of them designed for the
high-performance workstation market).

Anne & Lynn Wheeler

Feb 7, 2011, 12:49:12 PM
Roland Hutchinson <my.sp...@verizon.net> writes:
> i think research shows that all lower case is far easier to read than all
> upper case. a persistent legend has it that telegrams were printed in
> upper case rather than lower only because samuel morse could not
> countenance the thought of printing the word "god" (in reference to the
> deity of the monotheistic faiths) without an uppercase g, and his decision
> stuck us all in upper case for a century or so.
>
> then along came UNIX(tm) and we started writing _almost_ everything in
> lower case.

little topic drift ... irate email (done in upper case for emphasis)
about the enormous amount of SNA/VTAM misinformation being spewed ... in
this case regarding applicability of SNA/VTAM for the NSFNET backbone:
http://www.garlic.com/~lynn/2006w.html#email870109
in this post
http://www.garlic.com/~lynn/2006w.html#26 SNA/VTAM for NSFNET

also included in this linkedin reference:
http://lnkd.in/JRVnGk

similar motivation to the group attempting to preserve the "terminal
emulation" paradigm mentioned in this post:
http://www.garlic.com/~lynn/2011b.html#64 If IBM Hadn't Bet the Company
and these
http://www.garlic.com/~lynn/subnetwork.html#emulation

also shows up as part of effort to get the corporate internal network
converted to SNA/VTAM ... part of the justification telling the
executive committee that PROFS was a VTAM application:
http://www.garlic.com/~lynn/2006x.html#email870302
in this post
http://www.garlic.com/~lynn/2006w.html#7 vmshare

and another old email
http://www.garlic.com/~lynn/2011.html#email870306
in this post
http://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

Anne & Lynn Wheeler

Feb 7, 2011, 12:59:24 PM
Anne & Lynn Wheeler <ly...@garlic.com> writes:
> This showed up in things like significant response time differences
> between ANR and DCA (if you worried about such things, when we tried to
> escalate ... the response was 3278/3274 was designed for interactive
> computing ... but for data-entry ... i.e. essentially computerized
> keypunch). An old comparison
> http://www.garlic.com/~lynn/2001m.html#19 3270 protocol

.... finger slip

the response was 3278/3274 was *NOT* designed for interactive computing

Stan Barr

Feb 7, 2011, 1:13:21 PM
On Mon, 07 Feb 2011 10:53:33 -0500, des...@verizon.net <des...@verizon.net>
wrote:

> Roland Hutchinson <my.sp...@verizon.net> writes:
>>
>> i think research shows that all lower case is far easier to read than all
>> upper case. a persistent legend has it that telegrams were printed in
>> upper case rather than lower only because samuel morse could not
>> countenance the thought of printing the word "god" (in reference to the
>> deity of the monotheistic faiths) without an uppercase g, and his decision
>> stuck us all in upper case for a century or so.
>
> I've read that but can't see how that can be true.
> There's probably some bias in the study based on what people are
> used to.

Early Morse telegrams were hand-written* so case would be down to the
operator, or company policy. When typewriters were introduced to
telegraph offices** they were upper case only - known as "MILLS".
Printing telegraphs, which came after Morse, were as far as I can see
in contemporary literature, all upper case.

The earliest surviving telegram - Morse's message "What hath GOD
wrought" is in mixed case, hand-written.

I should mention, when I was learning Morse Code (for radio) I was
taught to write in upper case - in pencil...I still do :-)

* Telegraphy pre-dates the typewriter by about a quarter of a century.
** About 1890, see:
http://www.cix.co.uk/~jmgriffiths/typewriterandpiecework.pdf

--
Cheers,
Stan Barr plan.b .at. dsl .dot. pipex .dot. com

The future was never like this!

Nick Spalding

Feb 7, 2011, 1:31:18 PM
Stan Barr wrote, in <8rar21...@mid.individual.net>
on 7 Feb 2011 18:13:21 GMT:

> On Mon, 07 Feb 2011 10:53:33 -0500, des...@verizon.net <des...@verizon.net>
> wrote:
>
> > Roland Hutchinson <my.sp...@verizon.net> writes:
> >>
> >> i think research shows that all lower case is far easier to read than all
> >> upper case. a persistent legend has it that telegrams were printed in
> >> upper case rather than lower only because samuel morse could not
> >> countenance the thought of printing the word "god" (in reference to the
> >> deity of the monothestic faiths) without an uppercase g, and his decision
> >> stuck us all in upper case for a century or so.
> >
> > I've read that but can't see how that can be true.
> > There's probably some bias in the study based on what people are
> > used to.
>
> Early Morse telegrams were hand-written* so case would be down to the
> operator, or company policy. When typewriters were introduced to
> telegraph offices** they were upper case only - known as "MILLS".
> Printing telegraphs, which came after Morse, were as far as I can see
> in contemporary literature, all upper case.

This was still so when I was with Cable & Wireless in the 1950s.

> The earliest surviving telegram - Morse's message "What hath GOD
> wrought" is in mixed case, hand-written.
>
> I should mention, when I was learning Morse Code (for radio) I was
> taught to write in upper case - in pencil...I still do :-)
>
> * Telegraphy pre-dates the typewriter by about a quarter of a century.
> ** About 1890, see:
> http://www.cix.co.uk/~jmgriffiths/typewriterandpiecework.pdf
--

Nick Spalding

Stan Barr

Feb 7, 2011, 2:10:34 PM
On Mon, 07 Feb 2011 10:12:50 -0700, Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:
>
> Ironically, that's actually the reason lower case is easier: since
> letters vary in height and some have descenders, you have a greater
> variation in shape so the pattern recognition is easier.
>
> Here's a paper:
> http://www.microsoft.com/typography/ctfonts/WordRecognition.aspx
>

See also the literature on UK road signs. They're in mixed case and
a new font "Transport" was designed for them with quick legibility in
mind.
http://en.wikipedia.org/wiki/Transport_%28typeface%29

Peter Flass

Feb 7, 2011, 2:21:32 PM

I've programmed 3270's bare-metal in assembler, etc. Always seemed
simple to me ;-)

Peter Flass

Feb 7, 2011, 2:23:11 PM

Yes, newsprint is too small, and various text attributes like leading,
kerning, etc. make quite a difference. There's a big difference between
a typographer and a hacker with a word-processing program.

des...@verizon.net

Feb 7, 2011, 3:12:51 PM
Peter Flass <Peter...@Yahoo.com> writes:

That's good and it does earn some credibility.

I think I did better. After developing a 2260 application my boss
handed me the specifications for a 3270 and asked me what I thought of
it. I gave him the correct answer, it's a piece of crap.

Subsequent to that I've done a lot of pretty low level 3270 stuff but
never changed my opinion. It has no redeeming qualities.

Anne & Lynn Wheeler

Feb 7, 2011, 3:28:50 PM
des...@verizon.net writes:
> That's good and it does earn some credibility.
>
> I think I did better. After developing a 2260 application my boss
> handed me the specifications for a 3270 and asked me what I thought of
> it. I gave him the correct answer, it's a piece of crap.
>
> Subsequent to that I've done a lot of pretty low level 3270 stuff but
> never changed my opinion. It has no redeeming qualities.

re:


http://www.garlic.com/~lynn/2011b.html#64 If IBM Hadn't Bet the Company

A class of (3270) "pseudo devices" was added to vm370 ... and along
with PVM (passthrough virtual machine) ... allowed remote terminal
emulation ("dial PVM" and then select another network node to "logon"
to).

Fairly early ... the person responsible for the internal email client (VMSG,
which was also used as the embedded technology for PROFS email) wrote
PARASITE & STORY ... small compact programmable interface to PVM along
with a HLLAPI-like language (predating IBM/PC and 3270 terminal
emulation).

old posts with some old PARASITE/STORY information ... along
with STORY semantics and example scripts:
http://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
http://www.garlic.com/~lynn/2001k.html#36 Newbie TOPS-10 7.03 question

including things like logging into the Field Engineering "RETAIN"
system and pulling off lists of service support information.

both PARASITE & STORY are remarkable for the function they provided
... considering the size of their implementation.

Rod Speed

Feb 7, 2011, 5:35:41 PM
jmfbahciv wrote
> Roland Hutchinson wrote
>> despen wrote

>>> At the time I had no problem envisioning a future where all
>>> printing and display is uppercase. Case really isn't necessary to
>>> communications. There are other ways to indicate proper nouns and
>>> the beginning of sentences. Upper case is easy to read and display.

>> i think research shows that all lower case is far easier to read than all upper case.

> Really?!!!

Yes, really??!!!!!

> I don't find that to be true but then my eyes are biased because of years of experience.

And yet you clearly dont write entirely in upper case for normal communications.

>> a persistent legend has it that telegrams were printed in
>> upper case rather than lower only because samuel morse could not
>> countenance the thought of printing the word "god" (in reference to
>> the deity of the monotheistic faiths) without an uppercase g, and his
>> decision stuck us all in upper case for a century or so.

>> then along came UNIX(tm) and we started writing _almost_ everything in lower case.

> Which I find very annoying.

And many find entirely upper case even more annoying.

> I still cannot see the difference between an m and an rn, especially in newspapers.

Thats the font and kerning used, not upper or lower case.

> People's handwriting is easier to read if they use uppercase, too.

Yes, but thats nothing like a printed font.

And like I said, you clearly dont write entirely in upper case for normal communications.


Rod Speed

Feb 7, 2011, 5:48:36 PM
des...@verizon.net wrote
> Roland Hutchinson <my.sp...@verizon.net> wrote
>> despen wrote

>>> At the time I had no problem envisioning a future where all
>>> printing and display is uppercase. Case really isn't necessary to
>>> communications. There are other ways to indicate proper nouns and
>>> the beginning of sentences. Upper case is easy to read and display.

>> i think research shows that all lower case is far easier to read
>> than all upper case. a persistent legend has it that telegrams were
>> printed in upper case rather than lower only because samuel morse
>> could not countenance the thought of printing the word "god" (in
>> reference to the deity of the monotheistic faiths) without an
>> uppercase g, and his decision stuck us all in upper case for a
>> century or so.

> I've read that but can't see how that can be true.

Presumably you are talking about his first bit, lower case being far easier to read.

> There's probably some bias in the study based on what people are used to.

Corse there is with reading. But that doesnt alter the fact
that thats what people are used to for whatever reason.

> Uppercase fills the rectangle available

Like hell it does with some of the letters like A V I etc.

> to make the letter distinctive

Thats true of any well designed font for the letters that matter.

> making the overall letter much easier to recognize.

Reading isnt actually about recognising individual letters for competent readers.


Rod Speed

Feb 7, 2011, 5:53:05 PM
des...@verizon.net wrote
> Walter Bushell <pr...@panix.com> writes

>> jmfbahciv <See....@aol.com> wrote
>>> Roland Hutchinson wrote
>>>> despen wrote

>>>>> At the time I had no problem envisioning a future where all
>>>>> printing and display is uppercase. Case really isn't necessary
>>>>> to communications. There are other ways to indicate proper nouns
>>>>> and the beginning of sentences. Upper case is easy to read and
>>>>> display.

>>>> i think research shows that all lower case is far easier to read than all upper case.

>>> Really?!!! I don't find that to be true but then my eyes are biased because of years of experience.

>>>> a persistent legend has it that telegrams were printed in
>>>> upper case rather than lower only because samuel morse could not
>>>> countenance the thought of printing the word "god" (in reference
>>>> to the deity of the monotheistic faiths) without an uppercase g,
>>>> and his decision stuck us all in upper case for a century or so.

>>>> then along came UNIX(tm) and we started writing _almost_ everything in lower case.

>>> Which I find very annoying. I still cannot see the difference
>>> between an m and an rm, especially in newspapers.

>>> People's handwriting is easier to read if they use uppercase, too.

>> I could see not being able to see the difference between "rn"


>> and "m" perhaps. Have you tried increasing the size of the text?

> I think the point is, there is already a way to increase the size of the text. Like this:

> rn m <== almost too small

Or just use appropriate kerning and get r n m

> RN M <== smacks me in the face large enough

And few choose to read most of what they read in all upper case.


Roland Hutchinson

Feb 7, 2011, 5:56:26 PM

gd grf. sm o th old abbrs r jst lik txtg tkn 2 xtrms.

--
Roland Hutchinson

He calls himself "the Garden State's leading violist da gamba,"
... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger ( http://tinyurl.com/RolandIsNJ )

Rod Speed

Feb 7, 2011, 6:00:39 PM

Thats the font, not the case.

> We could have gotten by without lower case.

We could have gotten by without all sorts of things. There isnt any point in doing that tho.


Joe Morris

Feb 7, 2011, 7:34:40 PM
"Anne & Lynn Wheeler" <ly...@garlic.com> wrote in message
news:m3tygfo...@garlic.com...

> All 327x was designed with star-wired coax ... with individual coax
> cable from the datacenter out to every 327x terminal. Some buildings
> were running into loading limits because of the enormous aggregate
> weight of all those 327x cables. Token-ring was somewhat then positioned
> to address the enormous problem with the weight of 327x cables.

IBM wasn't the only offender. Try Wang OIS systems: *two* coax lines from
the central box to each terminal. The one smart thing that the designers
did was to standardize one cable as having BNC connectors and the other as
TNC.

In the data center I managed for many years we housed a couple of OIS
clusters, but (thankfully) that was all: another department owned the Wang
... um, "stuff" ... and had the responsibility for keeping it operational.

None of which helped when the ceiling tiles just outside the machine room
door collapsed one day as a result of all of the <censored> Wang coax up
there. A couple of years later the agency involved surrendered to common
sense and replaced the Wang boxes with WordPerfect.

Joe Morris


Quadibloc

Feb 7, 2011, 8:25:59 PM
On Feb 7, 3:56 pm, Roland Hutchinson <my.spamt...@verizon.net> wrote:

> gd grf. sm o th old abbrs r jst lik txtg tkn 2 xtrms.

You can see the rest of it here:

http://www.radions.net/philcode.htm

nd gt a gd jb...

John Savard

Roland Hutchinson

Feb 8, 2011, 2:39:21 AM
On Mon, 07 Feb 2011 17:25:59 -0800, Quadibloc wrote:

> On Feb 7, 3:56 pm, Roland Hutchinson <my.spamt...@verizon.net> wrote:
>
>> gd grf. sm o th old abbrs r jst lik txtg tkn 2 xtrms.
>
> You can see the rest of it here:
>
> http://www.radions.net/philcode.htm

Thanks. Very cool, in a cheeseparing kind of way.

> nd gt a gd jb...

ES AB TI, TOO.

jmfbahciv

Feb 8, 2011, 8:56:27 AM

Difficult if you're trying to do xword puzzles. Making the font larger
means that I have to do paging for the shortest messages.


Pre-laser printing required upper case for coding. I'd really really
hate to have to read code in mixed case. Every time I see C code
labels, I want to go chop, chop, chop to reduce them to upper case
and 6 characters. :-).

/BAH

jmfbahciv

Feb 8, 2011, 8:56:29 AM


It is if you are proofreading or trying to learn science and math.

/BAH

jmfbahciv

Feb 8, 2011, 8:56:29 AM
des...@verizon.net wrote:

> Walter Bushell <pr...@panix.com> writes:
>
>> In article <PM00049BB...@aca2e2ea.ipt.aol.com>,
>> jmfbahciv <See....@aol.com> wrote:
>>
>>> Roland Hutchinson wrote:
>>> > On Sun, 06 Feb 2011 14:53:17 -0500, despen wrote:
>>> >
>>> >> At the time I had no problem envisioning a future where all printing and
>>> >> display is uppercase. Case really isn't necessary to communications.
>>> >> There are other ways to indicate proper nouns and the beginning of
>>> >> sentences. Upper case is easy to read and display.
>>> >
>>> > i think research shows that all lower case is far easier to read than all
>>> > upper case.
>>>
>>> Really?!!! I don't find that to be true but then my eyes are biased
>>> because of years of experience.
>>>
>>> > a persistent legend has it that telegrams were printed in
>>> > upper case rather than lower only because samuel morse could not
>>> > countenance the thought of printing the word "god" (in reference to the
>>> > deity of the monotheistic faiths) without an uppercase g, and his decision
>>> > stuck us all in upper case for a century or so.
>>> >
>>> > then along came UNIX(tm) and we started writing _almost_ everything in
>>> > lower case.
>>> >
>>> >
>>> Which I find very annoying. I still cannot see the difference between
>>> an m and an rm, especially in newspapers.
>>>
>>> People's handwriting is easier to read if they use uppercase, too.
>>>
>>> /BAH
>>
>> I could see not being able to see the difference between "rn" and "m"
>> perhaps. Have you tried increasing the size of the text?
>
> I think the point is, there is already a way to increase the size of the
> text. Like this:
>
> rn m <== almost too small
> RN M <== smacks me in the face large enough

and you don't have to page down to read that last line. (Another
thing that really bugs me about every web page I've pulled up
on the TTY screen.)

/BAH

Ahem A Rivet's Shot

Feb 8, 2011, 9:14:44 AM
On 8 Feb 2011 13:56:27 GMT
jmfbahciv <See....@aol.com> wrote:

> Pre-laser printing required upper case for coding. I'd really really
> hate to have to read code in mixed case. Every time I see C code
> labels, I want to go chop, chop, chop to reduce them to upper case
> and 6 characters. :-).

You must really hate typical Java variable names - as in
typicalJavaVariableNameContainingASentenceDescribingWhatItsFor.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

Michael Wojcik

Feb 8, 2011, 10:49:59 AM
Joe Morris wrote:
>
> IBM wasn't the only offender. Try Wang OIS systems: *two* coax lines from
> the central box to each terminal. The one smart thing that the designers
> did was to standardize one cable as having BNC connectors and the other as
> TNC.

IBM's 5250s, used with the AS/400 (and with earlier System/3x? I never
used those), used twinax. Similar excessive cabling, though at least
they didn't have to use different connectors to avoid swapping the cables.

I had an IBM RT/PC with the "Megapixel" display, which used one of
those cables that was just a bundle of three individual cables with
BNC connectors, for RGB. Sync was carried on one (G?), but you could
swap the other two without ill effect, except that you'd be swapping
the two color signals.

I have poor color vision, and didn't notice I'd hooked the thing up wrong.

At the time, I was working on PEX, the PHIGS eXtension for X11, a 3D
graphics extension IBM TCS was helping UIUC develop. (PHIGS stands for
"Programmable Hierarchical Interactive Graphics System", so as you can
see the "PEX" name was the output of a highly efficient compressor.)

So one day I was debugging some of the color stippling code, and the
project manager walked by...

And, well, someone else ended up with the job of fixing the color
code, while I went to work on some other bit.

(I *told* them they should have bought me one of those nifty handheld
color meters, but they were having none of it.)

--
Michael Wojcik
Micro Focus
Rhetoric & Writing, Michigan State University

Michael Wojcik

unread,
Feb 8, 2011, 10:09:55 AM2/8/11
to
Stan Barr wrote:
>
> * Telegraphy pre-dates the typewriter by about a quarter of a century.

Even a little longer than that, for most purposes. Telegraphy was
introduced in the US in 1844. The first commercially successful
typewriters appeared in the late 1870s, but only saw limited
application (eg for courtroom reporting) until the 1880s. Yates quotes
a piece in _Penman's Art Journal_ (whatever happened to that?): "Five
years ago the typewriter was simply a mechanical curiosity".[1] So for
many businesses it was around 35 years between significant use of the
telegraph and significant use of the typewriter.

The date of "about 1890" Murray gives (in the piece you cited) for
typewriters in telegraph offices sounds right; that would mesh both
with what other businesses were doing, and with the rising use of
telegraphy in business (and so increasing telegraph volume providing
an incentive to go to typewriting).


[1] JoAnne Yates, _Control Through Communication: The Rise of System
in American Management_. An authoritative classic, based on extensive
archival research. See 22ff, 39ff.

Michael Wojcik

unread,
Feb 8, 2011, 10:40:52 AM2/8/11
to
des...@verizon.net wrote:
> Peter Flass <Peter...@Yahoo.com> writes:
>
>> I used both, I think the 3270 is a much better architecture and not
>> especially complicated, but of course YMMV.
>
> I think that depends on how much of the details of the architecture
> are hidden. If you don't know how screen locations are addressed,
> try this:
>
> http://www.prycroft6.com.au/misc/3270.html
>
> It just goes from bad to worse.
> 12 bit addressing wasn't enough so they came up with 14 bit addressing.

Indeed. I've worked on 3270 implementations on both the terminal and
host side, and it's a right mess.

The 3270 datastream isn't even consistent across devices. You look at
the _3270 Data Stream_ manual (GA23-0059), and see that the command
code for Write is x'F1'. That's odd, I don't see x'F1' in my wire
trace. Oh, that's because x'F1' is only for remotely-attached
terminals; for locally-attached terminals (which, confusingly,
includes TN3270 clients, which could physically be anywhere), it's x'01'.

Of course that's obvious, once you see Table 2-1 in section 2.1.3 of
_3174 Controller_ (GA23-0218). Why didn't you think to read _3174
Controller_ in the first place, dummy? The title just screams "you
need to read this book if you're trying to figure out how the 3270
datastream actually works".

Joe Pfeiffer

unread,
Feb 8, 2011, 11:39:56 AM2/8/11
to
jmfbahciv <See....@aol.com> writes:

Non upper-case-only printers were certainly available before laser
printers... I got away from upper-case-only decades ago. This
discussion is the closest I've ever come to looking back.
--
This sig block for rent

Joe Pfeiffer

unread,
Feb 8, 2011, 11:40:31 AM2/8/11
to
jmfbahciv <See....@aol.com> writes:

Proofreading is a skill almost unrelated to reading for comprehension...

Stan Barr

unread,
Feb 8, 2011, 11:41:04 AM2/8/11
to
On Mon, 7 Feb 2011 17:25:59 -0800 (PST), Quadibloc <jsa...@ecn.ab.ca>
wrote:

When someone sends me an email in text-speek I always reply in
Phillips Code to show there's nothing new under the sun!

Anne & Lynn Wheeler

unread,
Feb 8, 2011, 1:35:39 PM2/8/11
to

Michael Wojcik <mwo...@newsguy.com> writes:
> I had an IBM RT/PC with the "Megapixel" display, which used one of
> those cables that was just a bundle of three individual cables with
> BNC connectors, for RGB. Sync was carried on one (G?), but you could
> swap the other two without ill effect, except that you'd be swapping
> the two color signals.

I had RT/PC with megapixel display in non-IBM booth at Interop '88
... it was at right angles to the SUN booth, which included Case
doing SNMP; Case was talked into coming over and getting SNMP up and
running on the RT. misc. past posts mentioning Interop '88
http://www.garlic.com/~lynn/subnetwork.html#interop88

At the time of Interop '88, for lots of people, it wasn't clear that
SNMP was going to be the winner. also at Interop '88, vendors had
extensive GOSIP/OSI stuff in their booths (fed. gov. was mandating that
the internet be killed off and replaced by OSI stuff).

misc. past posts in this thread:
http://www.garlic.com/~lynn/2011b.html#57 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011b.html#63 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011b.html#64 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011b.html#65 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011b.html#67 If IBM Hadn't Bet the Company

Rod Speed

unread,
Feb 8, 2011, 2:21:28 PM2/8/11
to
jmfbahciv wrote
> Walter Bushell wrote

>> jmfbahciv <See....@aol.com> wrote
>>> Roland Hutchinson wrote:
>>>> despen wrote

>>>>> At the time I had no problem envisioning a future where all printing
>>>>> and display is uppercase. Case really isn't necessary to communications.
>>>>> There are other ways to indicate proper nouns and the beginning of
>>>>> sentences. Upper case is easy to read and display.

>>>> i think research shows that all lower case is far easier to read than all upper case.

>>> Really?!!! I don't find that to be true but then my eyes are biased
>>> because of years of experience.

>>>> a persistent legend has it that telegrams were printed in
>>>> upper case rather than lower only because samuel morse could not
>>>> countenance the thought of printing the word "god" (in reference
>>>> to the deity of the monothestic faiths) without an uppercase g,
>>>> and his decision stuck us all in upper case for a century or so.

>>>> then along came UNIX(tm) and we started writing _almost_ everything in lower case.

>>> Which I find very annoying. I still cannot see the difference
>>> between an m and an rm, especially in newspapers.

>>> People's handwriting is easier to read if they use uppercase, too.

>> I could see not being able to see the difference between "rn"
>> and "m" perhaps. Have you tried increasing the size of the text?

> Difficult if you're trying to do xword puzzles.

You are welcome to use anything you like in crossword puzzles.

You'll find some real downsides if you exclusively use all caps in
quite a bit of formal communication, which will see your shit flushed
where it belongs and ignored.

> Making the font larger means that I have to do paging for the shortest messages.

Your problem.

> Pre-laser printing required upper case for coding.

Irrelevant to what makes sense with normal communication which didnt.

> I'd really really hate to have to read code in mixed case.
> Every time I see C code labels, I want to go chop, chop,
> chop to reduce them to upper case and 6 characters. :-).

Pathetic.

Nearly as bad as one other fool I know well who went thru pages of
printout of normal text, not code, from a Dataproducts printer that
didnt have descenders and manually inserted ALL the descenders.


Rod Speed

unread,
Feb 8, 2011, 2:25:27 PM2/8/11
to
jmfbahciv wrote

Odd that no one with even half a clue does proofreading in all caps.

> or trying to learn science and math.

Yes, there are some situations in which all caps is useful.

Pity that isnt with normal communication.


Rod Speed

unread,
Feb 8, 2011, 2:28:24 PM2/8/11
to
jmfbahciv wrote
> des...@verizon.net wrote
>> Walter Bushell <pr...@panix.com> wrote

>>> jmfbahciv <See....@aol.com> wrote
>>>> Roland Hutchinson wrote
>>>>> despen wrote

>>>>>> At the time I had no problem envisioning a future where all
>>>>>> printing and display is uppercase. Case really isn't necessary
>>>>>> to communications. There are other ways to indicate proper nouns
>>>>>> and the beginning of sentences. Upper case is easy to read and display.

>>>>> i think research shows that all lower case is far easier to read than all upper case.

>>>> Really?!!! I don't find that to be true but then my eyes are
>>>> biased because of years of experience.

>>>>> a persistent legend has it that telegrams were printed in
>>>>> upper case rather than lower only because samuel morse could not
>>>>> countenance the thought of printing the word "god" (in reference
>>>>> to the deity of the monotheistic faiths) without an uppercase g,
>>>>> and his decision stuck us all in upper case for a century or so.

>>>>> then along came UNIX(tm) and we started writing _almost_
>>>>> everything in lower case.

>>>> Which I find very annoying. I still cannot see the difference
>>>> between an m and an rn, especially in newspapers.

>>>> People's handwriting is easier to read if they use uppercase, too.

>>> I could see not being able to see the difference between "rn"


>>> and "m" perhaps. Have you tried increasing the size of the text?

>> I think the point is, there is already a way to increase the size of
>> the text. Like this:

>> rn m <== almost too small
>> RN M <== smacks me in the face large enough

> and you don't have to page down to read that last line.
> (Another thing that really bugs me about every web
> page I've pulled up on the TTY screen.

Using that dinosaur approach has some downsides, stupid.


Peter Flass

unread,
Feb 8, 2011, 8:31:50 PM2/8/11
to

The difference between local and remote is annoying, but unfortunately
BTAM wanted to transmit EBCDIC rather than binary, which also explains
the addressing quirks. CCW code '01' is used for (some form of) write
by most devices.  My acquaintance with 3270s predates the 3174 by a
bit; I suspect the datastream was better documented in earlier times.

des...@verizon.net

unread,
Feb 8, 2011, 9:05:36 PM2/8/11
to
Peter Flass <Peter...@Yahoo.com> writes:

> BTAM wanted to transmit EBCDIC rather than binary, ...

EBCDIC is a binary representation, it's not clear to me what you mean.
BTAM certainly had no problem sending whatever you asked it including
codes below x'40'.


Peter Flass

unread,
Feb 9, 2011, 8:01:14 AM2/9/11
to
On 2/9/2011 5:24 AM, Morten Reistad wrote:
> In article<iirpl...@news7.newsguy.com>,

> Michael Wojcik<mwo...@newsguy.com> wrote:
>> des...@verizon.net wrote:
>>> Peter Flass<Peter...@Yahoo.com> writes:
>>>
>>>> I used both, I think the 3270 is a much better architecture and not
>>>> especially complicated, but of course YMMV.
>>>
>>> I think that depends on how much of the details of the architecture
>>> are hidden. If you don't know how screen locations are addressed,
>>> try this:
>>>
>>> http://www.prycroft6.com.au/misc/3270.html
>>>
>>> It just goes from bad to worse.
>>> 12 bit addressing wasn't enough so they came up with 14 bit addressing.
>>
>> Indeed. I've worked on 3270 implementations on both the terminal and
>> host side, and it's a right mess.
>
> And I get flashbacks of cold sweat to these 3270/vtam/cics debug
> sessions whenever I debug web pages with cgi-bin logic. Could there
> be a connection here?

Exactly! When CGI first came out I read the description and thought
"this is nothing new, CICS programmers have been doing this since the
nineteen-sixties."

jmfbahciv

unread,
Feb 9, 2011, 9:26:17 AM2/9/11
to

That isn't how I proofread the RUNOFF docs we prepared.  I did both at the
same time: one side of the brain did the spelling, punctuation and layout,
the other side of the brain did the comprehension part.

The layout of all those paragraphs, tables, and subtitles was done
to make the technical detail easily read and understood. Proofing all
of that was part of the job.

/BAH

jmfbahciv

unread,
Feb 9, 2011, 9:26:15 AM2/9/11
to

The ones I worked with all had lousy print quality.
If the printers weren't maintained well, one had to

R TECO
ERFOO.MAC$N<text>$0tt$$

to find out what that smudged character was.  And flyspecks,
which were common on cheaper line printer paper, were extremely
unhelpful, especially when a period generated code or modified
the code generated.

/BAH

jmfbahciv

unread,
Feb 9, 2011, 9:26:16 AM2/9/11
to
Ahem A Rivet's Shot wrote:
> On 8 Feb 2011 13:56:27 GMT
> jmfbahciv <See....@aol.com> wrote:
>
>> Pre-laser printing required upper case for coding. I'd really really
>> hate to have to read code in mixed case. Every time I see C code
>> labels, I want to go chop, chop, chop to reduce them to upper case
>> and 6 characters. :-).
>
> You must really hate typical Java variable names - as in
> typicalJavaVariableNameContainingASentenceDescribingWhatItsFor.
>
Absolutely!!! The chances of a typo creating an unwanted bug
are extremely high.  And a reading of a CREF or GLOB listing
won't find it.

/BAH

grey...@mail.com

unread,
Feb 9, 2011, 9:51:28 AM2/9/11
to
On 2011-02-08, Stan Barr <pla...@dsl.pipex.com> wrote:
> On Mon, 7 Feb 2011 17:25:59 -0800 (PST), Quadibloc <jsa...@ecn.ab.ca>
> wrote:
>> On Feb 7, 3:56 pm, Roland Hutchinson <my.spamt...@verizon.net> wrote:
>>
>>> gd grf. sm o th old abbrs r jst lik txtg tkn 2 xtrms.
>>
>> You can see the rest of it here:
>>
>> http://www.radions.net/philcode.htm
>>
>> nd gt a gd jb...
>
> When someone sends me an email in text-speek I always reply in
> Phillips Code to show there's nothing new under the sun!
>

Aramaic (I think) and Arabic, probably Hebrew, all have sorta vowel
signs, but don't use them unless really needed. The languages have a
structure that makes omitting them possible.

krbm abbrd


--
greymaus
.
.
...

grey...@mail.com

unread,
Feb 9, 2011, 9:51:28 AM2/9/11
to

I think Wheatstone(?) did some work pre-Morse that depended on
making marks, (NOT letters), on paper rolls. Forget the source
for the info.  Definitely dot-matrix printers were mixed case.

Michael Wojcik

unread,
Feb 9, 2011, 9:20:18 AM2/9/11
to
Peter Flass wrote:
>
> Exactly! When CGI first came out I read the description and thought
> "this is nothing new, CICS programmers have been doing this since the
> nineteen-sixties."

For that matter, we've had the "HTTP is just a return to block-mode
terminals" discussion any number of times here in a.f.c., so even
reminiscing about that observation is nothing new. :-)

Michael Wojcik

unread,
Feb 9, 2011, 9:23:12 AM2/9/11
to
Rod Speed wrote:
>
> Using that dinosaur approach has some downsides, stupid.

Ah, Rod, you charmer. A forty-seven line post for one zero-content
insult. What a boon you are to Usenet.

At least it's good to see that the need for rhetoric instruction among
hoi polloi has not diminished, I suppose.

Rod Speed

unread,
Feb 9, 2011, 11:02:55 AM2/9/11
to
Michael Wojcik wrote just the usual mindless posturing that's all it can ever manage.


Anne & Lynn Wheeler

unread,
Feb 9, 2011, 8:43:20 AM2/9/11
to

Peter Flass <Peter...@Yahoo.com> writes:
> Exactly! When CGI first came out I read the description and thought
> "this is nothing new, CICS programmers have been doing this since the
> nineteen-sixties."

univ. library got an ONR grant to do online catalog; part of the money
went to a 2321 datacell, and the effort was also selected to be one of
the beta test sites for the CICS product ... and I got tasked to
support/debug the CICS effort.

somewhat similar recent post in (linkedin) MainframeZone group
http://www.garlic.com/~lynn/2011.html#23 zLinux OR Linux on zEnterprise Blade Extension???

recent post in ibm 2321 (data cell) thread in (linkedin) IBM Historic
Computing group
http://www.garlic.com/~lynn/2010q.html#67

yelavich cics history pages from way back machine
http://web.archive.org/web/20050409124902/www.yelavich.com/cicshist.htm

other yelavich cics history
http://web.archive.org/web/20040705000349/http://www.yelavich.com/history/ev198001.htm
and
http://web.archive.org/web/20041023110006/http://www.yelavich.com/history/ev200402.htm

past posts mentioning CICS (&/or BDAM)
http://www.garlic.com/~lynn/submain.html#cics


grey...@mail.com

unread,
Feb 9, 2011, 11:51:28 AM2/9/11
to
On 2011-02-09, Michael Wojcik <mwo...@newsguy.com> wrote:
> Rod Speed wrote:
>>
>> Using that dinosaur approach has some downsides, stupid.
>
> Ah, Rod, you charmer. A forty-seven line post for one zero-content
> insult. What a boon you are to Usenet.
>
> At least it's good to see that the need for rhetoric instruction among
> hoi polloi has not diminished, I suppose.
>

Sadly, not just the hoi polloi,

innit, mate.

Ahem A Rivet's Shot

unread,
Feb 9, 2011, 12:23:33 PM2/9/11
to
On 9 Feb 2011 14:26:16 GMT
jmfbahciv <See....@aol.com> wrote:

> Ahem A Rivet's Shot wrote:
> > On 8 Feb 2011 13:56:27 GMT
> > jmfbahciv <See....@aol.com> wrote:
> >
> >> Pre-laser printing required upper case for coding. I'd really really
> >> hate to have to read code in mixed case. Every time I see C code
> >> labels, I want to go chop, chop, chop to reduce them to upper case
> >> and 6 characters. :-).
> >
> > You must really hate typical Java variable names - as in
> > typicalJavaVariableNameContainingASentenceDescribingWhatItsFor.
> >
> Absolutely!!! The chances of a typo creating an unwanted bug
> are extremely high.

Usually a typo causes a compile time error, because you never get
away with using oneOfTheseDirtyGreatLongNames just once, and unless you
are really stupid a typo does not generate another name that is
actually in use.

> And a reading of a CREF or GLOB listing
> won't find it.

They can be a pain to find even with a compiler error stating
that aVeryLongNameinCamelCase is not defined on line 5764; it can take a
while to spot that it isn't aVeryLongNameInCamelCase and correct it.

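To make the failure mode concrete, here is a minimal C sketch (the
names are invented for illustration, and C stands in for any
case-sensitive language):

#include <stdio.h>

int main(void)
{
    int aVeryLongNameInCamelCase = 42;

    /* Uncommenting the next line gets "aVeryLongNameinCamelCase
       undeclared" at compile time -- the typo is caught, but spotting
       the lower-case 'i' in the diagnostic can take a while. */
    /* printf("%d\n", aVeryLongNameinCamelCase); */

    printf("%d\n", aVeryLongNameInCamelCase);
    return 0;
}
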
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

Stan Barr

unread,
Feb 9, 2011, 12:54:17 PM2/9/11
to
On 9 Feb 2011 14:51:28 GMT, grey...@mail.com <grey...@mail.com> wrote:
>
> I think Wheatstone(?) did some work pre-Morse that depended on
> making marks, (NOT letters), on paper rolls. Forget the source
> for the info. Definately dot-matrix printers were mixed case.
>
>

Morse's original system made marks on paper tape that were interpreted
visually, but the operators soon found they could read the clicking of
the inker and copying by ear was invented!

Peter Flass

unread,
Feb 9, 2011, 4:14:33 PM2/9/11
to
On 2/9/2011 9:23 AM, Michael Wojcik wrote:
> Rod Speed wrote:
>>
>> Using that dinosaur approach has some downsides, stupid.
>
> Ah, Rod, you charmer. A forty-seven line post for one zero-content
> insult. What a boon you are to Usenet.
>
> At least it's good to see that the need for rhetoric instruction among
> hoi polloi has not diminished, I suppose.
>

I've got him killfiled now, so the only way I know he's still around is
when someone replies to one of his posts :-)

Rich Alderson

unread,
Feb 9, 2011, 4:14:42 PM2/9/11
to
grey...@mail.com writes:

>>> http://www.radions.net/philcode.htm

> krbm abbrd

None of the alphabets used by Semitic languages in Southwest Asia consistently
used vowel letters. Certain consonantal letters were used as guides to the
appropriate vowel: y = i/e, w = u/o, h = a. (In this function, these letters
are referred to in the grammars as "matres lectionis" = "mothers of reading";
I won't try to reproduce the Hebrew form of the phrase here.)

We *are* speaking of writing systems here, of course. The spoken languages
have perfectly ordinary vowels in their words--not entirely predictable, but
often enough that the writing systems work.

And "case" is only relevant in scripts derived from mediaeval Greek and Latin
models (which includes Cyrillic, of course) and brought into modern printing
usage. Previously, majuscules and minuscules were simply different alphabets
altogether used for the same languages.

--
Rich Alderson ne...@alderson.users.panix.com
the russet leaves of an autumn oak/inspire once again the failed poet/
to take up his pen/and essay to place his meagre words upon the page...

Peter Flass

unread,
Feb 9, 2011, 4:24:02 PM2/9/11
to
On 2/9/2011 12:23 PM, Ahem A Rivet's Shot wrote:
> On 9 Feb 2011 14:26:16 GMT
> jmfbahciv<See....@aol.com> wrote:
>
>> Ahem A Rivet's Shot wrote:
>>> On 8 Feb 2011 13:56:27 GMT
>>> jmfbahciv<See....@aol.com> wrote:
>>>
>>>> Pre-laser printing required upper case for coding. I'd really really
>>>> hate to have to read code in mixed case. Every time I see C code
>>>> labels, I want to go chop, chop, chop to reduce them to upper case
>>>> and 6 characters. :-).
>>>
>>> You must really hate typical Java variable names - as in
>>> typicalJavaVariableNameContainingASentenceDescribingWhatItsFor.
>>>
>> Absolutely!!! The chances of a typo creating an unwanted bug
>> are extremely high.
>
> Usually a typo causes a compile time error because you never get
> away with using oneOfTheseDirtyGreatLongNames just once and unless you are
> really stupid typos do not generate another one that is actually n use.
>
>> And a reading of a CREF or GLOB listing
>> won't find it.
>
> They can be a pain to find even with a compiler error stating that
> aVeryLongNameinCamelCase is not defined on line 5764; it can take a while to
> spot that it isn't aVeryLongNameInCamelCase and correct it.
>

That's one of the things I like about PL/I - case is not significant.
You can use any case that makes sense, but "aVeryLongNameinCamelCase"
and "aVeryLongNameInCamelCase" will be the same identifier.

Joe Pfeiffer

unread,
Feb 9, 2011, 10:19:40 PM2/9/11
to
Peter Flass <Peter...@Yahoo.com> writes:

Same here. I figure if he ever learns how to behave in public I'll see
it in people's quotes and I can let him out.
--
This sig block for rent

hanc...@bbs.cpcn.com

unread,
Feb 9, 2011, 11:13:31 PM2/9/11
to
On Feb 5, 7:02 pm, Quadibloc <jsav...@ecn.ab.ca> wrote:

> Since the IBM 1401 was IBM's largest-selling computer, presumably the
> way for IBM to avoid "betting the company" would have been to devise
> an upgraded 1401 that would have replaced all of IBM's _other_ product
> lines, except for the 1401. Could that even be conceivable?

Yes, it was actually being seriously considered.

There were people in IBM strongly wedded to the 1401, and a 1401-S was
under development using more modern hardware. The IBM history
explains all that.

However, the 1401 architecture would not have been good as a
scientific computer.  According to the book "Computer" (Campbell-Kelly
and Aspray) the 1401 CPU was no big deal; the 1401's claim to fame was
its high-speed printer.  Also, as others mentioned, the numeric logic was
decimal and not that good for scientific work.

hanc...@bbs.cpcn.com

unread,
Feb 9, 2011, 11:15:31 PM2/9/11
to
On Feb 5, 10:03 pm, des...@verizon.net wrote:

> I always thought the IBM 1410 was IBMs attempt to build a follow on
> to the 1401 but the Wikipedia timeline shows the timeline as
> 1401, 1410, 1440, 1460.
>
> (Well those are the ones I know because I worked on them.)

I always got the impression that large shops preferred the 7090
instead of the 1410. I also thought the 1410 came out later and
wasn't a big seller.

Didn't SABRE start off with a 7090?


hanc...@bbs.cpcn.com

unread,
Feb 9, 2011, 11:18:18 PM2/9/11
to
On Feb 5, 10:59 pm, des...@verizon.net wrote:

> If I recall correctly, the 360/30 used 16K of 8 bit memory to represent
> 16K of 14xx memory.  No memory was wasted.  The 360/40 left gaps of
> unused storage I suppose to remove the need for base 10 addressing.
>
> IBM could have made 14xx emulation a standard feature on all
> System/360s.  No word marks really made System/360 a different machine
> for programmers.

1401 emulation also required microcode in the S/360 to work.
Originally they tried doing it merely via software, but that ran too
slow.

I believe emulation was extremely popular so it was virtually a
standard feature.  It was critical for IBM to offer, so as to provide
an upward path for 1401 and 7090 users to go to S/360.

Many shops retained old programs into the 1990s. Who knows, maybe
some are still running today, though I suspect Y2K needs killed off
whatever was left.

hanc...@bbs.cpcn.com

unread,
Feb 9, 2011, 11:26:13 PM2/9/11
to
On Feb 6, 4:44 pm, John Levine <jo...@iecc.com> wrote:

> Remember that part of the goal of the 360 was to unify the software so
> there would be one operating system.  That failed for a variety of
> reasons (see "The Mythical Man Month") but the last thing they wanted
> to do was to keep supporting the software on all of the older machines
> forever.

The goal of one operating system was only one of several goals, and
that was dropped early on when they realized "OS" wouldn't fit or work
on the low end machines. DOS, etc were rushed out to serve low end
machines. Wise move.

The goal of S/360 was a unified architecture so that:

1) Peripherals could be the same up and down the line instead of four
separate sets;
2) Programmers would need to know only one architecture instead of
four;
3) Customers going from low end to high end machines need not rewrite
their programs;
4) The four specialities of low end business, high end business, low
end sci/eng and high end sci/eng would be consolidated into a single
architecture and product line.
5) Science and business applications could share the same computer.


hanc...@bbs.cpcn.com

unread,
Feb 9, 2011, 11:26:59 PM2/9/11
to
On Feb 6, 11:15 pm, Quadibloc <jsav...@ecn.ab.ca> wrote:

> While most of this is quite true, I would be very surprised if the 75
> was a "hard wired 65". The two machines may have used the same type of
> core memory, but they were designed by separate design teams. (It was
> originally hoped to use microprogrammed control for the Model 70 as
> well, but none could be found that was fast enough.)

I don't have it handy, but the IBM S/360 history (Pugh et al) explains
the development and differences in detail.

hanc...@bbs.cpcn.com

unread,
Feb 9, 2011, 11:33:35 PM2/9/11
to
On Feb 7, 8:26 am, Peter Flass <Peter_Fl...@Yahoo.com> wrote:

>  From my limited perspective, I'm not sure that it was clear at the time
> how big the conversion effort would be.  This (again AFAIK) was the
> first time a large number of users had to convert such a relatively
> large base of software between such incompatible architectures.  Having
> done it once no one wants to go thru it again which is one reason there
> are such large numbers of IBM mainframes, and why the awful Intel
> architecture continues to lay its dead hand on PC development.

That is true. The S/360 history explains that initially emulation was
seen only as a software program, but later that was found to be too
slow, and it was lucky that changes in the microcode could support
emulation.

The System/360 architecture will undoubtedly reach its 50th birthday
in April 2014 with quite a large amount of CPU horsepower still in
service. I'm not sure how much new development is underway for Z
series today, but there is a huge amount of major enterprise-wide
legacy systems that aren't going anywhere just yet.


> > The biggest issue for me was that the 1311s (14xx Disks) were FBA with
> > easy to work with 100 character sectors and the sectors were addressable
> > with sequential addresses.  Writing your own disk access was trivially
> > easy.  The follow on 2311, 2314, the drums, the noodle pickers were all
> > different.  A huge compatibility mess which was 100% unnecessary.
>
> That's an interesting observation.  Does anyone know why IBM decided to
> go with CKD diska rather than FBA for the 360?  In retrospect it turns
> out to have been a terrible decision.

I don't understand why it was a terrible decision. Isn't CKD more
efficient since the record stored on the disk matched the needs of the
application?

From the perspective of a COBOL programmer, the particular model of
disk was irrelevant. We had ISAM and later VSAM for random access
files.


Nick Spalding

unread,
Feb 10, 2011, 6:59:41 AM2/10/11
to
hanc...@bbs.cpcn.com wrote, in
<f40eb3c8-bd21-45dc...@i40g2000yqh.googlegroups.com>
on Wed, 9 Feb 2011 20:26:13 -0800 (PST):

> The goal of one operating system was only one of several goals, and
> that was dropped early on when they realized "OS" wouldn't fit or work
> on the low end machines. DOS, etc were rushed out to serve low end
> machines. Wise move.

The first /30 I installed, ca 1966, came with BOS. It was serial number
30 from the Sindelfingen plant.
--
Nick Spalding

Peter Flass

unread,
Feb 10, 2011, 8:34:20 AM2/10/11
to
On 2/9/2011 11:26 PM, hanc...@bbs.cpcn.com wrote:
> On Feb 6, 4:44 pm, John Levine<jo...@iecc.com> wrote:
>
>> Remember that part of the goal of the 360 was to unify the software so
>> there would be one operating system. That failed for a variety of
>> reasons (see "The Mythical Man Month") but the last thing they wanted
>> to do was to keep supporting the software on all of the older machines
>> forever.
>
> The goal of one operating system was only one of several goals, and
> that was dropped early on when they realized "OS" wouldn't fit or work
> on the low end machines. DOS, etc were rushed out to serve low end
> machines. Wise move.

I'm not sure of that. From IBM's POV that's more resources they have to
devote to software that probably has a small profit margin.

From the user's POV, obviously a low-end customer couldn't afford to
run an OS that eats 75% or more of his system. I believe these systems
could have run PCP, but that was single-tasking and much less capable
than DOS.

On the other hand I've long been disappointed (since the 70's) that IBM
didn't make more of a move to unify the two systems. I understand from
the stuff Lynn posts that probably no one could make a business case for
it, but for users more compatibility would have been a good thing. It
defeats the benefits of a unified architecture when you have to at least
recompile all your programs and recode all your JCL when moving from DOS
to OS.

Anne & Lynn Wheeler

unread,
Feb 10, 2011, 8:36:06 AM2/10/11
to

hanc...@bbs.cpcn.com writes:
> 1401 emulation also required microcode in the S/360 to work.
> Originally they tried doing it merely via software, but that ran too
> slow.
>
> I believe emulation was extremely popular so it was virtually a
> standard feature. It was critical for IBM to provide so as to provide
> an upward path for 1401 and 7090 users to go to S/360.
>
> Many shops retained old programs into the 1990s. Who knows, maybe
> some is still running today though I suspect Y2k needs killed off
> whatever was left.

univ. had a 407 plug-board application which went thru some stages and
eventually was running as a 360 cobol program ... still simulating the 407
plug-board ... including printing out 407 settings at the end-of-the
run. this was run every day for the administration (on os/360 on 360/67
running in 360/65 mode). one day one of the operators noted that the
ending 407 settings were different and they stopped all processing.
They spent the next couple hrs trying to find somebody in the administration
that knew what should be done (while all other processing was
suspended). Finally the decision was made to rerun the application (and
see if the same results came out) ... and then resumed normal
processing.

Peter Flass

unread,
Feb 10, 2011, 8:44:33 AM2/10/11
to
On 2/9/2011 11:33 PM, hanc...@bbs.cpcn.com wrote:
> On Feb 7, 8:26 am, Peter Flass<Peter_Fl...@Yahoo.com> wrote:
>
>> From my limited perspective, I'm not sure that it was clear at the time
>> how big the conversion effort would be. This (again AFAIK) was the
>> first time a large number of users had to convert such a relatively
>> large base of software between such incompatible architectures. Having
>> done it once no one wants to go thru it again which is one reason there
>> are such large numbers of IBM mainframes, and why the awful Intel
>> architecture continues to lay its dead hand on PC development.
>
> That is true. The S/360 history explains that initially emulation was
> seen only as a software program, but later that was found to be too
> slow, and it was lucky that changes in the microcode could support
> emulation.
>
> The System/360 architecture will undoubtedly reach it's 50th birthday
> in April 2014 which quite a large amount of CPU horsepower still in
> service. I'm not sure how much new development is underway for Z
> series today, but there is a huge amount of major enterprise-wide
> legacy systems that aren't going anywhere just yet.

50! It seems like such a short time and a long time at once.

>
>
>>> The biggest issue for me was that the 1311s (14xx Disks) were FBA with
>>> easy to work with 100 character sectors and the sectors were addressable
>>> with sequential addresses. Writing your own disk access was trivially
>>> easy. The follow on 2311, 2314, the drums, the noodle pickers were all
>>> different. A huge compatibility mess which was 100% unnecessary.
>>
>> That's an interesting observation. Does anyone know why IBM decided to
>> go with CKD diska rather than FBA for the 360? In retrospect it turns
>> out to have been a terrible decision.
>
> I don't understand why it was a terrible decision. Isn't CKD more
> efficient since the record stored on the disk matched the needs of the
> application?
>
> From the perspective of a COBOL programmer, the particular model of
> disk was irrelevant. We had ISAM and later VSAM for random access
> files.

All the disks now are FBA under the covers, and the control units
emulate CKD on top of it.  Huge amounts of programmer time were spent computing
the best block sizes to fit on a track and at the same time fit in core.
DOS programs required the block size in the DTF, so it wasn't easy to
change block sizes for new devices - you had to recompile all your COBOL
stuff. Remember "BLOCK CONTAINS nnn RECORDS"? ISTR moving from 2311 to
2314 would prompt reblocking, certainly moving to 3330.
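
For a feel of the arithmetic involved, a back-of-the-envelope C sketch
using the commonly quoted 3330 track-capacity formula for keyless
records -- the constants are from memory of the 3330 reference card, so
treat them as illustrative:

#include <stdio.h>

/* keyless records per 3330 track: 13165 / (135 + data_length),
   truncated -- per-record overhead eats the rest of the track */
static int recs_per_3330_track(int dl)
{
    return 13165 / (135 + dl);
}

int main(void)
{
    static const int bl[] = { 80, 800, 3156, 6447, 13030 };
    for (int i = 0; i < 5; i++) {
        int n = recs_per_3330_track(bl[i]);
        printf("block %5d: %2d per track, %6d data bytes/track\n",
               bl[i], n, n * bl[i]);
    }
    return 0;
}

Unblocked 80-byte records waste most of the track; the half-track
(6447) and full-track (13030) figures show why everyone kept
recomputing block sizes for each new device.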



Anne & Lynn Wheeler

unread,
Feb 10, 2011, 9:25:20 AM2/10/11
to

hanc...@bbs.cpcn.com writes:
> I don't understand why it was a terrible decision. Isn't CKD more
> efficient since the record stored on the disk matched the needs of the
> application?
>
> From the perspective of a COBOL programmer, the particular model of
> disk was irrelevant. We had ISAM and later VSAM for random access
> files.

re:
http://www.garlic.com/~lynn/2011b.html#63 If IBM Hadn't Bet the Company

specialized formatting with keys and data ... and then search operations
that would scan for specific keys &/or data ... minimizing filesystem
structure in real memory. multi-track search could scan every track at
the same arm location (cylinder) ... but to continue would have to be
restarted by software. major os/360 disk structure was VTOC (volume/disk
table of contents) ... which used multi-track search. The other was
library files, PDS (partitioned data set) that was used for most
executable software (as well as other items) ... which had PDS
"directory" ... that was also searched with multi-track search.

As part of scarce real-storage the disk i/o search argument was in real
processor storage ... and was refetched by search command for every
key/data encountered for the compare operation (requiring end-to-end
access from disk to processor storage).

The all-time winner in complication here was ISAM channel programs
which could do a search, read the data ... reposition the arm ... and
use the previously read data for later search argument (potentially
repeated several times, all in single channel program w/o interrupting
the processor)

As I mentioned the trade-off started to shift by at least the mid-70s
... with real storage resources significantly increasing ... while I/O &
disk thruput improvements were starting to lag significantly (becoming
the major bottleneck; other infrastructures were increasingly leveraging
real storage to compensate for the growing disk bottleneck).

In the late 70s, I was brought into datacenter for large national
retailer that had large number of loosely-coupled processors running
latest os/360 in a "shared disk" environment. They were having a really
horrible throughput problem and several experts from the corporation had
already been brought in to examine the problem.

I was brought into a classroom that had several tables covered with
foot-high stacks of printed output ... all performance activity reports
for all the various systems (including each system individual disk i/o
counts at several minute interval). After 30-40 minutes of fanning all
the reports ... I asked about one specific disk ... that under peak-load
... the aggregate sum of the disk i/o counts across all systems seemed
to peg at six or seven (very low correlation of peak versus non-peak for
any other item) ... it was just a rough measure since I was doing the
aggregation across all systems and correlation in my head.

Turns out the disk contained the shared application program (PDS)
library for all the regions.  It had a large number of applications and the
size of the PDS directory was three (3330) cylinders. Every program load
(across all systems) first required a multi-track search of the PDS
directory ... on avg was a 1.5 cylinder search.  A 3330 spins at 3600
RPM ... the first multi-track search i/o took a full 19 revolutions (or
19/60th of a second) with a 2nd multi-track search of 9.5 revolutions
(9.5/60 of a second) ... before getting the actual location of the PDS
member application program ... movement and load maybe 2/60 of a
second. In aggregate, each program load was taking 3 disk I/Os and
approx. 30.5/60 of a second ... or the whole infrastructure across all
processors for all the national regions for all retail stores ... was
limited to loading two applications per second.
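
For anyone wanting to sanity-check that arithmetic, a trivial C
rendering of the figures above (the revolution counts come straight
from the story; the two-revolution allowance for arm movement and
member load is approximate):

#include <stdio.h>

int main(void)
{
    double rev_per_sec = 3600.0 / 60.0;   /* 3330 at 3600 RPM */
    double search_revs = 19.0 + 9.5;      /* 1.5-cylinder avg search */
    double fetch_revs  = 2.0;             /* seek + member load */
    double secs = (search_revs + fetch_revs) / rev_per_sec;

    printf("~%.2f sec per program load -> ~%.1f loads/sec\n",
           secs, 1.0 / secs);
    return 0;
}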

Now normally disk infrastructure has multiple disks sharing a common
controller and a common channel. Because of the dependency to repeatedly
reload the search argument from processor memory ... a multi-track
search locks up the controller and channel for the duration of the
operation (not available for any other operations) ... severely
degrading all other operations.

So the solution ... was to replicate the program library ... one per
processor (instead of a shared/common one for all processors) ...
requiring lots of manual effort to keep them all in sync.  The
replicated program library
was also split ... with more application logic to figure out which of
the libraries contained specific application (with dedicated set of
program libraries per system).

A similar, but different story about pervasive use of multi-track search
by os/360 (and its descendents): san jose research for a period had a
370/168 running MVS (replacing a 370/195 that had run MVT) in shared disk
configuration with a 370/158 running VM.  Even tho VM used CKD disks, its
I/O paradigm had always been FBA (and was trivial to support real FBA
disks). SJR had rule that while the disks & controller were physically
shared, they were logically partitioned so I/O activity from the two
different systems wouldn't interfere.

One day an operator accidentally mounted an MVS 3330 pack on a VM
"string" disk controller. Within 10 minutes, operations were getting
irate calls from VM users about severe degraded performance. Normal
multi-track search by MVS to its pack ... was causing extended lock-out
of VM access to other disks on the same string/controller). MVS
operations refused to interrupt the application and move the 3330 to an
MVS string/controller.

A few individuals then took a pack for a VS1 system that had been highly
optimized for operation under VM ... got the pack mounted on a MVS
string/controller and brought up VS1 under VM (on the 370/158) ... and
started running their own multi-track searches. This managed to nearly
bring the MVS system to a halt (drastically reducing MVS activity
involving the 3330 on the VM string ... and significantly improve
response for the VM users that were accessing data on that string). MVS
operations decided that they would immediately move the MVS 3330
(instead of waiting to off-shift) ... if the VM people would halt the
VS1 activity.

One of the jokes is that a large factor contributing to horrible TSO
response under MVS (TSO users don't normally even realize how horrible
it is ... unless they have seen VM response for comparison) ... isn't
just the scheduler and other MVS operational characteristics ... but
also the enormous delays imposed by multi-track search paradigm.

misc. past posts discussing CKD, multi-track search and FBA
http://www.garlic.com/~lynn/submain.html#dasd

Anne & Lynn Wheeler

unread,
Feb 10, 2011, 9:55:53 AM2/10/11
to
Peter Flass <Peter...@Yahoo.com> writes:
> I'm not sure of that. From IBM's POV that's more resources they have
> to devote to software that probably has a small profit margin.
>
> From the user's POV, obviously a low-end customer couldn't afford to
> run an OS that eats 75% or more of his system. I believe these
> systems could have run PCP, but that was single-tasking and much less
> capable than DOS.
>
> On the other hand I've long been disappointed (since the 70's) that
> IBM didn't make more of a move to unify the two systems. I understand
> from the stuff Lynn posts that probably no one could make a business
> case for it, but for users more compatibility would have been a good
> thing. It defeats the benefits of a unified architecture when you
> have to at least recompile all your programs and recode all your JCL
> when moving from DOS to OS.

The POK favorite son operating system repeatedly tried ... customers
kept getting in the way.

the software has a small profit margin (in the 60s, it used to be free)
... however, user being able to run an application program was the
justification for having the hardware. Amdahl gave a talk in the early
70s at MIT about founding his clone processor company. He was asked how
he convinced the venture/money people to fund the company. He said that
customers had already invested several hundred billion in 360
application software development ... which even if IBM were to totally
walk away from 360 (I claim could be a veiled reference to Future
System), that was enough to keep him in business through the end of the
century.

DOS to OS conversion was an incompatibility inhibitor ... but not as
great as a totally different system.  Part of the issue was perception
... part of the issue was the degree of effort to move from DOS to OS
(compared to radically different systems and hardware) ... aka it
possibly reduced the benefit of unified architecture ... but didn't
totally defeat it.

Low-end (actually any) customers could run an OS that eats 75% or more
of their system ... if the aggregate cost of the system is less than the
cost of not running the application (net benefit) ... and they didn't
perceive a viable alternative. In early days it was completely different
world ... cost of application development could dominate all other costs
... and opportunity costs ... having the automation versus doing it
manually ... covered a lot of sins ... especially with high value
operations like financial operations (financial institutions and/or
financial operations of large corporations).

Compatible architecture provided the perception that the several hundred
billion in (customer) software application development didn't have to
be scrapped and started over every generation.  This was the testimony in
gov. litigation by RCA(?) that all the vendors had realized by the late
50s the requirement for a single compatible line ... and only IBM actually
pulled it off (corollary is that with the perception of being the only
one that pulled it off, a lot of other things could be done wrong
... and still be able to dominate the market).
http://www.garlic.com/~lynn/2011b.html#57 If IBM Hadn't Bet the Company

POK favorite son operating system ... after FS failure and mad rush to
get product (hardware & software) back into the 370 line ... managed to
convince corporate to kill off vm370 (because POK needed all the vm370
developers in order to meet the MVS/XA delivery date). Endicott
eventually managed to save the vm370 product ... but had to reconstitute
a development group from scratch. misc. past posts mentioning FS
http://www.garlic.com/~lynn/submain.html#futuresys

some of the 70s confusion was that FS was planned to completely replace
all 370 products (hardware and operating system) ... the aftermath of FS
demise left the company in disarray and scrambling to recover (there
have been comments that if something of the magnitude of FS had been
attempted by any other vendor ... the resulting massive failure would
have resulted in the vendor no longer being in business)

des...@verizon.net

unread,
Feb 10, 2011, 11:10:22 AM2/10/11
to
hanc...@bbs.cpcn.com writes:

* IBM 7090 - 1959
* IBM 1401 - 1959
* IBM 1410 - 1960
* IBM 1440 - 1962
* IBM 7010 - 1962
* IBM 1460 - 1963

Never worked in a shop big enough for a 7090.

The only 1410 I ever worked on was a 1410 being emulated on
S/360 40.

des...@verizon.net

unread,
Feb 10, 2011, 11:27:52 AM2/10/11
to
hanc...@bbs.cpcn.com writes:

Yes, in theory. Theory being the key word.

On early machines it was a quandary. Do you pick the largest block
size to utilize every byte you can fit on the disk? If you do,
every program you write has to account for buffers large enough
to read those huge tracks. On a 32 or 64K model 30, that could
be a big problem.

Another issue is addressing those tracks. Because of CKD you
are exposed to bizarre BIN/CYL/HEAD/RECORD addressing. Different
arithmetic on every device.
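
A made-up illustration of the point: even "sequential" access means
deriving a device-specific address by hand.  The geometry constants
below are invented -- the real numbers differed for every device type,
which was exactly the problem:

#include <stdio.h>

struct geometry { int recs_per_track, tracks_per_cyl, cyls_per_bin; };

/* turn a linear block number into a CKD-style address */
static void to_ckd(long block, struct geometry g,
                   int *bin, int *cyl, int *head, int *rec)
{
    *rec  = (int)(block % g.recs_per_track); block /= g.recs_per_track;
    *head = (int)(block % g.tracks_per_cyl); block /= g.tracks_per_cyl;
    *cyl  = (int)(block % g.cyls_per_bin);
    *bin  = (int)(block / g.cyls_per_bin);
}

int main(void)
{
    struct geometry g = { 8, 10, 200 };   /* hypothetical device */
    int bin, cyl, head, rec;

    to_ckd(12345L, g, &bin, &cyl, &head, &rec);
    printf("BIN %d CYL %d HEAD %d REC %d\n", bin, cyl, head, rec);
    return 0;
}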

Michael Wojcik

unread,
Feb 10, 2011, 12:44:28 PM2/10/11
to
hanc...@bbs.cpcn.com wrote:
> I'm not sure how much new development is underway for Z
> series today, but there is a huge amount of major enterprise-wide
> legacy systems that aren't going anywhere just yet.

It depends on where you draw the line between maintenance and "new
development". There seems to be plenty of activity enhancing and
extending existing zOS application suites, whether it's adding new
application functions or extracting more value through things like
wrapping business logic as web services so it's easily accessible to
non-mainframe clients.

Since a lot of "new" applications make use of those legacy systems,
it's hard to say what's really "greenfield" development.

--
Michael Wojcik
Micro Focus
Rhetoric & Writing, Michigan State University

Michael Wojcik

unread,
Feb 10, 2011, 12:47:50 PM2/10/11
to

Well, as a semi-pro rhetorician, of course I think everyone could use
more rhetoric theory in their lives. But that's just me.

(BTW, "the hoi polloi" is redundant - "hoi" is the nominative
masculine plural definite article in classical Greek. Really it's
omicron-iota with a rough breathing, but we typically transcribe that
breathing as an initial "h".)

Michael Wojcik

unread,
Feb 10, 2011, 1:44:21 PM2/10/11
to
Peter Flass wrote:
>
> That's one of the things I like about PL/I - case is not significant.
> You can use any case that makes sense, but "aVeryLongNameinCamelCase"
> and "aVeryLongNameInCamelCase" will be the same identifier.

Java is case-sensitive partly because its character set is Unicode
(BMP, characters U+0000 through U+FFFF), and "case" doesn't apply to
most of those. If you start creating special rules for characters that
do have other-case equivalents, things start off arbitrary and quickly
get very complicated, confusing, and expensive to implement.

But of course Java also inherited a lot of its syntax from the C
family, which is case-sensitive.[1] And there was perhaps a bit of a
trend in that era (around the 80s, more or less) away from
case-insensitivity. For example, Pascal is case-insensitive, but
Modula-2 is case-sensitive.

Anyway, with modern tools, it's typically trivial to find and correct
this sort of error. Most Java developers seem to use IDEs that hold
knowledge about the application and can identify typos on the fly; and
even those of us holdouts who scorn IDEs have toolsets (editors like
vim and emacs, text processing tools like grep, etc) that do most of
the work.


[1] Mostly; the C standard does not guarantee that external
identifiers can be distinguished by case, for example, because it
leaves that up to the implementation.

Anne & Lynn Wheeler

unread,
Feb 10, 2011, 2:56:41 PM2/10/11
to

re:
http://www.garlic.com/~lynn/2011b.html#77

company did have a large variety of other processors (besides 360/370)
... 1800, s/3, s/32, s/34, s/36, s/38, series/1, system/7, 8100,
etc.  In addition there was a large variety of embedded processors in
controllers, devices, and the native engines for the low-end 370.  old
email (joke) about MIT lisp machine project asking for 801/risc
processor and being offerred 8100 instead:
http://www.garlic.com/~lynn/2003e.html#email790711

370 compatibility was already giving up a lot in performance; low&mid
range emulators typically ran a ratio of 10:1 native instructions to
370; the 300kip 370/145 needed a 3mip native engine, the 80kip 370/115
needed nearly a 1mip native engine.

the 135/145 follow-on (370 138/148) had additional microcode storage
... plan was to 1) add additional (virtual machine) 370 privilege
instructions to be executed directly to virtual machine rules (rather
than interrupt into kernel and be simulated ... aka "VMA" originally
done for 370/158) and 2) "ECPS" which placed part of the cp kernel
directly in microcode. There was 6000 bytes of extra microcode for
"ECPS" and 370->native translated nearly byte-for-byte. So the first
effort was instrument the vm370 kernel and identify the highest used
kernel instruction pathlengths ... and cut-off when 6000 bytes was
reached (which accounted for just under 80% of kernel
overhead).  The process involved inventing a new instruction and adding
it to the vm370 kernel in front of the instruction sequence it was
replacing (with a pointer to the next following instruction where it
would resume).  At startup, there was a test whether the appropriate
microcode was available ... and if not, all the "ECPS" instructions
were overlaid with "no-ops".  Old post with result of the vm370 kernel
instrumentation that selected the kernel paths that would become ECPS
(part of work I did for Endicott spring of 1975):
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
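
As a loose software analogy of that arrangement (emphatically not the
actual microcode mechanism -- all names here are invented): each
fast-path site effectively carries two implementations, and a startup
probe decides which one stays live, the other being "no-op'd" away.

#include <stdio.h>

typedef int (*path_fn)(int);

static int assist_path(int x)   { return x * 2; }  /* "microcode" */
static int original_path(int x) { return x + x; }  /* same result */

static path_fn dispatch = original_path;

static int microcode_present(void)
{
    return 0;   /* startup probe; assume the assist is absent here */
}

int main(void)
{
    if (microcode_present())
        dispatch = assist_path;   /* keep the magic instruction */
    /* else the fallback sequence stays in place, as if the
       special instruction had been overlaid with a no-op */

    printf("%d\n", dispatch(21));
    return 0;
}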

That 80% of the kernel pathlength (ECPS) then ran ten times faster
(implemented in native engine) ... 370/148 at over 500kips ... ECPS then
ran over 5mips (directly on the native engine).
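
A quick back-of-the-envelope on what that buys overall (my arithmetic,
not from the original post): with a fraction f = 0.8 of the kernel
pathlength sped up by a factor s = 10, the kernel as a whole runs
1/((1-f) + f/s) = 1/(0.2 + 0.08), i.e. roughly 3.6 times faster
(Amdahl's law).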

Now, somewhat as a result of the complexity of Future System ... I've
claimed that the work on 801/risc went to the opposite extreme of
extremely simple software.  There was an advanced technology conference
held in POK in the mid-70s ... where we presented a 16-way 370
multiprocessor and the 801 group presented risc, CP.r and PL.8.  At the
end of the 70s, an 801/risc effort was kicked off to replace the large
variety of
internal microprocessors with 801/risc (eliminating huge duplication
in chip designs along with native engine software & application
development). Low&mid range 370s would all converge to 801/risc as
native microprocessor (the 4341 follow-on, the 4381, was supposed to be
801 native; the s/38 follow-on, the as/400, was supposed to be 801
native; the OPD displaywriter follow-on was to be 801 native; a large
number of
microprocessors for controllers and other embedded devices would all
be 801). For various reasons, these efforts all floundered and were
aborted.

In spring 1982, I held an internal corporate advanced technology
conference (the first since the earlier one held in POK; comments were
that in the wake of the Future System demise ... most advanced
technology efforts were all eventually scavenged to resume 370 activity
and push 370 hardware and software out the door as quickly as possible).
http://www.garlic.com/~lynn/96.html#4a

I wanted to do a project that rewrote vm/cms with lots of new
function ... implemented in a higher level language ... to make it
easy to port to different machine architectures.  In theory, this
would allow also being able to retarget a 370 kernel to some native
engine (possibly 801) while still providing both native virtual
machines and at the same time 370 virtual machines (best of ECPS
combined with being able to port to other processors) ... slight
analogy might be Apple with CMU MACH and power/pc. misc. old email
mentioning 801, risc, iliad, romp, rios, power, power/pc, etc
http://www.garlic.com/~lynn/lhwemail.html#801

note there had been big scavenging of advanced technology towards
the end of FS (attempting to save the effort) ... reference in this
old email
http://www.garlic.com/~lynn/2011b.html#email800117
in this (linkedin) Future System post
http://www.garlic.com/~lynn/2011b.html#72

but then (after demise of FS) the mad rush to try and get stuff back
into the 370 product pipelines ... they sucked nearly all remaining
advanced technology resources into near term product production.

Other interests got involved (in SCP redo) and eventually the effort
was redirected to a stripped down kernel that could be common across
all the 370 operating systems. The prototype/pilot was to take the
stripped down tss/370 kernel ... which had been done as a special
effort for AT&T ... they wanted to scaffold a unix user interface on top
of the stripped down tss/370 kernel.  The justification was that with four
370 operating systems ... there was enormous duplication of effort
required by all the device product houses ... having four times the
cost for device drivers and RAS support in the four different
operating systems (DOS, VS1, VM, and MVS). This became strategic
corporate activity (common low-level code for all four operating
systems) ... huge numbers of people assigned to it ... and eventually
had its own FS-demise moment ... collapsing under its own
weight. recent post on the subject
http://www.garlic.com/~lynn/2011.html#20
older reference about some of the TSS/370 analysis (for above)
http://www.garlic.com/~lynn/2001m.html#53

with all that going on ... by the mid-80s ... there were starting to
be single chip 370 implementations ... as well as various other/faster
801/risc chips ... so, going in a slightly different direction, there
were large clusters of processors (in rack mount configurations):
http://www.garlic.com/~lynn/2004m.html#17
also mentioned in this recent post
http://www.garlic.com/~lynn/2011b.html#48

and this collection of old email about the NSFNET backbone ... I had to
find a fill-in for a presentation to the head of NSF on HSDT ... because I
needed to do a week in YKT on the cluster processor stuff
http://lnkd.in/JRVnGk

other old NSFNET backbone related email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

and misc. old HSDT related email
http://www.garlic.com/~lynn/lhwemail.html#hsdt

the above cluster processor effort somewhat stalled ... but resumed
with medusa (cluster in a rack) ... some old email
http://www.garlic.com/~lynn/lhwemail.html#medusa

old reference in this old post about the jan92 meeting in Ellison's
conference room on cluster scaleup
http://www.garlic.com/~lynn/95.html#13

as part of ha/cmp product some old posts
http://www.garlic.com/~lynn/subtopic.html#hacmp

for other drift ... within a month of the meeting in Ellison's
conference room, the effort had been transferred, announced as a
supercomputer, and we were told we couldn't work on anything with more
than four processors. This contributed to decision to leave not long
afterwards.

now a year or so later, two of the other people that were in that jan92
meeting ... had also left (oracle) and showed up at a small client/server
startup responsible for something called the "commerce server" ... the
startup had also invented this technology they called "SSL". We were
brought in to consult because they wanted to do payment transaction on
the server; the result is now frequently called "electronic commerce".

Walter Bushell

unread,
Feb 10, 2011, 3:18:05 PM2/10/11
to
In article <ij0ph1$uvf$1...@news.eternal-september.org>,
Peter Flass <Peter...@Yahoo.com> wrote:
<snip>

>
> On the other hand I've long been disappointed (since the 70's) that IBM
> didn't make more of a move to unify the two systems. I understand from
> the stuff Lynn posts that probably no one could make a business case for
> it, but for users more compatibility would have been a good thing. It
> defeats the benefits of a unified architecture when you have to at least
> recompile all your programs and recode all your JCL when moving from DOS
> to OS.

JCL was referred to as "Jesus Christ the Latter".

--
The Chinese pretend their goods are good and we pretend our money
is good, or is it the reverse?

Joe Morris

unread,
Feb 10, 2011, 8:28:10 PM2/10/11
to
"Anne & Lynn Wheeler" <ly...@garlic.com> wrote:

> univ. had a 407 plug-board application which went thru some stages and
> eventually was running as a 360 cobol program ... still simulating the 407
> plug-board ... including printing out the 407 settings at the end of the
> run. this was run every day for the administration (on os/360 on a 360/67
> running in 360/65 mode). one day one of the operators noted that the
> ending 407 settings were different and they stopped all processing.
> They spent the next couple hrs trying to find somebody in the administration
> that knew what should be done (while all other processing was
> suspended). Finally the decision was made to rerun the application (and
> see if the same results came out) ... and then normal processing
> resumed.

Sigh...the first assignment I was given when I showed up for work at what
would for nineteen years be my POE was to (a) learn how to wire a 407 board,
and (b) build one that wouldn't tear up the printwheels when a student tried
to list a 7040 binary deck. I've still got the wiring charts in a box in my
basement. At least when I wrote the P0 utility (noted in a recent posting)
I retained the function but *not* the 407 architecture. (In any case, how
could I have emulated the 407's dynamic timer?)

And on the issue of just rerunning the job...that's not *always*
unreasonable. Recall the bug that Melinda Varian discovered in the VM
dispatch code for a multiprocessor system, where the system forgot to load
the FPRs on, IIRC, a fast redispatch.

Joe


Anne & Lynn Wheeler

unread,
Feb 10, 2011, 9:40:37 PM2/10/11
to

"Joe Morris" <j.c.m...@verizon.net> writes:
> And on the issue of just rerunning the job...that's not *always*
> unreasonable. Recall the bug that Melinda Varian discovered in the VM
> dispatch code for a multiprocessor system, where the system forgot to load
> the FPRs on, IIRC, a fast redispatch.

I first did "fastpath" (including fast redispatch) in the 60s at the
univ, for (uniprocessor) cp67. it is included in the pathlength
work mentioned in this 1968 presentation at share
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

and was picked up and shipped in the standard cp67 product.

there was lots of simplification in the morph from cp67 to vm370 ... and
all the fastpath code was dropped. Even tho i still hadn't done the
port of the cambridge system changes from cp67 to vm370 ... referenced
here in this old email
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

i did give the development group the "fastpath" changes that (i believe)
were shipped in vm370 release 1, plc9.

the justification for not restoring floating point registers in "fast
redispatch" (interrupt into the kernel and then resuming execution of the
same virtual machine) was that the cp kernel never used the floating point
registers (so they should have been unchanged during kernel execution).

multiprocessor support required getting smarter about "fast redispatch"
(checking whether the same virtual machine was being redispatched on the
same processor, with the contents of the real floating point registers
still current for that virtual machine).
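
a minimal sketch of the logic, in C -- hypothetical names, not actual
CP internals: on a uniprocessor the kernel never touches floating
point, so resuming the same virtual machine can safely skip the FPR
reload; multiprocessor support adds the ownership check described
above:

#define NUM_FPRS 4   /* S/370 had four 64-bit floating point registers */

struct vm {            /* one virtual machine's saved state */
    double fprs[NUM_FPRS];
    int    id;
};

struct cpu {           /* per-processor bookkeeping */
    int fpr_owner;     /* id of the vm whose values are in the real FPRs */
};

static void load_fprs(const double *f) { (void)f; /* LD 0,2,4,6 from save area */ }

/* resume a virtual machine after an interrupt into the kernel */
void redispatch(struct cpu *cpu, struct vm *next, struct vm *interrupted)
{
    if (next == interrupted && cpu->fpr_owner == next->id) {
        /* fast redispatch: same vm, same processor; the kernel never
         * used floating point, so the real FPRs are still current --
         * skip the reload. */
    } else {
        /* full path: the real FPRs may hold some other vm's values
         * (the multiprocessor bug: redispatching without this check
         * could leave stale FPR contents in place). */
        load_fprs(next->fprs);
        cpu->fpr_owner = next->id;
    }
    /* ... restore PSW and general registers, then LPSW into the vm ... */
}

int main(void) {
    struct cpu cpu0 = { .fpr_owner = -1 };
    struct vm  a    = { .id = 1 };
    redispatch(&cpu0, &a, &a);   /* first time: full path, FPRs loaded */
    redispatch(&cpu0, &a, &a);   /* now: fast path, reload skipped */
    return 0;
}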

various fiddling in dispatch can result in all sorts of problems ...
here is a recent (linkedin) post about fix "vm13025" ...
http://www.garlic.com/~lynn/2011b.html#61

including in above, old email with melinda:
http://www.garlic.com/~lynn/2011b.html#email860217
http://www.garlic.com/~lynn/2011b.html#email860217b
