00000047540adef...@LISTSERV.UA.EDU (Bill Johnson) writes:
> At GM, circa 1983-84, we had plants all over the country that worked
> off of a mainframe in Warren, Ohio. (Packard Electric) After the
> transition to EDS, those same plants worked off the mainframe in
> Charlotte, NC. (They moved multiple GM divisions there) Early cloud
> processing, outsourced to EDS. Now, any time I perform a banking
> transaction, it gets processed wherever the JP Morgan mainframe is
> located. If I access my photos, they are retrieved from Apple or
> Amazon at one of their DC’s. The only difference is in 1984 you used a
> stationary PC or Terminal. Now you use a smaller PC called a laptop or
> phone.
took two semester hr intro to fortran/computers and then within a year,
univ. hired me fulltime to be responsible for ibm mainframe
systems. Then before I graduate, I'm hired fulltime into a small group
in the Boeing CFO office to help with the formation of Boeing Computer
Services (consolidate all dataprocessing into independent business unit
to better monetize the investment, including offering services to
non-Boeing entities). I thought the renton datacenter was possibly the
largest in the world, a couple hundred million in IBM 360s ... and
360/65s arriving faster than they could be installed, boxes constantly
staged in the hallways around the machine room (747#3 was flying the
skies of seattle getting FAA flt certification). There was a disaster
plan to replicate the renton datacenter up at the new 747 plant in
Everett (aka another couple hundred million; scenario where Mt. Rainier
heats up and the resulting mud slide takes out the renton datacenter).
When I graduate, I join the
IBM Cambridge Science Center (rather than staying in the Boeing CFO
office).
In the late 60s and early 70s, there had been spin-offs of the science
center, offering online commercial computer services with (virtual
machine) CP67. During this period there was a great deal of effort
expanding to 7x24 service ... part of it was support for dark-room,
unattended offshift online availability. Another part was special
terminal CCWs. 360s
in this period were rented/leased with charges based on the system
"meter" which ran whenever the CPU(s) and any channels were running. The
special terminal CCWs were to allow the system meter to stop (when there
was no activity) ... but "instantly on" whenever characters started
arriving. Also, all processors and channels had to be completely idle
for 400ms for the "system meter" to come to a stop. The cloud
megadatacenter analogy is to stop using power/cooling when idle, but be
instantly on when needed; a typical cloud megadatacenter will have over
500,000 server blades, each blade with something like ten times the
processor power of a max. decked-out IBM mainframe ... with significant
use fluctuation between low & peak demand.
Trivia: at least into the late 70s, well after IBM mainframes changed to
being sold instead of leased/rented, MVS still had a timer task that
woke up every 400ms (making sure that the system meter would never come
to a stop).
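The meter behavior above can be sketched as a toy simulation. The 400ms
idle window comes from the text; everything else below (function name,
time horizon) is purely illustrative, nothing like actual MVS or 360
channel code:

```python
# Toy model of the "system meter": it stops only after all CPUs/channels
# have been completely idle for a full 400ms window. A timer task that
# does a tiny bit of work every 400ms keeps the meter running forever.
# Times are in milliseconds; purely illustrative, not actual MVS code.

METER_IDLE_THRESHOLD_MS = 400

def meter_stops(activity_times_ms, horizon_ms):
    """True if an idle gap longer than the threshold occurs before horizon."""
    last = 0
    for t in sorted(activity_times_ms):
        if t - last > METER_IDLE_THRESHOLD_MS:  # full idle window elapsed
            return True
        last = t
    return horizon_ms - last > METER_IDLE_THRESHOLD_MS

# One burst of work at t=0, then silence: the meter stops.
print(meter_stops([0], 2000))                        # True

# A timer task waking every 400ms: no gap ever exceeds the threshold.
print(meter_stops(list(range(0, 2000, 400)), 2000))  # False
```

The same shape explains the cloud analogy: power management wants the
equipment off during idle gaps, while "instantly on" means any arriving
work immediately counts as activity again.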
At least two of the 60s science center spinoffs had quickly moved up the
value stream into offering online services to the financial industry
... and had to demonstrate significant security ... making sure that
competitors using the same systems couldn't eavesdrop on or compromise each
other.
there were "portable" 2741 terminals from the 60s ... but they came in
two 40lb suitcases. less than a decade later, in 1977 I get a CDI
miniterm ... a portable that was only a few lbs.
It wasn't just the 60s outside online commercial service bureaus; the
science center also had to demonstrate significant security. The science center
had ported APL\360 to CP67/CMS for CMS\APL (having to rewrite
significant portions to change from 16kbyte real storage swapped
workspaces to multimegabyte virtual memory demand paged workspaces
... and adding API for system services, like file i/o ... enabling
real-world applications). CSC had enabled online access for staff,
students and professors from various boston/cambridge area univ ... but
then Armonk business planners started using CMS\APL remotely also
... and loaded the most valuable IBM business information on the system
(and we had to demonstrate tight security, especially with the dialup
online non-IBM users).
We must have done a good job ... a couple years later (after joining the
science center), IBM got a new CSO (he had come from gov. service, at
one time head of the presidential detail) and I was asked to run around
with him talking about computer security (while a little bit of physical
security rubbed off on me).
then there are the gov. agencies that I didn't hear about until later
(this ref gone 404, but lives on at wayback machine)
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml
GM trivia .... 1990, I was working with RS/6000 workstations and got
asked to participate in the GM "C4" taskforce ... that was looking at
how to better compete with foreign automakers ... they were planning on
heavily leveraging IT ... and invited several vendors to send
representatives. They described how GM was on a 7-8yr product cycle
with two programs running in parallel offset 4yrs (so it would look like
something new was coming out more often). Foreign competition had cut
their product cycle to 4yrs in the first half of the 80s and were in the
process of cutting it in half again (18-24months to deliver brand new
product) ... much more agile and able to respond to changing buyer
habits and/or technology. Offline I would ask the POK mainframe people
how they could contribute since they had similarly long development
cycles.
Poster child/example was the 'vet ... it had tight internal tolerances under
the "skin" ... and from initial design to rolling off the line, part
makers would have changed their product lines ... and there were cases
where they had to redesign in order to get the existing parts to fit.
--
virtualization experience starting Jan1968, online at home since Mar1970