The Gazette also gave a pointer to an unedited version of an interview Cray
did last year with the Smithsonian (along with their edited version): the
unedited version is at URL "http://innovate.si.edu/history/cray/craytoc.htm".
--
John R. Grout Center for Supercomputing R & D j-g...@uiuc.edu
Coordinated Science Laboratory University of Illinois at Urbana-Champaign
This article has already shifted into an archive. Getting to it
takes some time. The shortcut is
//www.usa.net/gtonline/archive/96-10-06/top010.html
>
>The Gazette also gave a pointer to an unedited version of an interview
>Cray did last year with the Smithsonian (along with their edited
version):
>the unedited version is at URL
>"http://innovate.si.edu/history/cray/craytoc.htm".
Very nice article, despite some transcription glitches. Note that the
URL ends in htm, not html.
So what's the future?
No, this is not what I'd have wished, and I've *never* said/thought that Seymour
should step down to let other people get ahead. <a bogus reason>
What I've said many times, dating from the period when it became clear
that CCC was unlikely to make it, was:
"This is very painful. This is like watching your favorite
quarterback, who won the Superbowl *many* times, including last year,
but the world is not 1976, and his knees are gone, and those
300-pound defensive tackles are fierce, and while he keeps getting
up, it's agonizing to watch, and you really wish he could have
quit after last year, on a high."
--
-john mashey DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: ma...@sgi.com
DDD: 415-933-3090 FAX: 415-967-8496
USPS: Silicon Graphics/Cray Research 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311
Seymour Cray is legendary because he pushed the envelope consistently and
still produced working machines. My impression is that the Cray-3 worked
and that the Cray 4 would have as well. They just cost more than the market
was willing to pay. The Cray 2 is still considered a win, is it not? (As
usual the details matter a lot and I don't know many details about the Cray
3 and Cray 4 machines.)
To twist the analogy a bit, it's like your favorite quarterback going out
and having a pretty good game, scoring lots of touchdowns, and coming off the
field to find out he lost because the rules have been changed so field
goals are now worth ten points and touchdowns only four. (I expect the
scoring of American football is pretty familiar to most of the world, but
just in case, a touchdown is worth 6 points and often 7, while a field goal
is worth 3.) One expects a quarterback to react to rule changes, but
you can't argue that he didn't play a good game of football.
Was Frank Lloyd Wright a great architect? Very few houses look like his
designs. They're relatively impractical for everyday living and were
generally only purchased by the very rich. Some of his later large designs
were never built for cost reasons.
-Z-
|> Was Frank Lloyd Wright a great architect? Very few houses look like his
|> designs. They're relatively impractical for everyday living and were
|> generally only purchased by the very rich. Some of his later large designs
|> were never built for cost reasons.
Well, strange that you should mention this, but as noted in the past,
sometimes computer architecture and building architecture have parallels:
1) Opinion: FLW was a great architect.
I saw FallingWater when I was a child and
was enthralled (even though that particular house is a maintenance nightmare).
2) While FLW is often known for his spectacular houses for the very rich,
he designed a lot of lower-cost houses (sometimes called "Usonians") that
I thought made good use of space and were pretty livable. Books have been
written about these as well, even though they are nothing like as
spectacular as his famous houses. I do like the flexibility and
openness of the bulk of the space, even though the style of small bedrooms
bothers some people. It is INACCURATE to label him as only building for
the rich... [try Amazon, keyword "Usonian", for pointers to some of the books].
3) Palo Alto, Mountain View, and nearby have large numbers of houses with
lots of glass, open plans, indoor-outdoor integration, etc, which might be
considered California-tract-home versions of FLW Usonians. I had one of these
when living in Palo Alto. People either love them or despise them, but there
are lots of them. They would make no sense outside of mild climates.
(These are "Eichlers"). In Portola Valley, many houses have the organic FLW
quality (i.e., the house looks like it grew there, rather than being built);
Portola Valley Ranch is a large development that I think would have pleased him.
4) Calibration: even with the admitted issues mentioned,
I'm still a fan of FLW; admittedly, I may not be objective, as we live in
a house designed by one of FLW's students.
The 2 succeeded in pushing CRI over the top (which ETA wasn't able to
match) with its hardware and software during a charging era.
I never had a chance to visit NCAR or use the 3 even though I had invitations.
Details of the 4 are on the FAQ for c.s.s./c.p. (dead horse).
My most recent involvement was as an intermediary to get Seymour to
attend a special conference of peers (we still invite Chen, Burton,
Wallach: and Hillis is coming and skipping Pittsburgh (at least the last
part)). His effect on EEs is nothing short of subtle and amazing.
Seymour produced machines with differences which people with personal
computers would probably find hard to believe. The world will probably
never know all the reasons why he did what he did and what inspired him.
There's also probably one very worried 33-year-old, now.
We will probably toast our fallen figures come November.
I'm not overly fond of football analogies. I'm a nerd, and that's what
I got into computers for.....
>Was Frank Lloyd Wright a great architect? Very few houses look like his
>designs. They're relatively impractical for everyday living and were
>generally only purchased by the very rich. Some of his later large designs
>were never built for cost reasons.
This is an architectural digression of a civil sense. My bias: my Dad
was a struggling architect who worked for Neutra. Was FLW great? Sure.
"Cheap" FLWs exist. I know at least three in CA. But FLW was not the
only architect. I.M. Pei did the notorious NCAR building where the
Cray-3 was housed. But I think architecture goes beyond that. Let me
cite two examples:
1) Dick Gabriel, in Patterns of Software (his C++ column, which I've
noted in c.s.s.), cites Christopher Alexander (UCB architect) as an
inspiration. By chance, another friend who died in July of similar injuries
in a similar accident [I am mindful of the cross-post] was also similarly
inspired by Alexander (so I am reading Alexander books at bedtime with a host
of other materials). I think Gabriel's book will be increasingly hard to
find because it's a kind of esoteric book to the average person.
The book sends mixed messages about quality, but I am passing it around among
smart friends at startups and they like it, in that nameless way.
Find Alexander and Gabriel.
2) I am mindful of Don Norman's three popular books:
Things that Make Us Smart
Turn Signals are the Facial Expressions of Automobiles
The Psychology/Design of Everyday Things
(available as a Voyager Co. CD: www.voyagerco.com)
and his examples of unenterable buildings (ridiculous doors which some
architects really like the look of, but which trap visitors).
Cray would never have made buildings like those cited in these examples.
Oh (Spanish Inquisition example), 3) How Buildings Learn by Stewart Brand,
how could I have forgotten, he started our Conference.
Where he recants his own promotion of geodesic domes.
I would bet Stewart would have somewhat gotten along with Seymour.
We will never know now.
A very respectful one nanosecond of silence, please.
In article <zalmanDy...@netcom.com>,
Zalman Stern <zal...@netcom.com> wrote:
>Seymour Cray is legendary because he pushed the envelope consistently and
>still produced working machines. My impression is that the Cray-3 worked
>and that the Cray 4 would have as well.
They did work, and the packaging technology amazed me. I have held in
my hand an entire Cray-4 prototype. No, I don't mean a board - I mean
a brick of ultrathin boards, populated with naked, thinned chips. A
one GHz multiprocessor, complete with main memory, and I held it in
one hand.
Technology like that doesn't happen by accident. Every detail has to
be perfect. Not that Mr. Cray was fussy: he cheerfully burned over a
watt per gate in the CDC 6600.
>They just cost more than the market was willing to pay.
More precisely, the design of the -3 and -4 took several years longer
than expected. In a business where everyone else's technology also
advances, the schedule slips proved financially crucial. It was a pity
that his career didn't go out with a bang. But, the whole point of
pioneering is risk taking. Pioneers enrich us all.
RIP
--
Don D.C.Lindsay http://www.cs.colorado.edu/~lindsay/Home.html
I concur in recommending that book, likewise the Norman ones.
You can tell a lot about an organization's style & approach from its
buildings. I like the Brand book for thoughts about computer architecture as
well, i.e., avoiding fads, building things that are adaptable.
Instructive is the comparison between MIT Building 20 and Pei's MediaLab;
although not quite as strong a comparison, Bell Labs Murray Hill versus
Bell Labs Holmdel comes to mind.
The problems of "art" as architectural aspiration come down
to these:
Art is proudly non-functional and impractical.
Art reveres the new and despises the conventional.
Architectural art sells at a distance.
...
Art begets fashion; fashion means style; style is made of
illusion (granite veneer pretending to be solid; facade
columns pretending to hold something); and illusion is no
friend of function. The fashion game is fun for architects to
play and diverting for the public to watch, but it's deadly for
building users.
--Stewart Brand
>I concur in recommending that book, likewise the Norman ones.
>You can tell a lot about an organization's style & approach from its
>buildings. I like the Brand book for thoughts about computer architecture as
>well, i.e., avoiding fads, building things that are adaptable.
>Instructive is the comparison between MIT Building 20 and Pei's MediaLab;
>although not quite as strong a comparison, Bell Labs Murray Hill versus
>Bell Labs Holmdel comes to mind.
On my last visit to MIT and friends in the Wiesner Bldg.,
I sought out Building 20. B 20 is subtle.
I have yet to visit Murray Hill and Holmdel, but I'll get there.
I did appreciate the TJ Watson building. C-shaped. But it lacked
something. They must have had Seymour in mind all the time.
Reminds me of the General Atomic building (to a small degree) in La Jolla
(except the sloping sides and the completed circle).
SGI can build C shaped buildings, too.
This may well be true, but I have never been sure what to make of the
big purple corporate nipple on that ugly SGI building over at
Shoreline Boulevard in Mountain View.
<b
--
Let us pray:
What a Great System. b...@eng.sun.com
Please Do Not Crash. b...@serpentine.com
^G^IP@P6 http://www.serpentine.com/~bos
One difference is that Cray did not agonize over the "failures".
The Smithsonian interview shows that Cray's passion was in doing new
designs and taking on major risks, and that the eventual job losses at
his former companies were not a heavy problem for him. He even joked
that the goal for his latest company (formed just weeks before the
freeway accident) was for it to survive a bit longer than usual before
going bankrupt. Sitting out the game to avoid some risk would be
boring. Investors and employees with less tolerance for risk and
change may have been hurt, but Cray was having a great time doing
exactly what he had always most enjoyed doing, right up to the end.
> One difference is that Cray did not agonize over the "failures".
> The Smithsonian interview shows that Cray's passion was in doing new
> designs and taking on major risks, and that the eventual job losses at
> his former companies were not a heavy problem for him. He even joked
A few months ago, Cray appeared on a round-table panel with some
other computer industry heavyweights, broadcast on the mbone.
One thing which distinguished him from all the other panelists
was his optimism. Or their pessimism, as he liked to put it.
The other panelists seemed to be pessimistic about many things,
but Cray, having just gone through the painful shutdown of CCC,
still seemed as optimistic as ever.
>In article <zalmanDy...@netcom.com>,
>>Was Frank Lloyd Wright a great architect? Very few houses look like his
>>designs. They're relatively impractical for everyday living and were
>>generally only purchased by the very rich. Some of his later large designs
>>were never built for cost reasons.
>This is an architectural digression of a civil sense. My bias: my Dad
>was a struggling architect who worked for Neutra. Was FLW great? Sure.
>"Cheap" FLWs exist. I know at least three in CA. But FLW was not the
>only architect. I.M. Pei did the notorious NCAR building where the
>Cray-3 was housed. But I think architecture goes beyond that. Let me
>cite two examples:
Off-topic point, but the original "ranch" design popularized in the
60's and 70's was a Wright creation, and a great many of his designs
were specifically done with a lower-income budget in mind. In fact,
he did a series of house designs that could be completed for about
$12,000, specifically to address this matter.
Do I understand that he influenced actual Cray cabinet design? That
wouldn't entirely surprise me.
The football analogy is slightly inspired by the following experience:
When I was at Penn State, there was a superb defensive lineman named Mike
Reid, who was All-America, went to the pros, was All-Pro ~5 years in
a row with the Bengals, then quit (while his knees still worked :-).
This was a little different from Cray, in that Mike's passion *really* was
music; he composed classical piano and played concerts in the dorm I lived
in; after the Bengals he's had various musical groups, composed, and won
at least one Grammy. While not for everybody, I admired the fact that he
could be excellent at one thing, then go on to be excellent at something
else.
I won't comment on this - mainly because I can't think of any way to do so
without being profoundly insulting. John is generally a pretty reasonable
guy and I'm probably a little hypersensitive right now, but this seems like
a really cheap shot to me.
>Seymour Cray is legendary because he pushed the envelope consistently and
>still produced working machines. My impression is that the Cray-3 worked
Yup.
>and that the Cray 4 would have as well.
It already was, pretty much.
>They just cost more than the market
>was willing to pay.
The Cray-4 was priced at about 1/2 the cost of "competitive" offerings
from CRI. If that's "more than the market was willing to pay", then
why is SGI/CRI still selling T-90s?
We were late on the Cray-3 and the well ran dry. End of story.
Steve
From a purely economic/business standpoint, it is really hard to see how
any of the "big" supercomputing startups could possibly have survived.
Consider the numbers:
ETA spent ~$400 Million
SSI spent >$250 Million
CCC spent >$200 Million
Right now, the worldwide market for computers costing $5 Million
and up is about $680 Million per year.
The life expectancy of a supercomputer design is about 4 years.
Even assuming really fat manufacturing margins, one would be
very hard pressed to apply more than 50% of gross income to
retiring the engineering/development costs.
The product is going to begin with 0% market share, and by
four years later is going to again have near 0% market share
as it is eclipsed by competitors' products and subsequent
generations from the original vendor.
So what sort of market share would be required to pay off
the investments incurred by these startups?
The numbers show that you would have to *average* 20% market share for
a four-year period, *and* be able to apply >50% of gross income to
retiring the initial investment (or rolling over to the next
generation).
Given the inevitable cycle of desirability of a product, you would
probably have to capture 40% market share at your peak to do this.
This is approximately the share of the high-end market that is held by
SGI+Cray, and is about 3x larger than the next largest entry.
I would certainly not want to bet my money on *anyone* being able to
do that in the current era.
The only way to succeed is to do the initial development for very
little money, and/or arrange financing that does not need to be
paid back....
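To make the arithmetic concrete, here is a back-of-the-envelope sketch
(in Python) using only the figures above; the 50% payback fraction and
the 4-year product life are the assumptions already stated:

    # Rough break-even market share for each startup, from the figures above.
    market_per_year = 680e6      # worldwide market for $5M-and-up systems, $/year
    product_life = 4             # years a supercomputer design remains saleable
    payback_fraction = 0.50      # share of gross income available to retire R&D

    for name, spent in [("ETA", 400e6), ("SSI", 250e6), ("CCC", 200e6)]:
        required_gross = spent / payback_fraction
        required_share = required_gross / (market_per_year * product_life)
        print("%s: ~%.0f%% average market share over %d years just to break even"
              % (name, 100 * required_share, product_life))

    # Prints roughly ETA 29%, SSI 18%, CCC 15% -- consistent with the ~20%
    # average (and ~40% peak) share argued above.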
--
John D. McCalpin, Ph.D. Supercomputing Performance Analyst
Advanced Systems Division http://reality.sgi.com/employees/mccalpin
Silicon Graphics, Inc. mcca...@asd.sgi.com 415-933-7407
|> >Seymour Cray is legendary because he pushed the envelope consistently and
|> >still produced working machines. My impression is that the Cray-3 worked
I think Steve must indeed be in an unusually hypersensitive state ...
since this was hardly intended to be a cheap shot, nor derogatory in any way, shape, or form toward Seymour & his efforts. [It may help to know that I spent
many years at Penn State, hence football analogies are generally positive...]
I've more than once been asked
by various people what I'd thought of CCC's prospects, over several
years, and I'd be astonished if anyone who asked me in person came away
thinking it was a cheap shot or negative, but maybe personal expression/demeanor is needed, not impersonal net postings. So let me try again:
1) I didn't know Seymour personally, but always admired his work,
and *thousands* of people have heard me give talks in which I said things
like: "Modern RISCs grew out of the work at IBM TJ Watson Research,
and related work at Stanford & Berkeley, but many RISC designers look to
Seymour Cray and the CDC 6600 as the earliest RISC in many ways." and
"It is instructive to compare ther CDC 6600 and IBM 360/91..."
and such things have shown up inb net postings over the years.
2) Hardly anyone succeeds in starting/leading even one company that
pushes the state of the art in computing AND has substantial commercial
success over a useful length of time.
I strongly admire that particular combination, as it is very difficult,
and contributes a lot more than doing either without the other.
Very few do it twice, and I'm hard-pressed to name anybody who clearly has
done it three times. [any candidates?].
Cray certainly did it twice, and I would have been happy if he could have
done it three times ...
3) Note that the comments about 1976, knees, and
tackles, do *not* imply diminution of talent, skill, brains, or vision,
but refer to the inevitable passage of time:
I.e., every year, it gets harder and harder to
start a new *computer* company with a new architecture: the ante just to
play keeps going up.
(There may be plenty of opportunities for new companies in computing, but I don't really expect to see a lot of new companies started with their own architectures these days, that survive very long... which is too bad.)
4) So, anyway, I'm sorry if Steve is mortally offended, but I think he misread
the comment, which, to have been a cheap shot, would have had to contradict
numerous public comments I've made over many years.
You might call it Progress on Automatic Pilot.
Cray stood for something else, IMHO.
Don Gillies
Here is the key data:
> ETA spent ~$400 Million
> SSI spent >$250 Million
> CCC spent >$200 Million
Every time I read those numbers, I wonder how the companies involved thought
they could succeed with that large of an investment to pay off.
In contrast, Cray Research managed to build its first shippable Cray 1 for
under $10 Million. Of course, it did not have much software. Convex managed
to build its first shippable C-1 for around $20 Million, including software.
Others have also gotten to market with innovative hardware for a lot less
than $100 Million. Critical issues for startups:
1) Know what your initial target market is and understand what it requires.
2) Maintain focus and don't allow significant investment in anything that
does not aid that market.
3) Do everything you can to minimize time to market. Every extra month just
burns more money.
4) Keep your initial staff small (just large enough to do the job).
More people mean more time spent communicating instead of doing.
If you are very selective in hiring, only accepting the top 10% of
potential candidates, and motivating them with "average salaries and
extraordinary stock options, plus exciting work", you should be able
to get several times industry average effort and productivity.
5) Use other people's work whenever possible - like starting with Unix as
an OS instead of inventing your own. Look for strategic partnerships
in as many areas as possible. These efforts will help (3) and (4).
6) Be lucky. :-)
[It helps to be first one to market in your niche, or for your target
market to experience a business boom just as your product is ready.
It also helps for your vendors to deliver what they promise. The
reverse of any of these can sink you. Examples abound.]
There is no doubt that the obvious barriers to entry in the supercomputer
market are higher today than they were at some times in the past.
Expectations of hardware reliability and software quality are much higher
than they were in past decades. Still, the future will hold opportunities
for innovators, especially in those niches which the big players consider
"too small" for serious attention. But maximum attention to development
cost and time to market is required to be a niche player.
- Patrick McGehearty
In article <5430b8$j...@bach.convex.com>,
Patrick F. McGehearty <pat...@convex.COM> wrote:
>Every time I read those numbers, I wonder how the companies involved thought
>they could succeed with that large of an investment to pay off.
Because they were not in it for the money. A time existed when IBM
didn't do the work for the money. We're in different times now.
Gone are the days when corporations do things for cost plus $1.
Ref: E. Pugh, Memories That Shaped an Industry, MIT Press.
>In contrast, Cray Research managed to build its first shippable Cray 1 for
>under $10 Million. Of course, it did not have much software.
Those companies attempted to make growth projections on the numbers from
past technology and research. The 1 was assembled at a time when major
sites were still using vacuum tubes. Except for the FAA, Russia, and a
few other sites, you don't have those technology margins.
>Convex managed
>to build its first shippable C-1 for around $20 Million, including software.
I wish I held stock from 1984-1988. 8^)
But then J. Knight had 100 shares of original CRI and kicked himself
for selling "early." And he was a 203 programmer.
The future will consist of:
1) people who will make do (or not) with single PCs.
2) people who will cluster a few PCs.
3) corporations/institutions who contract for computing and
programming from firms like computing "Bechtels."
>Others have also gotten to market with innovative hardware for a lot less
>than $100 Million. Critical issues for startups:
I am not aware of many new startups in this market in the past couple of
years who might be able to listen to this advice. I think MS may have
scared many of these people off.
>1) Know what your initial target market is and understand what it requires.
>2) Maintain focus and don't allow significant investment in anything that
> does not aid that market.
>3) Do everything you can to minimize time to market. Every extra month just
> burns more money.
I think the keyword here is: Find a niche.
>4) Keep your initial staff small (just large enough to do the job).
> More people mean more time spent communicating instead of doing.
> If you are very selective in hiring, only accepting the top 10% of
> potential candidates, and motivating them with "average salaries and
> extraordinary stock options, plus exciting work", you should be able
> to get several times industry average effort and productivity.
That nice Thorndyke paper in
%A Karyn R. Ames
%A Alan Brenner, eds.
%T Frontiers of supercomputing II: a national reassessment
>5) Use other people's work whenever possible - like starting with Unix as
> an OS instead of inventing your own. Look for strategic partnerships
> in as many areas as possible. These efforts will help (3) and (4).
The key word here is "possible." Someday we have to wean ourselves off
Unix and onto the next stage. Mach sort of tried (became NT?). I think this has
become an old refrain. Someone is going to have to do this. It will
have to succeed by having the Unix advantages and more.
>6) Be lucky. :-)
> [It helps to be first one to market in your niche, or for your target
> market to experience a business boom just as your product is ready.
> It also helps for your vendors to deliver what they promise. The
> reverse of any of these can sink you. Examples abound.]
This was Dennis Ritchie's advice about the history of C in
The History of Programming Languages II.
>There is no doubt that the obvious barriers to entry in the supercomputer
>market are higher today than they were at some times in the past.
>Expectations of hardware reliability and software quality are much higher
>than they were in past decades. Still, the future will hold opportunities
>for innovators, especially in those niches which the big players consider
>"too small" for serious attention. But maximum attention to development
>cost and time to market is required to be a niche player.
I wonder if we need "sugar daddies?"
It's easy to come down on the "quarterly earnings" mentality of corporate America.
Or of government. We need a new way of doing things, if not a return to
certain old ways.
Dr. Edwin H. Land's panel ...
maintains that "discoveries are made by some individual who has
freed himself from a way of thinking that is held by friends and
associates who may be more intelligent, better educated, better disciplined,
but who have not mastered the art of a fresh, clean look at the
old, old knowledge." He once remarked that all governmental research
and development activity eventually follows a
well-worn path towards bigness, turf protection, security, inertia,
and incompetence. Under Dr. Land's leadership many learned men and women
-- leaders in academia, science, and industry -- generously gave their time
and talents; research libraries made their resources and facilities available;
and industry displayed a willingness to cooperate in the manufacture of
highly sophisticated hardware.
...
All these organizations have grown phenomenally, staffed with
effete personalities who jealously guarded their specific turf.
Because of this, new improved equipment was seldom on schedule.
Land found the situation most depressing. He once remarked that
organizations were more concerned with protecting traditional franchises
than exploring fresh new areas of technical activity.
Eyeball to Eyeball
Dino Brugioni
Page 14-xx?.
If carpenters built houses the way software
engineers write computer programs, the first
woodpecker to come along would destroy civilization.
Where have you been hiding? Our field is littered with mavericks.
It may be awhile before someone else achieves the recognition that Cray
had but there are plenty of worthwhile contenders, and it *will* happen.
>If you ask me, the inertia of "the attack of the killer micros" is
>stifling creativity in the field of computer architecture, because next
>year we're just doing what we did last year with twice as many
>transistors as we used last year.
O ye of little faith... Last year's solution, scaled up to twice as
many transistors, hardly makes a dent in next year's performance
requirements. We have reached the limits of simple superscalar
parallelism, we have reached the point of diminishing returns for
larger caches (which are slower anyway), and memory is fading into
the distance as the clock frequency soars ever higher.
I think we are at the verge of a particularly interesting era in
computer architecture. In the last generation (P6, R10000, PPC604,
etc) we have only just caught up with most of the architectural ideas
pioneered by the mainframes. Those ideas will not take us where
we need to go. With the huge dies of the future we can entertain
all sorts of weird possibilities that were never practical before.
So now we'll be making new and interesting designs (not to mention
new and interesting mistakes) rather than just repeating history.
--
Mike Haertel <hae...@ichips.intel.com>
Not speaking for Intel.
> >In contrast, Cray Research managed to build its first shippable Cray 1 for
> >under $10 Million. Of course, it did not have much software.
>
> Those companies attempted to make growth projections on the numbers from
> past technology and research. The 1 was assembled at a time when major
> sites were still using vacuum tubes. Except for the FAA, Russia, and a
> few other sites, you don't have those technology margins.
Really? Vacuum tubes in 1972 or so? After Cray himself had built the first
mainframe with semiconductor memory, if I can believe all the obits?
Jan
> ... that Cray produced ideas and computers
>that were brilliant all his life. He was arguably one of the last
>maverick inventors / architects in our field - comparisons to Edison are
>possible. Unfortunately, the rules of the game changed. ...
>If you ask me, the inertia of "the attack of the killer micros" is
>stifling creativity in the field of computer architecture, because next
>year we're just doing what we did last year with twice as many
>transistors as we used last year. Very little creativity is needed other
>than thinking of how to soak up the new transistors and coordinate their
>actions.
>
>You might call it Progress on Automatic Pilot.
>
>Cray stood for something else, IMHO.
Consider the following:
 A1. the automobile industry from its start until the early or mid 1930's.
A2. the automobile industry after the early or mid 1930's.
C1. the computer industry from its start until the mid 1970's.
C2. the computer industry after the mid 1970's.
A1 and C1 lasted about the same number of years.
In A1 and C1 there were many fundamental technical innovations and plenty
of innovators.
In A2 and C2 there have been little more than the elaboration, refinement,
and commercialization of the technical inventions from A1 and C1, but
major social and economic effects. Except for computers, what is in
a 1996 car that was not available in at least one model by 1935? Name
a major technical innovation in computers since 1980.
Vernon Schryver v...@rhyolite.com
how about automatically parallelizing f77 and C compilers...
-bill
--
---------------------
Bill Rosenkranz
HP/Convex
rose...@convex.hp.com
>Consider the following:
> A1. the automobile industry from its start until the early or mid 1930's.
> A2. the automobile industry after the early or mid 1930's.
A3. the incredible variety of different ways of "making a living"
evolved by animals during the so-called "Cambrian Explosion",
immediately after the appearance of the first multi-cellular
 animals, as recorded in the fossil record of the Burgess Shale.
> C1. the computer industry from its start until the mid 1970's.
> C2. the computer industry after the mid 1970's.
C3. the relative paucity of "innovation" displayed by evolution
since the Cambrian era.
>A1 and C1 lasted about the same number of years.
A3 lasted for only a hundred million years or so - a mere minute in the
evolutionary scheme of things.
>In A1 and C1 there were many fundamental technical innovations and plenty
>of innovators.
We are seeing a natural law of evolution unfolding in each of these
events. When a new evolutionary niche opens up, there is an immediate
explosion of new ways to fill that niche. Once the niche has been
fully explored, most of the variety produced by the original explosion
tends to get weeded out in favor of the few forms that are marginally
best at dealing with the hazards of living in that niche. After
that, new forms tend to be "mere" elaboration or refinements of
existing forms.
Another example of this is the incredible ingenuity displayed by the
various life forms currently evolving as a result of the ongoing
settlement of cyberspace. I'm referring to crackers and spammers and
other low-life forms currently crawling out of the [wood/net]work.
Once the initial explosion is over, we can look forward to a slower
pace of change [[(<-8 I hope! 8->) This raises an interesting
point. If cyberspace, as claimed by some, really has an infinite
number of fundamentally new niches, we may be witnessing the beginning
of a permanent "Cambrian explosion". It may _never_ settle down. Now
there's a scary thought!]]
>In A2 and C2 there have been little more than the elaboration, refinement,
>and commercialization of the technical inventions from A1 and C1, but
>major social and economic effects. Except for computers, what is in
>a 1996 car that was not available in at least one model by 1935? Name
>a major technical innovation in computers since 1980.
Electronic fuel injection and catalytic converters, to name only two.
Indeed, the whole field of pollution abatement technology has been
undergoing its own "Cambrian explosion" in recent years.
>Vernon Schryver v...@rhyolite.com
Rick Thomas rbth...@rutgers.edu
Anti-lock brakes.
Air bags.
Perhaps fuel injection?
Catalytic convertors.
>Name
>a major technical innovation in computers since 1980.
Superscalar processors.
MPP machines (e.g. Thinking Machines).
Perhaps DSP chips?
Perhaps CD-ROM?
Perhaps fiber-optics?
Seemingly, the innovation rate slows down over time for a given technology,
but it doesn't disappear completely.
For a mature technology, innovation is often in the details.
Just a few thoughts,
Peter.
----------------------------
Peter C. Damron, (not speaking for) SunSoft, a Sun Microsystems, Inc. Business
SPARCompilers, UMPK 16-303, 2550 Garcia Ave. Mtn. View, CA 94043
peter....@eng.sun.com
>2) Hardly anyone succeeds in starting/leading even one company that
>pushes the state of the art in computing AND has substantial commercial
>success over a useful length of time.
>I strongly admire that particular combination, as it is very difficult,
>and contributes a lot more than doing either without the other.
>Very few do it twice, and I'm hard-pressed to name anybody who clearly has
>done it three times. [any candidates?].
Well, we could make a case for Bill Poduska (Prime, Apollo, and
Stellar/Stardent).
-- David Wright :: wri...@hi.com :: Not an Official Spokesman for Anyone
These are my opinions only, but they're almost always correct.
"The difference between a printing press and a modern digital long-
distance network is that the press produces money much more slowly."
-- Neil Kirby, Lucent Technologies
>In A2 and C2 there have been little more than the elaboration, refinement,
>and commercialization of the technical inventions from A1 and C1, but
>major social and economic effects. Except for computers, what is in
>a 1996 car that was not available in at least one model by 1935? Name
>a major technical innovation in computers since 1980.
Turbochargers? Automatic transmissions?
In a big way,
the Connection Machine, the KSR, the Tera, and others...
Sure, their architects built upon the ideas of the past,
but they were/are innovative.
In a small way,
every machine that gets built has lots of small innovations.
Sometimes it's in the cache design, sometimes it's in the packaging,
sometimes, it's in the cooling.
Preston Briggs
I would like to suggest for your consideration (please post if any are
pre-1980 or have significant pre-1980 roots):
- Java virtual machine
- browser-based OS
- IEEE floating point standard
- memory consistency models
- exception barrier instructions
- spatial locality hint bits in load/store instructions (e.g., HP PA)
- 2-level adaptive branch prediction
--
Mark Smotherman, Computer Science Dept., Clemson University, Clemson, SC
http://www.cs.clemson.edu/~mark/homepage.html
Never met one as good as the one my mama and dad created nearly 49
years ago. :)
[rm -rf]
>John, I think what he meant was that Cray produced ideas and computers
>that were brilliant all his life. He was arguably one of the last
>maverick inventors / architects in our field - comparisons to Edison are
>possible. Unfortunately, the rules of the game changed. And even though
>he kept designing with outdated packaging and sidelined semiconductor
>processes, he was still a very creative and brilliant man. The light in
>his head still shone very brightly. There was nothing, nothing at all,
>mediocre or feeble coming out of his mind at the end of his life.
>
>If you ask me, the inertia of "the attack of the killer micros" is
>stifling creativity in the field of computer architecture, because next
>year we're just doing what we did last year with twice as many
>transistors as we used last year. Very little creativity is needed other
>than thinking of how to soak up the new transistors and coordinate their
>actions.
>
>You might call it Progress on Automatic Pilot.
I would tend to agree with your ideas above, although from an economic
point of view, it's probably MUCH easier to make a buck this way than
to invest massive time and money into R&D, which is what has made life
hard for the supercomputer vendors. This is why "massively parallel"
computers have been the rage the last few years. Innovation is limited
to a much smaller part of machine design, mainly the interconnect.
Most of the nodes on the massively parallel machines are pretty sad when
you look at them in the context of a single CPU by itself.
(A good example is the low memory bandwidth of the Paragon MP-3 nodes.)
>Cray stood for something else, IMHO.
Yup..
John Stone
jo...@cs.umr.edu
[stuff about car snipped]
> >Name
> >a major technical innovation in computers since 1980.
>
> Superscalar processors.
> MPP machines (e.g. Thinking Machines).
Illiac IV was pre-1980.
The first working large-scale SIMD array was CLIP4 at UCL (London); work
started on it back in 77/78 and we had a working machine in 1979/80.
The ICL DAP was available in 1980.
The Goodyear MPP for NASA (Batcher) predates the TMI CM-1, but comes after
CLIP4 and the DAP.
The Goodyear STARAN (Associative Processor) certainly predates 1980. A
successor to the STARAN stuff is ASP (Brunel University), but I suspect
this dates from the early 80s.
> Perhaps DSP chips?
> Perhaps CD-ROM?
> Perhaps fiber-optics?
>
> Seemingly, the innovation rate slows down over time for a given technology,
> but it doesn't disappear completely.
> For a mature technology, innovation is often in the details.
>
> Just a few thoughts,
> Peter.
>
> ----------------------------
> Peter C. Damron, (not speaking for) SunSoft, a Sun Microsystems, Inc. Business
> SPARCompilers, UMPK 16-303, 2550 Garcia Ave. Mtn. View, CA 94043
> peter....@eng.sun.com
Regards,
Zahid
--
Zahid Hussain, BSC (Hons), Phd (Lond) Email: Zahid....@tiuk.ti.com
VLSI (DSP) Design Engineer IMS: ZHUS
MOS Design, Texas Instruments Ltd vox: +44 (0)1604 66 3405
Northampton, UK NN4 7YL Speed Dial: +8 447 3405
Burroughs sold multi-issue stack machines in the late 70s.
>
>I would like to suggest for your consideration (please post if any are
>pre-1980 or have significant pre-1980 roots):
>
>- Java virtual machine
A mere rehash of 70's UCSD portable Pascal P-code, and of various Lisp &
Smalltalk implementations. What's new is that there's widespread
adoption of a de facto portable standard due to an unfilled niche, and
that the reason for not compiling the code all the way down to a nonsymbolic
form is to provide some defense against viruses etc.
>- browser-based OS
Worldwide deployment of internetworked interoperable linked GUI info is
indeed a new phenomenon. Cranking it into the DOS is not a profound
innovation.
>- IEEE floating point standard
So what. Not significantly better than many prior FP implementations;
merely standardized just when Intel & Motorola cranked out zillions of
compatible cheap FP chips to make this standard popular and important.
>- memory consistency models
>- exception barrier instructions
This might be new. Anyone know of any instance of this before Dec's
Alpha?
>- spatial locality hint bits in load/store instructions (e.g., HP PA)
A mere combining of self-organizing cache notions and previous
practices of explicitly programmed (or microprogrammed) transfers
between distinct forms of memory or registers. Very good refinement,
but not profound.
>- 2-level adaptive branch prediction
Not profound, merely a minor refinement. Branch prediction was profound,
but was introduced in the Livermore S-1 in the mid 70's, if not earlier.
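For concreteness, here is a toy sketch (in Python, generic, not modeled on
any particular CPU) of what the second level adds over a plain
one-counter-per-branch predictor: a per-branch history register selects
among a table of saturating counters, so repeating patterns can be learned:

    # Toy two-level adaptive branch predictor (illustrative only).
    # Level 1: per-branch history of recent outcomes.
    # Level 2: table of 2-bit saturating counters indexed by that history.
    HIST_BITS = 4

    class TwoLevelPredictor:
        def __init__(self):
            self.history = {}                       # branch PC -> outcome history
            self.counters = [2] * (1 << HIST_BITS)  # start weakly "taken"

        def predict(self, pc):
            h = self.history.get(pc, 0)
            return self.counters[h] >= 2            # True means predict taken

        def update(self, pc, taken):
            h = self.history.get(pc, 0)
            c = self.counters[h]
            self.counters[h] = min(3, c + 1) if taken else max(0, c - 1)
            self.history[pc] = ((h << 1) | int(taken)) & ((1 << HIST_BITS) - 1)

    # A 70's-style predictor keeps only one counter (or bit) per branch;
    # the history-indexed pattern table is the "second level" at issue here.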
P code. Yawn.
>- browser-based OS
Xanadu.
>- IEEE floating point standard
Floating point standards existed beforehand.
>- 2-level adaptive branch prediction
I thought branch prediction existed in the 70s?
>- memory consistency models
>- exception barrier instructions
>- spatial locality hint bits in load/store instructions (e.g., HP PA)
These are major?
My suggestion: PDAs (ok, they were envisioned earlier)
--
Matthew Crosby cro...@cs.colorado.edu
Disclaimer: It was in another country, and besides, the wench is dead.
Less, much less. A hundred million years is longer than the whole
Cambrian took. The time span of A3 is so short that it can't be determined:
you either find fossils of Cambrian animals, or you don't, with nothing in
between except a few worms that can be found at the very end of the
Precambrian, and which started the whole thing. It was the blink of an eye
in evolutionary terms, so it took much less than a million years.
> Another example of this is the incredible ingenuity displayed by the
> various life forms currently evolving as a result of the ongoing
> settlement of cyberspace. I'm referring to crackers and spammers and
> other low-life forms currently crawling out of the [wood/net]work.
> Once the initial explosion is over, we can look forward to a slower
> pace of change [[(<-8 I hope! 8->) This raises an interesting
> point. If cyberspace, as claimed by some, really has an infinite
> number of fundamentally new niches, we may be witnessing the beginning
> of a permanent "Cambrian explosion". It may _never_ settle down. Now
> there's a scary thought!]]
There's no such thing as "infinity". Except perhaps human madness, and even
that I doubt (pace Einstein ;-). Humans can't count up to seven without
using their hands or words/digits. And because you have to imagine the
niches before you start to count them, you'll never get an accurate estimate.
The average rainforest tree has some thousands of niches for insects and
other small animals, some hundreds for other plants, and fewer than 10 for
higher life forms like mammals or birds. There are thousands of
different trees in a jungle, each creating different niches. At the
moment, cyberspace is just about equivalent to one tree. We are just
starting to imagine that there might be other trees, but the whole thing is
nowhere near infinity.
> Electronic fuel injection and catalytic converters, to name only two.
> Indeed, the whole field of pollution abatement technology has been
> undergoing its own "Cambrian explosion" in recent years.
I thought of the car phone, or its more modern digital counterparts. It
seems to be an essential part of some sorts of cars (that's why these
phones are called "Yuppie-Teddy" in Sweden).
--
Bernd Paysan
"Late answers are wrong answers!"
http://www.informatik.tu-muenchen.de/~paysan/
No one else seems to have gotten this one yet:
In article <5466ba$n...@engnews1.Eng.Sun.COM>, p...@complex.eng.sun.com (Peter C. Damron) writes:
> Perhaps fiber-optics?
Definitely predates 1980, but not by much.
---------------- Cray Research ---------------- *** Roger Glover ***
---------- A Silicon Graphics Company --------- http://home.cray.com/~glover
> >- exception barrier instructions
>
> This might be new. Anyone know of any instance of this before Dec's
> Alpha?
Mips R4000 (1991) had LL/SC before Alpha.
Stardent had an exception generating block instruction in 1987
(you get an exception if a thread waits too long at a barrier).
--
Michael McNamara Silicon Sorcery <http://www.silicon-sorcery.com>
Get my verilog emacs mode (subscribe for free updates!) at
<http://www.silicon-sorcery.com/verilog-mode.html>
>>2) Hardly anyone succeeds in starting/leading even one company that
>>pushes the state of the art in computing AND has substantial commercial
>>success over a useful length of time.
>>I strongly admire that particular combination, as it is very difficult,
>>and contributes a lot more than doing either without the other.
>>Very few do it twice, and I'm hard-pressed to name anybody who clearly has
>>done it three times. [any candidates?].
>Well, we could make a case for Bill Poduska (Prime, Apollo, and
>Stellar/Stardent).
What about Gene Amdahl?
It is interesting, in his golden era Seymour Cray built the
Cray-1 machine with commodity parts (ordinary ECL chips),
in later days he worked with non-commodity GaAs chips,
which apparently delayed the expected new machine...
I think he should have made the parallel machine with commodity CPUs.
<snip>
>Unfortunately, the rules of the game changed. And even though
>he kept designing with outdated packaging and sidelined semiconductor
>processes, he was still a very creative and brilliant man. The light in
Please explain what you mean by "outdated packaging". I really don't
understand how this particular statement applies to the Cray-3/4.
I would argue that he was the only designer *not* designing with
"outdated packaging".
>his head still shone very brightly. There was nothing, nothing at all,
>mediocre or feeble coming out of his mind at the end of his life.
You can say that again.
Steve
We were using fiber optics in card readers in 1968, and they weren't that new
then.
|>
|> ---------------- Cray Research ---------------- *** Roger Glover ***
|> ---------- A Silicon Graphics Company --------- http://home.cray.com/~glover
--
Del Cecchi
Personal Opinions
IBM Rochester MN
Very few of whom have a consistent track record of originality that
spans 40 years ;-). Yes, we have a lot of mavericks but we only had
one Seymour Cray. This reminds me of a conversation I had with a VP
at Thinking Machines between my initial stint at CRI and my time at
CCC. I asked him what he thought of Danny Hillis (architect of the
Connection Machine), and he replied "Danny's a real bright guy. He's
no Seymour Cray, mind you, but he's a real bright guy." We've got a lot
of "real bright guys". I hope that'll be enough.
Steve
>>From a purely economic/business standpoint, it is really hard to see how
>>any of the "big" supercomputing startups could possibly have survived.
>>
>>Consider the numbers:
>> ETA spent ~$400 Million
>> SSI spent >$250 Million
>> CCC spent >$200 Million
>>Right now, the worldwide market for computers costing $5 Million
>>and up is about $680 Million per year.
>>
>>The life expectancy of a supercomputer design is about 4 years.
But most of those startup costs could have been amortized over more than
one "generation". The Cray-4 was a lot cheaper (and faster) to design/build
than the Cray-3 because it leveraged the Cray-3 technology.
<snip>
>>I would certainly not want to bet my money on *anyone* being able to
>>do that in the current era.
Unfortunately, an awful lot of people agree with you :-(.
>>The only way to succeed is to do the initial development for very
>>little money, and/or arrange financing that does not need to be
>>paid back....
Well, a sizable chunk of CCC's financing was in a form that "did not need
to be paid back", but our initial expenditures were way too high. Had we
been able to find a reliable GaAs vendor without having to build our own
foundry, that cost figure would have been quite a bit lower. Had CCC
decided to take advantage of its ability to sell Cray-2s (especially
fast memory, 8-CPU Cray-2 systems, of which only one was built) under
the "split" agreement, the financials would also have looked quite a bit
different. We probably could have made enough to pay the Cray-3 development
costs, since the "big 2" was *quite* competitive with the Y-MP. There are
lots of ways this could have played out, but that's all water under the
bridge.
ETA might have made it had they not been smothered by the dead weight of
CDC. I'm *sure* their development costs would have been lower.
I doubt if any outsiders will *ever* know the ins and outs of IBM's
decision to do in SSI.
>Every time I read those numbers, I wonder how the companies involved thought
>they could succeed with that large of an investment to pay off.
>In contrast, Cray Research managed to build its first shippable Cray 1 for
>under $10 Million. Of course, it did not have much software.
That figure is also not in constant dollars. ;-) FWIW, CRI very nearly
folded before the first Cray-1 shipped.
I don't know about SSI and ETA, but software development was a pretty
minor cost item on the Cray-3/4 (there were only 26 of us in SW development
at the peak).
Steve
I saw (and held) a Cray-3 module, and I thought it was 20 years ahead of
anything being offered at present. Micro-laser welded, three-dimensional
interconnects in a way that even the infamous Rube Goldberg could only dream
about. And with other amazing things too numerous to mention. Maybe
"outdated" meant "out-in-the-future"???
Please don't post such things so close to lunch time. A parallel 80286
makes me want to puke everything I've eaten for the past two weeks... :)
BTW, as far as "commodity CPUs" go, call your friendly local CRI
salesman and order a T3E as large as you can afford. It's already been
done...
I have a small paperweight made from one of the (about 1.5" x 1.5") Cray3
PCBs. The size was so small, and the paths were so tightly coupled, that
the Cray3 development folks were forced to invent robots to combine the
parts. While this wasn't nearly the cost of the GaAs foundry, it
contributed to the expense of the development process. Mr. Cray was trying
to push PCB density toward the density of on-chip wiring. The densities of chips
may make this idea ultimately obsolete, but the costs of going "off board"
still kill the performance on most systems. Maybe someone will pick up &
continue with the 3-d interconnect (non daughter-board) concept.
<<PS. Did I mention, he was trying to interconnect naked (non-plastic
framed) chips on these boards? Wire-paths were to be minimized wherever
possible. This is a hallmark of all Cray-designed hardware.>>
--
David Ecale
ec...@cray.com Work = 612-683-3844 // 800-BUG-CRAY x33844
http://wwwsdiv.cray.com/~ecale Beep = 612-637-0873
Will hack UNIX(TM) for food!
: I saw (and held) a Cray-3 module, and I thought it was 20 years ahead of
: anything being offered at present. Micro-laser welded, three-dimensional
: interconnects in a way that even the infamous Rube Goldberg could only dream
: about. And with other amazing things too numerous to mention. Maybe
: "outdated" meant "out-in-the-future"???
Is there a photo of this anywhere? I've tried to read about it, and ended up with
various sites suggesting that it looked like a gallium arsenide house-brick with
holes for the cooling fluid.
I think one of A. C. Clarke's stories had a computer of which it was said 'We can't
possibly fix that. It's solid microcircuitry, packed as tightly as the human
brain'; I guess that's a Cray 3, and I fear the 'we can't fix that' was one of the
reasons they didn't catch on.
--
Tom
We will do what we have always done when we've had our back to
the wall; we will turn round and fight.
Speaking of which, I remember reading an article, ohhh, 5 or so years back
about some group at some U. building a parallel 286 based machine... with
64 or so processors... forgotten all the details. Maybe it was 7 years ago.
Ohh well... Laters, MIKE...
>I have a small paperweight made from one of the (about 1.5" x 1.5") Cray3
>PCBs. The size was so small, and the paths were so tightly coupled, that
>the Cray3 development folks were forced to invent robots to combine the
>parts.
Actually, the fully automatic pinsetter was *cough* less than entirely
successful.
The semi-automatic machines did pretty well, though.
>While this wasn't nearly the cost of the GaAs foundry, it
>contributed to the expense of the development process.
>Mr. Cray was trying
>to push PCB density toward the density of on-chip wiring.
He did more than try. We may not have been a raging commercial success,
but the Cray-3 certainly worked on a technical level. Once the module
assembly process was debugged, it worked quite well.
>The densities of chips
>may make this idea ultimately obsolete, but the costs of going "off board"
>still kill the performance on most systems. Maybe someone will pick up &
>continue with the 3-d interconnect (non daughter-board) concept.
><<PS. Did I mention, he was trying to interconnect naked (non-plastic
>framed) chips on these boards?
Once again, the verb "trying" is unwarranted. Cray-3 #S5 ran full production
at NCAR for well over a year.
>Wire-paths were to be minimized wherever
>possible. This is a hallmark of all Cray-designed hardware.>>
You forgot to mention that we had to grind off the back of the wafers to
make the chips thinner before they were assembled in this fashion.
The interconnects themselves were pretty interesting, a seven strand
Be/Au wire, finer than a human hair, laser micro-welded at precise intervals
so that it would expand into a sort of "birdcage" when inserted and
twisted. This is what guaranteed good contact.
Of course, the micro-coaxial cable developed for the Cray-4 was pretty
slick, too. You could thread a needle with the stuff.
Steve
To be concrete, I think we are talking about the TRAPB instruction on
Alpha. (Of which I'm fairly sure there are a couple variants.) This
instruction waits for all exceptions that could possibly be generated by
preceding instructions to be resolved. It guarantees that no "visible"
changes to the architected state of the machine due to succeeding
instructions will happen until the TRAPB is finished. (Unfortunately, my
Alpha architecture manual is elsewhere right now...)
This is used mainly with regard to floating-point instructions. It allows
these instructions to be issued without regard to preserving in-order
exception semantics. When such semantics are necessary, the compiler
inserts a TRAPB to ensure that exception handling can be done.
: Mips R4000 (1991) had LL/SC before alpha.
That's not an exception barrier instruction. MIPS implementations have
traditionally used patented techniques to detect floating-point exceptions
very early in the execution of floating-point instructions. This is less
applicable perhaps in this day and age of single-cycle pipelined
floating-point compute instructions.
: Stardent had an exception generating block instruction in 1987
: (you get an exception if a thread waits too long at a barrier).
This sounds like a "block here for synchronization, but throw an exception
if the wait is too long." That is not a TRAPB either.
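To pin down the semantics being discussed, here is a rough software analogy
(in Python) of deferred exception reporting with an explicit barrier; it is
only an illustration of the idea, not Alpha code or the actual compiler
mechanism:

    # Analogy: FP operations may "complete" while merely recording any
    # exception; the barrier is where pending exceptions become visible.
    pending_exceptions = []

    def fp_op(f, *args):
        """Run an FP operation, deferring any arithmetic exception."""
        try:
            return f(*args)
        except ArithmeticError as e:
            pending_exceptions.append(e)
            return float("nan")                   # placeholder result

    def trap_barrier():
        """Analogue of TRAPB: surface the oldest deferred exception."""
        if pending_exceptions:
            raise pending_exceptions.pop(0)

    try:
        x = fp_op(lambda a, b: a / b, 1.0, 0.0)   # exception deferred here
        y = fp_op(lambda a, b: a * b, 2.0, 3.0)   # completes normally
        trap_barrier()                            # ...and is reported here
    except ArithmeticError as e:
        print("exception reported at the barrier:", e)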
Several hundred, maybe a thousand or two. I know, I took them. Unfortunately,
the originals "disappeared" after CCC filed for bankruptcy (so much
for my portfolio). If you can dig up copies of
the Annual Reports for CCC (photos by yours truly) or the
Cray-3 and Cray-4 marketing brochures (ditto), or the Cray-3 poster (likewise),
you can see detailed photos of the modules and the assembly technology.
>various sites suggesting that it looked like a gallium arsenide house-brick with
>holes for the cooling fluid.
No, it didn't look anything like that. We did refer to a Cray-4 module
set (CPU + 1GB of RAM) as a "brick" but that was just a nickname. A
Cray-4 brick was about the size of a paperback Roget's thesaurus. Personally,
I wanted to photograph it next to a copy of "Colossus: The Forbin Project",
but not everyone saw the humor in that ;-) (the character of Charles Forbin
was inspired by Seymour Cray).
> I think one of ACClarke's stories had a computer of which it was said 'We can't
>possibly fix that. It's solid microcircuitry, packed as tightly as the human
>brain'; I guess that's a Cray 3, and I fear the 'we can't fix that' was one of the
>reasons they didn't catch on.
We fixed them quite handily. They weren't "field repairable", but we took
them apart and repaired them in-house all the time.
>Tom
>We will do what we have always done when we've had our back to
>the wall; we will turn round and fight.
No we won't. We'll cower, whimper, ignore the problem, and then wonder why
we're always getting the stuffing knocked out of us.
Not exactly a Churchillian sentiment, but probably a good deal more
accurate.
Steve
Wait, if we've got our back to a wall, and we turn round and
fight, won't we hurt ourselves punching the wall? No wonder
we're having problems!
>No we won't. We'll cower, whimper, ignore the problem, and then wonder why
>we're always getting the stuffing knocked out of us.
That, or we'll look for someone else to blame our problems on,
knowing full well that it couldn't possibly be ourselves.
greg
--
--------------------------------------------------------------
Greg Titus (g...@cray.com) PE Group
Cray Research, a Silicon Graphics Company Santa Fe, NM
Opinions expressed herein (such as they are) are purely my own.
> >I would like to suggest for your consideration (please post if any are
> >pre-1980 or have significant pre-1980 roots):
[...]
> >- IEEE floating point standard
>
> Floating point standards existed beforehand.
Not to mention that this is an innovation on the level of
the standardised location for the steering wheel and the pedals, imho ...
Victor.
--
405 Hilgard Ave ................................. `[W]e don't usually like to
Department of Mathematics, UCLA ............. talk about market share because
Los Angeles CA 90024 .................... we're not going to share anything.'
phone: +1 310 825 2173 / 9036 .................. [Jim Cantalupo, president of
http://www.math.ucla.edu/~eijkhout/ McDonald's Int.]
>It is interesting, in his golden era Seymour Cray built the
>Cray-1 machine with commodity parts (ordinary ECL chips),
If ECL was so commonplace in '72, why wasn't everybody building ECL
machines with 80 MHz clocks back then?
>later days he worked with the non-commodity GaAs chips
>which apparently made the expected new machine to delay...
The use of GaAs was a comparatively minor factor.
>I think he should have made the parallel machine with the commodity CPUs.
Well, it _was_ commonplace. Standard parts from Motorola and Fairchild.
According to someone's formula, MIPS = Megabytes. If you can scan through
all of memory in one second (or is it 0.1 second?) your CPU is fast
enough. In 1972 most mainframes were under a Megabyte, so they only
needed about 1 MIPS. ECL was overkill.
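Read literally (a back-of-envelope sketch of my own; the constants below
are illustrative assumptions, not part of the original formula), the rule
just says the needed instruction rate scales with the memory size:

    #include <stdio.h>

    int main(void)
    {
        double megabytes      = 1.0;  /* typical early-1970s mainframe    */
        double scan_seconds   = 1.0;  /* "scan all of memory in 1 second" */
        double bytes_per_insn = 1.0;  /* assume ~1 byte touched per insn  */

        double mips = megabytes / (bytes_per_insn * scan_seconds);
        printf("roughly %.1f MIPS for %.0f MB of memory\n", mips, megabytes);
        return 0;
    }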
Still, I did attend a job interview at a company in Hemel Hempstead,
England about that time. They were building minicomputers using ECL. I
can't recall the name of the company; I think they must have disappeared.
(Serves them right -- they didn't offer me a job :-)
--
Chris Perrott
Maybe we should just use our heads. After all, if we hurt our hands,
we can't type. ;-)
>>No we won't. We'll cower, whimper, ignore the problem, and then wonder why
>>we're always getting the stuffing knocked out of us.
>
>That, or we'll look for someone else to blame our problems on,
>knowing full well that it couldn't possibly be ourselves.
Let's not forget to demand "free and fair trade" while we're at it.
Steve
ma...@hubcap.clemson.edu (Mark Smotherman) wrote:
>The IBM ACS was a seven-issue superscalar designed in the mid 1960s
>but never built
d.s...@ix.netcom.com(Duane Sand) wrote:
>Burroughs sold multi-issue stack machines in the late 70s.
My mistake. Burroughs's Mission Viejo division delivered a stack
machine (A-9 ?) in about 1983 which did out-of-order execution over a
16-op window using 3 parallel alu pipelines and cycle-level threaded
execution of microinstruction steps. But that was superpipelined, as in
CDC 6600 etc, with sequential issue of single ops at a faster clock
rate than the basic ALU-op cycle time. It was not superscalar, ie
sustained issuing and completion of multiple ops per tick of the
fastest clock rate in the machine.
Burroughs' east coast division at Paoli and Tredyffrin developed larger,
faster stack machines for gov't work. In the late 70's these began working
around the problem of 12-bit stack ops' low semantic content by
recognising certain common pairs (& triples?) of stack ops at
instruction decode time, and executing them as a single long complex
op. This gives some of the speedup advantages of multi-issue
superscalar approaches, but it's too limited to be called that.
I suspect that multi-issue in the modern sense didn't become profitable
until icache bandwidth and latency surpassed the needs of a
single-issue non-microprogrammed pipeline.
So, what was the first machine sold having multi-issue? Anything
before 1985?
d.s...@ix.netcom.com (Duane Sand) wrote:
>So, what was the first machine sold having multi-issue? Anything
>before 1985?
I don't know, but it was probably before 1965. When you talk about
multi-issue and 'superscalar' performance, you are assuming an
architectural model that was not in common use on the early systems
and is very dubious even on modern ones. For example:
Many of the IBM 370 range had microarchitectures that could
specify multiple actions to be performed within a single gross clock
cycle. When programming in microcode, you were effectively working
to a rather restrictive multi-issue model.
Many of the vector systems could start more than one operation
per clock cycle. They were assuredly 'superscalar', because that
was their raison d'etre!
I am afraid that multi-issue is neither a major technical innovation
nor a new idea. It merely happens to fit with current constraints,
in the way that multi-level clock cycles and vector operations did
with the older systems.
If you want a recent major innovation, try self-correcting hardware
(i.e. the chips that reroute themselves to bypass faulty areas).
Again, this idea is very old, but it was not feasible until about
1980 even in the laboratory and is only now beginning to hit real
production.
Nick Maclaren,
University of Cambridge Computer Laboratory,
New Museums Site, Pembroke Street, Cambridge CB2 3QG, England.
Email: nm...@cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679
> > >- IEEE floating point standard
> >
> > Floating point standards existed beforehand.
>
> Not to mention that this is an innovation on the level of
> the standardised location for the steering wheel and the pedals, imho ...
Definitely not. A lot of experience with bad (and fairly good) FP formats went
into IEEE 754, which was, AFAIK, the first one to make numericists happy. The
only thing they did wrong 8-| was not to put a formal description (in Z, say)
into the standard; Oxford and Inmos did that for them a little later.
Jan
Neither load_locked/store_conditional (which Alpha indeed inherited from
MIPS) nor the other has anything to do with the EXCB/TRAPB. The instructions
themselves are not interesting; what is interesting is the whole concept
of imprecise floating point exceptions (of which they are a part) in
the Alpha architecture. Arithmetic exceptions are allowed to be signalled
arbitrarily after the fact (though an Alpha implementation may decide to
make them precise...), TRAPB/EXCB just put an end to that unlimited horizon.
Burkhard Neidecker-Lutz
EUROMEDIA - Distributed Multimedia Archives for Cooperative TV Production
CEC Karlsruhe , European Applied Research Center, Digital Equip. Corp.
email: nei...@kar.dec.com
AlphaStation 500/500: SPECint95 15.0, SPECfp95 20.4
The Astronautics ZS-1 (designed by Jim Smith, now at U. Wisconsin) was a
decoupled access-execute architecture ("DAE") and could at maximum fetch,
dispatch, and issue two instructions per cycle. There were two "processors"
that could execute in parallel. The Access processor had three function
units, and the Execute processor had four function units.
The ZS-1 fetched 64-bit words from memory into an instruction "splitter".
Instructions could be either 32 bits in length or 64 bits; branches were
64 bits and fully executed by the splitter. Access ('A') instructions
were dispatched by the splitter into a 4-entry A-inst. queue, and Execute
('E') instructions were placed into a 24-entry E-inst. queue. One
instruction per cycle could be issued from each of these two queues (issue
occurs after any dependencies and function unit conflicts have cleared).
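A very rough sketch (mine, not the ZS-1 microarchitecture) of the
decoupled access/execute idea: the Access side runs ahead, loading
operands into a FIFO queue, and the Execute side drains it at its own
pace. Only the 24-entry E-queue depth is taken from the description
above; everything else is simplified.

    #include <stdio.h>

    #define EQ_DEPTH 24

    struct queue { double buf[EQ_DEPTH]; int head, tail, count; };

    static int q_push(struct queue *q, double v)
    {
        if (q->count == EQ_DEPTH) return 0;   /* queue full: Access stalls */
        q->buf[q->tail] = v;
        q->tail = (q->tail + 1) % EQ_DEPTH;
        q->count++;
        return 1;
    }

    static int q_pop(struct queue *q, double *v)
    {
        if (q->count == 0) return 0;        /* queue empty: Execute stalls */
        *v = q->buf[q->head];
        q->head = (q->head + 1) % EQ_DEPTH;
        q->count--;
        return 1;
    }

    int main(void)
    {
        double mem[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
        struct queue eq = { {0}, 0, 0, 0 };
        double sum = 0.0, v;
        int loaded = 0, consumed = 0;

        /* Interleave one "cycle" of each side: Access loads until the
         * operand queue fills; Execute adds up whatever is available. */
        while (consumed < 8) {
            if (loaded < 8 && q_push(&eq, mem[loaded]))
                loaded++;                     /* Access processor */
            if (q_pop(&eq, &v)) {             /* Execute processor */
                sum += v;
                consumed++;
            }
        }
        printf("sum = %g\n", sum);
        return 0;
    }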
Jim's first papers on DAE dated to 1982 (ISCA); papers on the actual ZS-1
design were published in 1986 (IEEE TC) and 1987 (ASPLOS).
> Which 1985 vintage MPU was he supposed to use? An 80286?
Surely not. How about a T800?
Jan
OK. You had to ask...
I think part of the answer is very much the same as regarding microprocessor
clock rates today: (DEC Alphas are pushing 500MHz but MIPS R10000s are only
hitting 200MHz.) It's often a philosophical choice. But also there are
choices that relate to the economics of the target market. In the case
of the CRAY-1, the target market was willing to spend $$$ for special
site requirements and big utility bills.
Having designed with CRAY-1 technology, I can say that the ground rules
were remarkably simple. The entire machine had only three types of
circuits: a NAND gate circuit (2 gates per chip: one 4-input and one 5-input),
an addressable register file circuit (1x16, I believe), and an SRAM for main
memory (1K-bit originally). The only discretes were termination resistor
packages. All logic outputs were complementary pairs and each logic
output was 60 Ohm terminated. All board-to-board wiring was 120 ohm
twisted-pair wires carrying complementary signal pairs. Memory had
single-ended outputs but they were terminated to 470 ohms to keep current
swings minimal. All PCB stub lengths were kept under a few inches. These
rules kept system noise under control without the need for ANY decoupling
capacitors anywhere in the machine.
In addition, the power supplies and power distribution busses saw only
a relatively small load variation since every TRUE logic output had a
corresponding FALSE mate. The power supplies, though large, were extremely
simple in design. A motor-generator (150 KVA, I believe) provided
profound electrical isolation from the power utility.
Of course, FREON-based conductive cooling allowed for high circuit density
keeping wiring distances to a minimum. This certainly affected clock rate
in a positive way.
But the trade-off was in power consumption. Both to power the circuits
and then to cool them.
Other notes:
Cray bought a few MHz of clock rate by utilizing the circular chassis
configuration. This allowed the PCB connectors at the rear (or inner
arc) of the chassis to be in somewhat closer proximity to each other
reducing board-to-board wire lengths while the spacing between boards at
the front (the outer arc) was wider to accommodate ease of access as well
as power busses and coolant plumbing.
: Well, it _was_ commonplace. Standard parts from Motorola and Fairchild.
As I understood it, the 5/4 NAND circuit was built to Cray's spec. It
may well have become a commodity part. Certainly the fact that no other
logic parts were employed in the original design allowed him to buy the
parts at commodity-like prices (several 100K parts per machine).
: According to someone's formula, MIPS = Megabytes. If you can scan through
: all of memory in one second (or is it 0.1 second?) your CPU is fast
: enough. In 1972 most mainframes were under a Megabyte, so they only
: needed about 1 MIPS. ECL was overkill.
Seems like memory *bandwidth* and memory *latency* should be a factor in the
equation somewhere...
-eric
--
|King Lee * Phone 805-664-3148 |
|California State University,Bakersfield * email: kl...@nas.nasa.gov |
|9001 Stockdale Highway * |
|Bakersfield, CA 93309 * |
>>A suggestion was made:
>>>> > Name a major technical innovation in computers since 1980
CD-ROM.
Modulation techniques for modems that allow more than 1 bit per baud (not sure
on the timing of this one, but I remember thinking 30cps was fast in 1980...).
Vertical recording for magnetic media to increase surface bit density.
And probably plenty that weren't commercially successful:-)
============================================================================
Ian Kemmish 18 Durham Close, Biggleswade, Beds SG18 8HZ
i...@five-d.com Tel: +44 1767 601 361 Fax: +44 1767 312 006
Info on Jaws and 5D's other products on http://www.five-d.com/5d
============================================================================
`The customer is King, but the proprietor is God'
System 370 and System/3 from about 1970 were constructed of MST (Monolithic
System Technology). This was IBM's internal ECL. But of course they weren't
running at 80 MHz (not by a long shot). One reason is that we weren't as smart as
Seymour Cray.
--
Del Cecchi
cecchi@rchland
This all sounds Way Cool.
Is anyone going to write this up (or is this considered property to
be sold)?
A wild observation: why didn't CCC try to get funding from the nano-tech
people? Or the other groups that really care about micro-machining? After
all, by the sounds of it, CCC was really developing a whole micro-machine
industry that is needed for building micro-computers (micro as in small).
--
Stanley Chow; sc...@bnr.ca, stanley....@nt.com; (613) 763-2831
Bell Northern Research Ltd., PO Box 3511 Station C, Ottawa, Ontario
Me? Represent other people? Don't make them laugh so hard.
Thanks for the clarification; the original was imprecise... (IMHO)
The Ardent Titan (1988) had imprecise floating point exceptions, and a
set of wait instructions a cautious user could issue in order to take
them precisely. There also are waits that allowed the programmer to
wait until the vector unit had completed all outstanding reads, or
outstanding writes, and so on.
Our premise at the time was that most users would be happy with letting
denormals flush to zero, and hence we would impose the burden for
precision on just the subset that cared.
--
Michael McNamara Silicon Sorcery <http://www.silicon-sorcery.com>
Get my verilog emacs mode (subscribe for free updates!) at
<http://www.silicon-sorcery.com/verilog-mode.html>
How does this differ from FWAIT on the 386?
---Dan
George has a set of manuals (3) and those brochures.
He wants to place them into the Computer Museum in Boston.
I tried to grab them from him, but that's probably a useful second choice.
I wish I had more SS-1 photos.
>I wanted to photograph it next to a copy of "Colossus: The Forbin Project",
>but not everyone saw the humor in that ;-) (the character of Charles Forbin
>was inspired by Seymour Cray).
I can laugh, but I wish it had been a better movie.
"THIS IS THE VOICE OF WORLD CONTROL."
You're in a desert walking along in the sand when
all of a sudden you look down, and you see a tortoise.
It's crawling toward you. You reach down; you flip the tortoise on its back.
The tortoise lays on its back, its belly baking in the hot sun,
beating its legs trying to turn itself over; but it can't,
not without your help, but you are not helping. Why is that?
>>He did more than try. We may not have been a raging commercial success,
>>but the Cray-3 certainly worked on a technical level. Once the module
>>assembly process was debugged, it worked quite well.
>[...]
>>You forgot to mention that we had to grind off the back of the wafers to
>>make the chips thinner before they were assembled in this fashion.
>>The interconnects themselves were pretty interesting, a seven strand
>>Be/Au wire, finer than a human hair, laser micro-welded at precise intervals
>>so that it would expand into a sort of "birdcage" when inserted and
>>twisted. This is what guaranteed good contact.
>>Of course, the micro-coaxial cable developed for the Cray-4 was pretty
>>slick, too. You could thread a needle with the stuff.
>This all sounds Way Cool.
It was.
>Is anyone going to write this up (or is this considered property to
>be sold)?
The machinery was sold off at auction (the last time I looked there
were some custom-made fully automatic 1GHz wafer testers for sale at a surplus
electronics store here in the Springs). Pretty much everything ended
up as scrap metal - just like the (working) SSI computers a couple of
years earlier.
In any event, it's discussed in marketing-level detail (;-) ) in
(mirabile dictu) the Cray-3 and Cray-4 marketing materials as well
as the Annual Reports. There are probably a few Cray-3 Hardware Reference
Manuals floating around NCAR, as well.
>A wild observation: why didn't CCC try to get funding from the nano-tech
>people? Or the other groups that really care about micro-machining? After
>all, by the sounds of it, CCC was really developing a whole micro-machine
>industry that is needed for building micro-computer (micro as in small).
Well, yes - but only because nobody else had the cojones to do it (MHO, of
course).
The intellectual property was sold for a ripsnorting $250,000 (that's
right, *thousand*).
Steve
>d.s...@ix.netcom.com(Duane Sand) writes:
>>So, what was the first machine sold having multi-issue? Anything
>>before 1985?
>
>The Astronautics ZS-1 (designed by Jim Smith, now at U. Wisconsin) was a
>decoupled access-execute architecture ("DAE") and could at maximum fetch,
>dispatch, and issue two instructions per cycle. ...
>Jim's first papers on DAE dated to 1982 (ISCA); papers on the actual ZS-1
>design were published in 1986 (IEEE TC) and 1987 (ASPLOS).
Were any ZS-1's completed and sold?
I think I recall that the two instructions per cycle was for a 1-1 mix of FP
ops and the integer ops that kept the FP loops looping. If so, this was a
more easily-programmed variant of the i860 flavor of dual issue.
Being an anti-FP bigot, I would especially like to hear about early actually-sold
machines which issued multiple integer ops per cycle. I think i960 was the
first commercial microprocessor doing that, circa 1989. Tandem's Cyclone
(a dual-issue microcoded stack machine ECL minicomputer) was a few months
earlier, but it cheated. The instr decoder mapped common opcode pairs into
a single entry pt in a single microcode engine.
I believe the current terminology is: Asymmetric VLIW architecture
(minus the necessary automatic parallelization compiler technology).
May I add that he was an inspiration to many of us. The inspiration
came from his ideas and his character. The design of the
CDC 6600 and Cray-1 stirred me as no other machine. I dare say
that the death of no other computer architect would bring forth so
many deeply felt comments as the death of Seymour Cray.
Are these designs available in good reference books? I am a software
engineer by trade but I have read some hardware books and perhaps
could learn a lot or at least understand what you are praising if I
could get hold of the raw material. All recommendations gratefully
received.
Chris
--
Chris Morgan |email c...@mihalis.demon.co.uk (home)
http://www.mihalis.demon.co.uk/ | or chris....@baesema.co.uk (work)
> > This is used mainly with regard to floating-point instructions. It allows
> > these instructions to be issued without regard to preserving in-order
> > exception semantics.
>
> How does this differ from FWAIT on the 386?
I suppose FWAIT waits until all FP operations have completed? TRAPB only has
to wait until all previous FP instructions signal that they will not generate
an exception; this they can do early in operation, long before results are
available.
Jan
To whom? Will any of the technology ever see the light of day again?
I'll add my voice to the call for a summary of what was cool about
the Cray-3 and Cray-4. I've heard bits and pieces, but most of what I
know I've seen here, recently.
Surely one of you CCCers can see fit to spend half an hour and list
the key factors (GaAs, bare chip bonding, microcoax, any interesting
features of the instruction set and architecture, etc.). Do it for
Seymour, do it for the next generation of computer architects, do it
for the fame and glory, Just Do It (tm) :-).
--Rod
I used one extensively at NYU. It was fast, though not stunningly so;
the Convex across the room was much faster for vectorizable code. The
ZS-1 BSD port had several annoying bugs and kept crashing despite my
best efforts to treat it nicely. The hardware wasn't rock-solid either.
---Dan
I had thought that my opinions were subjective, but evidently
many people share them.
King Lee
One of the wonders of the time was the wealth of detail that went into
the hardware reference manuals. I'm sorry to say I no longer have any,
but if you can find copies you can spend many enjoyable hours studying
them. Mr. Lee above captured the emotions that many of us felt about
Seymour and his machines.
Best regards,
Joel Williamson
--
Joel Williamson jo...@rsn.hp.com
Hewlett-Packard Company, Convex Division (HP/CXD) (972) 497-3019
I think we can safely say that the thread has proven Vernon's point.
It's also quite probable that we're not building the fastest/cheapest/
best computers we can. There may be superior approaches that we will
never see because there is no locally optimal path to them from where
we are. Imagine, for example, that there were an approach that would
outperform our systems by a factor of 10 at the same level of capital
investment. If someone proposed that approach today, the capital would
never be available.
--bob--
The high-end IBM System/360s designed in the mid-1960s
had some form of superscalar execution. Seems like it was
called Tomasulo's algorithm, but memory may be failing
me here.
R. Brice
MCC
Air bags. I am still trying to outfit my computer with one for when it
crashes!! :-)
-- Norman
I ran on a 360/91 that was heavily pipelined and gave those famous
imprecise exception interrupts, but I don't remember anything that
issued multiple instructions in a single clock. Anyone else?
>: >|> So, what was the first machine sold having multi-issue? Anything
>: >|> before 1985?
>:
>: The high end IBM system 360's designed in the middle 60's
>: had some form of super scaler. Seem's like it was
>: called Tomasulo's algorithm, but memory may be failing
>: me here.
>I ran on a 360/91 that was heavily pipelined and gave those famous
>imprecise exception interrupts, but I don't remember anything that
>issued multiple instructions in a single clock. Anyone else?
IBM Advanced Computer System -- The First Superscalar
Mark Smotherman, updated December 1995
In 1961 IBM started planning for two high-performance projects to exceed
the capabilities of Stretch.
"Project X", which led to the announcement of the IBM System/360 Model 91
in 1964 and its delivery in 1967, had a goal of 10 to 30 times the
performance of Stretch. The second project, named "Project Y" in 1963,
had a goal of building a machine that was one hundred times faster than
Stretch. The initial work concerned advanced circuit technologies,
but studies of machine designs using multiple, identical functional
units were undertaken in 1962.
The announcement of the CDC 6600 supercomputer added impetus to the
Project Y effort, and a supercomputer laboratory was
established in San Jose, California, in 1965 under the direction of Max
Paley and Jack Bertram. The San Jose team, which included Fran Allen,
John Cocke, Herb Schorr, and Ed Sussenguth, developed what became known
as the ACS, Advanced Computer System.
Parallel decoding of multiple instructions and dispatch to two instruction
windows, one of which provided out-of-order execution, was proposed for
this machine. Schorr wrote in a 1971 paper on the ACS that "multiple
decoding was a new function examined by this project." Cocke in a
recent interview stated that he arrived at the idea of multiple decoding
for ACS in response to an early 1960's IBM internal report written by
Gene Amdahl on how fast a single-instruction-counter machine could
execute. In this report, Amdahl postulated one instruction decode
per cycle as one of the fundamental limits on obtainable performance.
In addition to multiple decoding, the ACS design
proposed many technological, architectural, implementation,
and compiler techniques that are now used in present-day superscalar
processors.
Technological innovations:
- The clock cycle time goal was 10 nanoseconds. (In comparison,
  the Project X machine - the S/360 Model 91 - had a 60 nanosecond
  cycle time, and the mid-range S/360 Model 50 had a 500 nanosecond
  cycle time.)
Architectural innovations:
- Register-to-register instruction formats had three register specifiers.
  There were 31 24-bit index registers and 31 48-bit arithmetic registers.
- There were 31 "backup registers", each one being paired with a
  corresponding arithmetic register. This provided a form of register
  renaming for overlapped execution of different loop iterations, so that
  a load or writeback could occur to the backup register whenever a
  dependency on the previous register value was still outstanding.
- A set of 24 condition code registers allowed precalculation of branch
  conditions and also supported logical operations between condition codes.
  This is similar to the eight independent condition codes in the IBM
  RS/6000 and PowerPCs.
- A conditional write-back bit in each instruction was used with a
  conditional 'skip' to replace regular conditional branches (similar to
  conditional moves used today).
- A 'prepare-to-branch' instruction was paired with an 'exit' instruction
  so that a variable number of branch delay slots could be filled. This
  was due to Ed Sussenguth's recognition that a conditional branch
  comprised three separate functions: branch target address calculation,
  taken/untaken determination, and PC update. The ACS combined the first
  two functions in the prepare-to-branch and used the exit instruction to
  accomplish the third function.
- In 1967, prior to a major change in the project, a second instruction
  counter and a second set of registers were added to the simulator to
  make the ACS into a multithreaded design. Instructions were tagged with
  an additional "red/blue" bit to designate the instruction stream and
  register set; and, as project members had expected, the utilization of
  the functional units was increased since more independent instructions
  were available.
Implementation innovations:
- There were six functional units for index operations: compare, shift,
  add, branch address add and logic, and two effective address adders.
  There were seven functional units for arithmetic operations: compare,
  shift, logic, add, divide/integer multiply, floating-point multiply,
  and an additional floating-point adder.
- Up to 7 instructions could be issued per cycle: three index operations
  (two of which could be load/stores), three arithmetic operations, and
  one branch. A load/store/index instruction buffer would be decoded each
  cycle and could issue up to three instructions in-order. An 8-entry
  arithmetic instruction buffer would be decoded each cycle to search for
  up to three ready instructions and could issue these instructions
  out-of-order.
- There was dynamic branch prediction to provide for instruction prefetch,
  but speculative execution was ruled out after a study of the
  predict-untaken policy of Stretch indicated that performance problems
  could arise in some cases of misprediction recovery. A 96-entry target
  instruction cache was proposed by Ed Sussenguth to provide the initial
  target instructions and thus reduce the cycle penalty for taken branches.
- Most external interrupts were converted by the hardware into branches
  to the appropriate handlers and then inserted into the instruction
  stream to allow previously-issued instructions to complete. (This was
  called "soft-interrupts".)
- Arithmetic exceptions were handled in two modes: multiple issue with
  imprecise interrupts and serialized issue. This approach was used on
  the S/360 Model 91, and a similar approach is used in the IBM RS/6000.
- Cache memory was introduced within IBM in 1965, leading to the
  announcement of the S/360 Model 85 in 1968. The ACS adopted the cache
  memory approach and proposed a 64K-word unified instruction and data
  cache. The ACS cache was two-way set associative with a line size of
  32 words and had LRU replacement. A block of up to eight 24-bit
  instructions could be fetched each cycle. (A short address-breakdown
  sketch follows this list.)
- A store buffer was proposed and provided address-collision interlock
  for load bypass.
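As promised above, a small worked example (my arithmetic, assuming
word-granular 24-bit addresses, which the text does not actually state)
of how those cache parameters would split an address:

    #include <stdio.h>

    int main(void)
    {
        unsigned total_words = 64 * 1024;   /* 64K-word unified cache   */
        unsigned line_words  = 32;          /* 32-word lines            */
        unsigned ways        = 2;           /* two-way set associative  */

        unsigned lines = total_words / line_words;   /* 2048 lines */
        unsigned sets  = lines / ways;               /* 1024 sets  */

        unsigned offset_bits = 5;                    /* log2(32)   */
        unsigned index_bits  = 10;                   /* log2(1024) */
        unsigned tag_bits    = 24 - index_bits - offset_bits;  /* 9  */

        printf("%u sets: %u-bit tag, %u-bit index, %u-bit word offset\n",
               sets, tag_bits, index_bits, offset_bits);
        return 0;
    }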
Compiler innovations:
- An optimizing compiler with instruction scheduling, register
  allocation, and global code motion was developed in parallel with the
  design by Fran Allen and John Cocke. Fran Allen credits this work as
  providing the foundations for program analysis and machine
  independent/dependent optimization.
Special emphasis was given to six benchmark kernels by both the design
and compiler groups. One of these was double-precision floating-point
inner product. John Cocke estimated that the machine could reach 5-6
instructions per cycle on linear algebra codes of this type.
By 1968, an ACS machine that was incompatible with the S/360 architecture
was losing support within the company.
Gene Amdahl proposed to redesign the ACS to provide compatibility, and
this plan was accepted by management.
However, the project was thrown into a state of disarray by this
decision, and approximately half the design team left.
By 1969, circuit design problems coupled with the achievements of the
cache-based S/360 Model 85, a slowdown in the national economy,
and East-coast/West-coast tensions within the company, led to the
cancellation of the ACS360.
Further work was done at IBM in the 1970s on superscalar S/360s and S/370s,
but apparently no public documents exist.
References
- John Cocke, "The search for performance in scientific processors,"
  CACM 31, 3 (March 1988), pp. 250-253.
- Emerson Pugh, Lyle Johnson, and John Palmer, IBM's 360 and Early 370
  Systems. Cambridge, MA: MIT Press, 1991.
- Herb Schorr, "Design principles for a high-performance system," Proc.
  of the Symposium on Computers and Automata, New York, April 1971,
  pp. 165-192. [Many thanks to Jim Smith for pointing me to this paper.]
Really?
>It's also quite probable that we're not building the fastest/cheapest/
>best computers we can. There may be superior approaches that we will
>never see because there is no locally optimal path to them from where
>we are. Imagine, for example, that there were an approach that would
>outperform our systems by a factor of 10 at the same level of capital
>investment. If someone proposed that approach today, the capital would
>never be available.
Only a factor of 10? Not 100? Not 1000?
The big problem is somewhat due to being stuck in the von Neumann
paradigm of computing. Years ago when I took an OS class prior to
taking a job I heard about dataflow machines lacking program counters.
The prof (name best not mentioned) had never heard of such a concept.
And architectural assumptions will impact software. Few are willing to
take bold steps.
If you continue to think about capital, then I would suggest that you
are a computer Republican (conservative). This does not make other people
computer Democrats. More like computer reformers.
The issue is whether you see this in comp.sys.super or comp.arch
and how your thinking is aligned.
Speed (device switching) will be increasingly tough to improve, alas.
It can't all be done at that level.
32-bit fixed point binary arithmetic.
Word (32 bits/word) only addressing.
Asynchronous math units (in discrete transistors).
Limited-access index registers (i.e. missing address bits due to the cost
of transistors used in the registers).
Dedicated function registers & hardware-supported looping control
for repeat-intensive operations.
Completely parallel I/O units with independent access to main memory.
So, why were the math units asynchronous on the Ford 102? Not for the same
reason that they are on the Cray systems! The Ford folks weren't capable
of running the synchronization through the units simply because the design
technology that they were using at that time wasn't capable of doing it!
(So it's kind of like "chaining" on the Cray systems, if you discover that
it enhances the operation of the system, then advertise & flaunt it!)
Of course, there were some drawbacks. Since transistors were so expensive,
the internal (system) registers were shared between the functional units &
the peripheral operations. Consequently, CPU calculations went to 1/2-speed
when I/O was operating (the I/O getting a 1/2-time interest in these
registers)!
Should I mention that Main Memory was a stack of core-planes in a separate
cabinet! We upgraded the systems by removing the core & putting a hinged
board containing chip memory into the CPU cabinet with a ribbon cable from
it to the CPU in 1980.
(PS. This machine has since moved, however Congress has cancelled most of
the appropriations for upgrades to the network that this computer supported,
so I suspect that it has been replaced, but I wouldn't bet on it! The
stateside portion of this network used RCA hardware, which was equivalent to
IBM 360 hardware & had, with little difference, the same instruction set.)
(Oh! and I forgot to mention that this was a 32-bit word system that used
Octal! Quite a nice introduction to the Cray, since Seymour liked Octal
and insisted on using it in all of his systems. ... Enjoy.)
--
David Ecale
ec...@cray.com Work = 612-683-3844 // 800-BUG-CRAY x33844
http://wwwsdiv.cray.com/~ecale Beep = 612-637-0873
Will hack UNIX(TM) for food!
--
Glen Clark
gl...@clarkcom.com
In article <Dzu8w...@mcc.com> r...@bromine.mcc.com (Richard S. Brice) writes:
>Fast hardware is nice, but if you can figure out a way to
>build computers that increase programmer productivity by
>a factor of 10 you won't have to worry about capital.
Whereas I am fully empathetic with your view (having left software
engineering some 14 years ago), I grab a quote from Sopka's posting
(a good reference, BTW) of the Smithsonian's Cray interview:
that we have to remember the purpose of machines *IS* speed (I thought I
had cut and pasted that comment; oh well, I FAQ'ed John's ref in the
c.s.s. FAQ, panel 18). Cray himself also said:
My guiding principle was simplicity.
I think there is an expression for that.
Don't put anything in that isn't necessary.
There are elements (people, agencies, institutions, etc.) in this country
which will literally kill for speed. One of the last great claims was
the IBM TF-1. I heard a lot of people including a former officemate
Eric (just had lunch with Eric) say that for 1000x speed he'd program
the machine in assembly language. If it were only that simple.
My guess is that you are reading this in comp.arch and not c.s.s.,
because of the cross post.
The experience at places like here, LLNL, LANL, and other places is
to expect an average (sustained) performance on vector machines of 10% of
rated "peak", and that on distributed-memory machines 1% was a common expectation.
That ASCI at LLNL is hoping for 33% is quite ambitious in this regard
(to reach a teraFLOP, too). That is what supercomputing and Mr. Cray
were about.
Cray didn't produce machines for the masses. That's one reason why he
was a National treasure of sorts.
10x is barely interesting performance gain for a lot of people who
aren't willing to pay more for their computing.
--Devil Screw Tape, Jr.
(I have to thank George Michael for introducing me to that CS Lewis book)
There are quite a few people hoping for that remarkable innovation.
Yeah, I think I attempted to take it in your broadest sense.
I used to push more for optical innovation (it's not that it hasn't
panned out, but that inertia is favoring MOS at the moment).
Others pushed GaAs, and even Cray in the 1982 (boy that's 14 years old!)
interview noted InP.
The great thing about integrated circuitry was that it placed a lot of
powerful building blocks in the hands of many people to try things like
build personal computers.
Edwin H. Land maintained that "discoveries are made by
some individual who has freed himself from a way of thinking that
is held by friends and associates who may be
more intelligent, better educated, better disciplined,
but who have not mastered the art of a fresh, clean look at the
old, old knowledge."
--from "Eyeball to Eyeball"
>Evolutionary systems are full of situations like this. Organisms reach
>a point of local optimization within an ecosystem and can't reach a
>more survivable form because the intermediate stages are less
>competitive. They can't get from A to B one step at a time.
>
>: ...I would suggest that you are a computer Republican (conservative).
>
>Perhaps I just have a higher standard for "significant innovation" than
>you do.
Portions of the US Government are hoping for 100-1000x performance gain.
Now, while I think some of our (not just my agency) managers must be
smoking something, they are attempting to make plans (the whole
Petaflops thing), and the way they are going about it scares me:
we are going to be pouring money down a black hole.
The question is who's going to pay for that innovation? Who's going to
do it (or even offer it)?
I doubt certain elements of the government will, nor will the existing
US computer industry; that leaves universities, and I somehow doubt they will.
It's our interesting time.
>One of the wonders of the time was the wealth of detail that went into
>the hardware reference manuals.
>Joel Williamson
Somewhere, buried some place, I have Denelcor HEP manuals.
Now, that is a vastly different architecture than most seen by people.
Quite a few books on interesting architectures exist, but for many, one
can question their value (for instance, I have E.O.'s 432 book).
In talking this over with the Smithsonian, the Boston CM, and Patterson,
I think it's going to become essential to do some computer archeology
and try to locate useable emulators and simulators (like the ENIAC
[simple] emulator).
Many firms don't make architecture manuals available.
Serves them right.
When I said capital, I meant it in the broadest sense. The current
direction of computer technology has been investigated by tens of
thousands of people over the course of 40 years. It would take a
remarkable innovation to replace it with something radically
different. Most people would probably not even recognize it as
computing. To quote Seymour Cray: "It's always easy to do the next
step and it's impossible to do two steps at a time."
Evolutionary systems are full of situations like this. Organisms reach
a point of local optimization within an ecosystem and can't reach a
more survivable form because the intermediate stages are less
competitive. They can't get from A to B one step at a time.
: ...I would suggest that you are a computer Republican (conservative).
Perhaps I just have a higher standard for "significant innovation" than
you do.
--bob--
In spite of Supreme Court rulings, my employer is not a person and has
no views.
One answer is the caption noted at the bottom of the memo as it was shown
in "A few good men from Univac" by D. Lundstrom.
And repeated in Thorndyke's "The Demise of ETA Systems":
Paragraph 1: we started with too many people.
If you get stuck in the "more is better trap," heaven help you.
Fast hardware is nice, but if you can figure out a way to
build computers that increase programmer productivity by
a factor of 10 you won't have to worry about capital.
R. Brice
_________
I don't think so. According to the Pentium documentation: ``FWAIT causes
the processor to check for pending unmasked numeric exceptions before
proceeding... Coding FWAIT after an [FPU] instruction ensures that any
unmasked floating-point exceptions the instruction may cause are handled
before the processor has a chance to modify the instruction's results.''
It's of course possible that this is implemented as all-engines-stop; a
program wouldn't be able to tell except by the timing. But I would guess
that it continues as soon as it's sure that no exceptions happen.
In any case, the FWAIT interface seems to match the TRAPB interface.
FWAIT was introduced in the 80387 a decade ago. It probably has much
earlier precedents.
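For comparison, a similar sketch on the x86 side (again mine, not from
Intel's documentation, and assuming x87 code generation); fp_wait() is a
hypothetical helper around the real FWAIT/WAIT mnemonic, emitted here via
GCC-style inline asm:

    /* Hypothetical helper, illustrative only. */
    static inline void fp_wait(void)
    {
    #if (defined(__i386__) || defined(__x86_64__)) && defined(__GNUC__)
        __asm__ __volatile__("fwait" : : : "memory");
    #endif
    }

    double scale_up(double x)
    {
        double y = x * 1e300;   /* with x87 code, an overflow here is held */
                                /* as a pending numeric exception...        */
        fp_wait();              /* ...which, if unmasked, is reported now,  */
                                /* before y's value is used any further     */
        return y;
    }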
---Dan
>>>We will do what we have always done when we've had our back to
>>>the wall; we will turn round and fight.
>Wait, if we've got our back to a wall, and we turn round and
>fight, won't we hurt ourselves punching the wall?
Yes. It is a quote from John Major, temporarily prime minister here.
-- Richard
--
"Nothing can stop me now... except microscopic germs"
Eugene Miya (eug...@cse.ucsc.edu) wrote:
: One answer is the caption noted at the bottom of the memo as it was shown
: in "A few good men from Univac" by D. Lundstrom.
: And repeated in Thorndyke's %T The Demise of ETA Systems
: Paragraph 1: we started with too many people.
The answer also came down to "It has to be 360 compatible." One will note
that from an economic point of view, this was a brilliant management
decision.
To take this into modern times, would you rather be Intel (the most
profitable microprocessor) or DEC (the fastest microprocessor)? (And no,
I'm not saying there is a "right" answer to this question. Though the stock
market certainly thinks there is.)
-Z-