html://www.paralogos.com/DeadSuper
It is intended to be a hypertext archive of information on the
many failed attempts at making a business out of high-performance
computing over the past couple of decades. As the pages
themselves indicate, I am actively soliciting contributions
of material and links.
--
Opinions expressed may not be        Kevin D. Kissell
those of the author, let alone       Silicon Graphics MIPS Group
those of Silicon Graphics.           European Technical Programs
For those of you who absolutely insist on clicking from
within their newsreaders, that should, of course, have read
http://www.paralogos.com/DeadSuper
--
Opinions expressed may not be        Kevin D. Kissell
those of the author, let alone       Silicon Graphics MIPS Group
those of Silicon Graphics.           European Technical Programs
Not a new one :-) Anybody else remember the scads of IBM 1620
consoles that made up "the console" for Colossus: The Forbin Project?
How about the acres of Tab (Electro-mechanical tabulating) Equipment
in that old movie where Cary Grant is trying to seduce some squeaky-
clean "kitten" (The one that _nearly_ ends with her going to a sleazy
motel with John Astin).
Seriously, the movies don't lead public taste/perceptions;
for the most part, they follow them. The set decorators on these
films don't think they're being retro, they think they're being
"hip", "with it", or whatever was _last_ year's code-word for
"able to find bottom with both hands". It's just that this year's
hero stands a good chance of being next year's (or the year after
that's) goat.
(Of course, getting the stuff _cheap_ may have a lot to
do with it :-)
Mike
| alb...@agames.com, speaking only for myself
ETA in "Die Hard".
--
______________________________________________________________________
Steve Kappel steve....@iname.com http://www.visi.com/~skappel
Does anyone know what SGI paid (or donated) for the product placement in
_Lost_In_Space_? It also wasn't clear to me whether they were supposed to
be sponsoring the Jupiter 2 mission or just the news coverage of the
mission. Supposedly there is a shot of one of their new NT machines in the
movie, but I didn't catch it.
The powers that be at SGI should have gotten the right to rewrite the line
of dialog about "I just hacked into his operating system and accessed his
subroutines" as part of having their name associated with the film.
Great special effects. Wish they'd spent a tenth as much on a script...
-Z-
IBM 7094 in _Dr Strangelove_.
regards,
Ross
--
Oh Lord, won't you buy me a PDP-10?
My friends all hack Vaxen; I must make amends.
Ross Alexander, ve6pdq || rwa@cs
So who will be a major player in computing 30 or 60 years from now?
Very few companies have lasted more than a decade or two in the past
50 years. Even IBM had its scary moments. Last Sunday's NY Times was
cautious about Intel introducing a new architecture (Merced 99),
recalling its failures to do so in the past.
It is my understanding that Merced will run the HP and Intel instruction
set so that existing apps under DOS/Windows and HP-UX can be executed.
This backward compatibility should minimize the risk of a new
architecture. Can someone who is familiar with the details of Merced
comment as to whether this is really true? My info comes from Byte
magazine.
--
=========================================================================
Jim Tuccillo j...@radia1.com
IBM j...@vnet.ibm.com
415 Loyd Road voice: 770.487.6694, fax: same, call first
Peachtree City, GA 30269 http://www.radia1.com/jjt1
=========================================================================
Lots of other sources beyond Byte. However, the question is, how *fast*
will it run existing x86 binary programs?
Apple demonstrated with the 68k->PPC transition that a combination of
a robust emulator, faster hardware, and recompiled OS carefully optimized
where programs were spending the most time could work as a transition
from one architecture to another. The key here was that basically every
program ran faster, or as fast, on the first PPC systems than it
did on the fastest 68040 macs at the time, and equally robustly.
To achieve the same success, Microsoft will have to have a Merced
native Windows 98 and Windows NT ready to go, the x86 emulator will
have to be reasonably fast (however it is implemented), and the
systems will have to be both affordable and robust.
One cautionary tale here is to look at the Alpha NT experience.
The fastest CPU available right now for NT systems, by far, is
the Alpha. It runs native compiled NT. It has binary emulation
that runs quite fast enough. It even costs a reasonable amount.
But the market just isn't buying Alpha NT systems in bulk,
even though it technically appears to fulfil the need fully.
If the prices for Merced that are being bandied around,
$2k per chip for example, are accurate, I would worry a lot
if I worked at Intel. Even being TrueBlueIntelCertified
will not save it if the market shies away, as it did with
the i860, iapx32, etc. For now, we can only speculate.
The only people who know the architecture, performance,
and cost of the chips aren't talking yet, and the
range of speculated values is too wide for accurate
predictions made by outsiders.
-george william herbert
gher...@crl.com
I know this is picking a nit, but...
Wasn't iAPX32 an internal Intel designation for the 80386? If so, then I
don't think you can reasonably claim that the market "shied away" from it
(although the market did take a _long_ _time_ to actually start using some
of the architectural extensions).
Perhaps you meant to refer to the i432?
-- Patrick
Existing x86 code will be a dog on Merced. Software vendors will
have to recompile to take advantage of Merced - no different than
recompiling for Alpha today... and few are doing it.
Two instruction sets on one chip... please name one instance where
that has been done commercially (successfully).
I don't see any reduced risks.
We cannot hope to see this far into the future of technology, or even 15
years into the future. See http://www.shinma.org/ttk/vinge.html for some
(admittedly far-fetched) thoughts on this.
Just in the context of movie-making, it really doesn't matter who the
major player in computing will be 30 years from now. If we are to believe
Moore's Law, 500MHz 64-bit superscalar processors will cost less than two
cents apiece in 30 years. With compute power that cheap, anyone with a few
bucks to blow on hardware could render high-resolution 30fps movies in
realtime. I expect Moore's Law to break sometime in the next ten years or
so, though, unless we figure out how to use ultra-high frequency gamma rays
in our fab and solve those pesky quantum mechanical problems.
-- Bill Moyer
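As a sanity check on that arithmetic, assume a comparable high-end CPU costs
about $2,000 today and that the cost per part halves every 18 months (both
numbers are illustrative assumptions, not figures from the post above):

    # Extrapolate Moore's-Law-style cost halving over 30 years.
    start_price_usd = 2000.0        # assumed price of a comparable CPU today
    halving_period_years = 1.5      # assumed cost-halving period
    years = 30

    halvings = years / halving_period_years        # 20 halvings
    projected_price = start_price_usd / 2 ** halvings

    print(f"{halvings:.0f} halvings -> ${projected_price:.4f} per chip")
    # 20 halvings -> $0.0019 per chip, i.e. about a fifth of a cent,
    # which is where a "less than two cents apiece" figure comes from.

Change either assumption and the endpoint moves by orders of magnitude, which
is really the point about extrapolating that far out.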
You must have a very low threshold for "high-resolution", or feel
that the interconnect to gang a herd of these things will be equally cheap.
From where I sit, anything that doesn't run Win95 is getting _more_
expensive, not less. Maybe VisualRenderman+++++ will come out and we
will see render-farms in your pocket, but then, who writes the scripts
(OOOOPS, forgot that they aren't necessary anymore, if current hit
movies are any indication :-)
: I expect Moore's Law to break sometime in the next ten years or
: so, though, unless we figure out how to use ultra-high frequency gamma rays
: in our fab and solve those pesky quantum mechanical problems.
The big issues will (still) be software. When an L2 cache miss
becomes more expensive (in cpu cycles) than a page-fault to an NFS-mounted
swap-file, and software is still essentially single-task, somethings
gotta give :-) A few more years of version-catfights among DLLs ought
to make things interesting, too :-)
In article <6gcag4$t8e$1...@aurora.cs.athabascau.ca>,
Ross Alexander <r...@cs.athabascau.ca> wrote:
>IBM 7094 in _Dr Strangelove_.
Really? Wow, the Museum has two of those. I'll have to rent that again.
I could always use a good laugh: "going to hear from the Coca-Cola
Company..."
I was amused by the modified "Cray" in Sneakers.
I didn't see any special machines in Mercury Rising.
Graphics rendering is about as close as you can get to a perfectly
scalable algorithm. The interconnect does not need to be any faster
than 100b ethernet to distribute the data to be rendered and collect
it again. The processors do not need to communicate with each other
at all in order to render their allocated slice.
So yes, I do expect the interconnect to be equally cheap. Inexpensive
embedded processors are already available which integrate an ethernet
bus (qv, motorola's MPC850DC).
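A minimal sketch of the decomposition being described, with each worker
rendering its own band of scanlines and never talking to the others; the
render_scanline function here is a stand-in, not any particular renderer:

    # Split a frame into horizontal bands and render them independently.
    from multiprocessing import Pool

    WIDTH, HEIGHT = 640, 480

    def render_scanline(y):
        # Stand-in for real shading work; returns one row of grey values.
        return [(x * y) % 256 for x in range(WIDTH)]

    def render_band(band):
        y0, y1 = band
        return [render_scanline(y) for y in range(y0, y1)]

    if __name__ == "__main__":
        n_workers = 4
        step = HEIGHT // n_workers
        bands = [(y, min(y + step, HEIGHT)) for y in range(0, HEIGHT, step)]
        with Pool(n_workers) as pool:
            frame = [row for band in pool.map(render_band, bands) for row in band]
        print(len(frame), "scanlines rendered")

Only the finished pixels come back over the interconnect, which is why the
argument above treats the processors as essentially independent.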
[..ridiculous retort about needing W95 snipped..]
>: I expect Moore's Law to break sometime in the next ten years or
>: so, though, unless we figure out how to use ultra-high frequency gamma rays
>: in our fab and solve those pesky quantum mechanical problems.
>
> The big issues will (still) be software. When an L2 cache miss
>becomes more expensive (in cpu cycles) than a page-fault to an NFS-mounted
>swap-file, and software is still essentially single-task, somethings
>gotta give :-) A few more years of version-catfights among DLLs ought
>to make things interesting, too :-)
What are you talking about? This has nothing to do with graphics
rendering, which has good locality of access and is easily scaled
across multiple processes on multiple processors, and has been for
quite some time. Software in general is also becoming increasingly
multithreaded. I would expect multithreaded software to be the rule
rather than the exception in eight years.
Perhaps the problems you are alluding to are microsoft-related?
That would explain why they make no sense to me, a Unix user, at all.
-- Bill Moyer
There was a connection machine in Jurassic Park. Now *that* was cool.
--
Paul Hsieh
email,finger: q...@pobox.com
URL: http://www.pobox.com/~qed
>If the prices for Merced that are being bandied around,
>$2k per chip for example, are accurate, I would worry a lot
>if I worked at Intel. Even being TrueBlueIntelCertified
Why? Intel isn't going to stop making x86 compatible chips, reportedly
they have 6 more generations (however you count a "generation") of the
x86 on the roadmap before they project Merced can take over for the
whole market.
I don't think $2000/chip initial cost is going to hurt them one whit,
if they are targeting the high end market. How is adding $3000-$4000
to the cost of a big server or CAD workstation going to hurt them, if
it is really wicked fast? People pay a ridiculous premium for the
fastest x86, compared to a slightly slower model (how much more does a
333 MHz Pentium II cost versus a 266 MHz, and for how few % performance
gain?) HP, Sun, DEC, IBM, and SGI charge thousands more for their high
end model that is 10-20% faster than the next highest end, and sell
plenty. The people predicting the failure of Merced based on price
are either IMHO interested in seeing Alpha finally break through, or
are PC gamers pissed off because they won't be able to afford a Merced
based system right after intro :)
--
Douglas Siebert Director of Computing Facilities
douglas...@uiowa.edu Division of Mathematical Sciences, U of Iowa
A fool of sufficient magnitude can be found to overcome any foolproof system.
I think a realistic projection of movie graphics needs more bandwidth than
that, as the texture servers need to be either replicated many times or else
very high bandwidth. Graphics scalability depends on algorithm: effects
like radiosity don't subdivide as well as traditional triangle rendering.
It also may take more than 100Mb/s to describe the geometry being rendered.
So while I'd rate rendering as very scalable, it's definitely less scalable
than brute-force attacks on symmetric key crypto (my candidate for most
scalable real-world problem).
Fortunately for those doing graphics, other problems driving computer
development do a very good job of requiring machines that are perfectly
adequate for rendering.
> So yes, I do expect the interconnect to be equally cheap. Inexpensive
>embedded processors are already available which integrate an ethernet
>bus (qv, motorola's MPC850DC).
I also expect the interconnect for real-time graphics rendering to be
cheap, but I expect it to be faster and less cheap than 100Mb ethernet.
[snip]
> -- Bill Moyer
Jon Leonard
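A rough feel for the numbers, using made-up but plausible figures for a
film-quality scene (the triangle count, bytes per triangle, and frame rate
below are assumptions for illustration only):

    # Can 100Mb/s ethernet carry a per-frame scene description?
    triangles_per_frame = 2_000_000   # assumed scene complexity
    bytes_per_triangle = 50           # assumed: vertices, normals, material id
    frames_per_second = 24

    bits_needed = triangles_per_frame * bytes_per_triangle * 8 * frames_per_second
    link_bits = 100_000_000           # 100Mb/s ethernet

    print(f"need {bits_needed / 1e9:.1f} Gb/s vs {link_bits / 1e9:.1f} Gb/s link")
    # need 19.2 Gb/s vs 0.1 Gb/s -- so either the geometry is cached and reused
    # across frames, or the interconnect has to be a lot faster than ethernet.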
We could digress a long way into different types of "rendering"
algorithms and different data decompositions, and maybe we will, but
basically load balancing tends to kill scalability beyond dozens-hundreds of
processors for most purposes.
Which is, granted, going to be enough for quite a lot of work/pixel in a
few decades.
Jon
__@/
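One way to see the load-balancing problem is to hand out unevenly priced
tiles to a growing number of processors and watch the speedup flatten; the
tile costs below are random stand-ins, not measurements of any renderer:

    # Why load imbalance caps scaling: the frame finishes when the slowest
    # processor does, and with few tiles per processor the worst bin dominates.
    import random

    random.seed(0)
    tile_costs = [random.uniform(0.5, 5.0) for _ in range(10_000)]
    serial_time = sum(tile_costs)

    for nprocs in (16, 64, 256, 1024, 4096):
        per_proc = [sum(tile_costs[p::nprocs]) for p in range(nprocs)]
        speedup = serial_time / max(per_proc)
        print(f"{nprocs:5d} procs -> speedup {speedup:8.1f}")
    # Speedup tracks the processor count while each processor still holds many
    # tiles, then falls away from it as the unluckiest bin starts to dominate.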
Not necessarily. There is the model description bottleneck. That is,
the scene and motion description can be much larger than core
and even larger than the output images. This has to be [partially]
distributed to all rendering engines. I recall the Toy Story models
themselves were approaching terabyte size. Seismic imaging has
a similar model problem.
A great physicist said once "Predictions are difficult, especially those
of the future". 60 years ago, there was only one computer, and it was a
mechanical one, standing in the living room of Konrad Zuse's parents.
Today, some of the most important players are less than 30 years old
(well, except Linus Torvalds; I mean companies, not persons ;-). A hell
of a lot of things can change.
We may have an economic change. Communism collapsed in a few years
without prediction, Asian capitalism is in the process of collapsing
(did anyone predict that?), and the bet that western capitalism will still be
the economic model in 60 years carries some risk. Perhaps in 60 years our
grandchildren will say, "Mom, I found a necktie in the cellar. Was grandpa a
pointy-haired boss?" ;-))).
It's likely that we'll have a technology change. We have already had four
different technologies (mechanical, tubes, transistors, ICs), each
lasting about twice as long as its predecessor.
--
Bernd Paysan
"Late answers are wrong answers!"
http://www.jwdt.com/~paysan/
>I didn't see any special machines in Mercury Rising.
They mentioned using 'dual cray supercomputers'
to validate the mercury code, but didn't show any
that I saw either.
Are you sure? After all, the existing "686" processors are not
really x86 processors at their heart; all of them translate the
x86 instructions on the fly to internal RISC-like operations.
Why don't you think that Merced can follow the same strategy?
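A toy illustration of that cracking step; the mnemonics and micro-ops below
are invented for the example and don't correspond to any real decoder:

    # A CISC-style memory-operand instruction gets cracked into RISC-like
    # micro-ops before execution, in the spirit of the P6-class x86 cores.
    def crack(instr):
        op, dst, src = instr
        if op == "add_mem_reg":            # add [dst], src (read-modify-write)
            return [("load",  "tmp", dst),
                    ("add",   "tmp", src),
                    ("store", dst,   "tmp")]
        if op == "add_reg_reg":            # already RISC-like: one micro-op
            return [("add", dst, src)]
        raise ValueError(f"unknown op {op}")

    program = [("add_mem_reg", "0x1000", "eax"),
               ("add_reg_reg", "ebx", "ecx")]

    for instr in program:
        print(instr, "->", crack(instr))

Whether Merced actually does anything like this for IA32 is exactly the open
question being argued here.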
Just the same:
>Software vendors will
>have to recompile to take advantage of Merced - no different than
>recompiling for Alpha today... and few are doing it.
This is, in the end, correct; Merced only wins if large numbers of
developers ship native Merced code.
>Two instruction sets on one chip... please name one instance where
>that has been done commercially (successfully).
Oh, come on. The Intel 386 (and all its successors) is such a processor.
Its 16-bit instruction set and 32-bit instruction set are really two
separate sets. The only reason it "won" is that it could run all
the 16-bit Windows/DOS code, and in the end, developers recompiled
to use the 32-bit instructions.
--
-- Joe Buck
See my daughter: http://www.byoc.com/homepage/130481/molly.html
Boring semi-official web page:
http://www.synopsys.com/news/pubs/research/people/jbuck.html
Considering the hype regarding Merced, a low penetration for Merced
due to high prices in the mass market could be considered a failure,
especially with companies supposedly wanting to buy cheap NT workstations
in droves.
I don't want to disagree with this excellent posting; I agree that the
future is unpredictable and technology change is likely. But
unpredictability implies that it's also possible that technology change
in computing might slow drastically. Exponential curves in nature--even
the increase in the cost of a jet fighter or the growth of the US
government debt--seem never to go on forever. Eventually, computing
will become a mature industry like steelmaking, railroads, oil, or
automobiles, though I won't venture to predict when that will happen.
For example, the evidence of the past 50 years is that GM will be the
dominant auto maker in the US no matter what happens. Innovations occur
only as variations on the stable theme of four wheels, doors on the
side, engine in front. Periodically a competitor will introduce a
wildly popular "innovation", and will flourish for a short time (Ford
with pony cars in the 60s, Toyota and Nissan with economy cars in the
70s, Chrysler with minivans in the 80s, SUV manufacturers in the 90s).
Then GM will embrace and extend the idea, introduce a copycat
Camaro/Chevette/Lumina/Jimmy, and continue to dominate the overall
market. GM has screwed up again and again--poor reliability, offering
large cars when consumers wanted small ones, throwing money at quixotic
automated-manufacturing technology while its competitors cut costs by
improving management and motivation of assembly workers--but its raw
economic power always seems to carry it through. And GM is far less
dominant in its industry than Microsoft and Intel are in computing. To
see a situation like Microsoft/Intel, one must go all the way back to
the days of Carnegie, Rockefeller, Gould, and Fisk. Perhaps
innovations in computing will settle down into variations on a stable
theme: after all, we've already discovered that you can add
peer-to-peer networking, internetworking, multimedia I/O, file servers,
and print servers, all built atop a firm foundation of logical drives
named "a:", "b:", "c:" and devices named "com1:", and "lpt:". :-)
--
Steven Correll == PO Box 66625, Scotts Valley, CA 95067 == s...@netcom.com
Can you say "fat binaries"? I just _knew_ you could!
======================================================================
/| | Linus's Law: |
\'o.O' | Given enough eyeballs, all bugs | Steve Siegfried
=(___)= | are shallow. Thus, debugging | s...@skypoint.com
U | is parallelizable. |
======================================================================
Lotsa lights but didn't go anywhere.
Cutting cross-posts.
I remember that, too.
I saw a 2 on a Fort visit (Red skins). I think they made an error in
the Agency seal.
The DOE did a study in 1978 which said that forecasting in computing was
even odds (50/50) beyond 5 years: i.e., you might as well flip a coin.
>Very few companies have lasted more than a decade or two in the past
>50 years. Even IBM had its scary moments. Last Sundays NY Times was
>cautious about Intel introducing a new architecture (Merced 99),
>recalling its failures to do so in the past.
Clearly, what you are seeking is to find a V. Bush (Memex prediction)
rather than a Watson/Olsen/Gates faux pas. The reality is that computers
are finding their way into more and more areas of life. Bailing out a
failing IBM or SGI/CRI or some other firm (Unisys) to keep ancient
software running might become a national if not international priority.
That the heart of a PDP-11 beats on in an Alpha, or that a 360 (or even a
7090-series) somehow sits inside whatever IBM's major offering is 30 years
from now, is entirely possible. And who knows about Microsoft and NT.
Windows'2028... Hmmmm Windows'2058...
I don't know what XXX will look like in the year 2028 [2058] but it will
be called YYY.
And who knows, maybe we'll run out of certain resources by then.
Uh, no, there is one generation left, and a bunch of flavors
of the current and that next gen left. And even that isn't as much
of a generation as anyone else's... minor mods around the
same cores, shrinks, speedups, etc. aren't "generational",
in my humble opinion.
>I don't think $2000/chip initial cost is going to hurt them one whit,
>if they are targeting the high end market. How is adding $3000-$4000
>to the cost of a big server or CAD workstation going to hurt them, if
>it is really wicked fast? People pay a ridiculous premium for the
>fastest x86, compared to a slightly slower model (how much more does a
>333 MHz Pentium II cost versus a 266 MHz, and for how few % performance
>gain?) HP, Sun, DEC, IBM, and SGI charge thousands more for their high
>end model that is 10-20% faster than the next highest end, and sell
>plenty. The people predicting the failure of Merced based on price
>are either IMHO interested in seeing Alpha finally break through, or
>are PC gamers pissed off because they won't be able to afford a Merced
>based system right after intro :)
It will hurt them for one simple reason: it's not a commodity
item at $2k/chip. Fine, they want to build a high end engine
chip for servers, power workstation users, etc, that's fine.
Lots of people do or have done that. Most of them are
profitable. But the market for that is known, and not that
big compared to where x86 is right now. They're going to
be fighting with Sun, and SGI, and IBM, and DEC,
on those guys' home turf, against chips that look to
beat Merced in performance. At $2k/chip, they won't
be able to sell the tens of millions of chips a year that
they need in order to amortize dedicating multiple plants
to producing it.
I have held plenty of $10k CPUs in my hands (don't even
*ask* what we paid for pre-production 300mhz UltraSPARC-II
modules... more than my car cost me). There is a market for
them out there. But people buying x86 have already
chosen to not buy the absolute best performance out
there available (Alpha, UltraSPARC, R10k, etc).
If they were willing to pay the extra $4k per
system they'd buy Alphas. The only thing that Merced
has as an advantage is (allegedly) native x86 execution,
but it remains to be seen if that ends up being faster
than Alphas in x86 emulation mode, and Alphas in x86
emulation mode have not driven x86es out of the top
of the NT server market by a long shot.
Again, the number of details of how Merced will
work, perform, and cost that are known approach
zero so speculation has possibly zero accuracy. 8-)
-george william herbert
gher...@crl.com
I wonder what on Earth you are basing this "look" on. Of course, if
Merced can't beat the performance of its UltraSparc direct competitor,
it won't do well at all. But what data do you have that suggest that?
Frankly, my opinion is that if HP/Intel can't beat UltraSparc, designing
an architecture essentially from scratch for that express purpose, and
taking several years to do it and to aim at a particular technology to
do it in, then they must have holes in their heads. And I don't think
that everyone that works at Intel is quite so dumb.
> The only thing that Merced has as an advantage is (allegedly) native
> x86 execution
No, it has (allegedly) much faster native performance than its direct
competitors. And that's very plausible, because it's designed from the
ground up to take maximum advantage of present technology, while
UltraSparc is locked into an architecture many years old.
> but it remains to be seen if that ends up being faster than Alphas in
> x86 emulation mode,
If it's faster than Alphas in native mode, and it is also expressly
designed for x86 emulation, then it will *certainly* be faster than
anything else at emulating x86. Otherwise its designers would *really*
have to have holes in their heads.
> Again, the number of details of how Merced will work, perform, and
> cost that are known approach zero so speculation has possibly zero
> accuracy. 8-)
So, if you don't have zero details, where are you getting your claims
from?
David desJardins
>big compared to where x86 is right now. They're going to
>be in fighting with Sun, and SGI, and IBM, and Dec,
>in those guys home turf, against chips that look to
>beat Merced in performance. At $2k/chip, they won't
This is a separate issue. The argument is whether the $2K/chip is a
killer. If it doesn't perform as well as DEC (let alone laggards like
Sun & SGI) that is a _totally_ different problem. I've said it before,
I'll say it again: I have a hard time believing HP & Intel could get
their best minds together, put Intel's leading process technology
behind it, and produce something that can't beat the other stuff out
there. Look what HP manages to do now with the PA-8200, running clock
speeds over 60% slower than the Alpha's top end, and in a .5u process
yet!
IM{quite inferior to many others who post regularly here}O, EPIC solves
some of the problems that RISC has run into, and even the fairly recently
designed Alpha starts to run into. Doing the hardware does not look to
be a difficult problem compared to the 21264 core with its 4 way integer
issue doing dynamic scheduling and register renaming. Getting the
compilers to cooperate is the big challenge for EPIC, as everyone knows.
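To make the "compilers do the scheduling" point concrete, here is a minimal
sketch of packing independent operations into issue groups ahead of time
instead of letting the hardware discover the parallelism at run time; the
three-field tuples are a made-up toy IR, not IA-64:

    # Greedily pack operations whose inputs don't depend on results produced
    # earlier in the same bundle -- the flavor of work EPIC pushes onto the compiler.
    ops = [
        ("add",  "a", ("x", "y")),
        ("mul",  "b", ("p", "q")),
        ("load", "c", ("arr_i",)),
        ("add",  "d", ("a", "c")),   # depends on the results of ops 0 and 2
    ]

    bundles = []
    current, written = [], set()
    for _, dest, srcs in ops:
        if any(src in written for src in srcs):   # conflicts with current bundle
            bundles.append(current)
            current, written = [], set()
        current.append(dest)
        written.add(dest)
    bundles.append(current)

    print(bundles)   # [['a', 'b', 'c'], ['d']] -- the first group issues 3 wide

A dynamically scheduled core like the 21264 finds the same groups at run time
with its issue queues and renaming; EPIC's bet is that doing it statically
keeps the hardware simpler.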
I speculated over a year ago that Merced would be introduced with well
over 100 million transistors. HP is just about to ship the PA-8500,
created in a .25u process with over 120 million transistors on a chip.
It is designed to be a chip that will span the PA-RISC line from low
end to high end (since they no longer need the expensive large off-chip
L1, as the 8500 has 1.5MB of L1 on chip) so it surely won't be a part
that costs anything like $2000/chip. Given that Merced will be
introduced in .25u and possibly .18u, and the speculation is it'll cost
$2000/chip -- but how accurate that is who knows -- it may have closer
to a quarter billion. I'd assume such a thing would be mostly a really
fat cache and lots of replicated functional units for extra wide issue
(or possibly to avoid clock problems by being closer to the registers
involved....is the register file in IA-64 large enough for that to be a
concern?)
You kind of narrow down to several disjoint possibilities:
a) Nothing about Merced matters, because they can't get the compilers to
do everything that EPIC requires of them
b) Merced is less aggressive in transistor count than I am speculating,
in which case:
1) it costs much less than $2000/chip
2) they charge $2000/chip because it is so fast people will pay it (how
much did Intel charge for the first Pentium Pros, and how much of
that do you think was the variable cost of production -- knowing
they'd amortize the fixed costs over millions of units)
c) Merced is as aggressive as I speculate, so:
1) it costs so much because they are pushing the edge of the process
tech to go really fast, so it really costs $2000 if you want to make
a decent profit due to very low yields (at least initially)
2) if HP could/will do it in .25u with the PA-8500, Intel sure ought to
be able to do it in a mature .25u or nascent .18u on a design that
should be more "regular" (and less complicated to verify than a 4
way superscalar doing dynamic scheduling)
I'll just point out that Intel likes to create a new design that is just
a bit too complex for the process it is supposed to start out in. The
Pentium started in .8u, and struggled with heat problems at 66MHz. It
was right at home in .6u. The Pentium Pro was supposed to be introduced
in .6u, but for whatever reason Intel got it working in .35u in time for
a surprise introduction at much better performance than most were expecting
for a while. I sure was surprised (and more than a bit dismayed) to see
a CISC chip (and x86 yet!) briefly lead the field in SPECint.
It is sort of fun not having any info about Merced, it frees us to
speculate much more wildly than if we actually knew anything about it,
thus limiting our range :)
> I also expect the interconnect for real-time graphics rendering to be
> cheap, but I expect it to be faster and less cheap than 100Mb ethernet.
Given the relatively small distances involved, there's no reason that it
couldn't be fast _and_ cheaper than 100Mb ethernet (cf. DS-links).
Jan
Yes, it will, for sure. These "exponential" curves aren't really
exponential, they are more like saturation curves. It's easy to derive a
differential equation for these saturation curves.
Exponential growth means that the growth rate is proportional to the current
population, thus dx/dt - c*x = 0. Assume that there is a limited, but
constantly available resource (such as space to live); then you get
dx/dt - c*x*(1-x/s) = 0, where s is the saturation constant. This looks almost
exponential while x is small compared to the saturation constant. It's
worse when there is a limited resource that is constantly needed, but
consumed in the process (like oil for cars). Then you get a second-order
equation. All these curves have one thing in common: they look exponential in
the first phase.
Integration is left as an exercise for the reader.
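For reference, the closed form of that saturation (logistic) equation is a
standard result; with initial value x_0,

    \frac{dx}{dt} = c\,x\left(1 - \frac{x}{s}\right)
    \quad\Longrightarrow\quad
    x(t) = \frac{s}{1 + \frac{s - x_0}{x_0}\, e^{-c t}},

which grows like x_0 e^{c t} while x is much smaller than s and levels off
at x = s.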
I base that on statements Intel made some 18+ months ago.
I received email from someone who suggested that they had some
strong reason to believe that Merced would have an IA32 core
which I interpret to mean that it will have both an IA32 and
IA64 core with shared cache/MMU/etc. So perhaps Intel has
changed strategy over the last couple of years with regard to
running existing x86 code. Hey, you can do anything you want
when all you talk about is vaporware!
> Just the same:
>
>>Software vendors will
>>have to recompile to take advantage of Merced - no different than
>>recompiling for Alpha today... and few are doing it.
>
> This is, in the end, correct; Merced only wins if large numbers of
> developers ship native Merced code.
One reason why I suspect that Unix on Merced will attract far more
software than NT (at least native recompiled software).
NT will have to move to 64 bit in addition to fixing many other
problems before it will be competitive with Unix. Rather interesting
to see how things are shaping up - NT is a failure in the
enterprise/high-end and Intel, knowing this, is courting Unix
vendors for Merced. Things look a bit shaky in Wintel-land.
>>Two instruction sets on one chip... please name one instance where
>>that has been done commercially (successfully).
>
> Oh, come on. The Intel 386 (and all its successors) is such a processor.
> Its 16-bit instruction set and 32-bit instruction set are really two
> separate sets.
Well, that is a bit wider interpretation than I was thinking of.
If you want to look at it that way then almost every processor
line has multiple instruction sets! Hell, in the mid-80's Prime
must have had half a dozen instruction sets in one machine!
I wouldn't compare x86 and PA-RISC instruction sets as being
close enough in lineage to treat them the same way. However,
contrary to the initial HP/Intel splash about Merced some years
ago, this email I received claimed that there would not be anything
like PA-RISC instructions in Merced - that HP will "translate"
PA-RISC applications to Merced the same way DEC translated VAX
to Alpha. If true (I haven't seen any concrete statements to
support this), then HP must be having a tough time selling
PA-RISC systems these days! Anyone remember the assimilation
and annihilation of Apollo?
> The only reason it "won" is that it could run all
> the 16-bit Windows/DOS code, and in the end, developers recompiled
> to use the 32-bit instructions.
The need to move from 16-bit to 32-bit was a lot more obvious
than going from 32 to 64. I don't think we are going to see
the mass migration with Merced. Few corporations are willing
to roll over their entire base of PCs every 1-2 years; many
still run Win3.1 on what could be considered outdated hardware.
Until Merced gets down in the PentiumII price range, which will
probably take 5 years (Intel plans to keep going with the
Pentium series at least that long), Merced will remain strictly
a high-end chip. Compare this to UltraSPARC which goes all
the way down to under $3K systems and is 100% compatible with
UltraSPARC-III (which will be out before Merced and just as
fast, if not faster). The Wintel picture is nothing but
messy for the foreseeable future.
Unix on Merced is more of an unknown. Given that Merced is
successful in meeting its goals, and that robust Unix
implementations are tuned for it, and that system vendors
fix all the system architecture problems with prior Intel
systems, then it will probably do really well. However,
there are big Intel systems running supposedly tuned Unix
implementations now that should (according to various
benchmarks) be very competitive with the top-tier Unix
platforms. Yet these can hardly be classified as a market
success. I don't see Merced doing any more than moving
the Unix market to Sun/SPARC, HP/Merced, and
IBM/PowerPC. I expect the market share won't be
much different than it is now. So about all that happens
is that PA-RISC dies and Intel gets that market share.
What data do you have to suggest that Merced will beat competitors?
We have various benchmark data and vendor claims about the other
processors - including in the Merced release timeframe. We have
nothing but VAPOR about Merced performance - show me a SPECint95
estimate! Sun, for one, has claimed a substantial boost in Ultra
that will ship before Merced. Given the technology roadmap and
lots of past experience, it is hard to see Merced being substantially
better than UltraIII or Alpha.
I don't buy the argument that Intel has superior process technology.
They have superior *volume* technology but whether that translates
to Merced is another thing altogether.
> Frankly, my opinion is that if HP/Intel can't beat UltraSparc, designing
> an architecture essentially from scratch for that express purpose, and
> taking several years to do it and to aim at a particular technology to
> do it in, then they must have holes in their heads. And I don't think
> that everyone that works at Intel is quite so dumb.
But the problem for Intel is that they did start from scratch. And
they blew lots of hot air about their plans.
The other vendors recognized Intel's potential market influence
and they would have to have holes in their heads not to accelerate
their processor roadmap plans. You think Sun would design UltraIII
to be inferior to a product coming out in the same timeframe that
they knew about for all these years? I think not. Sun learned
their lesson with SuperSPARC.
>> The only thing that Merced has as an advantage is (allegedly) native
>> x86 execution
>
> No, it has (allegedly) much faster native performance than its direct
> competitors. And that's very plausible, because it's designed from the
> ground up to take maximum advantage of present technology, while
> UltraSparc is locked into an architecture many years old.
For a locked architecture it sure is doing well. Recall
that Ultra has only been out for 2-3 years? How long has Alpha
been out? Seems to me that most industry people agree that
Alpha will still lead Merced.
Alpha also started from scratch. It has enjoyed a performance
advantage. Has it prospered? Nope.
Interesting analogy. I believe the late 19th century term
"robber baron" might apply.
Antitrust laws were created to limit this!
Two things. 1. Wintel hype is larger than Wintel reality. 2.
The government is not going to allow Wintel to continue
monopoly practices.
I predict the desktop is going to return to a form more like
the auto industry - Wintel will undoubtedly be the GM. But
GM (AFAIK) doesn't make sea transports, railways, highways,
motorcycles, personal watercraft, airplanes, oil, or
gasoline! And I don't think Ford is doing too bad either!
I'm not sure an introductory price of $2K is exceptionally high. Look at the
introductory costs of the last couple of generations of x86 chips. While I
am forgetting exact numbers (this is a cue for someone who knows them to
jump in and supply them! :-) I remember seeing >$1K chip prices for
several of them.
If it actually provides a performance boost (even just on native-merced code),
and Microsoft supports it, it will sell.
I don't speak for Bit 3.
Brian
: >big compared to where x86 is right now. They're going to
: >be in fighting with Sun, and SGI, and IBM, and Dec,
: >in those guys home turf, against chips that look to
: >beat Merced in performance. At $2k/chip, they won't
: This is a separate issue. The argument is whether the $2K/chip is a
: killer. If it doesn't perform as well as DEC (let alone laggards like
: Sun & SGI) that is a _totally_ different problem. I've said it before,
: I'll say it again: I have a hard time believing HP & Intel could get
: their best minds together, put Intel's leading process technology
: behind it, and produce something that can't beat the other stuff out
: there. Look what HP manages to do now with the PA-8200, running clock
: speeds over 60% slower than the Alpha's top end, and in a .5u process
: yet!
I was wondering, does the PA-8000/8200's relatively high clock rate
for a brainiac chip in a 0.5u process have anything to do with the
rather short effective channel length for HP's .5u process? But
regardless, remember past performance is no guarantee of future
success.
: IM{quite inferior to many others who post regularly here}O, EPIC solves
: some of the problems that RISC has run into, and even the fairly recently
: designed Alpha starts to run into. Doing the hardware does not look to
: be a difficult problem compared to the 21264 core with its 4 way integer
: issue doing dynamic scheduling and register renaming. Getting the
: compilers to cooperate is the big challenge for EPIC, as everyone knows.
IMVVHO, EPIC could create other problems which may not be well understood
yet.
: --
Compute cycles, for one. The company animating the movie didn't have
enough, so they used processing time on 128-processor Origin 2000's
at SGI, Boston University (which has a special arrangement with SGI),
and somewhere in Switzerland and Japan (which probably have similar
arrangements).
Other movies that feature gratuitous SGI presence:
Jurassic Park - The movie had the Connection Machines in the
background, but remember the scene in which that little
girl went through the file system with that 3D GL-based
file manager? That file manager actually shipped with
IRIX 6.2. Also, SGI specially constructed the display
cards of the SGIs featured in that movie to synchronize
the horizontal scan rate of the monitors to the shutter
speed of the movie camera, so that there would be no
appearance of scan lines or flicker.
Twister - Meteorologists running around with generic pc laptops with
that manufacturer's logo ripped off, a piece of masking tape
strategically placed beneath the display, upon which is
scrawled `Silicon Graphics' in magic marker. Tell me the
producers didn't go out of their way for that.
Disclosure - That movie about sexual harassment in a CD-ROM drive
manufacturing company; released just at the start of the
`multi-media PC' craze. Michael Douglas' desktop computer
was a SGI, with a special email reading application that,
if I remember correctly, featured a rotating SGI logo when
idle.
I believe also Contact, In the VLA operations room, and possibly
The Game.
Brian Mancuso
bri...@cs.bu.edu
Common sense is sufficient.
> The other vendors recognized Intel's potential market influence
> and they would have to have holes in their heads not to accelerate
> their processor roadmap plans. You think Sun would design UltraIII
> to be inferior to a product coming out in the same timeframe that
> they knew about for all these years?
The Sparc architecture is over 10 years old. The team designing a
processor for present technology has all the advantages over a team who
has to use an architecture designed when computers were 100 times
slower. If the two teams apply equal resources and skills, common sense
dictates that the former processor will be significantly faster.
> Seems to me that most industry people agree that Alpha will still lead
> Merced.
Name one.
> Alpha also started from scratch. It has enjoyed a performance
> advantage.
Which illustrates my point very well.
> Has it prospered? Nope.
That's an entirely different question. I'm only refuting your first
claim: that Merced will be ridiculously slow. Whether it will be a
commercial success, I have no idea.
David desJardins
Yep, I was given a copy. Fluffy piece. Sort of the proceedings of ACM'97.
Got the tee-shirt, too.
Edited by Bob Metcalfe and Peter Denning. James Burke (Connections) was
the MC and wrote the Intros.
Lacks detail (for more than one reason). It covers potential futures at a
gross level. It's sitting in the truck. You missed the last SV SIGGRAPH
meeting:
Ken Musgrave gave a talk which included an image he generated for the
cover, but that he had not seen.
The book has some shoddy editing (like printing an author's name
wrong, etc.: i.e., Peter Metcalfe or Bob Denning).
It has many of the usual suspects. Gordon Bell (he's easy to grab on
quickies), but do you expect Brenda Laurel to predict where processors
are going? I think Larry Druffel wrote a chapter on information warfare.
Bran Ferren. Nathan M. etc.
Do you want passages quoted? I can start with a TOC of author/topics.
I'm leaving for Death Valley tomorrow for a long weekend (looking for
fluorescent rocks), and I'm going to be in Berkeley Monday, so I'll post
if I see your selections before leaving. Otherwise, I'll get to your
list on Tuesday.
Let me add, that I like Peter: he and Dorothy are close friends, but
this book: I don't regard as representative of his potential.
>Thinking Machines computers were highlighted in the first "Jurassic Park"
>movie.
>SGI is highlighted in the new "Lost in Space" movie.
>Is there a pattern here? (struggling super computer companies) :-) :-) :-)
Don't forget Apple who pay a whole lot of cash to appear on TV (Seinfeld)
and various movies. A question that is obvious to us and has probably
never occurred to the people sponsoring these ads is "Is there any
evidence this money is well spent?"
Maynard
--
My opinion only
steve....@iname.com (Steve Kappel) replies:
>Interesting analogy. I believe the late 19th century term
>"robber baron" might apply.
>
>Antitrust laws were created to limit this!
>
>Two things. 1. Wintel hype is larger than Wintel reality. 2.
>The government is not going to allow Wintel to continue
>monopoly practices.
Hmm. Possibly the best thing the government could do for Mr Gates is to break
up Microsoft. Rockefeller was wealthy thanks to Standard Oil. But he became
*filthy rich* after Standard Oil was broken up.
(p.s. I'm thinking about marketing a plain-text editor. Think I could get DOJ
to get Microsoft to stop including Notepad with Windows? j/k)
cb
Christopher A. Bohn Engr...@aol.com
"Oooh! What does THIS button do!?"
>> Seems to me that most industry people agree that Alpha will still lead
>> Merced.
>
>Name one.
>
COMPAQ ;-)
--
Dean P McCullough dpm...@zombie.ncsc.mil
I'd wonder about the modification bit. I was crawling around the back of
an *OLD* SGI looking for a model number so we can resurrect it.
R, G, B, CSync out, and a Master Sync input! Looked like it was designed
to drop straight into a TV studio.
--
~paul ( prep ) Paul Repacholi,
1 Crescent Rd.,
erepa...@cc.curtin.edu.au Kalamunda,
+61 (08) 9257-1001 Western Australia. 6076
I've seen SGIs in a lot more movies than just Jurassic Park and Lost in
Space. ``The Relic'' springs to mind as another and, in almost all
circumstances, these machines are running SGI's multi-colour load
monitoring utility. :-)
Ben
---
Ben Elliston
b...@cygnus.com
I don't know what the major player will be in 30 or 60 years, but I know
that they'll deliver inferior hardware and bloated, buggy software ;-).
And it will be important for every other player to be "compatible" to
the dominating player, whatever that will be.
>Eugene Miya wrote:
>> >So who will be a major player in computing 30 or 60 years from now?
>>
>> The DOE did a study in 1978 which said that forecasting in computing was
>> even odds after 5 years 50/50: i.e., you might as well flip a coin.
>I don't know what the major player will be in 30 or 60 years, but I know
>that they'll deliver inferiour hardware and bloated, buggy software ;-).
>And it will be important for every other player to be "compatible" to
>the dominating player, whatever that will be.
>Bernd Paysan
Are you implying that if the dominating player used XZ for their operating
system, then all other vendors would immediately go the same way? I think
that could get to be a very expensive proposition for the user community.
cheers, john
SGIs are all over the place in computer graphics and film editing houses,
so they may be seen as the advanced computer of choice.
To me the most interesting slide in the Joel Birnbaum presentation
cited recently in this newsgroup was the one which said HP sought an
alliance with Intel because HP alone couldn't justify a fab for a new
architecture, and software vendors wouldn't compile programs for a new
HP architecture. This sounds like what economists call "barriers to
entry" which discourage new competitors (or, in this case, an old
one). Also interesting is a press release from a major ERP vendor
announcing that its products (written in SQL, Cobol, and Visual Basic,
not assembly) will be available on Merced, even though the same vendor
has declined to offer them on Dec Alpha/NT. These people are betting
that the age of diverse computer architectures is ending.
--
Steven Correll == PO Box 66625, Scotts Valley, CA 95067 == s...@netcom.com
I'll buy these.
This man might be a candidate for a tee-shirt.
I grew up not far from Hollywood and was briefly an ASC (Amer. Soc. of
Cinematographers) member, and I can tell you just from that
the answer is yes. Probably the classic example was the use of Reese's
Pieces rather than Mars' M&Ms in the film E.T. Sales exploded.
Few items have had "Pet rock" popularity (expense has to also be factored in).
It is never really clear what takes off, but there are literally
hundreds if not thousands of people on Sunset Blvd and in NYC attempting to
figure out what, where, and when the next big product placement will be.
If you could figure that out consistently, there are jobs waiting for you.
The trick is the entry.
You also want to keep quiet about your successes. Good ad agencies do.
Page xiii (the first page of the Preface) has the authors
Robert J. Denning and Robert Metcalfe. "Tacky." Not a problem in a newsgroup,
but tacky in print. The whole book is like this. Will this be a
reflection of the future? Possibly.
3 parts:
The Coming Revolution
Computers and Human Identity
Business and Innovation
Part of what needs to be done is the identification of what you are
hoping to achieve by forecasting, Richard. Are we trying to make a
stock killing? I doubt that this book can help you. So select an author (or authors).
The Coming Revolution
The Revolution Yet to Happen
Bell and Gray [two very good people]
I do not expect either of them to give away the MS farm.
When They're Everywhere
Vint Cerf [a very good person]
I do not expect him to give up the MCI farm.
Beyond Limits
Bob Frankston [never heard of him, someone else can ID him]
The Tide, Not the Waves
E. Dijkstra [some good ideas; others don't hold him in regard]
How to Think About Trends
Richard Hamming [brilliant], RIP.
The Coming Age of Calm Technology
M. Weiser and John Seely Brown [bright people]
Computers and Human Identity
Growing up in the Culture of Simulation
Sherry Turkle [I would not rely on her for a good technology
prediction. She clearly associates with bright technologists,
but I don't agree with some of her software ideas. This does not
mean that this fluff piece is bad; I've not read it yet.]
Why It's Good that Computers Don't Work Like the Brain
Don Norman [Smart] NDA
The Logic of Dreams
David Gelernter [DG used to read this group, c.a./c.parallel/c.s.s.]
Is Linda dead? Has David solved all the problems of
parallel computing?
End-Running Human Intelligence
Franz Alt [Never heard of him.]
A World w/o Work
Paul Abrahams [Never heard of him; sounds like "Power too cheap
to meter." Is this a realistic forecast? Or is the author
merely trying to get our attention?]
The Design of Interaction
Terry Winograd [Blocks World AI] not NDA but ...
Business and Innovation
The Stumbling Titan
Bob Evans [Never heard of him; I like the title and the implication.]
The Leaders of the Future
F. Flores [Perhaps an eye opener.]
Information Warfare
Larry Druffel [Has Ada died?]
Virtual Feudalism
Abbe Mowshowitz [never heard of him; sounds like Usenet,
all right]
Sharing Our Planet
Donald Chamberlin [never heard of him, idealistic sounding, true,
we are going to have to do this.]
There and Not There
William Mitchell and Oliver Strimpel [never heard of them;
sounds like a very popular net topic, reserve judgment]
The Dynamics of Innovation
Dennis Tsichritzis [never heard of him, might have good ideas]
How We Will Learn
Peter Denning [former Dir. of RIACS, now at GMU]
Choose wisely Richard.
What do you want summarized?
This will force me to read this book.
Could someone please post a bibliographic reference for this book?
I can't find it with ACM's search engine.
Thank you.
Uli
> To me the most interesting slide in the Joel Birnbaum presentation
> ... was the one which said HP sought an alliance with Intel because
> HP alone couldn't justify a fab for a new architecture, and software
> vendors wouldn't compile programs for a new HP architecture.
HP also discontinued their IC research, shutting down the division, reasoning
that they couldn't keep up with Intel, and they'd just use Intel to fab
everything they needed.
Less than a year later, as I understand it, they started building a fab....
--
***********************************************
* Allen J. Baum *
* Digital Semiconductor *
* 181 Lytton Ave. *
* Palo Alto, CA 94306 *
***********************************************
This was the proceedings of ACM'97 back last last Feb.
Do you need the ISBN?
I have to read this book for work. I'll summarize as best I can and
make comment.
> Franz Alt [Never heard of him.]
Probably the German TV reporter, philanthropist, amateur philosopher and
theologian (?)... not my cup of tea.
> Dennis Tsichritzis [never heard of him, might have good ideas]
Boss of the GMD, the German computer science research institute. Not my cup of
tea either (applies to both DT and GMD).
Jan
>steve....@iname.com (Steve Kappel) replies:
>>Interesting analogy. I believe the late 19th century term
>>"robber baron" might apply.
>>Antitrust laws were created to limit this!
Yes, I agree, interesting analogy. I just finished reading four
biographies of Andrew Carnegie. I don't believe that "robber baron"
applies in this case.
>>Two things. 1. Wintel hype is larger than Wintel reality. 2.
>>The government is not going to allow Wintel to continue
>>monopoly practices.
1: True. And if that is true, 2 is groundless. The problem is the
bull-headedness of lawyers and their poorly constructed logical system as
applied to the ruthless efficiency of the software created by us. The govt.
can't go after someone purely on the basis of hype. The reality has to exceed
the hype.
In article <199804101141...@ladder01.news.aol.com>,
Engr Bohn <engr...@aol.com> wrote:
>Hmm. Possibly the best thing the government could do for Mr Gates is to break
>up Microsoft. Rockefeller was wealthy thanks to Standard Oil. But he became
>*filthy rich* after Standard Oil was broken up.
Having viewed the recent tape with Gates et al, and having spent some
time on Capitol Hill, let me offer you some impressions.
US politicians are clueless about the electronics and computer industries.
This is quite evident from the nature of the questioning. The most visible
Senators were O. Hatch and P. Leahy, but quite a few others attempted to
ask questions, including Kennedy, Feinstein, and Strom Thurmond (a joke
was even made after he had to leave early, him being almost 100).
This hearing was one of the most quiet shown. These elected officials
are trying to size up the "new kids" on the block. Gates is half the
age of many of these guys.
Actually, this is the second time Gates has come under the eyes of the
govt. And I think that Gates has to come out ahead in this one. Gates
has to win. He might lose in the future, but I don't quite see how you
would intend to break MS up.
If the information, electronics, telecommunications, and computer
industries are to gain power over the old rust belt industries (cars,
steel, oil, etc.) as per past trade talks in places like Japan, Gates
has to come out on top. This hearing is similar to the sizing up of
schoolyard bullies, and Gates is the best our industry has.
>(p.s. I'm thinking about marketing a plain-text editor. Think I could get DOJ
>to get Microsoft to stop including Notepad with Windows? j/k)
I doubt it. DOJ pirates copies of MS/software internally because the
procurement process is too slow. It perceives that it can't get its
work done if it can't do this, and it will let licensing catch up.
One of the founders of VisiCalc, I believe.
Then we are doomed.
The web site
www.acm.org/acm97/book/book.htm
has pointers to the preface (which alludes to past faux pas [predictions])
and the foreword (by James Burke). Each section has an intro by the
editors summarizing each distinguished author and what they write about.
I flip between my paper copy and my 21 inch diag. screen.
The paper has some advantages, and the screen has some advantages.
I am clearly typing on a screen. But now I have to move to paper.
CHAPTER 1:
The Revolution Yet to Happen
G. Bell and J. Gray
If there is any chapter in this book relevant to a discussion on
computer architecture and supercomputing (the cross-post to which this
is a follow up) this is it.
Summary
By section
Introduction
A view of cyberspace
Computer platforms: The computer and transistor revolution
Hierarchies of logical and physical computers:
many from one and one from many
Semiconductors: Computers in all shapes and sizes
The memory hierarchy
Connecting to the physical world
Paper, the first stored-program computer ... where does it go?
Networks: A convergence of interoperability among all nets
Future platforms, their interfaces, and supporting networks
Web computers
Scalable computers replace nonscalable multiprocessor servers
Useful self-maintaining computers versus users as system managers
Telepresence for work is the long-term killer app.
Computers, devices, and applications that understand speech
Video: synthesis, analysis, and understanding
Robots enabled by computers that see and know where they are
Body nets -- Interconnecting all computers that we carry
Computers disappear to become components for everything
On predictions ... and what could go wrong
Acks
References
I'll also include figures and table captions:
I have modified these tables to minimize keying.
By section Figures and Tables
Introduction
Opening sentence:
By 2047 all information will be in cyberspace...
A view of cyberspace
Fig. 1.1 Cyberspace consists of a hierarchy of networks...
Tab. 1.1 Functional levels of the cyberspace infrastructure
6 cyberspace-user environments
5 content
4 applications
3 h/w and s/w computing plat. and nets
2 h/w components
1 materials and phenomena
[this table is greatly summarized.]
Tab. 1.2 Data rates and storage req. /hr. /day, /life
read text 60-300 GB
speech text 12 BPS
speech (compressed)
video (compressed) .5M BPS 1 PB
[just a few numbers]
Computer platforms: The computer and transistor revolution
Hierarchies of logical and physical computers:
many from one and one from many
Semiconductors: Computers in all shapes and sizes
Fig. 1.2 Evol. of comp. proc. speed (IPS) vs. log memory.
The memory hierarchy
Connecting to the physical world
Paper, the first stored-program computer ... where does it go?
Tab. 1.3 Interface technologies and their apps.
Displays (big and port.)
PID
speech
...
GPS
...
images, radar, sonar, lidar
Networks: A convergence of interoperability among all nets
Tab. 1.4 Nets and their apps.
Last Mile, LAN, wLAN, HAN, SAN, BAN
Fig. 1.3. Evolution of WAN, LAN, and POTS (plain old teleph.)
Future platforms, their interfaces, and supporting networks
MicroSystems: Systems on-a-chip
Tab. 1.5 New computer classes and their enabling components
        (years 51-98 are 19xx; columns roughly: generation/platform (logic/mem/OS), UI, net)
        51:   Begin; tube; cards; none
        65:   Time-Shar; Minis; glass tty; POTS
        81:   wks; uPs, PCs; WIMP; WAN, LAN
        94:   WWW; PCs, wks; Browser; fiber
        98:   Web comps; client s/w; phone; xDSL, POTS
        98:   SNAP; uni/multi PC; provisioning; SAN
        2010: one info dialtone; video-capable devs.; video as pri. data type; home net
        2001: Do what I say; embedded PCs, phones and PDAs; speech; IR, radio LANs
        2020: Embedded; speech/vision; Body net
        2020: anticipatory; monitoring & gesture; vision & gesture; Home net
        2025: Body net: vision/hearing/etc.; glasses for display; implants
        ??:   Robot for home, office, factory
Web computers
Scalable computers replace nonscalable multiprocessor servers
Useful self-maintaining computers versus users as system managers
Telepresence for work is the long-term killer app.
Computers, devices, and applications that understand speech
Video: synthesis, analysis, and understanding
Robots enabled by computers that see and know where they are
Body nets -- Interconnecting all computers that we carry
Computers disappear to become components for everything
On predictions ... and what could go wrong
Acks
References
A nice list including a paper by Bell and Gray, V. Bush's famous
article (web site), Gibson's Neuromancer [which you should read
if you have not], a Kurzweil paper, papers by Patterson and Moore
and others.
Impressions:
I suspect that if you are reading this in c.s.s. you will be disappointed
at the commodity orientation of this paper.
Big areas of work: communication at all levels. This tells me:
since Rick asked the question which computer companies will be around in
15 to 30 years:
Don't invest in computer companies, invest in AT&T
[which I'm sure they would love to hear].
{Shades of The President's Analyst.}
Introduction has information, the brain, Moore's law, and some of Bell's
forecasts (right and wrong) for 1997 (that he made in 1975). They next
compare to Bush's Atlantic paper [don't ask me for this Web site:
on my very first use of Web Crawler I found this paper; there's more
noise now, but it's still there].
The problem I see (as a non-EE by training) is something which
Terry Winograd (one of the book's other authors) is also accused of:
presuming that the computer is a good model of the brain and vice
versa. We really don't understand very well how the brain works.
A lot of AI (software people can guess) has declared that we don't have
to know how the wetware functions to model, say, "thought."
Hence Deeper Blue, while impressive as a chess engine, doesn't do it the way
people do it. But that's an issue (Hofstadter's, really) for Terry's chapter.
Gordon is a good hardware architect (he's fervent in his passion to
build things right: we agreed in our last "argument", the one before
that however, ....). Jim is renowned as a software DB type (just met him
last year for the first time). Neither knows communications.
Dick Watson, Kleinrock, Cerf, might be an interesting adjunct to
this paper (Cerf has the next chapter).
"Algorithm speeds have improved..."
I would question that. Maybe some algorithms. Euclid's algorithm
hasn't had major improvements (has had some) in thousands of years.
They declare their view conservative. Okay. I think they point out
what Feynman pointed out: we talk about things we don't know.
We are all guilty of that.
Cyberspace = (platform + context) + the interface + the network
and they describe functions and the problems with language (they should
say "natural language" to avoid confusion with the programming language
problem). So they examine those components (3: Minsky would like that:
see, computer people don't just think in dyadic terms (binary or 2s)).
Platforms
Platforms = f(materials && phenomena) + f'(fabrication)
from this you can get new architectures and new apps.
Well kinda. Rare is the non-Von Neumann architecture, and I think apps
just tend to come along.
So they look at computers: from the uniprocessor to the multi. They
mention CyberBricks (computational lego blocks: all these terms have
trademark meanings which they are all trying to skirt). SAN: System Area Net,
like for clusters. The justification they give for multis: 1) performance
and 2) economy allows us to get more than one.
Semiconductors
They cover Moore's law. And they mention their first firms: Intel and
National Semi. (they plan 8 GBs (not bits) by 2010). Well invest in
those guys if you think they will be around in 30 years. Is that assured?
Maybe. Density: smaller, faster, with no mention of limits.
So they do not cover fabrication which was mentioned above.
The Figure 1.2 graph has performance capacity predictions
3 decimal orders of magnitude for 2047 (low petabyte to high exabyte
memories). Typo: 10^12 appears twice on the graph (labeled once tera, once peta).
Memories:
Access time, secondary and tertiary storage, electro-optical.
Most of the material in these last two subsections is retrospective.
2047: 20 TBs on a CD is what they are predicting.
Interface
Alan Kay (Smalltalk, Alto, etc.) first used the word "transducer" in the
sense that it is used here. WIMP interface: windows, icons, mouse, pull-down menus.
GPS, speech, video.
Paper
"We know of no technology in 1997 to attack paper's broad use!"
Actually paper is only half the problem; the ink is the other half, and
MIT has some interesting ideas.
Mag tape is limited to 15 years. CDs are noted here to last 50 years
(ah, two good people who know: too many people I've encountered say that
CDs are forever), microfilm: 200 years, paper: 500 years. Will HTML work
50 years from now? They offer no solution.
Networks
Cites Metcalfe's law w/o a source (remember Metcalfe?).
Has two parts:
value(network) = f(O(subscribers^2))
value(subscriber) = f(O(subscribers))
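[A quick worked example, mine rather than theirs: with value growing as
the square of the subscriber count, going from 1,000 to 2,000 subscribers
quadruples the network's value (2,000^2 / 1,000^2 = 4), while the value
seen by each subscriber merely doubles.]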
They note that the bottleneck is "the last mile;" this is an
infrastructure/installed base problem. They cover LANs, WANs, HANs, SANs, etc.
They hope (not predict) for 1 dial tone in 2047. They look for wireless
and briefly look at the spectrum. They don't appear to mention the
cost, or the need for things like multiplexing.
Future (the three components)
3 paths: evolution, new price classes (we've seen this), commoditization
[assimilation: if you can take a joke] into appliances and other devices.
This does not sound good for solving Grand Challenge problems.....
Minis, mainframes, PDAs and their follow-ons [pocket E-dictionaries], ...
these are "evolution, not revolution." Revolutions have a period of
10-15 years (they appear to avoid the old popular word "generation"
[I wonder if this was deliberate?]).
MicroSystems
Well, Sun is well positioned with a name. I wonder if this is a nod to
their name? The MS {not their employer} industry will have customers,
about 12 firms making them {generous}, custom designers, existing comp.coms,
and IP (Intellectual Property) firms with 5 subpoints: ECAD, "circuit wizards,"
[memories], processors, apps, and Rambus-like interface cos.
They ask who these firms will be.
Web Comps
Mentions Web TV and set top boxes.
Scalable computers
For OLTP (On line transaction processing). Mentions the Web.
There is not enough information to talk about software here.
There is the c.s.s. telling quote:
It's unlikely that clusters that are more than a collection of loosely
connected computers will be a useful base for technical computing
because these problems require substantial communication among the
computers for each calculation. Exploiting the underlying parallelism
of multicomputers is a challenge that has escaped computer
science and applications developers for decades, despite billions of
dollars of government funding. More than likely, for the foreseeable
future scientific computing will be performed on computers that evolve
from the highly specialized, Cray-style, multiple vector-processor
architecture with shared memory.
So Cray may be around in some form.
Useful Computers
The software paradox: complexity, cost, and the need for simplicity.
They mention diskless computers. This is ironic because Gordon at a
talk two months ago noted that diskless machines were a joke because the
first thing we all did with diskless machines was add a disk. Something
akin to the paperless office.
Telepresence
Mechanism, group size/structure, purpose, work by discipline. They cite
teleconferencing's growth.
Speech
Bell admits progress in this area has been slower than predicted, but they
figure by 2010 it will catch up.
I think one of the major problems here was that until the Mac, computer
peripherals largely didn't make sound. They were at best a "beep" or bell.
Sound-based peripherals were the province of a few places who did speech
recognition or synthesis, or sound synthesis or analysis (e.g. music).
Ames (where I work) retained a v6 Unix license, actually had Votrax
hardware, and was one of the few sites to use the BTL speech software.
I wonder about other peripherals we (the users of computers) lack in our
experience (I've also used MIT's force-feedback joystick: interesting,
but not clear)... I digress.
Body nets
No problem here. I carry MBs in cargo pockets. The Web.
Real PAs as opposed to PDAs.
Assimilation into appliances
2047: 100,000 times the number of computers that we might expect.
Maybe. Quite ubiquitous. Certainly Weiser's vision.
Predictions and errors
Watson's famous old prediction and Olsen (but not Gates: that was left for
the Preface, or "Don't bite the hand which feeds you?"). Notes various
other predictions from another Olson, a Delphi method prediction from
a couple of decades ago. A Ramo prediction.
Bell has another one of his notable bets (2) with Raj Reddy:
2003: AI will have a significant social effect on par with the
transistor (Bell is Con: I side with Bell on this).
A production model car that drives itself (Bell Con, probably)
Moore will not make predictions beyond 2010. It's noted that Moore
never thought that 5 volts would change as a power supply.
Bell and Gray conclude wondering whether the investment will continue to
find new apps in 2047.
So based on that Rick, what computer company do you think will be around
in 30 years???
I believe that some of the fine research folk in IBM
(my friends who work at Almaden; I've only recently visited Watson) made
your statement when they saw Unix and the rise of PCs (the Apple II at
the time). But at the time, their mainframe bias was well known.
They had trouble getting things out of the research lab into products.
You (the electronics industry rhetorically) have to imagine such issues
like foreign policy being set by oil companies, or car makers, or timber
interests. Some of these industries are your customers of course.
Or aircraft makers. You guys are all working at odds with each other
whether you realize it or not.
The "mechanical rigidity" of computers has some to be said for the
priesthood of programmers and engineers and scientists.
This title reminds me of a joke which came from the Berkeley
/usr/games/fortune program:
If the machines take over,
all we have to do is organize them into committees.
I apologize for the last review, it was a complicated article.
[Editing note: See added note at the tail end for a sense of deja vu.]
This chapter is much more serial. This is a simpler paper with
no figures or tables.
Structure:
Premise
Prologue
Future Thoughts
Statistics
Sensor Networks
Media Integration
Mobile software
Virtual worlds
Final note
Premise
Opening line:
"In 2047, the Internet is everywhere...."
Prologue
A vignette: Bob's house computer on the 54th story of his Taos, NM home.
[54 stories? is this progress?] The "computer" is named "Jeeves."
Finger mouse (credit to Engelbart), wrist computer, glasses as output device.
This is short. Similar to the Apple Knowledge Navigator or a
Turing test type of scene: short, (too short), and very optimistic.
Future Thoughts
Commonplace or merely feasible? And in what increments?
IP version 6 with 128-bit address (for your light bulbs).
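[For scale, my arithmetic rather than Cerf's: 128 bits give 2^128, or
about 3.4 x 10^38, addresses -- ample for every light bulb on the planet.]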
Internet telephony is as misunderstood as the horseless carriage was.
How can we predict the future?
Statistics
World population in 2047: 11.5 billion people [I bet John McCarthy would love
that.] 30% penetration of internet devices : 3-4 billion including
wearable ones, 660 million phone terminations, 38 THz fibers.
Multiplexing will be essential. [Agreed.]
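[The device figure checks out as simple arithmetic: 30% of 11.5 billion
people is about 3.45 billion, squarely in the 3-4 billion range given.]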
Sensor Networks
[I wonder if I should read Caroline Kennedy's Right to Privacy?
I think privacy's loss is a complete and foregone conclusion.]
"I predict that an inhabitant from 1997 visiting 1947 would almost
certainly get a black eye or a broken nose from walking into a door that
did not automatically open." Cute: but I think that's stretching it.
This chapter doesn't really say a lot about sensors, but I think it
points out that what we hook to computers in the way of peripherals is
very limited. Years ago it was a card reader, tape and disk drives or
drums, then later a CRT. We moved on to mice, and laser printers,
and scanners, cameras, mikes, etc.
Two paragraphs deal with GPS. I don't believe this is enough. Real
sensors should have been covered like radar and sonar and even lidars.
These sensors have really distinct characteristics and simple problems
like geometry and hidden objects pose interesting problems. I take
heart when some tinkerer attempts to attach one of those Polaroid
rangefinders to systems as hack jobs.
This is called the sensor/data fusion problem in the military software
market. Just a matter of time before the civil community gets bogged
down in the same problems. I've practically written more than Cerf's section.
Media Integration
The Internet will supplement, not replace radio, TV, TPh, etc.
Convenience is the driving force in consumer electronics.
At last, our VCRs will be programmable via the Web! The clock will
reset itself after power failures (I think Don Norman will remain
skeptical). We will still have power failures in 2047? I have to
wonder about the world power situation.
Mobile software
Downloads are the future. 2047 tax software on a 2040 platform.
Virtual worlds
Short section. I doubt that these are enough to drive the economy.
Will VR interfaces take over for travel? Airlines and plane makers
don't hope so. Depends where fuel and its cost go.
Final note
Two great nets: telephone and the Internet (not merged?).
No references. No figures. No graphs. No equations.
I point this out because I just ran into Gordon Bell, the author of
Chap. 1, at lunch (talk about coincidences). We had a good laugh.
I showed him his chapter's typos. But overall I think Gordon's chapter is
probably the best in detail in the book.
This is probably the more utopian of the two chapters so far.
Is Vint hiding MCI's vision of the future?
I doubt it. Communication is only one part of the puzzle which Chapter 1
pointed out. I think I'd ask, what are the problems people will be
working on in 2047. Bob in the Prologue works with people from Los Alamos,
well, I bet that's comforting to the LANL posters/readers. They will
still be employed. I just don't see a lot of depth.
Cerf does not address content other than the usual: audio, video, and
information (not data). What are we going to be doing besides
teleconferencing? I'm left wondering.
I did build/invent the first UART in '61, made the Ethernet deal with Intel
and Xerox to have LANs, and DECnet was the first commercial
implementation of ARPAnet, so I have had a long-term interest in communications.
Ever notice that the bell rings when you push ctrl g on the old
teletypes?
Also, built first commercial timesharing system.
-----
Well so much for comments about software and communications (which were
not in depth, mine). I'm going to stop at Chap. two to go skiing.
I'll get back to 3 and more on Monday.
> 2020: Embedded speech/vision Body net
> 2020: anticipatory
> monitoring&gesture
> vision&gesture
Both about a decade or so too late.
> 2003: AI will have a significant social effect on par with the
> transistor (Bell is Con: I side with Bell on this).
Hah! So what does "AI" mean to you, or the authors!?
> A production model car that drives itself (Bell Con, probably)
For socio-legal reasons, not technical. Mercedes could put one on the market
today, if they wanted.
Jan
I think they mean in common use.
>> 2003: AI will have a significant social effect on par with the
>> transistor (Bell is Con: I side with Bell on this).
>
>Hah! So what does "AI" mean to you, or the authors!?
That's just between Gordon and Raj Reddy. I can ask him in two weeks maybe.
Personally, I think AI is almost dead: Intellicorp and Teknowledge
aren't a visible part of the Santa Clara Valley anymore. Those were
"expert system" corps. AI is many things. Raj is a very optimistic guy.
I've met him, and I've seen various labs at CMU: an interesting place.
Let me get to your next Q as part of answering that.
>> A production model car that drives itself (Bell Con, probably)
>
>For socio-legal reasons, not technical. Mercedes could put one on the market
>today, if they wanted.
Would you ride in it at high speed?
CMU built several autonomous vehicles. I've ridden in the blue van (before
the HUMMV). Self-driving cars (humans can also drive them without using
a keyboard) fall into several categories: those using vision systems
(like CMU's), those using concealed wires, etc. I suspect when
Reddy means AI, I think he means using vision systems and robots (and
CMU's speech systems) in an integrated fashion. What people do to drive
down the road is amazing. Car vision systems have problems with shadows
on pavement. And those are on good (sunny) days.
So when you say that Mercedes can do it, that invokes all kinds of
questions I have, and probably more from the autonomous vehicle folk
as to how they solved problems like driving around in snow and rain, or
at night. If they follow a cable, that's another thing.
What their car would do in an emergency is another set of problems.
And if a person cites say GPS, then that person is clueless.
I mean these cars have GPS in them for taking data, but not for navigation.
These systems aren't accurate enough.
What's important about a CMU visit is to see their variety of robots.
MIT is another good place to see robots (I think Brooks is a better
engineer). And this is the area in AI to which I think Raj is referring.
Bell also spent a lot of time at CMU and built a lot of machines there
(with a lot of communication.... and software).
Frankly, I think Gordon is going to win $100 or $200 from this bet.
I'll ask him for ya.
They can put one on the street, but not on the market. It is known to
fail in certain situations (wet road with low sun in the driving direction,
snow on the road, and suchlike). Drivers often fail in these situations,
too. It would be too expensive today, too. "Market" does mean "sells".
> "Algorithm speeds have improved..."
> I would question that. Maybe some algorithms. Euclid's algorithm
> hasn't had major improvements (has had some) in thousands of years.
Isn't the point that slow algorithms have been supplanted by faster
ones? E.g. instead of doing Gaussian elimination, you now do Conjugate
Gradient or Multigrid iteration for solving similar problems?
(Euclid's algorithm runs pretty fast as it is, is there really a lot of
benefit if it could be made to run faster? And, to be utterly anal,
since we have faster hardware, yes, algorithms do run faster these days
:-)
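For the curious, here is Euclid's algorithm itself as a minimal C sketch
(mine, not from the chapter or any poster); the point being that it already
runs in O(log min(a,b)) division steps, so there is little algorithmic
headroom left:

    #include <stdio.h>

    /* Euclid's algorithm, essentially unchanged for over two millennia. */
    static unsigned long gcd(unsigned long a, unsigned long b)
    {
        while (b != 0) {
            unsigned long t = a % b;   /* remainder step */
            a = b;
            b = t;
        }
        return a;
    }

    int main(void)
    {
        printf("gcd(1071, 462) = %lu\n", gcd(1071, 462));   /* prints 21 */
        return 0;
    }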
~kzm
--
If I haven't seen further, it is by standing in the footprints of giants
> >Both about a decade or so too late.
>
> I think they mean in common use.
That's what I meant as well.
> >For socio-legal reasons, not technical. Mercedes could put one on the market
> >today, if they wanted.
>
> Would you ride in it at high speed?
Yes, on the Autobahn (which is where it's meant to be used in its current
version).
> I suspect when Reddy means AI, I think he means using vision systems and
> robots (and CMU's speech systems) in an integrated fashion.
That's what I consider interesting, as well.
> What people do to drive down the road is amazing.
Indeed.
> Car vision systems have problems with shadows on pavement.
That's a sensor problem. The CMOS cameras are getting ready to go; in a few
years time, that will be somewhat of a non-problem (well, actually a different
problem, but one we think we know how to handle).
> So when you say that Mercedes can do it, that invokes all kinds of
> questions I have, and probably more from the autonomous vehicle folk
> as to how they solved problems like driving around in snow and rain, or
> at night.
Ah, you want absolutely everything in the first step 8-)?
Dunno about snow. I think it can handle moderate rain. Personally, I
wouldn't be driving cars with any sort of driver (including human) during
a tropical downpour.
> And if a person cites say GPS, then that person is clueless.
Just so.
> MIT is another good place to see robots (I think Brooks is a better
> engineer).
Yeah, but I think he went overboard with COG. In my opinion, it's difficult
enough (and more interesting) to get the lower level of the hierarchy to work.
The cognitive/planning/conscious aspects of behaviour will be left for a
future grant proposal.
Jan
> It would be too expensive, today, too. "Market" does mean "sells".
I don't think it would be more expensive than, say, the first ABS systems.
Those possible failure cases are indeed the heart of the problem. The savings
and optimizations that can be had through "modern" techniques will drive, IMO,
society to come to grips with the legal problems associated with this. After
all, we have a counterproductive and inconsistent view of these things today:
if the technical system is on its own, it has to be failure-free; if a human
is involved (even if only in the position of vetting the system's decision -
medical systems are often built that way), actual liability goes way down,
because we don't expect humans to make failure-free decisions. This is stupid,
but not news.
Jan
Well, I don't want an autopilot to drive with the same quality as the
average driver, and since the average driver considers himself (or
herself?) "above average", most people won't accept an autopilot until
the autopilot drives significantly better than they do.
It should also be noted that it isn't feasible to let the autopilot
detect whether it is in a condition where it is better for a human to
take over. The human backup driver isn't concentrating on the road, and
therefore can't take over in a few seconds; especially since both humans
and autopilots have difficulties in the same situations (entry to a road
under construction, snow on the road...).
If many cars have an autopilot and those can communicate, it's easier.
The path the car ahead of you took is safe enough, and instead of relying
on visual inspection, the autopilot can "know" about the quality of the
road.
In the long run, this is certainly good news. Since no one will accept an
autopilot that kills people, but the industry will want to make
autopilots, the killing on the roads will eventually stop. Our
grandchildren will ask us how we accepted a situation where many
thousands of people were killed every year. Veteran motorists will
moan that "not every driver was a killer" ;-).
I think this statement is certainly false for any near-future autonomous
or semi-autonomous vehicle. The human driver must be able to react at
least as quickly using the autopilot as if driving "manually"; otherwise
the system would be clearly unacceptable, on present roads. As far as I
know, all of the systems that have been road tested do have immediate
human intervention, and do assume a fully attentive human driver
(probably more attentive than the "typical" road driver, given the
driver's knowledge of being in an experimental test vehicle).
Clearly there is a lot of human-factors work to ensure that the human
driver actually *is* attentive to road conditions and emergencies. If
not well designed, there is the potential for the driver to become
inattentive during the time that the car is "driving itself." On the
other hand, many drivers become quite inattentive while driving standard
vehicles, particularly on high-speed expressways; it's entirely
conceivable that a system that eliminates some of the monotony of
driving could lead to a more relaxed and in-control driver. It's a
matter of design.
I find Jan Vorbrueggen's assertion that a (semi-)autonomous vehicle
could be put on the road today in a production mode truly incredible,
because the above human-factors problems seem not nearly solved. Even
if he would ride in it, I certainly wouldn't want him on the same road
as me! (Perhaps this is the "socio-legal" problem to which he
referred.)
David desJardins
But there are some (to my mind, being an analysis, not discrete math guy)
pretty bizarre ways of testing primality that are much faster than
anything 50 yrs ago.
There are also constant changes in old problems---for example the
elliptic-curve math stuff for crypto, or the effects (still to be played out)
of that new winnow-and-chaff stuff.
Finally there are new algorithms---for example the last ten years have
seen some pretty neat work in audio compression, and the sorts of
algorithms being discussed for MPEG-4 are rather different from what's
come before.
Maybe once an algorithm has been defined, it per se does not get much
faster, but that's kinda true by definition. What changes is new ways of
looking at or solving the problem, hence new (perhaps very different)
algorithms. (E.g., a change from "finding all the primes in this range" to
testing "is this particular number prime".)
Maynard
--
My opinion only
The problem with inattentive drivers isn't just a problem of monotony.
It's just impossible to be attentive all the time. German autobahns
aren't monotonous, since you always have slow drivers on the right and
fast drivers on the left (or behind you), and it's always possible that
you need to react. They have many curves, too. After some hours on a
German autobahn, you feel like you've been playing Doom. Not the feeling I
want to have when the danger is real. American motorways are IMHO much
more relaxed, albeit with the danger of monotony. And since you already see
people there treating the fact that the car drives straight with
constant velocity as "autopilot", I doubt that they'll intervene when
they have a true autopilot.
From the KISS principle, the autopilot is the wrong solution. Railways
are the right solution.
Well, *part* of the right solution, but not *the* right solution, maybe.
(Railways, even electronic guidepaths, can't be instantly
instantiated in place of all roadways in a place like the USA, so
seamless transition from roadway to its railway replacement is
almost certainly necessary. And that implies that some sort of
autopilot-with-attentive-driver is necessary, at least to
transition from railway back to roadway, which requires the
automated system to either come to a complete stop or be able
to assure the attentiveness of the operator in time to take over.)
The thing I think David was talking about, which I'm not sure you
recognized, is that an auto-pilot system must be able to recognize
that its driver has *become* inattentive over a span long enough
to pose a risk to humans, and take corrective action. That's a
human factors problem, it's been addressed to varying degrees of
success in various older technologies, but auto-pilot cars pose a
whole bunch of new problems.
(Solutions have been applied in trains and trolleys with "dead-man's
switches", even in areas as minor as, e.g., UNIX "rm -i" prompting
whether to remove a file. I'm sure we've all had the experience
of answering our computer wrongly due to being inattentive, the
cost being maybe losing a file or something; assuming the cost was
much higher, what sorts of human-factors engineering would be needed
to drastically decrease the likelihood of such a wrong answer?)
Auto-pilot *jets* have similar problems, if I'm to believe stuff
I've seen on TV -- pilots napping at the helm, etc. Some of
these problems seem to be human mistakes, but if there's
evidence they occur more often than in non-autopilot systems,
blaming the system cannot be ruled out. (And, yes, lots of these
particular mistakes do not lead to crashes, just to queasy
feelings among the flying public; after all, it's mostly
because auto-pilot systems are *so* good at what they do that
pilots are able to sleep. Cruise control in cars can have a
similar effect.)
In another article, Jan Vorbrueggen <j...@mailhost.neuroinformatik.ruhr-uni-bochum.de> wrote:
>Those possible failure cases are indeed the heart of the problem. The savings
>and optimizations that can be had through "modern" techniques will drive, IMO,
>society to come to grips with the legal problems associated with this. After
>all, we have a counterproductive and inconsistent view of these things today:
>if the technical system is on its own, it has to be failure-free; if a human
>is involved (even if only in the position of vetting the system's decision -
>medical systems are often built that way), actual liability goes way down,
>because we don't expect humans to make failure-free decisions. This is stupid,
>but not news.
Because of the distinctions I see between auto-pilot cars and other
auto-pilot situations (including, somewhat, ABS), I'm not sure that
"this" is stupid per se, even though I agree our liability laws, and
sometimes the judges, juries, plaintiffs, and defendants circulating
around them, probably are in many ways.
The crux of my objection is the domain over which humans *make* decisions
in the first place. Accepting that humans make mistakes in *instances*
of a situation is not stupid, of course. But neither is accepting
that making mistakes in designing *classes* of situations amounts
to quite a different degree of liability.
Put another way, it could be said that the decision to put human
beings in control of cars was "stupid" in terms of creating an
entire class of situations, one that is among the leading causes
of death in the USA. But, for the most part, people get to choose
whether to participate in that risk pool (though the proliferation
of roads in most areas of the USA makes it *quite* hard to avoid
bad drivers, pretty hard to even avoid ever being in any).
So, people driving cars might have been stupid, but it's what
today's reality consists of. Liability law as a whole, and thus
the economy, already takes the status quo into account.
(With planes and, to some extent, trains, ordinary citizens can't
nearly as easily avoid being part of the "risk pool" simply by
not taking those forms of transportation. Planes fall out of the
sky on whoever happens to be under them, and trains, when they
derail, can take out a much wider swath of non-railroad materials
and people than roadway vehicles can. Even a toxic-substance-
carrying eighteen-wheeler probably poses, typically, less danger
when it overturns than a train carrying the same substance when
it derails, due to the quantities involved, at least to people
avoiding those modes of transportation.)
Whoever puts an auto-pilot system in cars will, among other things,
have to accept responsibility for introducing a new *class* of
pilot: the computer (or, more likely, the computer/human combination).
And, despite the fact that the track record of such a combination
might well have a better "record" than the current approach, it's
not going to be entirely "stupid" to conclude that more of the
blame for accidents under the new system go to the *system*, not
the human involved -- even more than would be expected as a result
of the human operator having less involvement in the piloting
process.
That is, it would be much more stupid to market a no-human-
involved technical system that replaces a human-involved one
*unless* the new system is failure-free, because humans don't
like having other entities control, especially destroy, life
and limb, and are much more willing to take the blame for
personal mistakes than for "well, the system sometimes makes
mistakes, then you lose, tough luck", especially when the
system they're using is a *simple* and *direct* interface to
the underlying physical model (wheels, engine, wings, whatever).
(I'm not saying the "you-have-to-blame-somebody" and "deep-pockets"
syndrome won't stupidly amplify this in many cases, as it already
does, just that stupidity isn't the only thing involved here in
the belief that all-technology systems must be vastly closer
to failure-free than human or human-plus-technology systems.)
In particular, drivers already basically *know* how to drive cars
using the current system, just as pilots already know how to
fly planes using traditional systems, etc. Failures happen too
often, but we recognize that people make decisions based on some
reasonable degree of up-front info that they'll be "in charge"
of the operation, they've got to know what they're doing, remain
attentive, be cool under pressure, etc. If they fail, they fail,
and get most of the blame, usually.
But, introduce a new system for these people to operate, one
that is much more highly automated, and several things change:
- There's typically less "feel" for how the human and underlying
physics of the machine interact. E.g. I prefer standard-shift
over automatic in cars when driving on wet roads, ice, snow,
etc., even though for most driving I find automatic more
convenient. ("My" car is standard-shift and feels every bump;
when renting for a vacation, typically in the South, I prefer
automatic, and I enjoy driving my wife's car, also an automatic,
even though it's a VW Rabbit, because it's nice to not have to
shift sometimes.)
- There's less up-front information on how risky operating the
new interface is going to be, especially when taking into
account situations we *know* happen with humans (becoming
inattentive for a short period). E.g. we know we can't
become inattentive when there's a lot of rapidly moving cars
turning in and out of roads right in front of us. What if
the new system chooses to do some kind of self-diagnosis or
adjusting of navigation in the middle of a long stretch of
quiet road, and our becoming inattentive can inadvertently
result in our causing the system to crash the vehicle then,
or, even worse, later on? The answer isn't just "the human
was at fault for being inattentive", because the human *knows*
that's an okay time to adjust the stereo -- a time that *used* to
be a very low-risk one using the old system. So the new system
must anticipate this sort of behavior.
- There's less know-how available in the culture (including
the jury pool) regarding how human mistakes are at fault
for disasters. Until we understand our machines (or
compilers ;-) very well, we tend to blame them, not ourselves
or each other, for disasters, especially as they get more
complicated.
With both ABS and Airbus, the above elements have applied to
varying degrees.
I gather most ABS "failures" (in the comprehensive sense of "didn't
stop the car in time to avoid disaster") are due to people trying to
pump the brakes themselves instead of leaving it to the ABS.
That doesn't seem like as clear a case of "operator error" as in
the non-ABS case where the human stomps on the brakes in wet
weather and skids, because the latter case is what most of us have
been *trained* for (if we've taken drivers' ed, etc.).
By trying to compensate for the latter sorts of operator error, ABS
introduces a new *class* of problem: failure when the new system is
operated according to old principle.
IMO, it's reasonable to expect the *designers* of the new system to
mitigate that problem, since they're explicitly taking on the
task of solving the overall problem of not stopping quickly enough.
This is done either by detecting pumping and treating it as "stomp-stop",
or forcing the driver, before putting the car in Drive, to prove he
knows how to perform a "stomp stop" by putting the driver through
a quick test (akin to a VR experience, but nothing that fancy),
and otherwise train or reject the driver right there. (Or just
accepting the added liabilities when the failures happen.) Dunno
what has been the actual recent history of ABS failure and litigation,
though, so I'm just speculating here.
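Purely to illustrate the "detect pumping and treat it as a stomp-stop" idea
above -- this is my own toy sketch with made-up thresholds, not any real ABS
controller:

    #include <stdbool.h>

    #define PUMP_WINDOW_MS 1500   /* made-up: presses must fall in this window */
    #define PUMP_COUNT        3   /* made-up: this many presses means "pumping" */

    struct pump_detector {
        unsigned last_press_ms;
        unsigned presses;
        bool     was_pressed;
    };

    /* Call once per control tick; returns true when the driver appears to
     * be pumping the brake, at which point the controller could latch full
     * braking and let the ABS modulate the wheels. */
    static bool pumping(struct pump_detector *d, unsigned now_ms, bool pressed)
    {
        if (pressed && !d->was_pressed) {                 /* new pedal press */
            if (now_ms - d->last_press_ms > PUMP_WINDOW_MS)
                d->presses = 0;                           /* window expired  */
            d->presses++;
            d->last_press_ms = now_ms;
        }
        d->was_pressed = pressed;
        return d->presses >= PUMP_COUNT;
    }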
Re Airbus, put pilots in a plane that is piloted by setting little
dials, and you get crashes when those little dials aren't perfectly
human-factor-engineered, just as always. But since the pilots
have been given *less* control of the plane to recover from
pending disaster, the *system* must take more of the blame. (By
this I mean they have less control in the way they're accustomed
to. "Pull up" no longer works the way it used to on the planes
I'm talking about, or at least it didn't. There are incredible
videos I've seen of these jets doing the most amazing maneuvers,
including, unfortunately, crashing, due to pilots trying to fly
them "normally".)
Here, maybe the "stupid thing" to have done is put pilots in control,
instead of using computer operators or programmers, who know to look
for things like missing or extra decimal points, and try to respond
to pending disaster by inputting better data instead of just yanking
on a nearly-irrelevant steering wheel, etc.
These issues recur through all sorts of human-factor engineering,
whether applied to new programming languages, new computer
architectures (issues like shift-left-by-large-amount), or
whatever. They're brought into sharp relief by considering large-scale,
high-risk, high-cost situations like flight, rail, automobile,
and nuclear facilities, and we should learn from them so as to
try and reduce the chances of error in our own little (growing) pond.
In the end, I think that we won't have auto-pilot in cars until
a couple of decades after we do in planes and trains. There
are lots better reasons to do planes and trains than cars (the
liabilities for failure are higher against the manufacturers;
you have to pay people to pilot them, and they do it as a job,
not as something they really enjoy or see as part of getting from
point A to B; they don't typically take their family along or
otherwise feel their mistakes will directly impact family and
friends, e.g. neighbors; there are far fewer qualified pilots
available; the paths of travel are much more strictly controlled;
etc.).
Only after the *public* considers an auto-pilot system to be
a smashing success in the air and/or on rail will I believe it
is feasible to introduce it on the roads. Introducing it
earlier would work in very narrow cases (say, on an island
with better controls on overall traffic, e.g. Bermuda), but
I'm talking about automating the driving experiences of the
citizens of the USA, Italy, England, Japan, Indonesia, etc.,
etc. I think that's a huge job fraught with peril, and if
it hasn't been proved out in the air and on rail first, its
occasional failures will probably be perceived by the public
the way Frankenstein's monster was by the villagers, *not*
the way Windows 95 has been today. That's because people won't
feel they have much choice whether to join the "risk pool"
involved in automated driving, while they do *choose* to use
computers that crash twice a day.
And, personally, as much as I'm against lots of the stupidity
in tort law (if not tort law entirely), in this case I'm
*quite* grateful that some degree of "stupidity" makes
introducing auto-piloted cars much more economically risky
than it might otherwise. It's one thing to have to share the
Internet with Microsoft-piloted computers; I'd sure hate to
share the road with Microsoft-piloted cars.
--
"Practice random senselessness and act kind of beautiful."
James Craig Burley, Software Craftsperson bur...@gnu.org
Craig Burley wrote:
> But, for the most part, people get to choose
> whether to participate in that risk pool (though the proliferation
> of roads in most areas of the USA makes it *quite* hard to avoid
> bad drivers, pretty hard to even avoid ever being in any).
> (With planes and, to some extent, trains, ordinary citizens can't
> nearly as easily avoid being part of the "risk pool" simply by
> not taking those forms of transportation. Planes fall out of the
> sky on whoever happens to be under them, and trains, when they
> derail, can take out a much wider swath of non-railroad materials
> and people than roadway vehicles can.
I don't think you are making a correct comparison here.
If I am a pedestrian, a major risk is getting hit by a
car. If I am on a bicycle, a major risk is getting hit
by a car. Cars "fall off the road" much more often
than planes fall out of the air. I know that I'd much rather
ride my bicycle alongside a train track than alongside a
road with cars on it, because trains always signal before
turning (though not before derailing).
> In particular, drivers already basically *know* how to drive cars
> using the current system ...
You traveled outside the Boston area to gather evidence
for this interesting assertion, right? :-) I think you
mean that drivers *think* they know how to drive cars.
> ... That's because people won't
> feel they have much choice whether to join the "risk pool"
> involved in automated driving, while they do *choose* to use
> computers that crash twice a day.
I think you should consider the non-drivers as well as the
drivers; anyone living next to a moderately busy street is
in a "risk pool" created by the bozos on the street. One
thing I haven't seen mentioned for smart cars is whether
they will possess "smart governors" that interact with "smart
speed limit signs"; that is, if these cars are dealing with
hard problems like following the road, they can surely also
deal with transponders on speed limit signs that say "25mph"
or whatever the limit is in your favorite residential
neighborhood. Somehow, I'm not sure that this will be viewed
as a "feature" by people buying these cars, but how can you
justify not including it?
I guess what I find interesting is the things that proponents
of "smart" cars would like to put in them. Rather than have
the cars be driven by computers, why not have the computers
reduce risky behavior? Enforcing sensible following distances,
how hard would that be? Cutting the accident rate in half on
some of the big urban freeways would increase their effective
capacity.
Also interesting is that despite all the smarts that people
are talking about putting into the cars, I get the impression
that the cars themselves are not expected to change. If we trust
this technology, is it really necessary to surround ourselves
with 2000 pounds of protective structure? Why not make the
smart cars much, much lighter?
--
David Chase, ch...@world.std.com
Very fast algorithms have existed for a long time, but most software
engineers do not use them. I see the same slow algorithms used over
and over again by people who should know how to do it right, but prefer
to slap something together quickly to minimize development time. The
increasing importance of time-to-market has only encouraged this.
The emergence of a standard library of algorithmically optimized
ADT's would go a long ways to solving this problem. String manipu-
lation is already done mostly through libc function calls, which
tend to be well-optimized, but for more complex operations, either
the libraries already available are insufficiently generalized for
many people to use them, or they are unavailable (ie, cost $$$, or
simply aren't well-known or widely distributed).
The promise of C++ was that such libraries of objects would arise,
simultaneously reducing development time and improving application
performance. It hasn't lived up to this promise yet. Everyone is
still writing their own objects instead of using each others'.
-- Bill Moyer
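As a small illustration of the point (my sketch, not Bill's): the closest
thing C programmers have today to "algorithmically optimized canned ADTs"
is a handful of libc routines like qsort() and bsearch(), which most of us
do reach for rather than re-deriving quicksort and binary search each time:

    #include <stdio.h>
    #include <stdlib.h>

    /* Comparison callback shared by qsort() and bsearch(). */
    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        int v[] = { 42, 7, 19, 3, 88, 23 };
        size_t n = sizeof v / sizeof v[0];
        int key = 23;

        qsort(v, n, sizeof v[0], cmp_int);          /* canned O(n log n) sort */
        int *hit = bsearch(&key, v, n, sizeof v[0], cmp_int);
        printf("%d %s\n", key, hit ? "found" : "not found");
        return 0;
    }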
: The emergence of a standard library of algorithmically optimized
: ADT's would go a long ways to solving this problem. String manipu-
: lation is already done mostly through libc function calls, which
: tend to be well-optimized, but for more complex operations, either
: the libraries already available are insufficiently generalized for
: many people to use them, or they are unavailable (ie, cost $$$, or
: simply aren't well-known or widely distributed).
: The promise of C++ was that such libraries of objects would arise,
: simultaneously reducing development time and improving application
: performance. It hasn't lived up to this promise yet. Everyone is
: still writing their own objects instead of using each others'.
IMHO, "Fischer's Fundamental Theorem" ("The Psychology of Computer
Programming", Weinberg) may have something to do with that. Briefly put,
Generality and "efficiency" (by some definition) are un-alterably opposed.
Note that even in libc, memcpy() is much more useful, and likely to be
used, than scanf(). The former does a reasonably well-specified task
reasonably well, and is amenable to optimization, and is unlikely to
contain subtle bugs (from most vendors anyway :-). The latter has a
much larger specification, and the existing implementations are none
too careful about meeting it. The result is that the audience consists
essentially of those who are "experienced" enough to know of its existence
and a few of the worst gotchas, but not experienced enough to "hand roll"
a more specific and reliable solution on their own, with less work than
understanding (and debugging) the "spec". I submit that the average
programmer transits through that particular stage of development fairly
rapidly. Sure, an experienced programmer will still use it (well,
fgets and sscanf(), :-) for "off the cuff", not intended to be used by
anybody else, stuff. In C++ the situation is worse, because the spec
itself is a "moving target".
In short, when the difficulty of figuring out how to correctly
invoke "canned" code surpasses the difficultly of writing the code,
the prudent person says "no thanks".
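For a concrete feel of where that break-even point sits, here is a hedged
example (mine, not Mike's): the full error checking that a correct strtol()
call needs, which is exactly the sort of "spec" a casual sscanf("%ld") user
never confronts:

    #include <errno.h>
    #include <limits.h>
    #include <stdlib.h>

    /* Hypothetical helper, not from any poster: parse a decimal long and
     * reject empty input, trailing junk, and overflow -- the gotchas that
     * a quick sscanf() call quietly ignores. */
    static int parse_long(const char *s, long *out)
    {
        char *end;
        errno = 0;
        long val = strtol(s, &end, 10);
        if (end == s || *end != '\0')
            return -1;                               /* no digits, or junk */
        if (errno == ERANGE && (val == LONG_MAX || val == LONG_MIN))
            return -1;                               /* out of range */
        *out = val;
        return 0;
    }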
Also, keep in mind that this bounty of re-usable classes is
allegedly coming from the same vendors that can't even get strtol()
and memcmp() right. Would you _really_ want to "bet the product" on
them getting "Generic spreadsheet/word-processor" class right? :-)
The bizarre thing is that code-re-use works amazingly well
when sources are available, to be "bent" to the task at hand, but the
impetus for much of the _commercial_ thrust of code-re-use is from
the sort of folks who would rather eat a live toad than release
accurate documentation, let alone source. Go figure...
Mike
| alb...@agames.com, speaking only for myself
That would not be a bad price, but...
>Those possible failure cases are indeed the heart of the problem. The savings
>and optimizations that can be had through "modern" techniques will drive, IMO,
>society to come to grips with the legal problems associated with this. After
>all, we have a counterproductive and inconsistent view of these things today:
>if the technical system is on its own, it has to be failure-free; if a human
>is involved (even if only in the position of vetting the system's decision -
>medical systems are often built that way), actual liability goes way down,
>because we don't expect humans to make failure-free decisions. This is stupid,
>but not news.
I can't even tell what kind of system you are talking about:
Guided wire, vision based, radar, etc. etc.
The natural world doesn't follow what humans think is logical.
Logic conforms to empirical information. Most people confuse rational,
logical thought with consistency.
I'd recommend the www.abcnews.com web site looking for transcripts of
Nightline back in Nov./Dec. for Danny Hillis's comments about
human-computer mediated systems: Europe has sided with the computer, the
US has sided with the human.
While I will be dialing in, I will be doing bedtime reading.
I did skim the rest of the book yet again. I think Bell/Gray's chapter
is the best in this book, and I'm not just saying that to his face.
Unlike most of the other authors, they have interpolatable graphs,
Venn diagrams to separate ideas, references, etc.
Gordon thinks that if you guys want the "real scoop" you should
download the talks off the ACM web site (I have one of them), as their
presentations in some cases differed from the text.
Then that sounds like a very good reason not to give drivers a "true
autopilot." However, there are still lots of ways that a hypothetical
semi-autonomous vehicle could be easier and safer to drive than a
current vehicle, that don't rise to the level of what you are calling
"an autopilot" (aka "a car that drives itself"). You have a straw-man
proposal (put fully autonomous vehicles on the road today with present
technology) which is obviously out of the question, so you can certainly
shoot holes in it. But that doesn't say much of anything about the
overall concept of using computer and "artificial intelligence"
technology to make personal vehicles safer and more relaxing to operate.
It seems clear to me that any such technology introduction is going to
be a gradual process, which in all probability will follow a similar
progression (at a delay) to that used in larger and more sophisticated
vehicles. Airplanes have moved from "stick shaker" technology, to
terrain detection, to collision avoidance, to future proposed systems
with more and more automation. This process has, as we all know, led to
both benefits and risks; some accidents have been caused by reliance
on automation. That's one of many reasons that it is a gradual process
of improvement, rather than a sudden replacement of "dumb vehicles" with
"smart vehicles." Engineers can learn from the failings of such
systems, and particularly can learn how to design systems with
user-friendly features, that *do* simultaneously make the pilot's job
easier, and yet not promote inattention or carelessness.
All of the concepts above for making airplanes easier and safer to fly,
as well as lots of others, are just as applicable, in principle, to
automobiles. I don't see any reason to believe that they won't follow a
similar course of incremental development.
> From the KISS principle, the autopilot is the wrong solution. Railways
> are the right solution.
There's a term for absurd statements designed to create argument and end
any serious discussion. They are called "trolls". The above seems like
an excellent example.
David desJardins
>It seems clear to me that any such technology introduction is going to
>be a gradual process, which in all probability will follow a similar
>progression (at a delay) to that used in larger and more sophisticated
>vehicles. Airplanes have moved from "stick shaker" technology, to
>terrain detection, to collision avoidance, to future proposed systems
>with more and more automation. This process has, as we all know, led to
>both benefits and risks; some accidents have been caused by reliance
>on automation. That's one of many reasons that it is a gradual process
>of improvement, rather than a sudden replacement of "dumb vehicles" with
>"smart vehicles." Engineers can learn from the failings of such
>systems, and particularly can learn how to design systems with
>user-friendly features, that *do* simultaneously make the pilot's job
>easier, and yet not promote inattention or carelessness.
>
>All of the concepts above for making airplanes easier and safer to fly,
>as well as lots of others, are just as applicable, in principle, to
>automobiles. I don't see any reason to believe that they won't follow a
>similar course of incremental development.
>
> David desJardins
IMO, the first autonomous vehicles should fly. They could
pick up/deliver parcels, and later pick up/deliver people.
There's less traffic in the air. GPS is good enough for coarse
navigation. Birds and fowl can be seen, detected by radar, or
detected with ultrasound. All other flying vehicles can have beacons
and communications capability.
The Airborne Parcel Service article on my home page has more detail.
Regards,
> I can't even tell what kind of system you are talking about:
> Guided wire, vision based, radar, etc. etc.
That was a general comment in how non-human-in-the-loop approaches to control
problems are being handled by today's society and its legal system, especially
in the US of A. The actual technology itself is almost irrelevant.
> I'd recommend the www.abcnews.com web site looking for transcripts of
> Nightline back in Nov./Dec. for Danny Hillis's comments about
> human-computer mediated systems: Europe has sided with the computer, the
> US has sided with human.
We have? He might be thinking of the different trade-offs Airbus and Boeing
have made in their avionics systems, but even there I wouldn't quite agree.
In general, I get the impression of _less_ acceptance hereabouts of high tech,
including computers, than across the pond.
Jan
It is my understanding that it was SGI who went out of their way
on that one. They actually built half a dozen or so portable Indys
(and a few more dummy units) for the film. It would probably have
been the world's most expensive portable had it gone into production.
--
Opinions expressed may not be Kevin D. Kissell
those of the author, let alone Silicon Graphics MIPS Group
those of Silicon Graphics. European Technical Programs
I suggest that the average programmer doesn't transit that stage,
but thinks that s/he does. Yes, libraries have bugs, but the
workarounds are often worse, in practice.
> In short, when the difficulty of figuring out how to correctly
> invoke "canned" code surpasses the difficultly of writing the code,
> the prudent person says "no thanks".
As does the typical programmer.
-andy
--
andy@hmsi
>eug...@cse.ucsc.edu (Eugene Miya) writes:
{snip}
>That was a general comment in how non-human-in-the-loop approaches to control
>problems are being handled by today's society and its legal system, especially
>in the US of A. The actual technology itself is almost irrelevant.
Would appreciate specificity, especially with regard to the USA.....
{snip}
> Jan
cheers, john
Unfortunately for SGI, their lead is declining, and other manufacturers
are starting to make inroads on their turf. Their gold-plated boxes
are beginning to be matched by commodity hardware: PC stuff, possibly
with an Alpha rather than an x86 processor, giving more bang for the buck.
--
-- Joe Buck
See my daughter: http://www.byoc.com/homepage/130481/molly.html
Boring semi-official web page:
http://www.synopsys.com/news/pubs/research/people/jbuck.html
<snip - David writes about some "semi-auto-pilot" systems etc.>
IIRC Toyota is prototyping (or maybe selling in Japan) a cruise control
system that includes some form of range-finder system that slows down as
the car approaches too close to the vehicle in front. I find that the
most irritating feature of using cruise control on a crowded highway - I
have to keep switching off & resuming whenever I encounter someone with
their control set 0.5mph slower.
Also (again IIRC) BMW have a system that automatically applies the
brakes if the car approaches within the "danger-zone" of an object in
front. I assume this would be done gently at first with more brake
applied as the object gets closer. This could be made to automatically
stop in the optimal manner when a snarl-up occurs - thus eliminating
highway pile-ups (also making "ram-raiders" lives very difficult!). This
is akin to the automatic terrain avoidance in some modern fly-by-wire
aircraft.
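A toy sketch of the "gently at first, harder as the object gets closer"
behaviour described above -- my own illustration with made-up thresholds,
not Toyota's or BMW's actual design:

    #include <stdio.h>

    #define DANGER_M 40.0   /* made-up: start braking inside this range (m) */
    #define PANIC_M   5.0   /* made-up: full braking at or below this range */

    /* Returns brake demand in [0.0, 1.0] from the measured range ahead. */
    static double brake_demand(double range_m)
    {
        if (range_m >= DANGER_M) return 0.0;
        if (range_m <= PANIC_M)  return 1.0;
        return (DANGER_M - range_m) / (DANGER_M - PANIC_M);
    }

    int main(void)
    {
        for (double r = 50.0; r >= 0.0; r -= 10.0)
            printf("range %4.1f m -> brake %.2f\n", r, brake_demand(r));
        return 0;
    }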
This overall model appears to be developing along the "freedom within
the envolope" lines. Perhaps in the future we will see "the envelope"
defined by more & more sensors and parameters. Each time a new boundary
is added we move one step closer to the "full auto pilot - or choice of
sub-optimal human control" situation. I believe the first example of
this was ABS braking (were there worries about lawsuits if a driver
relied on the ABS & it didn't perform?). Perhaps the last stage before
the "full auto pilot" will be "total safety control" such that I get in
my car, press the accelerator to the floor & it goes at the maximum safe
speed all the way to my destination.
Hugh.
< not speaking on behalf of Cisco or General Motors >
Actually, when my insurer stopped offering a discount
for ABS they said it was because it didn't lower claims
because people with ABS just drove "worse" under the
assumption that ABS would save their bacon.
And, in fact, this seems to be an important human factors
principal -- people have a level of (perceived) risk they
accept -- if you make things safer, they act accordingly
to return to "their level".
-------> Street is Dangerous ------
| |
| V
--- Drive Faster <---- Widen Street
The problem with this particular feedback loop is that
it increases risk to those outside of it (e.g., pedestrians).
}That doesn't seem like as clear a case of "operator error" as in
}the non-ABS case where the human stomps on the brakes in wet
}weather and skids, because the latter case is what most of us have
}been *trained* for (if we've taken drivers' ed, etc.).
Assuming Americans have (and remember) any usefully
significant amount of driver's training is dangerous.
}This is done either by detecting pumping and treating it as "stomp-stop",
}or forcing the driver, before putting the car in Drive, to prove he
}knows how to perform a "stomp stop" by putting the driver through
}a quick test (akin to a VR experience, but nothing that fancy),
}and otherwise train or reject the driver right there.
Could we get this same test applied to finding the turn signals? :)
}In the end, I think that we won't have auto-pilot in cars until
}a couple of decades after we do in planes and trains.
Much more likely in my opinion is leaving the wheel in
people's hands (for liability reasons) and giving them
more information (e.g., radar detection of other objects)
which will "allow" people to pay less attention (I can
type a few numbers in here on my spreadsheet, the collision
detect will keep its eyes on the road for me...)
}And, personally, as much as I'm against lots of the stupidity
}in tort law (if not tort law entirely), in this case I'm
}*quite* grateful that some degree of "stupidity" makes
}introducing auto-piloted cars much more economically risky
}than it might otherwise. It's one thing to have to share the
}Internet with Microsoft-piloted computers; I'd sure hate to
}share the road with Microsoft-piloted cars.
You mean the ones dead in the left lane with:
It is now safe to turn off your car
on the dashboard :)
John
--
John Hascall, Software Engr. Shut up, be happy. The conveniences you
ISU Computation Center demanded are now mandatory. -Jello Biafra
mailto:jo...@iastate.edu
http://www.cc.iastate.edu/staff/systems/john/welcome.html <-- the usual crud
Actually actually, nobody knows why ABS systems don't seem to improve
safety in studies. The above is one of many theories.
If the above theory is correct, then ABS systems still have (possibly
significant) value: they allow people to drive with less care, which
they presumably prefer (i.e., increasing the utility they derive from
the driving experience), while not decreasing their safety. That's a
good thing.
> The problem with this particular feedback loop is that it increases
> risk to those outside of it (e.g., pedestrians).
I understand this effect in some circumstances, but I don't know of any
evidence that ABS systems increase risk to others on the road.
David desJardins
I've looked into SPARCbooks. And it's not really an issue of choice.
I can't tell whether you saw last week's episode of Nova, the US's
science TV program, but I work with some of those guys studying the West
Antarctic ice sheet. If you looked in the background you would have
seen SUNs and PCs. But there is clearly no short-term market for an
SGI machine (I did take an SGI Post-It[tm] pad with me, grabbed at the last
moment before clearing out of my office for three months), and people kept
asking me whether I had access to an SGI machine down there
(no such luck except over the net {considerably slower}). The real
problems are power and cold. Oh, and cost. But then this isn't a
supercomputing issue until the data gets back to the States.
Well, I would not exactly call SGI boxes "gold plated."
That depends, too, on whether you specifically try to apply that to the
somewhat separate CRI line or other acquisitions.
"Niche" might be a better word. Depends on the application.
"Gold-plated" is hardly an accurate description of SGI servers, and
indicates that someone has not done his homework....
Here is a table that I posted in comp.arch yesterday:
Some examples:                              SPECfp95 fprate    SPECint95 intrate
Machine           MHz  cpus  RAM   US list     rate  per k$        rate  per k$
----------------------------------------------------------------------------
SGI Origin2000    250   16   4 GB    $491k     3019    6.2         2088    4.3
HP V2250          240   16   4 GB    $777k     2471    3.2         2209    2.8
Sun UE10000       336   16   4 GB   $1043k    ~2200    2.1         1718    1.6
DEC 8400          612   14   4 GB    $834k    ~1600    1.9         2143    2.6
----------------------------------------------------------------------------
Notes:
(0) Pricing for HP, Sun, and DEC are derived from an independent
consulting firm. Prices include system, cpus, RAM, enough
I/O to boot, and the O/S license. Other software, required
maintenance, and miscellaneous are not included. SGI pricing
is from my unofficial reading of the SGI price book, effective
May 1, 1998.
(1) The Sun SPEC95 rates were estimated as 1/2 of the 32-cpu results,
and then scaled linearly from the published 250 MHz results to
the new 336 MHz processors. The first step may be slightly
pessimistic, but the second step is seriously optimistic.
(2) The DEC SPECfp_rate result was scaled linearly from 12 to 14 cpus.
(This is quite generous, since the 12-cpu result is strongly
bandwidth-limited already, and adding more cpus will just make
that contention worse.)
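For the record, the per-k$ columns and the scaling in notes (1) and (2) are
just arithmetic; a small sketch that reproduces them, using only numbers
already in the table:

/* Reproduces the rate-per-k$ figures from the table above, plus the
 * linear scaling used in notes (1) and (2). All inputs come straight
 * from the posted table; nothing new is assumed.                      */
#include <stdio.h>

/* linear scaling, e.g. 12 -> 14 cpus or 250 -> 336 MHz */
static double scale_linear(double rate, double from, double to)
{
    return rate * to / from;
}

int main(void)
{
    /* rate per k$ = SPECrate / (US list price in thousands of dollars) */
    printf("Origin2000 fprate/k$: %.2f\n", 3019.0 / 491.0);   /* table lists 6.2 */
    printf("UE10000    fprate/k$: %.2f\n", 2200.0 / 1043.0);  /* table lists 2.1 */

    /* note (2): a 12-cpu SPECfp_rate R is reported for 14 cpus as R*14/12 */
    printf("12 -> 14 cpu scaling factor: %.3f\n", scale_linear(1.0, 12.0, 14.0));
    return 0;
}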
So.....
If SGI boxes are "gold-plated", why are the prices so much lower than
similar (but generally slower) systems from DEC, HP, and Sun ?
It is certainly true that single-processor peecees can give better
price/performance -- if your performance needs are low enough! But
among servers relevant to the HPC space, SGI has the lowest list prices
and (typically) the highest performance.
--
--
John D. McCalpin, Ph.D. Server System Architect
Server Platform Engineering http://reality.sgi.com/mccalpin/
Silicon Graphics, Inc. mcca...@sgi.com 650-933-7407
It is further noted: "We are doomed."
Now that I have just returned from travel and one firefight, and am moving on
to the next, let me make note of two references:
Last Sunday's San Jose Mercury News: opinion section, I think page 7.
In the middle of the page is the woman from the White House asking
computer makers to slow and stop the pace of technology. If you need
more of a reference, let me know and I'll get her name. The Merc has a
pretty good web site (the NY Times is jealous).
Second, locate ex-Senator Bill Bradley's book. He is listening to
Jeremy Rifkin (who was just through the Bay Area on a speaking
engagement for his latest book; I missed that, and I would have raised the
matter of ex-Senator Bradley's endorsement). Bradley, for those outside the
US, is a former basketball player and a favored Democratic party member
for possible future Presidential bids.
Congress is basically populated by somewhat-idealistic former jocks
and lawyers/poli-sci majors, etc. There are a couple of scientist/engineer
types and medical doctors. The Democrats are saddled with labor,
which doesn't like automation (and with others distrustful of technology).
The Republicans are largely old-money conservatives. Do not assume that
a party which nuzzles up to you has your best interests in mind.
Policy makers get mixed signals from hi-tech.
Perot: he actually was a signal for a while. He represented something
of a trend (petered out) which the traditional political parties watched.
Other political parties: not well represented in the US Congress.
That leaves Mr. Gates: young, brash, arrogant, rich by way of means
which traditional policy makers don't understand. O. Hatch and P. Leahy
come right out in their questioning and note that "no one" these days makes
a 28% profit margin. Except for Gates. Independent of his competition in the
computer and electronics industry, policy makers admire the size of his fortune
but also resent it (old money versus new money). To them,
he's a young whippersnapper (they resent him both for the age at which he made
his fortune and for its size). That's how they gain his respect.
Congress has aspects like a playground.
Computers and electronics, compared to rust-belt industries, until
recently represented one of the future industries not asking Congress
for money directly, excepting Petaflops, GC, HPC, etc. [but those tend
more to be users asking than manufacturers]. Additionally, computers
themselves lack the "moral compass" which a lot of people seek in
"intelligent" systems [actually, a lot of us probably like that].
Entrepreneurism: witness where VP Gore went on his recent visit to
Sunnyvale: he went to Lockheed, not Netscape, not Intel. About 15
years ago, very briefly, entrepreneurism did mean Jobs and Woz. Now
it more means Hewlett and Packard.
Regardless of your personal position on things like H-1B visas or taxing
the Internet, Gates is seen as representing the computer industry and
will likely do so for some time. This is why it's important to stay in
touch with your representatives. Because if you don't, oil, steel, car makers,
airplane makers, and other USERS of computers will. And they will have
agendas differing from what you might want.
You can't simply ask Congress to passively stay out of electronics.
You have to actively monitor their staying out because that's how every
other industry does it. Or maybe you want them involved. Then you act
accordingly.
That's representative government.
While I appreciate your comment, I have to disagree with your point about
irrelevance. I made a statement in a meeting in DC around 1980 which
went like this:
If, say, your objective is getting to Mars, then if you ignore the state
of the technology, in your decision to do research a Star Trek
transporter has as much right to research dollars as
ballistic rockets, etc. We have no idea how to do the
former; we have a little experience with the latter.
About that time, John Mashey gave a keynote at Usenix on "software
armies on the march." He cited a short sci-fi story about a general
awaiting execution after his planet lost a war because it kept trying
to perfect its technology while its enemy plodded along.
And I know from reading about Gordon Moore that he would disagree with you.
>> I'd recommend the www.abcnews.com web site looking for transcripts of
>> Nightline back in Dec. for Danny Hillis's comments about
>> human-computer mediated systems: Europe has sided with the computer, the
>> US has sided with human.
>
>We have? He might be thinking of the different trade-offs Airbus and Boeing
>have made in their avionics systems, but even there I wouldn't quite agree.
http://archive.abcnews.com/onair/nightline/html_files/transcripts/ntl1222.html
Remember, all posts are gross generalizations.
>In general, I get the impression of _less_ acceptance hereabouts of high tech,
>including computers, than across the pond.
You mean the US side of the pond? 8^)
We have many Luddites here, and some of them don't even acknowledge it.
I don't trust certain aspects of technology, not because of the technology,
but because of the humans in the loop. And I would think there are such
people in Europe, too. The problem is complexity, and a one-size-fits-all
desire to simplify things sometimes a little too much.
We question everything in the US. Well, almost. Well, maybe a bit. ...
In article <6i5maj$3n8$2...@murrow.corp.sgi.com>, mcca...@frakir.engr.sgi.com
(John McCalpin) writes...
>
>"Gold-plated" is hardly an accurate description of SGI servers, and
>indicates that someone has not done his homework....
>
>Here is a table that I posted in comp.arch yesterday:
>
>Some examples: SPECfp95 fprate SPECint95 intrate
>Machine MHz cpus RAM US list rate per k$ rate per k$
>----------------------------------------------------------------------------
>SGI Origin2000 250 16 4 GB $491k 3019 6.2 2088 4.3
>HP V2250 240 16 4 GB $777k 2471 3.2 2209 2.8
>Sun UE10000 336 16 4 GB $1043k ~2200 2.1 1718 1.6
>DEC 8400 612 14 4 GB $834k ~1600 1.9 2143 2.6
>----------------------------------------------------------------------------
>
>So.....
>
>If SGI boxes are "gold-plated", why are the prices so much lower than
>similar (but generally slower) systems from DEC, HP, and Sun ?
>
>It is certainly true that single-processor peecees can give better
>price/performance -- if your performance needs are low enough! But
>among servers relevant to the HPC space, SGI has the lowest list prices
>and (typically) the highest performance.
John,
The original `gold-plated' comment was in reference to SGIs in the film
and graphics industry. That isn't exactly where we try to place the 8400
box. However, we do have a 4100 configuration with special pricing, which
marketing has named RenderTowers, targeted at the film industry. If I
follow your lead of using SPECrate/$ to characterize value, you can see
there is plenty of it in the 4100 RenderTowers.
My prices are from Steve Briggs, AlphaServer marketing for film industry
products.
Some examples:                              SPECfp95 fprate    SPECint95 intrate
Machine           MHz  cpus  RAM   US list     rate  per k$        rate  per k$
----------------------------------------------------------------------------
SGI Origin2000    250   16   4 GB    $491k     3019    6.2         2088    4.3
DEC 4100
  RenderTower4    600    4   1 GB     $60k      858   14.3          657   10.95
DEC 4100
  RenderTower16   600   16   4 GB    $210k     3432   16.34        2628   12.5 (1)
(1) Rendering is a fully parallelizable application; a 16-CPU RenderTower
    delivers 4x the performance of the 4-CPU box. I've taken the liberty of
    portraying the RenderTower16 as 4x the SPECrate of the 4-CPU box, since
    that is the increase the application would see.
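To spell out note (1) and the per-k$ columns (prices read in thousands of
dollars; the 16-cpu SPECrates are simply 4x the published 4-cpu figures),
here is the arithmetic as a small sketch:

/* Spells out note (1): RenderTower16 figures are the published 4-cpu
 * SPECrates scaled by 4, assuming rendering parallelizes fully.      */
#include <stdio.h>

int main(void)
{
    double fp4 = 858.0, int4 = 657.0;            /* 4-cpu SPECrates        */
    double price4_k = 60.0, price16_k = 210.0;   /* US list prices, in k$  */

    double fp16  = 4.0 * fp4;                    /* 3432 */
    double int16 = 4.0 * int4;                   /* 2628 */

    printf("RenderTower4  fp/k$ %.2f  int/k$ %.2f\n",
           fp4 / price4_k,  int4 / price4_k);    /* 14.30   10.95 */
    printf("RenderTower16 fp/k$ %.2f  int/k$ %.2f\n",
           fp16 / price16_k, int16 / price16_k); /* 16.34   12.51 */
    return 0;
}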