I wanted to raise a question about how rapid increases in compute
power available to individual users will affect the way we use
computers.
Just lately, the amount of sheer compute power available to an
individual user has been taking huge leaps. While noting that
MIPS is a poorly defined term (some say Meaningless Indicator
of Performance / Second), there is no doubt that there are
about to be a lot of them out there. My company (Apollo) recently
announced a workstation that will offer 40 - 100+ MIPS, depending
on configuration. Startups Ardent and Stellar have also announced
high-performance products, and we may reasonably expect that Sun,
HP, and Silicon Graphics will have competing products on the market.
Currently, prices for these machines are in the $70K - $90K range,
but competition and a growing market will, no doubt, lower them.
Modern workstations also allow the programmer to treat the network
as a virtual computer. Paging across the network, subroutine calls
to other nodes, and distributed processing are all common to
architectures such as Apollo's. If I want to do a 'build' or 'make'
involving dozens of compiles, I can distribute them across the net
so they will take little more time than one or two compiles on a
single machine. Furthermore, the disk resources of the network,
which may be many 10's or 100's of gigabytes, are all transparently
accessible to me. I suspect that Sun ('the network is the computer')
may offer something along the same lines, and while I think that
our other major competitors are still playing catch-up in this area,
clearly this is the way of the future.
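Just to make the 'build' example concrete, here is a rough sketch in C.
This is NOT how DSEE or anyone's tool actually farms work out over the
net; it's only a plain fork/exec illustration of running several
compiles at once, with made-up file names and a generic 'cc' command,
and it assumes the sources sit on a filesystem every node can see.

/* Sketch: launch one compile per source file, then wait for all.  */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    /* Hypothetical source files for the example. */
    const char *sources[] = { "parser.c", "lexer.c", "codegen.c", "optimize.c" };
    int n = sizeof(sources) / sizeof(sources[0]);
    int i;

    for (i = 0; i < n; i++) {           /* start one compile per source */
        pid_t pid = fork();
        if (pid == 0) {                 /* child: run the compiler */
            execlp("cc", "cc", "-c", sources[i], (char *)NULL);
            perror("execlp");           /* only reached if exec fails */
            _exit(1);
        } else if (pid < 0) {
            perror("fork");
            return 1;
        }
    }
    for (i = 0; i < n; i++)             /* wait for every compile to finish */
        wait(NULL);

    puts("all compiles done");
    return 0;
}

The wall-clock time then comes out roughly equal to the slowest single
compile rather than the sum of all of them, which is the whole point.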
A few years ago the compute resources available to a single user
may have been 1 or 2 MIPS and a few 10's of megabytes of virtual
address space. A few years from now a typical user will have 100
MIPS and a seamless virtual address space of gigabytes, not to
mention decent graphics, for a change. A transparent, heterogeneous
CPU environment will round out the improvements.
I was wondering whether any of this will change the way we use
computers or the kinds of things we do with them. Most of what I've
seen so far is people doing the Same Old Things, just faster. Now
we can ray-trace an image in 5 minutes that used to take an hour; now
we can do a circuit simulation in an hour that used to run overnight;
now we can do a 75-compile 'build' in 5 minutes that used to take hours,
etc.
I'm concerned that we (or I, anyway) may lack imagination. The basic
tools of my trade (software engineer) are compilers, linkers, interactive
debuggers and software control products (DSEE, in this case). I've
used things like this for years. The ones I have now are faster and
fancier than what I had a few years ago, but they're not fundamentally
different in concept. CAD packages allow the user to enter a schematic,
say, and do a simulation or do the routing and ultimately even the chip
geometry, but except that they can do it faster now, and handle more
gates, tighter design rules, etc., they are not fundamentally different
in concept from what engineers were using 5 years ago. Database systems
still do similar things to what they've always done as well, just faster,
with more data, and better pie-charts (or whatever) 8-).
Does anyone have any thoughts about whether (or if or when) huge leaps
in compute resources might result in fundamentally *different* ways of
using computers? We always used to worry about being 'disk-bound' or
'CPU-bound' or 'network-bound'. Are we in any danger of becoming
'imagination bound'?
--Peter Nelson
I've been talking for a year or so with vendors like Encore who are
threatening (?) to deliver "minis" in the range of 1000MIPs or more in
the near future (12..24 months.) I suspect this might breathe new
interest into a currently rather boring mini market. One interesting
question also is what will some of the folks who build mainframes do
when all this happens (although they retain a big lead in disk i/o
performance some of that will surely falter also.) I mean, really,
deliver 10,000MIPS in the same time frame?
No, I don't think that will be the rational response to workstations
with 100MIPs (<$100K) and minis with 1000MIPs ($100K..$500K.) There
will be customers (JC Penneys) who just need the I/O channels of the
big mainframes and computes are secondary (an IBM3090/600 should
deliver around 180MIPs right now, that's not *that* shameful :-), but
I can't help but think there are a lot of customers out there who may
hesitate to blow $10M on a mainframe if they can do it on a $200K mini
(not to mention the power, real estate, operational philosophy [you
don't need 50 people to run a mini like you do a mainframe] etc.)
Seriously, how big can their jobs be? How much can they possibly have
grown since they managed on a 10MIPs mainframe 5 years ago?
Specifically, I have been interested in something I refer to as
"wasteful computing". We have reached a point where ther is probably
more "waste" from idle processors than over-utilization (or are about
to.) The software hasn't kept up (as was predicted by most everyone.)
Note that you should avoid the value-laden meaning of "waste" in
this context. There's nothing wrong with an idle CPU, it just is
interesting.
Modern workstations already exhibit the beginnings of wasteful
computing. If you showed those bitmap screens and the computations
involved in updating them to someone in the 60's, they would have run
screaming out the door. Imagine paying for the cycles and
kilocore-ticks for dragging a window across the screen. That probably
would have cost you $15 or so in 1975 at government rates (all I mean
is that they have standard rates; form A21.)
Here's something specific that occurred to me the other day. Everyone
remember the "histogram" or "frequency" sort? That's a sort algorithm
that runs in time linear in N (N = # of elements.) It works like this:
DECLARE BIGARRAY[ADDRESS_RANGE]
while not EOF
    READITEM(I)
    BIGARRAY[I] = BIGARRAY[I] + 1
end
That's it: when you hit EOF, the data is sorted. It occurred to me that
you can now easily sort 24-bit data items on a modest workstation.
Most of this is due to larger memories and not MIPS, but I think most
of you old timers would agree that you were taught this sort as
basically a useless algorithm due to its vast memory usage. Hmm,
seems useful again all of a sudden! What happened?!
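For the curious, here's roughly what it looks like in C for 24-bit
keys. This is only a sketch (the read-from-stdin loop and names are
made up for illustration), and the output pass at the end is the part
the pseudocode above glosses over. Note the count array alone is 16M
entries, which is exactly the "vast memory usage" that used to
disqualify the algorithm.

/* Histogram/counting sort for 24-bit keys read from stdin. */
#include <stdio.h>

#define RANGE (1L << 24)            /* 16,777,216 buckets */

static unsigned int count[RANGE];   /* ~64MB of counters; a narrower
                                       counter type shrinks this */
int main(void)
{
    unsigned long key, c;

    /* Counting pass: one increment per input item, linear in N. */
    while (scanf("%lu", &key) == 1)
        if (key < RANGE)
            count[key]++;

    /* Output pass: walk the buckets in order to emit the sorted data. */
    for (key = 0; key < RANGE; key++)
        for (c = 0; c < count[key]; c++)
            printf("%lu\n", key);

    return 0;
}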
Anyhow, more later...
-Barry Shein, Boston University
I keep looking at current memory sizes (10M wasn't uncommon 3-5 years ago on
most VAX-size systems, 2-4 years ago smart terminals would be delivered with
0.5-2M, and now we're talking about 100M and in some cases gigabytes of memory
on machines) and comparing them to the 3/4M we had on the 11/70 back home. There
are those of you who think THAT is a big number, and remember the 1130 days
when 8K or 32K was a big deal. (I may be off in the numbers - the 11/70 was
my first pseudo-real experience with this sort of thing.)
I wonder what's wasteful and what's not. Certainly, if you use the amount
of memory/MIPS/disk to keep you from having to THINK about a problem, you've
become wasteful - several examples follow:
1) Why houseclean and remove/archive unused files if there's loads of disk
space?
2) Why not use the simplest sorting algorithms all the time, if CPU is easy
to come by?
3) Why use your language's equivalent to "packed array of..." when you have
lots of memory?
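To put a number on (3), here's a toy C comparison of a million boolean
flags kept as plain ints versus packed one per bit. The names and
macros are just made up for the example.

#include <stdio.h>
#include <limits.h>

#define NFLAGS 1000000L

int unpacked[NFLAGS];                    /* ~4MB with 32-bit ints */
unsigned char packed[(NFLAGS + CHAR_BIT - 1) / CHAR_BIT];   /* ~125KB */

#define SET_BIT(a, i)  ((a)[(i) / CHAR_BIT] |= (1 << ((i) % CHAR_BIT)))
#define TEST_BIT(a, i) (((a)[(i) / CHAR_BIT] >> ((i) % CHAR_BIT)) & 1)

int main(void)
{
    long i = 123456L;

    unpacked[i] = 1;          /* the lazy, memory-hungry way */
    SET_BIT(packed, i);       /* the packed way: a few more cycles,
                                 one thirty-second of the space */

    printf("unpacked: %lu bytes, packed: %lu bytes\n",
           (unsigned long)sizeof unpacked, (unsigned long)sizeof packed);
    printf("flag %ld: %d and %d\n", i, unpacked[i], TEST_BIT(packed, i));
    return 0;
}

Whether the extra code is worth 31 bits out of 32 is exactly the
judgment that cheap memory is eroding.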
Many portability arguments come up as answers to these sorts of things - just
because you have it cushy doesn't mean everyone who receives copies of your
software will have it as nice. Also, more memory/disk/MIPS enable you to get
farther into a problem, as opposed to solving it faster (sometimes) - I believe
that Tom Duff quotes a law about graphics, that generating high-res images
will take N seconds, and if you double the speed of the CPU, it'll still take
N seconds - because you're now interested in generating more complex images,
not twice as many of the older images.
Jeff Bowles
Does anyone have any thoughts about whether (or if or when) huge leaps
in compute resources might result in fundamentally *different* ways of
using computers? ...
--Peter Nelson
I am reading the science fiction book "Necromancer" by ?????????? that
seems to explore some of the possibilities of computing in the far-
distant future. The basic concept is that there is one big network
that everyone can have access to through a user interface that effectively
plugs into your brain, and that accessing information becomes the real
interest of computer wizards. Solving problems takes a back seat and
is mainly directed at breaking through protection software.
Maybe a little shadow of the present.
Steven Seida
I have read articles on cryptanalysis, number theory, fluid dynamics,
and ray-tracing graphics that make it plain that the next few
orders of magnitude of CPU cycles are already spoken for, at least in
those fields. However, it will be interesting to observe the impact of
two orders of magnitude improvement in performance for personal computers,
and their applications in the home (speech-recognizing vacuum cleaners?
Image-processing toasters?)
Peter Scott (pjs%gro...@jpl-mil.jpl.nasa.gov)
This always happens when new technology gets not so new. Brings to mind
the story of how Western Union could have been what AT&T is (or was before
the breakup). Their response was "We're not in the phone business, we're
in the telegraph business." When things start getting stale there is always
someone on the horizon with a new way of thinking, brought about because
he/she hasn't used the old way.
I think that parallel computers are going to bring about a big change in
our current way of thinking. We are going to have to think of programs
that can be written taking advantage of parallelism, instead of
the current linear way of thinking.
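As a trivial illustration of the restructuring, here is a sum carved
into independent pieces and recombined, instead of one linear loop.
POSIX threads are used purely as a stand-in for whatever parallel
machinery your machine really offers; the sizes are arbitrary.

/* Parallel array sum: split, compute partial sums, combine. */
#include <stdio.h>
#include <pthread.h>

#define N        1000000
#define NWORKERS 4

static double data[N];

struct chunk { int lo, hi; double sum; };

static void *partial_sum(void *arg)
{
    struct chunk *c = arg;
    int i;

    c->sum = 0.0;
    for (i = c->lo; i < c->hi; i++)
        c->sum += data[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];
    struct chunk chunks[NWORKERS];
    double total = 0.0;
    int i;

    for (i = 0; i < N; i++)
        data[i] = 1.0;                     /* dummy data */

    /* Carve the problem into independent pieces... */
    for (i = 0; i < NWORKERS; i++) {
        chunks[i].lo = i * (N / NWORKERS);
        chunks[i].hi = (i + 1) * (N / NWORKERS);
        pthread_create(&tid[i], NULL, partial_sum, &chunks[i]);
    }
    /* ...then combine the partial results. */
    for (i = 0; i < NWORKERS; i++) {
        pthread_join(tid[i], NULL);
        total += chunks[i].sum;
    }
    printf("total = %.0f\n", total);       /* should print 1000000 */
    return 0;
}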
--
Michael Sullivan {uunet|attmail}!vsi!sullivan
sull...@vsi.com
HE V MTL
Sometime in the next ten years, there will be a two-year period before
which there are virtually no household robots, and after which they're
common. I reason by analogy to CDs or VCRs, and on the theory
that there are plenty of yuppies out there who would easily spend
$5000 for a household factotum. (Myself included.)
We're at the point, I think, where the underlying technology is moving
enough faster than the product developers that a sort of catastrophe-
theory effect happens: We'll be well beyond the stage where robots
become feasible (and cost-effective) before the marketing departments
realize they are even possible. Part of the effect will be due to
the push-and-flop of household robots of the past decade, which will
cause a justifiable reticence on the part of management.
In common, general-purpose computers, I'm sure we can soak 10 or 20
thousand MIPS into the scheduler and window system as if it were
never there at all ... :^)
--JoSH
Because an O(n log n) sort will still handle a million elements
decently, while your quadratic sort will be running till the next
eclipse of the moon, even if you have a Cray. And besides, what's so
complicated about mergesort? I still have problems understanding
shellsort.
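Since you ask, here's the whole thing in about twenty lines of C (a
throwaway sketch, not tuned for anything). For a million elements it
does on the order of 20 million comparisons; a quadratic sort does
around half a trillion, which is where the lunar eclipse comes in.

/* Top-down mergesort using one scratch buffer. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void msort(int *a, int *tmp, int n)
{
    int mid, i, j, k;

    if (n < 2)
        return;
    mid = n / 2;
    msort(a, tmp, mid);                 /* sort the left half  */
    msort(a + mid, tmp, n - mid);       /* sort the right half */

    /* merge the two sorted halves into tmp, then copy back */
    i = 0; j = mid; k = 0;
    while (i < mid && j < n)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < n)   tmp[k++] = a[j++];
    memcpy(a, tmp, n * sizeof *a);
}

int main(void)
{
    int a[] = { 42, 7, 19, 3, 99, 7, 1 };
    int n = sizeof a / sizeof a[0];
    int *tmp = malloc(n * sizeof *tmp);
    int i;

    msort(a, tmp, n);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    putchar('\n');
    free(tmp);
    return 0;
}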
Ken
that might be taking the current "just say no" campaign a little too
literally, don't you think? (that was the full text of your message.)
-B
>
>I am reading the science fiction book "Necromancer" by ?????????? that
>seems to explore some of the possibilities of computing in the far-
The book in question is actually called Neuromancer..., but I don't remember
the author. Definitely some strangeness in there.
Well, I can see a move away from big, bare, specially-configured
mainframes for TP (transaction processing) and toward physically
smaller, more powerful machines with more-or-less ordinary
operating systems instead of TP monitors. Of course, the company I
work for noticed that a **number** of years ago [1], so I'm not
saying anything new.
A portion of the market needs a medium or large machine, able to
run a few hundred terminals, with a reasonably large and fast set of
disks, so that they can service businesses which are physically
centralized (e.g., all the departments of a large library), but may
have relationships with other, distant businesses (e.g., bank
branches dealing with ATMs and clearinghouses). At the low end,
they can use machines much like current workstations with a couple
of extra, rather dumb, terminals attached. At the high end, they're
still having to buy machines like the Honeywell DPS-8 (GCOS)
machine. From experience, I'd say that the high-MIPS minis could get
into the market if they had enough intelligence in inexpensive
front-end processors or front-end machines.
--dave (and if they buy IBM, they get to regret it) c-b
[1] Geac is a Canadian manufacturer of transaction-processing
machines, mostly in the library and financial markets. The two
lines (8000 & 9000) both run fairly normal-looking operating
systems, and are about as far from CICS and even TPS-8 as you
can get. Why, we even write operating system code in high-level
languages and applications in purpose-built ones. (The
preceding has been a paid political announcement of the
we-hate-CICS association (:-))
--
David Collier-Brown. {mnetor yunexus utgpu}!geac!daveb
Geac Computers International Inc., | Computer Science loses its
350 Steelcase Road,Markham, Ontario, | memory (if not its mind)
CANADA, L3R 1B3 (416) 475-0525 x3279 | every 6 months.
William Gibson wrote "Count Zero", "Neuromancer" and "Burning Chrome". All
recommended for anyone interested in computers and fiction. Ask about him in
alt.cyberpunk.
--
Dragos Ruiu ru...@dragos.UUCP
...alberta!dragos!ruiu "cat ansi.c | grep -v noalias >proper.c"
It was written by William Gibson, and if you haven't read it, I highly recommend
getting a copy. It contains some EXTREMELY interesting ideas about the future of
computing, so it's worth picking up even if you don't particularly like SF.
()()()()()()()()()()()()()()()
Eric Kessner | "Oh no! John's been eaten by rats!"
kes...@tramp.colorado.EDU | "You mean he's been 'E-rat-icated'?"
()()()()()()()()()()()()()()()
It's _Neuromancer_, by William Gibson. Other books by Gibson:
_Count Zero_, a sort-of sequel to _Neuromancer_.
_Burning Chrome_, a collection of short stories, some of which were incorporated
into _Neuro_ and _Zero_.
New book on the way out: _Mona_Lisa_Overdrive_ (I've got my copy pre-ordered!)
Gibson is currently working on the first re-write of the script to Alien III.
Movie based on _New_Rose_Hotel_ to start filming this fall. Script written
by Gibson and John Shirley (I think). Gibson's actually involved in the
production of this one.
I don't know that _Neuro_ will give as much help in "what to do with all
the MIPS" as it will with "what should we try to build/interface with?"
--
Just another journalist with too many spare MIPS...
"The truth of an opinion is part of its utility." -- John Stuart Mill
J. Eric Townsend ->uunet!nuchat!flatline!erict smail:511Parker#2,Hstn,Tx,77007