
What to do with all those MIPS


nels...@apollo.uucp

Apr 18, 1988, 4:09:00 PM
to comp.society.futures@news

I wanted to raise a question about how rapid increases in compute
power available to individual users will affect the way we use
computers.

Just lately, the amount of sheer compute power available to an
individual user has been taking huge leaps. While noting that
MIPS is a poorly defined term (some say Meaningless Indicator
of Performance / Second), there is no doubt that there are
about to be a lot of them out there. My company (Apollo) recently
announced a workstation that will offer 40 - 100+ MIPS, depending
on configuration. Startups Ardent and Stellar have also announced
high-performance products and we may reasonably expect that Sun,
HP, and Silicon Graphics will have competing products on the market.
Currently, prices for these machines are in the $70K - $90K range
but competition and a growing market will, no doubt, lower them.

Modern workstations also allow the programmer to treat the network
as a virtual computer. Paging across the network, subroutine calls
to other nodes, and distributed processing are all common to
architectures such as Apollo's. If I want to do a 'build' or 'make'
involving dozens of compiles, I can distribute them across the net
so they will take little more time than one or two compiles on a
single machine. Furthermore, the disk resources of the network,
which may be many 10's or 100's of gigabytes, are all transparently
accessible to me. I suspect that ('the network is the computer')
Sun may offer something along the same lines, and while I think that
our other major competitors are still playing catch up in this area,
clearly this is the way of the future.

A few years ago the compute resources available to a single user
may have been 1 or 2 MIPS and a few 10's of megabytes of virtual
address space. A few years from now a typical user will have 100
MIPS and a seamless virtual address space of gigabytes, not to
mention decent graphics, for a change. A transparent heterogeneous
CPU environment will round out the improvements.

I was wondering whether any of this will change the way we use computers
or the kinds of things we do with them. Most of what I've
seen so far is people doing the Same Old Things, just faster. Now
we can ray-trace an image in 5 minutes that used to take an hour; now
we can do a circuit simulation in an hour that used to run overnight;
now we can do a 75-compile 'build' in 5 minutes that used to take hours,
etc.

I'm concerned that we (or I, anyway) may lack imagination. The basic
tools of my trade (software engineer) are compilers, linkers, interactive
debuggers and software control products (DSEE, in this case). I've
used things like this for years. The ones I have now are faster, and
fancier than what I had a few years ago but they're not fundamentally
different in concept. CAD packages allow the user to enter a schematic,
say, and do a simulation or do the routing and ultimately even the chip
geometry, but except that they can do it faster now, and handle more
gates, tighter design rules, etc, they are not fundamentally different
in concept than what engineers were using 5 years ago. Database systems
still do similar things to what they've always done as well, just faster,
with more data, and better pie-charts (or whatever) 8-).

Does anyone have any thoughts about whether (or if, or when) huge leaps
in compute resources might result in fundamentally *different* ways of
using computers? We always used to worry about being 'disk-bound' or
'CPU-bound' or 'network-bound'. Are we in any danger of becoming
'imagination bound'?

--Peter Nelson

Barry Shein

Apr 19, 1988, 9:03:12 AM

Ah, my favorite subject...

I've been talking for a year or so with vendors like Encore who are
threatening (?) to deliver "minis" in the range of 1000MIPs or more in
the near future (12..24 months.) I suspect this might breathe new
interest into a currently rather boring mini market. One interesting
question also is what will some of the folks who build mainframes do
when all this happens (although they retain a big lead in disk i/o
performance, some of that will surely falter also.) I mean, really,
deliver 10,000MIPS in the same time frame?

No, I don't think that will be the rational response to workstations
with 100MIPs (<$100K) and minis with 1000MIPs ($100K..$500K.) There
will be customers (JC Penney) who just need the I/O channels of the
big mainframes and computes are secondary (an IBM3090/600 should
deliver around 180MIPs right now, that's not *that* shameful :-), but
I can't help but think there are a lot of customers out there who may
hesitate to blow $10M on a mainframe if they can do it on a $200K mini
(not to mention the power, real estate, operational philosophy [you
don't need 50 people to run a mini like you do a mainframe] etc.)
Seriously, how big can their jobs be? How much can they possibly have
grown since they managed on a 10MIPs mainframe 5 years ago?

Specifically, I have been interested in something I refer to as
"wasteful computing". We have reached a point where there is probably
more "waste" from idle processors than over-utilization (or are about
to.) The software hasn't kept up (as was predicted by most everyone.)

Note that you should avoid the value-laden meaning of "waste" in
this context. There's nothing wrong with an idle CPU, it just is
interesting.

Modern workstations already exhibit the beginnings of wasteful
computing. If you showed those bitmap screens and the computations
involved in updating them to someone in the 60's they would have run
screaming out the door. Imagine paying for the cycles and
kilocore-ticks for dragging a window across the screen? Probably would
have cost you $15 or so in 1975 at government rates (all I mean is
they have standard rates, form A21.)

Here's something specific that occurred to me the other day. Everyone
remember the "histogram" or "frequency" sort? That's a sort algorithm
that runs linear to N (N = # of elements.) It works like this:

DECLARE BIGARRAY[ADDRESS_RANGE]

while not EOF
READITEM(I)
BIGARRAY[I] = BIGARRAY[I] + 1
end

That's it: when you hit EOF, one pass over BIGARRAY replays the
items in sorted order. It occurred to me that
you can now easily sort 24 bit data items on a modest workstation.
Most of this is due to larger memories and not MIPs but I think most
of you old timers would agree that you were taught this sort as
basically a useless algorithm due to its vast memory usage. Hmm,
seems useful again all of a sudden! What happened?!
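In modern C the same algorithm is a few lines; a minimal sketch, assuming 8-bit keys for brevity (the 24-bit case Barry mentions just needs a 16M-entry count array; `histogram_sort` is an illustrative name):

```c
#include <stddef.h>

#define KEY_RANGE 256  /* 8-bit keys; 24-bit keys need a 16M-entry array */

/* Histogram ("counting") sort: linear in the number of items, at the
   cost of one counter per possible key value. */
static void histogram_sort(unsigned char *items, size_t n)
{
    size_t count[KEY_RANGE] = {0};
    size_t i, out = 0;
    unsigned k;

    for (i = 0; i < n; i++)            /* BIGARRAY[I] = BIGARRAY[I] + 1 */
        count[items[i]]++;

    for (k = 0; k < KEY_RANGE; k++)    /* replay the keys in ascending order */
        for (; count[k] > 0; count[k]--)
            items[out++] = (unsigned char)k;
}
```

The second loop is the pass the pseudocode leaves implicit: the counts alone *are* the sorted data, but you still walk the count array once to read the items back out.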

Anyhow, more later...

-Barry Shein, Boston University

Jeff Bowles

Apr 19, 1988, 11:39:34 AM
I remember seeing a video in 1985 of the Lucasfilm graphics machine
that had so many cycles to spare that its cursor was a buzzing bumblebee.
How much did it take to do that? A fair amount of CPU, especially measured
by what you were used to five years ago.

I keep gauging current memory (10M wasn't uncommon 3-5 years ago on most
VAX-size systems, and 2-4 years ago smart terminals would be delivered with
0.5-2M, and now we're talking about 100M and in some cases, gigabyte memory
on machines) and comparing to the 3/4M we had on the 11/70 back home. There
are those of you who think THAT is a big number, and remember the 1130 days
when 8K or 32K was a big deal. (I may be off in the numbers - the 11/70 was
my first pseudo-real experience with this sort of thing.)

I wonder what's wasteful and what's not. Certainly, if you use the amount
of memory/MIPS/disk to keep you from having to THINK about a problem, you've
become wasteful - several examples follow:
1) Why houseclean and remove/archive unused files if there's loads of disk
space?
2) Why not use the simplest sorting algorithms all the time, if CPU is easy
to come by?
3) Why use your language's equivalent to "packed array of..." when you have
lots of memory?

Many portability arguments come up as answers to these sorts of things - just
because you have it cushy doesn't mean everyone who receives copies of your
software will have it as nice. Also, more memory/disk/MIPS enable you to get
farther into a problem, as opposed to solving it faster (sometimes) - I believe
that Tom Duff quotes a law about graphics, that generating high-res images
will take N seconds, and if you double the speed of the CPU, it'll still take
N seconds - because you're now interested in generating more complex images,
not twice as many of the older images.

Jeff Bowles

Steven Seida

Apr 19, 1988, 12:27:12 PM
In response to:

Date: 18 Apr 88 20:09:00 GMT
From: apollo!nelson_p%apollo.uucp%beaver.cs.washington.edu%bu-cs....@buita.BU.EDU
Subject: What to do with all those MIPS
Message-Id: <3b8a86...@apollo.uucp>
Sender: info-futures-request%bu-cs....@buita.BU.EDU
To: info-f...@bu-cs.bu.edu

> Does anyone have any thoughts about whether (or if, or when) huge leaps
> in compute resources might result in fundamentally *different* ways of
> using computers? ...
>
> --Peter Nelson

I am reading the science fiction book "Necromancer" by ?????????? that
seems to explore some of the possibilities of computing in the far-
distant future. The basic concept is that there is one big network
that everyone can have access to through a user interface that effectively
plugs into your brain. And that accessing information becomes the real
interest of computer wizards. Solving problems takes a back seat and
is mainly directed at breaking through protection software.

Maybe a little shadow of the present.

Steven Seida

gold...@osiris.cso.uiuc.edu

Apr 19, 1988, 1:44:00 PM

No.

Peter Scott

Apr 19, 1988, 2:20:20 PM
Sorry to burst your bubble, but it takes only a few instructions to
implement a buzzing bumblebee cursor (erase last sprite; increment count
MOD N; display count-th sprite at new coordinates). Has anyone done this
for the Mac? Should be easy.
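Peter's few instructions, sketched in C (the stub names `erase_sprite`/`draw_sprite` and the frame count are assumptions for illustration, standing in for whatever drawing calls the window system actually provides):

```c
#define NUM_SPRITES 8  /* assumed: frames in the bee animation */

/* Stubs standing in for the window system's drawing calls. */
static void erase_sprite(int frame, int x, int y) { (void)frame; (void)x; (void)y; }
static void draw_sprite(int frame, int x, int y)  { (void)frame; (void)x; (void)y; }

/* One cursor tick: erase the last sprite, increment the count MOD N,
   display the count-th sprite at the new coordinates. */
static int cursor_tick(int frame, int oldx, int oldy, int newx, int newy)
{
    erase_sprite(frame, oldx, oldy);
    frame = (frame + 1) % NUM_SPRITES;
    draw_sprite(frame, newx, newy);
    return frame;   /* caller keeps this for the next tick's erase */
}
```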

I have read articles on cryptanalysis, number theory, fluid dynamics,
and ray-tracing graphics that make it plain that the next few
orders of magnitude of CPU cycles are already spoken for, at least in
those fields. However, it will be interesting to observe the impact of
two orders of magnitude improvement in performance for personal computers,
and their applications in the home (speech-recognizing vacuum cleaners?
Image-processing toasters?)

Peter Scott (pjs%gro...@jpl-mil.jpl.nasa.gov)

Michael T Sullivan

Apr 19, 1988, 3:49:03 PM
In article <3b8a86...@apollo.uucp>, nels...@apollo.uucp writes:
> ...

> Does anyone have any thoughts about whether (or if, or when) huge leaps
> in compute resources might result in fundamentally *different* ways of
> using computers? We always used to worry about being 'disk-bound' or
> 'CPU-bound' or 'network-bound'. Are we in any danger of becoming
> 'imagination bound'?

This always happens when new technology gets not so new. Brings to mind
the story of how Western Union could have been what AT&T is (or was before
the breakup). Their response was "We're not in the phone business, we're
in the telegraph business." When things start getting stale there is always
someone on the horizon with a new way of thinking, brought about because
he/she hasn't used the old way.

I think that parallel computers are going to bring about a big change in
our current way of thinking. We are going to have to think of programs
that can be written to take advantage of parallelism, instead of
the current linear way of thinking.

--
Michael Sullivan {uunet|attmail}!vsi!sullivan
sull...@vsi.com
HE V MTL

J Storrs Hall

Apr 19, 1988, 4:39:14 PM
My guess is that "all those MIPS" are going to make all those robots
that everybody thought were coming ten years ago possible in the
next ten. The crux is vision. There's a host of tasks that you can
do with cheap, low-precision "effectors" if you have visual feedback,
that are virtually impossible without it.

Sometime in the next ten years, there will be a two-year period where
before, there are virtually no household robots, and after, they're
common. I reason by analogy to CDs and VCRs, and on the theory
that there are plenty of yuppies out there who would easily spend
$5000 for a household factotum. (Myself included.)

We're at the point, I think, where the underlying technology is moving
enough faster than the product developers that a sort of catastrophe-
theory effect happens: We'll be well beyond the stage where robots
become feasible (and cost-effective) before the marketing departments
realize they are even possible. Part of the effect will be due to
the push-and-flop of household robots of the past decade, which will
cause a justifiable reluctance on the part of management.

In common, general-purpose computers, I'm sure we can soak 10 or 20
thousand MIPS into the scheduler and window system as if it were
never there at all ... :^)

--JoSH

Ken Yap

Apr 19, 1988, 6:45:22 PM
|2) Why not use the simplest sorting algorithms all the time, if CPU is easy
| to come by?

Because an O(n log n) sort will still handle a million elements
decently, while your quadratic sort will be running till the next
eclipse of the moon, even if you have a Cray. And besides, what's so
complicated about mergesort? I still have problems understanding
shellsort.
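For what it's worth, mergesort really is short; a minimal C sketch (the name and the caller-supplied scratch buffer are illustrative choices):

```c
#include <stddef.h>
#include <string.h>

/* Classic O(n log n) mergesort on a[lo..hi): sort each half,
   merge the halves into tmp, copy the merged run back. */
static void merge_sort(int *a, int *tmp, size_t lo, size_t hi)
{
    size_t mid, i, j, k;

    if (hi - lo < 2)
        return;                         /* 0 or 1 elements: already sorted */
    mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);
    merge_sort(a, tmp, mid, hi);

    for (i = lo, j = mid, k = lo; k < hi; k++)   /* merge the two halves */
        tmp[k] = (j >= hi || (i < mid && a[i] <= a[j])) ? a[i++] : a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo) * sizeof *a);
}
```

A million elements is only about 20 levels of recursion here; a quadratic sort on the same input would be doing on the order of 10^12 comparisons.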

Ken

Barry Shein

Apr 21, 1988, 12:10:54 AM

>No.

That might be taking the current "just say no" campaign a little too
literally, don't you think? (that was the full text of your message.)

-B

Peter da Silva

Apr 21, 1988, 9:31:20 PM
Doesn't anyone else remember a recent toy called "Julie", a talking doll
that handles pattern recognition and contains a 33 MIPS digital signal
processor? One thing that will happen with all those MIPS is fancier toys.
--
-- Peter da Silva `-_-' ...!hoptoad!academ!uhnix1!sugar!peter
-- "Have you hugged your U wolf today?" ...!bellcore!tness1!sugar!peter
-- Disclaimer: These aren't mere opinions, these are *values*.

Andrew MacLeod

Apr 22, 1988, 2:38:25 PM
In article <880419162...@martin-den.ARPA> seida%martin-...@BUITA.BU.EDU (Steven Seida) writes:

>
>I am reading the science fiction book "Necromancer" by ?????????? that
>seems to explore some of the possibilities of computing in the far-

the book in question is actually called Neuromancer, but I don't remember
the author. Definitely some strangeness in there.

David Collier-Brown

Apr 23, 1988, 1:49:47 PM
In article <880419130...@bu-cs.bu.edu> b...@BU-CS.BU.EDU (Barry Shein) writes:
| Ah, my favorite subject...
|
| I've been talking for a year or so with vendors like Encore who are
| threatening (?) to deliver "minis" in the range of 1000MIPs or more in
| the near future (12..24 months.) I suspect this might breathe new
| interest into a currently rather boring mini market. One interesting
| question also is what will some of the folks who build mainframes do
| when all this happens (although they retain a big lead in disk i/o
| performance some of that will surely falter also.) I mean, really,
| deliver 10,000MIPS in the same time frame?

Well, I can see a move away from big, bare, specially-configured
mainframes for TP (Transaction processing) and toward physically
smaller, more powerful machines with more-or-less ordinary
operating systems instead of TP monitors. Of course, the company I
work for noticed that a **number** of years ago [1], so I'm not
saying anything new.

A portion of the market needs a medium or large machine, able to
run a few hundred terminals, with a reasonably large and fast set of
disks, so that they can service businesses which are physically
centralized (eg, all the departments of a large library), but may
have relationships with other, distant, businesses (eg, bank
branches dealing with ATMs and clearinghouses). At the low end,
they can use machines much like current workstations with a couple
of extra, rather dumb, terminals attached. At the high end, they're
still having to buy machines like the Honeywell DPS-8 (GCOS)
machine. From experience, I'd say that the high-mips minis could get
into the market if they had enough intelligence in inexpensive
front-end processors or front-end machines.

--dave (and if they buy IBM, they get to regret it) c-b
[1] Geac is a Canadian manufacturer of transaction-processing
machines, mostly in the library and financial markets. The two
lines (8000 & 9000) both run fairly normal-looking operating
systems, and are about as far from CICS and even TPS-8 as you
can get. Why, we even write operating system code in high-level
languages and applications in purpose-built ones. (The
preceding has been a paid political announcement of the
we-hate-CICS association (:-))
--
David Collier-Brown. {mnetor yunexus utgpu}!geac!daveb
Geac Computers International Inc., | Computer Science loses its
350 Steelcase Road,Markham, Ontario, | memory (if not its mind)
CANADA, L3R 1B3 (416) 475-0525 x3279 | every 6 months.

Doug Thompson

Apr 24, 1988, 1:36:43 AM


UN>From: P...@grouch.JPL.NASA.GOV (Peter Scott)
UN>
UN>least in those fields. However, it will be interesting to observe
UN>the impact of two orders of magnitude improvement in performance
UN>for personal computers, and their applications in the home
UN>(speech-recognizing vacuum cleaners? Image-processing toasters?)
UN>

Yeah, I kinda think you're on to something there. The decline in price
of computer hardware continues. Micros in the home are coming to have
more and more useful and usable computing power.

As an example, I am writing this on an IBM AT in my living room. As I
write, a uucp mailer is running in the background importing news from a
Vax at the university. But I still have enough memory and a spare modem
and com port so I can shell out of emacs here and fetch data from one of
thousands of computers around the world for which I have phone numbers
and log-on scripts.

This is a home computer. When the task in the background is not
exchanging mail and news with other computers it is open for the general
modem-owning public to call up and read news, upload or download files, or
whatever.

Again, this is a home computer. While it probably is slightly
heavier-duty than the average home computer, with 1Mb RAM and 60Mb hard
disk, it is by no means a remarkable or unusual machine. Perhaps the
software is remarkable. It is all experimental, beta-test and whatnot,
with a tendency toward instability and the odd bug -- but -- it has
completely displaced the TV since newsgroups are so much more
interesting. And, with access to the library card-catalogue by modem and
the IPS news service, it is getting to the point of replacing the
newspaper and it has replaced a lot of research hours in the library.

As mass storage devices continue to decline in price, and achieve orders
of magnitude leaps in capacity, we are rapidly approaching the point
where the entire information economy could be conducted at a computer
terminal in the home. Newspapers, books, magazines, television -- all
can be delivered on data lines and stored on magnetic media and
displayed on a graphics monitor.

This is the home entertainment centre of the future. But it is a very
different sort of home entertainment centre. More than just a recipient
of a stream of data, as a TV is, it can store, process, sort, and correlate
that data, and allow you to immediately respond, send and receive mail,
pass on interesting documents, or fire off a complaint to the Prime
Minister.

I suspect that the biggest impact computers will have on society as a
whole in the next 25 years will be in transforming the home and the
information industry by putting significant computing power in the hands
of everyone.

It was not very many years ago that Usenet meant a Vax, and hundreds of
thousands of dollars worth of hardware. Today, it is running on PCs. No
hardware revolution brought this about. Indeed, the software is all PD
or shareware. As it is perfected and moves beyond beta, the home usenet
site may not be all that much of an oddity.

Should be great fun!

Well, 2 hours of news at 2400 baud has arrived, and unbatching has
begun. In another 15 minutes it'll be done and I can play with all the
newest stuff -- and this message will be fired back out to the net.

It doesn't take many MIPs, it takes ingenious software and a lot of disk
space. :-)
------------------------------------------------------------------------
Fido 1:221/162 -- 1:221/0 280 Phillip St.,
UUCP: !watmath!isishq!doug Unit B-3-11
Waterloo, Ontario
Bitnet: fido@water Canada N2L 3X1
Internet: do...@isishq.math.waterloo.edu (519) 746-5022
------------------------------------------------------------------------

---
* Origin: ISIS International H.Q. (II) (Opus 1:221/162)
SEEN-BY: 221/0 162 172

Dragos Ruiu

Apr 24, 1988, 11:42:06 PM

William Gibson wrote "Count Zero", "Neuromancer" and "Burning Chrome". All
recommended for anyone interested in computers and fiction. Ask about him in
alt.cyberpunk.
--
Dragos Ruiu ru...@dragos.UUCP
...alberta!dragos!ruiu "cat ansi.c | grep -v noalias >proper.c"

Eric M. Kessner, K.S.C.

Apr 27, 1988, 5:47:20 PM


It was written by William Gibson, and if you haven't read it, I highly recommend
getting a copy. It contains some EXTREMELY interesting ideas about the future of
computing, so it's worth picking up even if you don't particularly like SF.

()()()()()()()()()()()()()()()
Eric Kessner | "Oh no! John's been eaten by rats!"
kes...@tramp.colorado.EDU | "You mean he's been 'E-rat-icated'?"
()()()()()()()()()()()()()()()

eric townsend

Apr 30, 1988, 4:32:56 AM
In article <10...@aucs.UUCP>, 8207...@aucs.UUCP (Andrew MacLeod) writes:


It's _Neuromancer_, by William Gibson. Other books by Gibson:
_Count Zero_, a sort-of sequel to _Neuromancer_.
_Burning Chrome_, a collection of short stories, some of which were incorporated
into _Neuro_ and _Zero_.

New book on the way out: _Mona_Lisa_Overdrive_ (I've got my copy pre-ordered!)

Gibson is currently working on the first re-write of the script to Alien III.
Movie based on _New_Rose_Hotel_ to start filming this fall. Script written
by Gibson and John Shirley (I think). Gibson's actually involved in the
production of this one.

I don't know that _Neuro_ will give as much help in "what to do with all
the MIPS" as it will with "what should we try to build/interface with?"
--
Just another journalist with too many spare MIPS...
"The truth of an opinion is part of its utility." -- John Stuart Mill
J. Eric Townsend ->uunet!nuchat!flatline!erict smail:511Parker#2,Hstn,Tx,77007
