
Killer Micros and vectorized code


Eugene Brooks

Mar 10, 1990, 6:04:48 PM

I apologize if you are seeing more than one copy of this. An earlier
version had a badly placed typo and I cancelled it. The cancel worked
locally, but I do not know if "cancel chasers" follow and kill off
copies of the bad article which may have been delivered.

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

To those of you who say I have missed a third item in the cost equation,
the value of your time which is "wasted" if a 1 YMP CPU hour job is
stretched to 5 hours real time, I would like to know where I can get
a job in computational physics that will pay my salary at the same rate
that is currently paid for a YMP CPU hour. I would happily accept it.

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

There has been at least one posting asking how well the IBM 6000 series does
on vectorizable code. I have even gotten some mail accusing me of simply
being careless and making a drastic mistake on performance measurements.

The performance numbers I posted were for two "real full blown application
codes" whose performance results cannot be "tricked" by a clever
compiler. These codes ran full size problems and produced the right answers.
The network simulator SIM is something that computer scientists who are
interested in the simulation of a scalable packet switch might run, and it
does no floating point. The Monte Carlo is something that a physicist
interested in the simulation of a nuclear reactor might run (it does a lot
of floating point in the form of exp(), log(), sin(), cos(), with lots
of conditional branching). These were both scalar codes, and extreme efforts
were made to vectorize both of them, with poor performance improvements
(50% or so) on traditional supercomputers. The vectorized versions were not
readable or maintainable, and were backed away from given the poor
performance improvement.

I think that everyone understands that vectorizable code is the
last class of codes which will fall prey to the Killer Micros,
given that it is the class of codes which traditional supercomputers
were optimized for. I think that Killer Micros, which have clearly
taken control of scalar workloads by delivering the same performance
at 1/100th the price, will eventually take control of vectorizable
workloads as well. Many will say that you have to have "real memory
bandwidth" to do this, and I agree with this statement. Real memory
bandwidth is the next step for microprocessors and the internal design
of memory chips can be modified to deliver the required bandwidth.


So, where do the Killer Micros stand at the moment on vector codes?

The Livermore Fortran Kernels data, of which the majority are
vectorizable with only 5 out of 24 kernels being fundamentally
scalar, provides a good means of examining this issue. In the
past, if you examined the performance in MFLOPS on the 24 LFK
tests for either minicomputers or microprocessors,
you found that the performance is a rather uninteresting flat
function of the test index. For supercomputers which are
vectorizing many of the vectorizable loops, you see spreads in
performance of more than one order of magnitude and sometimes
close to two. This spread in performance is characteristic
of a highly pipelined architecture with a compiler which
exploits the pipelines, and PISS POOR SCALAR PERFORMANCE INDUCED
BY THOSE SAME LONG PIPELINES AND LONG MEMORY BANK BUSY TIMES.

If you examine the LFK data for the new IBM 6000 chip set,
for the fastest of the lot, you find that the fastest of the
LFK tests runs at 36 MFLOPS and the slowest at 1.7 MFLOPS. This
is very characteristic of a machine which is exploiting
pipelines and multiple functional units well. The geometric
mean of the LFK data, which is a good predictor of the average
LLNL workload, shows the IBM 6000 series running at 1/3 of the
performance of the YMP. The arithmetic mean of the LFK data,
which is dominated by the large number of much more highly
performing vectorizable LFK tests, shows the IBM 6000 series
running at 1/5 the performance of the YMP, per CPU.
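
To see why the two means diverge, here is a quick numeric check in C;
the per-kernel rates below are invented for illustration and are not
actual LFK results:

    /* Geometric vs. arithmetic mean of per-kernel MFLOPS rates.
     * The rates are hypothetical: a few fast vectorized kernels
     * and a tail of slow scalar ones, as on a vector machine. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double rate[] = { 36.0, 30.0, 25.0, 20.0, 15.0, 12.0, 10.0, 8.0,
                          6.0, 5.0, 4.0, 4.0, 3.0, 3.0, 2.5, 2.5,
                          2.0, 2.0, 2.0, 1.9, 1.8, 1.8, 1.7, 1.7 };
        int i, n = sizeof(rate) / sizeof(rate[0]);
        double sum = 0.0, logsum = 0.0;

        for (i = 0; i < n; i++) {
            sum += rate[i];
            logsum += log(rate[i]);
        }
        printf("arithmetic mean: %.1f MFLOPS\n", sum / n);
        printf("geometric mean:  %.1f MFLOPS\n", exp(logsum / n));
        return 0;
    }

The arithmetic mean is pulled up by the few fast vectorizable kernels,
while the geometric mean is dragged toward the slow scalar tail, which
is why it tracks an average workload better.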

For the two crufty scalar codes I posted, the network simulator and
the Monte Carlo, the IBM 6000 series comes quite close to the
performance of the YMP CPU, surpassing it in the case of the network
simulator. I come to this conclusion assuming a 20% performance
improvement between the 530 and the 540, derived from the clock speed
bump from 25 to 30 MHz, which puts the performance of the Monte Carlo
code at 32 percent faster than the XMP. I have run the Monte Carlo
code on the YMP and it is 50% faster than the XMP. On the slower IBM
530, the network simulator code is 50% faster than the XMP. The same
speed ratio between the XMP and YMP occurs for the network simulator,
50%. The faster clock speed of the 540 should put it over the top.
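
The ratio arithmetic above is easy to lose track of, so here is a
sketch of it in C; the 1.10 figure for the 530 is inferred from the
numbers in this paragraph (1.32 divided by the 1.2 clock scaling),
not an independent measurement:

    /* Ratio chain for the Monte Carlo code. The 25 -> 30 MHz clock
     * scaling (factor 1.2) is the assumption stated in the text. */
    #include <stdio.h>

    int main(void)
    {
        double clock_scale   = 30.0 / 25.0;         /* 540 clock / 530 clock */
        double mc_530_vs_xmp = 1.32 / clock_scale;  /* inferred: ~1.10 */
        double mc_540_vs_xmp = mc_530_vs_xmp * clock_scale; /* ~1.32 */
        double ymp_vs_xmp    = 1.50;                /* YMP 50% faster than XMP */

        printf("540 vs XMP: %.2f\n", mc_540_vs_xmp);
        printf("540 vs YMP: %.2f\n", mc_540_vs_xmp / ymp_vs_xmp); /* ~0.88 */
        return 0;
    }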

For scalar codes, the situation is clear. You are going to go for the 13K
dollar solution (we all saw the ad in the Wall Street Journal, didn't we?)
and not the ~3 million dollar (per CPU) solution. For vectorizable codes,
which way you go depends on whether you are buying capability at any price
or are buying cost/performance. Some institutions need the former, some
need the latter. Most of the institutions which needed the former last year
are experiencing budget cuts this year; note the lack of a smiley here.

Soon, the performance of Killer Micro powered systems will come close to
matching the performance of traditional supercomputers on all but the very
rarest "long vector" codes. Traditional supercomputers interleave hundreds,
if not thousands of memory banks (by this I mean independent arrays of
memory chips 64 bits wide), and to get speed you have to keep them all busy
with well organized long vector accesses. There are diminishing returns
here: as you shrink the clock period you must increase the number of memory
banks and run longer vectors to get good performance. The clock speeds
of traditional supercomputers have already reached the point of diminishing
returns for average workloads; the latest models only shine on highly
vectorized workloads which process long vectors.
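
The bank-count arithmetic behind this is simple: to stream one word per
CPU clock from banks that stay busy for some fixed time per access, you
need at least bank_busy_time / clock_period independent banks, so the
bank count grows as the clock period shrinks. A sketch in C, using a
120 ns bank busy time as an illustrative assumption, not any vendor's
spec:

    /* Minimum banks needed to sustain one word per clock from banks
     * that stay busy t_bank ns per access. Numbers are illustrative. */
    #include <stdio.h>

    int main(void)
    {
        double t_bank = 120.0;                       /* assumed busy time, ns */
        double t_clock[] = { 30.0, 15.0, 8.0, 4.0 }; /* CPU cycle times, ns */
        int i;

        for (i = 0; i < 4; i++)
            printf("%5.1f ns clock: at least %3.0f banks\n",
                   t_clock[i], t_bank / t_clock[i]);
        return 0;
    }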

Killer Micros are not stalling on the vectorization issue; having conquered
the domain of scalar codes, they are proceeding into vector territory.
Yes, you need main memory bandwidth for this. The Intel i860 went after
main memory bandwidth by moving from a 32 bit to a 64 bit data bus, the
IBM 6000 series took another step by switching to 128 bits on their higher
performance models. You can't go much further with this strategy because it
gets expensive.
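
The bus-width arithmetic is straightforward: peak bandwidth is bus width
times transfer rate. A sketch, with an assumed 25 MHz transfer rate for
illustration only:

    /* Peak memory bandwidth = (bus width in bytes) x (transfer rate).
     * The 25 MHz rate is an assumption, not a vendor figure. */
    #include <stdio.h>

    int main(void)
    {
        int width[] = { 32, 64, 128 };   /* data bus widths, bits */
        double mhz = 25.0;
        int i;

        for (i = 0; i < 3; i++)
            printf("%3d-bit bus: %4.0f MB/s peak\n",
                   width[i], width[i] / 8.0 * mhz);
        return 0;
    }

Doubling the width doubles the pins and the traces, which is why the
strategy runs out of steam.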

The next step is to interleave directly on the memory chip. The last time
I sneaked in on a campfire of Killer Micros plotting world conquest, they
were discussing this very situation. Technically, it's a piece of cake.
The conclusion seemed to be that they only had to convince their marketing
departments of how many of these special memory chips they could sell.
The Killer Micros see the long term gains, the marketing departments only
see short term profits...

NO ONE WILL SURVIVE THE ATTACK OF THE KILLER MICROS!


bro...@maddog.llnl.gov, bro...@maddog.uucp

Ed Hamrick

Mar 14, 1990, 7:24:35 PM
Mr. Brooks,

I've greatly enjoyed the articles you've written regarding the performance
of "Killer Micros" relative to larger, more costly machines. Even though
there are exceptions to any general rule, I agree with much of what you've
been saying, but must disagree with the overall conclusion. The key
generalizations that I agree with are:

(1) The price/performance ratio of a wide range of applications is better
on smaller machines than larger machines. This applies primarily
to applications dominated by scalar code that aren't amenable to
vectorization or massive parallelism. This is particularly applicable
if applications have a locality of reference that can make effective
use of high-speed cache.

(2) The price per megabyte of disk storage is better for lower-speed and
lower-density disk drives.

(3) The price per megabyte of memory is better when memory is slower and
interleaved less.

Many people will argue with all of these generalizations by citing specific
counter-examples, but I believe reasonable people would agree that these
generalizations have some merit. I also believe that these generalizations
have been valid only in the past five years, and that there have been times
in the past that the opposite has been true.

The conclusion you've reached, and that I must admit I have been tempted to
reach myself over the past few years, is that "No one will survive the
attack of the killer micros!". As a number of people have pointed out, there
are many factors counterbalancing the price/performance advantage of
smaller systems. One of the key counter-arguments that a number of people have
made is that machines ought to be judged on price per productivity improvement.
A faster machine gives people higher productivity because of less time
wasted waiting for jobs, and more design cycles that can be performed
in a given time. Anything that decreases time-to-market or improves
product quality is worth intrinsically more. This is one of the traditional
justifications for supercomputers. You noted that a Cray CPU-hour costs
significantly more than people earn per hour, but this doesn't take
into account that companies can significantly improve their time-to-market
and product quality with faster machines, albeit machines that cost more
per unit of useful work. This may not matter in some application areas
such as computational physics, but a company like Boeing or McDonnell
Douglas can lose billions of dollars if they are six months late with
getting new products designed. There are also significant cost multipliers
involved in producing a better product - for instance a small increase
in airplane fuel efficiency can result in a significantly larger market
share than your competition's. Some people have noted that some companies
are willing to pay almost anything to get the fastest computers, and this
is one of the underlying economic reasons for this willingness.

Big companies and government labs tend to use this rationale to justify
procuring computers based on single-job performance. However, when you
visit these facilities, generally large Cray sites, the machines are generally
used as large timesharing facilities. People are finding that machines that
were procured to run large jobs in hours are instead running small jobs in
days. Further inflaming the problem of having 500 users on a supercomputer is
the tendency of these companies and labs to make the use of these machines
"free". (Just in passing I'd like to note that the direct result of making
CPU time on Crays "free" is that 90% of the CPU cycles get used by 10% of the
users, which can hurt time-to-market and reduce productivity. Charging for
CPU time causes a vicious feedback loop where fewer users cause higher costs
which in turn cause fewer users, etc. The Share Scheduler fixes much of this.)

I've felt for some time that there are fundamental reasons that large
computer system makers are still surviving, and in the case of CONVEX, growing
and prospering. Even though the argument is made that faster machines improve
time-to-market, they are almost always used as timesharing systems, often
giving no better job turn-around time than workstations. Some companies are
surviving because of the immense base of existing applications. Some companies
prosper because of good customer service, some by finding vertical market
segments to dominate. Every company has unique, non-architectural ways of
marketing products that may not have the best price/performance ratio.

However, I believe that there are several key strategic reasons that larger,
centralized/departmentalized computer systems will in the long run prevail
over the killer micros:

(1) A single computer user usually consumes CPU cycles irregularly. A user
often will have short periods of intense computer activity, followed by
long periods of low utilization. I've analyzed almost a year's worth of
data from a typical engineering computer system (more than 500,000 data
samples), and have seen that the number of jobs an individual (or group
of individuals) runs at a time approximates a Poisson distribution.
This matches what one would expect intuitively - that even heavily
loaded systems have some percentage of their CPU cycles that go to the
null process. If J is the average number of jobs a person runs at any
given time, then EXP(-J) is the percentage of wasted CPU cycles on a
single-user system. For instance, if someone is performing a task where
they are running 4 jobs at a time on average (sometimes 6, sometimes 2),
then the workstation they are using will have EXP(-4) or 2% wasted cycles.
Similarly, if there is an average of 1 job at a time, there will be 36%
wasted cycles, and 0.25 jobs results in 78% wasted cycles (a numeric
check of these figures appears after this list). I would
maintain that the average number of runnable jobs on workstations is less
than 0.1, resulting in greater than 90% wasted CPU cycles. This statistical
character of workloads provides strong economic incentives to people to
pool their resources and purchase departmentalized/centralized computer
resources. A group of 20 people using a single machine will result in
14% idle CPU time compared with 90% idle CPU time if they use 20
workstations (assuming each user runs an average of 0.1 jobs at a time).
This gives a factor of 10 advantage in usable price/performance to the
centralized/departmentalized machine.

(2) The argument for the centralization/departmentalization of disk resources
closely parallels the argument for CPU resources. If each user is given
dedicated disks on workstations, then significant amounts of total disk
space and total disk bandwidth goes to waste. There is significant
economic incentive to centralizing/departmentalizing disk storage for
this reason, as well as other reasons relating to data security and
data archiving.

(3) I would maintain that the amount of memory needed by a job is roughly
proportional to the amount of CPU time needed to run the job. This is
a very imprecise correlation, but is true to some degree across a wide
range of problems. I would also maintain that if an N-Megabyte program
takes M seconds to run in N megabytes of physical memory, then it will
take approximately 6*M seconds to run in N/2 megabytes of physical memory.
This factor of 6 performance degradation holds true for a wide range of
large memory application programs. This gives a strong economic incentive
to users to centralize/departmentalize their memory, and run large memory
jobs in series. For instance, assume two workstation users each have
64 MBytes of memory and need to run 128 MByte jobs. Assume these jobs
take 12 hours apiece when run in 64 MBytes. If the two workstation users
put all 128 MBytes of memory on one workstation, and junked the second
workstation, they could get both jobs done in 4 hours (2 hours per job)
by running the two jobs in series on the large-memory workstation. There
is an additional economic incentive to centralizing memory that comes from
the statistical nature of memory utilization by a group of users. Using
similar arguments to (1) above, you can easily show that a computing
architecture with centralized/departmentalized high-speed memory is much
more cost effective than distributing memory across multiple workstations.
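
The idle-fraction figures in point (1) are easy to verify numerically;
a minimal sketch in C, using the Poisson model described above:

    /* Under a Poisson model with an average of J runnable jobs,
     * exp(-J) is the fraction of time the machine has nothing to run. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double jobs[] = { 4.0, 1.0, 0.25, 0.1, 20 * 0.1 };
        const char *label[] = {
            "4 jobs at a time (busy user)",
            "1 job at a time",
            "0.25 jobs at a time",
            "0.1 jobs (typical workstation)",
            "20 users pooled, 0.1 jobs each"
        };
        int i;

        for (i = 0; i < 5; i++)
            printf("%-32s %5.1f%% wasted cycles\n",
                   label[i], 100.0 * exp(-jobs[i]));
        return 0;
    }

The output reproduces the figures quoted above: roughly 2%, 36%, 78%,
and over 90% wasted cycles, falling to about 14% for the pooled group.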

Obviously, there is much more involved in selecting the optimal computing
architecture for a given workload. Just as I disagree with you that simple
measures of price/performance will predict the success or demise of a product,
many people would probably maintain that my arguments about centralizing
compute/disk/memory resources are also simplistic. There are many counter
arguments favoring distributed computing solutions, and many more arguments
favoring centralization. The main point I wanted to make in this note is
that simple price/performance measures are poor predictors of the long-term
viability of a company's products. I'm sure that most readers of this
newsgroup could post a long list of companies that had/have excellent
price/performance but that are/will be out of business.

Regards,
Ed Hamrick (ham...@convex.com)
Area Systems Engineer
CONVEX Computer Corporation

Wm. Scott `Spot' Draves

Mar 15, 1990, 3:38:57 AM
In article <100...@convex.convex.com> ham...@convex1.convex.com (Ed Hamrick) writes:

...

> However, I believe that there are several key strategic reasons that larger,
> centralized/departmentalized computer systems will in the long run prevail
> over the killer micros:
>
> (1) A single computer user usually consumes CPU cycles irregularly. A user
> often will have short periods of intense computer activity, followed by
> long periods of low utilization.

[ personal workstations' CPU's are underutilized
compared to centralized CPUs ]

This is true today, but I think it will change. Some (many?)
applications can be distributed over a network of workstations. With
the right software this can be nearly transparent to both the person
getting the work done, and to those whose workstation's cycles are
being "borrowed".

...



Scott Draves Space... The Final Frontier
w...@cs.brown.edu
uunet!brunix!wsd
Box 2555 Brown U Prov RI 02912

Eugene Brooks

Mar 17, 1990, 2:06:51 AM
ham...@convex.com (Ed Hamrick) writes a long article discussing the problems
of memory and disk resource distribution and low processor utilization in
"single user systems."

I hope that no one took my articles as implying that I think that
single user systems are a good thing; I agree with Ed's position completely.
I have utilization data for a large population of single user workstations
at LLNL, on the order of 300 workstations, and the data is so compelling
with regard to the "utilization argument" that I have been requested not
to distribute it. Companies with a large population of workstations
should use the "rup" command to collect similar data, first sitting down
before looking at the results. You will be completely shocked to see how
low the processor utilization of single user workstations is. The
small size of the utilization factor completely negates the cost performance
edge of the Killer Micro inside it. This is not, however, an argument against
the Killer Micros themselves. It is an argument against single user workstations
that spend almost ALL their time in the kernel idle loop, or the X screen lock
display program as is often the case.

Computers are best utilized as shared resources, your Killer Micros should
be many to a box and sitting in the computer room where the fan noise does
not drive you nuts. This is where I keep MY Killer Micros.

The sentence I have often used, "No one will survive the attack of the
Killer Micros," is not to be misinterpreted as "No one will survive the
attack of the Killer Single User WorkStations." The single user workstations
are indeed Killers, but they are essentially wasted computer resources.
Corporate America will eventually catch on to this and switch to
X display stations and efficiently shared computer resources.

To use the "efficient utilization argument" to support the notion that
low volume custom processor architectures might possibly survive the
attack of the Killer Micros is pretty foolish, however. Ed, would you
care to run the network simulator and Monte Carlo code I posted results
of on the Convex C210, and post the results to this group? I won't
ruin the surprise by telling you how it is going to come out...

Perhaps we can get the fellows at Alliant to do the same with their new
28 processor Killer Micro powered machine. That i860 is definitely a
Killer Micro. After we compare single CPU performances, perhaps we could
then run the MIMD parallel versions on the Convex C240 and the Alliant 28
processor Killer Micro powered box. Yes, there are MIMD parallel versions
of both codes which could probably be made to run on both machines.

Steve Jay

Mar 17, 1990, 9:35:23 PM
bro...@maddog.llnl.gov (Eugene Brooks) writes:

>The
>small size of the utilization factor completely negates the cost performance
>edge of the Killer Micro inside it. This is not, however, an argument against
>the Killer Micros themselves. It is an argument against single user workstations
>that spend almost ALL their time in the kernel idle loop, or the X screen lock
>display program as is often the case.

>Computers are best utilized as shared resources, your Killer Micros should
>be many to a box and sitting in the computer room where the fan noise does
>not drive you nuts. This is where I keep MY Killer Micros.

If someone measured the time that I spend using the stapler, tape
dispenser, or pocket calculator that I have in my office, they'd
find that each sits idle 99.9...% of the time. Does this mean that
I shouldn't have exclusive use of these items, and I should have to
go to some central facility whenever I want to staple, tape, or
calculate?

Obviously, single user work stations are not yet so cheap as to be in
the same category as staplers. But, a $20,000 workstation dedicated
to a > $100,000/year engineer or scientist doesn't seem that outrageous.
The argument that an idle CPU is a wasted CPU becomes less and less
convincing as the cost comes down. An idle CPU that I can use whenever
I want, which is then 100% dedicated to me when I want it, could
be the way to optimize MY time. Improving people productivity is the
name of the game, not improving computer utilization.

I'd be happy to have my CPU (with its maddening fan) in a remote location,
where it could share power supplies, cooling, and disk space. But I still
want it to be mine.

Steve Jay
s...@ultra.com ...ames!ultra!shj
Ultra Network Technologies / 101 Dagget Drive / San Jose, CA 95134 / USA
(408) 922-0100 x130 "Home of the 1 Gigabit/Second network"

Eugene Brooks

Mar 18, 1990, 2:50:16 PM
In article <1990Mar18....@ultra.com> s...@ultra.com (Steve Jay) writes:
>If someone measured the time that I spend using the stapler, tape
>dispenser, or pocket calculator that I have in my office, they'd
>find that each sits idle 99.9...% of the time. Does this mean that
>I shouldn't have exclusive use of these items, and I should have to
>go to some central facility whenever I want to staple, tape, or
>calculate?
The analogy people use here is comparing their car to their personal
computer. The price tags are even comparable in this case. The
argument does not hold water. The car can't be switched between
users in milliseconds. The computer is an entirely different animal.
You CAN have exclusive access to a CPU in a suitably parallel resource
composed of Killer Micros, yet efficiently share it with others.

By sharing your computer among a small group of people, large enough
to bring the utilization level up to perhaps 50%, you end up with
more computer, not less. I do think that you should have your own
X display station, however, since it cannot be switched between users in
a millisecond or two.


>I'd be happy to have my CPU (with its maddening fan) in a remote location,
>where it could share power supplies, cooling, and disk space. But I still
>want it to be mine.

We are considering engraving users' names on the cpu boards of our massively
parallel Killer Micro powered machine arriving here. It will give them
that good feeling of ownership. Last time I checked there were more processors
than users. I think that we might also ask them to make the LTO payments for
the machine in return for this feeling of ownership...


bro...@maddog.llnl.gov, bro...@maddog.uucp

Stan Lackey

Mar 19, 1990, 10:12:16 AM
In article <52...@lll-winken.LLNL.GOV> bro...@maddog.llnl.gov (Eugene Brooks) writes:
>In article <1990Mar18....@ultra.com> s...@ultra.com (Steve Jay) writes:
>>If someone measured the time that I spend using the stapler, tape
>>dispenser, or pocket calculator that I have in my office, they'd
>>find that each sits idle 99.9...% of the time. [...]
>[...] The car can't be switched between
>users in milliseconds. The computer is an entirely different animal.
>You CAN have exclusive access to a CPU in a suitably parallel resource
>composed of Killer Micros, yet efficiently share it with others.
>
>By sharing your computer among a small group of people, large enough
>to bring the utilization level up to perhaps 50%, you end up with
>more computer, not less.

Interesting discussion going on here. I think though that the choice of
computing style has to be based on the workload. The situation of a small
group (10?) running large batch style jobs vs. a large group (>25?) running
lots of small interactive jobs seems to inherently fit different models.
The model of, say, a publishing company with 25 writers all using desktop
publishing seems to be more suited to distributed workstations: highly
interactive, compute bound (constant reformatting, spellcheck, etc.); if
this workload were centralized, it seems far more horsepower would be
necessary to deal with the overheads of sharing and interconnect management,
to get the same response speed.
-Stan

Wm E Davidsen Jr

Mar 19, 1990, 11:16:56 AM
In article <52...@lll-winken.LLNL.GOV> bro...@maddog.llnl.gov (Eugene Brooks) writes:

| By sharing your computer among a small group of people, large enough
| to bring the utilization level up to perhaps 50%, you end up with
| more computer, not less. I do think that you should have your own
| X display station, however, this can not be switched between users in
| a millisecond or two.

The problem with sharing a computer is that someone gets to be
administrator. And that means making decisions about software and o/s
versions which will impact users. One of the nicest things about a system
of your own, even a small one, is that backups happen when you want,
upgrades happen when you want (and more importantly don't happen when
you don't want), and the configuration is dedicated without compromise
to the productivity of one user.

Work which must be shared can be on shared machines, and should be.
But work which has a well defined interface can be done on a machine
set up to make its one user productive. For example: my boss doesn't
care what editor I use to write a report, what version of the o/s, etc.
Nor what spreadsheet or other tool I use to bash the numbers. One person
uses 1-2-3 and Word in DOS, I use MicroEMACS and an awk script, someone
else uses vi and sc.

A shared machine is always a compromise. The administrator does not
want fifteen editors, ten spreadsheets, etc., to keep working. We hit the
problem frequently that under VMS one user needs a new o/s version to
run one thing, while another user has no budget to upgrade another
package to the new o/s, or that the upgrade just isn't available.

Workstations and central computers both perform valuable functions in
terms of productivity, and I don't think that any central system or
network will replace the workstation, or vice versa.

--
bill davidsen (davi...@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
"Stupidity, like virtue, is its own reward" -me

Jon

Mar 19, 1990, 9:57:11 AM
bro...@maddog.llnl.gov (Eugene Brooks) writes:

>You will be completely shocked to see how
>low the processor utilization of single user workstations is. The
>small size of the utilization factor completely negates the cost performance
>edge of the Killer Micro inside it.

This is quite correct, and therefore we should stop using personal
automobiles, too. Instead we should use taxis, car pools, and
other forms of better sharing the same basic hardware. This will
increase the <10% utilization of most cars.

OK, ob. smiley. Yes, we like having our own cars, and we like having
our own local source of computation, and we're going to continue
to choose this whenever we have a choice. It's a fact of life.

The point that you can "switch between processors in milliseconds" is
quite correct and equally compelling when applied on the other side of
the argument. When the individual grants use of his processor to
others, more generally when individuals share their processors with
each other, they make use of millisecond-flexible sharing, but retain
control of local resources.

No question about it, you waste a lot of resources by keeping them
isolated and idle. The point here is that this isn't a technology
decision, it's a policy decision. The ability for the individual
to have 100% of his local computational power available to him
on demand is a policy widely favored by individuals. The ability
to get the most computation per dollar is a policy widely favored
by central planners.

No one argues that these policies are in any way compatible. They both
exist, and each drives a different kind of purchase decision. Neither
has anything to do with how you build technology. Both have much to do
with how you buy it, and rather little to do with computer
architecture, at this late date.

-- Jon
--
Jonathan Krueger jkru...@dtic.dla.mil uunet!dgis!jkrueger
The Philip Morris Companies, Inc: without question the strongest
and best argument for an anti-flag-waving amendment.

Andy Glew

Mar 19, 1990, 2:23:32 PM
..> Single user workstations vs. centralized computing

I want to buy a killer micro single user workstation, but that's
because I want to *own* it myself, and not have it taken away from me
when I change jobs/universities etc. As for my day to day work I
don't mind sharing cycles, as long as I never experience any slowdown.
Fair-share schedulers are a must for any centralized computing
facility that expects me to pay for my portion of the system.

I am much more interested in single user dismountable mass storage in
my office than I am in a single user workstation. Give me a floptical
disk on my desktop, connected to that centralized compute server!
Plus a color laserprinter and scanner on my desktop. Give me anything
that I have to get up and move down the hall to use on my desktop (and
a new generation of flabby, unexercised, computer users is born).

--
Andy Glew, ag...@uiuc.edu

Peter da Silva

Mar 19, 1990, 12:31:47 PM
Even if your MIPS in the workstation are wasted, it might still be worthwhile
to put them there. It all depends on how much the MIPS cost. Certainly if you
have a 20 MIPS processor in a $2000 box, it really doesn't matter that you
only need a 0.5 MIPS processor in a $1800 box... the marginal cost of the
extra 19.5 MIPS is low enough that you might as well get them. If they go
to waste 95% of the time, who cares?

Too bad you can't run NeWS on it and really benefit from those extra server
CPU cycles...
--
_--_|\ `-_-' Peter da Silva. +1 713 274 5180. <pe...@ficc.uu.net>.
/ \ 'U`
\_.--._/
v

Robert D. Silverman

Mar 19, 1990, 2:07:51 PM
In article <7...@dgis.dtic.dla.mil> jkru...@dgis.dtic.dla.mil (Jon) writes:

:bro...@maddog.llnl.gov (Eugene Brooks) writes:
:
:>You will be completely shocked to see how
:>low the processor utilization of single user workstations is. The
:>small size of the utilization factor completely negates the cost performance
:>edge of the Killer Micro inside it.
:
:This is quite correct, and therefore we should stop using personal
:automobiles, too. Instead we should use taxis, car pools, and
:other forms of better sharing the same basic hardware. This will
:increase the <10% utilization of most cars.
:
:OK, ob. smiley. Yes, we like having our own cars, and we like having
:our own local source of computation, and we're going to continue
:to choose this whenever we have a choice. It's a fact of life.

I invite everyone to read the following paper:

Robert Silverman & Sidney Stuart
"A Network Batching System for Parallel Processing"
Software Practice & Experience Vol 19, #12, pp. 1163-1174

We describe a system that allowed us to soak up all the excess
processing time on a SUN network, while not impairing the interactive
use of workstations.

--
Bob Silverman
#include <std.disclaimer>
Mitre Corporation, Bedford, MA 01730
"You can lead a horse's ass to knowledge, but you can't make him think"

bruce.f.wong

Mar 19, 1990, 10:34:09 AM
In article <1990Mar18....@ultra.com> s...@ultra.com (Steve Jay) writes:
>bro...@maddog.llnl.gov (Eugene Brooks) writes:
...

>>Computers are best utilized as shared resources, your Killer Micros should
>>be many to a box and sitting in the computer room where the fan noise does
>>not drive you nuts. This is where I keep MY Killer Micros.
...

>If someone measured the time that I spend using the stapler, tape
>dispenser, or pocket calculator that I have in my office, they'd
...

>Obviously, single user work stations are not yet so cheap as to be in
>the same category as staplers. But, a $20,000 workstation dedicated
>to a > $100,000/year engineer or scientist doesn't seem that outrageous.
>The argument that an idle CPU is a wasted CPU becomes less and less
>convincing as the cost comes down. An idle CPU that I can use when-
>ever I want, which is then 100% dedicated to me when I want it, could
>be the way to optimize MY time. Improving people productivity is the
>name of the game, not improving computer utilization.
...

>Ultra Network Technologies / 101 Dagget Drive / San Jose, CA 95134 / USA

Your company, "home of the 1 gigabit network", is making computers easier
to share. Sharing computing resources on a network should not be equated
to the bad old days of timesharing. The computing network can be
engineered to give 100% of the processing power a $100k scientist needs
to get the work done in the proper manner. When extra computing
resources are available (fellow $100k scientist stepped out to get a
coffee and jelly donut) they will get more than 100%.

Staplers and tape dispensers can't be pushed across a wire or fiber,
so sharing them is very inconvenient; but computing power can be. In almost
all situations it doesn't matter that the application ran on a machine
or machines that are located 1 kilometer away instead of a machine
sitting close enough for you to kick. The only pieces of equipment that
a computer user should be allowed to physically abuse are those that are
needed for interaction with the computing network: display, keyboard,
mouse; essentially human I/O devices. An X terminal fits the bill.
(Also, an X terminal will become obsolete at a slower rate than a
super-mini or killer micro; a business argument that I will not develop
here)

I think this is more of a psychological issue than a technical
or business issue. The attitudes that I encounter when I propose such
a sharing scheme can be summed as:
"You mean that someone will be using *MY* workstation!"
My reply:
"Calm down, you'll be using our computing network."

(There's also a case for mips envy: mine is -----er than yours.)

Finally, the cost argument doesn't negate the advantages of sharing,
it just makes sharing cheaper.

Note that SUN trumpets: ``The Network is the Computer.''
Note also that SUN is not offering X terminals like DEC, DG, MIPSco...
--
Bruce F. Wong ATT Bell Laboratories
att!iexist!bwong 200 Park Plaza, Rm 1B-232
708-713-5111 Naperville, Ill 60566-7050

Doug Mohney

Mar 19, 1990, 4:10:37 PM
In article <52...@lll-winken.LLNL.GOV>, bro...@maddog.llnl.gov (Eugene Brooks)
writes:

>The sentence I have often used, "No one will survive the attack of the
>Killer Micros," is not to be misinterpreted as "No one will survive the
>attach of the Killer Single User WorkStations." The single user workstations
>are indeed Killers, but they are essentially wasted computer resources.
>Corporate America will eventually catch on to this and switch to
>X display stations and efficiently shared computer resources.

By the time you buy a loaded X-terminal with 4MB of RAM and a large
screen, you might as well pay the $2K extra for a small swapping disk and a
full-blown CPU. The jury (at least MY jury) is still out on X-terminals.

If shared resources are such wonderful critters, how come multiuser Macs
aren't popular? Or '386es? You could conceivably hang multiple terminals
from a '386 or '486 box, but I haven't heard of people rushing out to do so.

Long live the revolution; I'll figure out what to do with all the MIPS later.
Solbourne is supposed to be coming out with a 40MIPS/10K workstation by
the end of the year. Batten down the hatches, folks; life is going to get
more, not less, interesting....

Steve Jay

Mar 19, 1990, 4:41:12 PM
bro...@maddog.llnl.gov (Eugene Brooks) writes:

>You CAN have exclusive access to a CPU in a suitably parallel resource
>composed of Killer Micros, yet efficiently share it with others.

Maybe you CAN do it (and I'm not sure you can, but that's a different
argument), but will your system administrator LET you do it?

-Steve
s...@ultra.com

Hugh LaMaster

Mar 19, 1990, 5:38:29 PM
In article <7...@dgis.dtic.dla.mil> jkru...@dgis.dtic.dla.mil (Jon) writes:
>bro...@maddog.llnl.gov (Eugene Brooks) writes:

<Various arguments deleted.>

Eugene Brooks' first argument in this thread, many months ago, was that
commodity micros are going to become the basic building blocks of *most*
systems; he supplied some rather humorous descriptions of how microprocessor
based systems are now as fast, or faster, than some of the fastest systems
based on specially designed CPUs: e.g. Cray.

Then, he added a correction stating that he was *not* arguing that desktop
*systems* were going to replace all other systems.

Various polemics followed :-)

****************************************************

The question: what is a more optimal *system*: a network of personal
workstations or fewer more centralized servers giving individual users
X terminals?

Answer: (Mine, of course): *it depends*. On the same campus, some
people are better served by one model of computing, some by another. In
my experience, it depends on a number of factors:

1) Availability on the existing staff of *experienced system administrators*.
If you already have someone, you have more choices. If you don't, usually
a more centralized system will serve you better, because someone else will
do all the system care and feeding. Someone who is an expert may be able
to take care of the job with only a little overhead; a novice may get
consumed by it.

2) Whether or not *time critical* work is done. Most people believe, and
rightly so, in my experience, that time critical data acquisition and
analysis *cannot* be done reliably on shared resources. It just doesn't
cut it to say that you are losing $10,000 an hour on an expensive test
because the shared compute resource is saturated.

3) The nature of the job, and what kind of *networking* resources it
demands. "MIPS", whatever they are, are almost free these days for most
general purpose computing. But, *systems* are not free. Systems require
memory, access to data, and may require moving data across various low
bandwidth and expensive channels, like networks. The cost of the
processors is a relatively low part of the overall system cost these
days for many systems. You have to look at entire picture. It may
be cheaper to keep many processors idle most of the time if it means
better *network utilization*, because networks are a more costly resource
than "MIPS" in today's typical computing environment.

4) The cost of coordinating work. People time costs money, and the more
distant someone is in an organization, the harder it is to share common
resources. This isn't "wasteful"; *time is money*. Anyone's job is to
get results. The cost of coordinating with a lot of people to use
hardware "efficiently" may be *much* greater than the cost of the "wasted"
hardware. This is particulary true if computing resources are on the
critical path in any project. Project management of large projects is
tough. Why let computer utilization become an issue: the goal is to
do the most with the least, and overall cost and efficiency are much
more important than the utilization of any one resource, including office
space, computers, etc. That being said, it usually isn't the issue for
most numerical simulations, where the speed of the hardware determines
what is possible, and efficient utilization of scarce resources, like
Cray memory and memory and I/O bandwidth is a day to day task.

********************************************************


So, I don't think there is one answer for what is most cost effective.
It all depends on the job at hand. That is why you need engineers, to
figure out how to get something done as cheaply as possible. You don't
need a "policy" to decide that desktop systems, file servers, or
supercomputers are best: you need to study the job at hand and figure out
how to do it best. Whoops: back to work :-)


********************************************************
********************************************************


Speaking of architectural issues, how is the BBN TC 2000 working out?
It should be a perfect example of Killer Micros in action. But,
I was rather surprised that the TC 2000 Butterfly switch is only 8 bits (!)
wide and only supports a maximum memory bandwidth of 2.4 GBytes/sec
for a 63 processor system. A Cray Y-MP has about 40 GBytes/sec of total
memory bandwidth, for reference.
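
Dividing those aggregate figures per processor makes the gap vivid; the
sketch below assumes the Y-MP number is spread over 8 CPUs, which the
figures above do not state explicitly:

    /* Per-CPU share of aggregate memory bandwidth, from the figures
     * quoted above. The 8-CPU count for the Y-MP is an assumption. */
    #include <stdio.h>

    int main(void)
    {
        printf("TC 2000: %6.1f MB/s per CPU\n", 2400.0 / 63);
        printf("Y-MP:    %6.0f MB/s per CPU\n", 40000.0 / 8);
        return 0;
    }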

Hugh LaMaster, M/S 233-9, UUCP ames!lamaster
NASA Ames Research Center ARPA lama...@ames.arc.nasa.gov
Moffett Field, CA 94035
Phone: (415)604-6117

Barry Shein

Mar 19, 1990, 5:06:17 PM

The argument is that because a personal computer/wkstn is idle 99% of
the time therefore it would be better shared.

The problem is that although this argument seems great in theory, in
practice it tends to have real problems.

When people share a computer things go wrong, the biggest thing that
goes wrong is that one cannot estimate, day to day, what to expect
from the shared computer.

One day it can look up 1000 queries in an hour, the next day you only
get 10 per hour (oops, someone out there is running a CPU hog.)

One day you have lots of disk space, the next day you can't save the
file you just edited AND YOU HAVE NO CONTROL over the situation (oh,
you might have political control, but instead of just cleaning up a
few files you now have to have a Computing Resources committee
meeting.)

One day someone "out there" tickles a bug that keeps crashing the damn
thing...shouldn't happen...that and 50c *might* get you a cuppa (hey,
ya know what happens, no one even *knows* for the first week who's
crashing the thing, certainly not the guilty party, it just keeps
going down.)

And we won't talk about the Animal Farm nature of shared computers,
all pigs are equal, some pigs are, however, more equal.

Control and predictability, real important.

Ever share a bathroom with a few people? Works in theory, hey, no one
uses the bathroom more than 30 minutes/day so sharing among 10 people
should be fine! Uh-huh. Ever share one bathroom among four or five
people? Don't work too well...

Computers are similar, sure, they're idle 99% of the time, except
never when you need them (like from 3-5PM, typically.)

Why do you think so many frantic hackers became night-owls?

Anyhow, a simple resource sharing argument is just that,
oversimplified. There certainly are resources that can be shared, but
it takes more thought to make it work right than is being presented.
Most sites can hardly put up with sharing a printer among several
people (a printer that's idle 90% of the time, I may add, but never
when you need it.)
--
-Barry Shein

Software Tool & Die | {xylogics,uunet}!world!bzs | b...@world.std.com
Purveyors to the Trade | Voice: 617-739-0202 | Login: 617-739-WRLD

Philip Machanick

Mar 19, 1990, 6:48:39 PM
In article <00933EBB...@KING.ENG.UMD.EDU>, sys...@KING.ENG.UMD.EDU
(Doug Mohney) writes:

> If shared resources are such wonderful critters, how come multiuser Macs
> aren't popular? Or '386es? You could conceivably hang multiple terminals
> from a '386 or '486 box, but I haven't heard of people rushing out to do so.

Predictable response time...This is also (one of the reasons, anyway) why
Apple does not support pre-emptive multi-tasking. I'm using a 16Mbyte
DECstation 3100 and despite the faster processor, it doesn't compare with
a 68030 Mac on user interface responsiveness. And the DECstation is hardly ever
used by other users. Moral of the story? A multi-tasking OS with virtual memory
etc. has its price. Of course, if you aren't doing much "interactive" stuff
(e.g., large-scale compiles or number crunching), the trade-offs are
different. I would go with a Mac as a user interface engine (scrap the
X-terminal idea), with a networked high-speed machine (or machines) to do the
number crunching, large-scale file system, database etc.

Philip Machanick
phi...@pescadero.stanford.edu

Samuel Fuller

Mar 19, 1990, 9:30:31 PM
Another consideration in this discussion of single user workstations
versus shared compute servers is memory. Which is better 16 Megabytes
of memory on 100 workstations or 1.6 Gigabytes on one server allocated
as needed to 100 X users?

--
---------------------------------------------------------------------------
Sam Fuller / Amdahl System Performance Architecture

I speak for myself, from the brown hills of San Jose.

UUCP: {ames,decwrl,uunet}!amdahl!sbf10 | USPS: 1250 E. Arques Ave (M/S 139)
INTERNET: sb...@amdahl.com | P.O. Box 3470
PHONE: (408) 746-8927 | Sunnyvale, CA 94088-3470
---------------------------------------------------------------------------

Ray Loyzaga

Mar 20, 1990, 12:49:47 AM
Try answering this question: "What will be the most cost effective
solution for a company (given limited system administration resources)
which employs 10 skilled computer application users: 10 killer micros
of the ~10 MIPS class costing $25k each, or 1 or 2 killer RISCs of the
50-100 MIPS class with X-terminals?"
The upfront costs will be very close, the killer micros
being slightly cheaper.
E.g., 2 x RC6280 ~$300k, 10 X-terms $30k (total $330k); 10 Sparc1 $250k.
Remember you need to have enough disk and memory (32Mb)!
It is no use comparing 10 X-terminals on a 10 MIPS machine as opposed
to 10 10-MIPS workstations.

I think I would rather be on the RC6280's, they would have the benefit
of centralized backups, large memories, easier admin, better ability
to share resources and conduct group work.
Upgrades to memory/disk benefit all users, all the technical staff can
work on the few grunt boxes and they will be using the same resources
as the users. If they have really nice workstations you will find that
they will not maintain other workstations to the same level.
The users (probably engineers or similar) can concentrate on the tasks
they are being paid for rather than learning how to administer a Unix
machine.
The same argument goes for a teaching environment where the user does not
need to control the console of a system to get their work done, all they
want is the limited graphics bandwidth that a windowing terminal provides.

When the RC6280's run out of steam, you do your shopping, and buy next year's
version of somebody's super-scalar multi-cpu RISC box, and no-one
need know; the X-terms can stay. (What is everyone going to do with
their sun3's now that sparcstations are all the rage?
They would make great X-terms.)

Steve Jay

Mar 19, 1990, 10:13:01 PM
bw...@cbnewsc.ATT.COM (bruce.f.wong) writes:

>Your company, "home of the 1 gigabit network", is making computers easier
>to share.

Gee, glad someone noticed.

Steve Jay
s...@ultra.com ...ames!ultra!shj


Ultra Network Technologies / 101 Dagget Drive / San Jose, CA 95134 / USA

(408) 922-0100 x130 "Home of the 1 Gigabit/Second network"

P.S. This is neither an offer to sell nor a solicitation of an offer
to buy.....

Ed Hamrick

Mar 19, 1990, 11:30:16 PM
Mr. Brooks,

I read your recent article regarding killer micros with great interest.
I'd like to comment on a few of the points you made below:

> Computers are best utilized as shared resources, your Killer Micros should
> be many to a box and sitting in the computer room where the fan noise does
> not drive you nuts. This is where I keep MY Killer Micros.

I received a lot of mail regarding this very point, and you were one of the
few people who agreed with me. I'd like to qualify this point by saying that
too much centralization is inefficient also. A good rule of thumb is to
centralize to the point where 50% to 80% of the compute cycles are used.
Sharing at the departmental level also alleviates many of the problems of
corporate-wide centralization.

A much more interesting subject is the one you raise below:

> To use the "efficient utilization argument" to support the notion that
> low volume custom processor architectures might possibly survive the
> attack of the Killer Micros is pretty foolish, however. Ed, would you
> care to run the network simulator and Monte Carlo code I posted results
> of on the Convex C210, and post the results to this group? I won't
> ruin the surprise by telling you how it is going to come out...

I'd be happy to run these programs on a C210. I think you'd find that
the C210 does much better than the 25 MHz clock would otherwise lead
you to predict. However, most of CONVEX's customers purchase our
machines for more than the excellent scalar performance - a large
number of important scientific and engineering applications require
high speed vector performance along with large memory, 2 GByte virtual
address space, and high-speed I/O.

It would be interesting to see the performance of these scalar codes
on various architectures, relative to the clock speed of the machines
implementing these architectures, especially the Cray numbers.

The cost of processors is a very small part of the total cost of a
departmental compute server. How much do you think Alliant pays for
the 8 i860 chips in their low-end $500K product? The design of the
memory system is the dominant factor in system performance and system
cost for departmental supercomputers.

There is no question that all computer vendors will some day implement their
particular architectures in a small number of chips. The only question
is when. Making this decision too early might cause you to make premature
architectural trade-offs in order to reduce the number of gates needed
for today's chips. For example, the i860 uses reciprocal approximation
for the divide and square root functions. If space for more gates had
been available, the i860 might have been implemented differently.
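
For readers who have not seen the technique, here is a minimal sketch of
division by reciprocal approximation with Newton-Raphson refinement; the
seed and iteration count are illustrative (hardware typically seeds from
a small lookup table):

    /* Compute a/b as a * (1/b), refining a crude reciprocal guess with
     * Newton-Raphson: x' = x * (2 - b*x). Each step roughly doubles
     * the number of correct bits. */
    #include <stdio.h>

    int main(void)
    {
        double a = 355.0, b = 113.0;
        double x = 1.0 / 128.0;  /* crude seed; must satisfy 0 < x < 2/b */
        int i;

        for (i = 0; i < 6; i++)
            x = x * (2.0 - b * x);
        printf("approx a/b = %.15f\n", a * x);
        printf("exact  a/b = %.15f\n", a / b);
        return 0;
    }

The catch, and one reason a dedicated divide unit costs gates, is that
the refined quotient can differ from the correctly rounded result in the
last bit.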

> Perhaps we can get the fellows at Alliant to do the same with their new
> 28 processor Killer Micro powered machine. That i860 is definitely a
> Killer Micro. After we compare single CPU performances, perhaps we could
> then run the MIMD parallel versions on the Convex C240 and the Alliant 28
> processor Killer Micro powered box. Yes, there are MIMD parallel versions
> of both codes which could probably be made to run on both machines.

If you have a chance, ask the Alliant people what their Linpack 100x100
performance is, and see how well it scales up to 28 processors. Try to
get real runs, not estimates. I'd also be curious about main memory
bandwidth (not crossbar bandwidth). Information like number of banks,
number of bytes read per bank access, and bank cycle time would be
particularly interesting. It would also be useful to run the MIMD versions
of your codes on both the Alliant and the C240, and compare the parallel
speed-ups. It would also be revealing to run MIMD scalar codes (and vector
codes) that have a low cache hit rate on both the Alliant and CONVEX.
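
On the scaling question, even modest serial fractions cap the speedup
sharply, which is one reason to insist on real runs rather than
estimates. A sketch of Amdahl's law, with invented serial fractions:

    /* Amdahl's law: speedup = 1 / (s + (1-s)/p) for serial fraction s
     * on p processors. The fractions below are illustrative. */
    #include <stdio.h>

    int main(void)
    {
        double s[] = { 0.01, 0.05, 0.10 };
        int p = 28, i;

        for (i = 0; i < 3; i++)
            printf("serial %4.1f%%: speedup %4.1f out of %d\n",
                   100 * s[i], 1.0 / (s[i] + (1.0 - s[i]) / p), p);
        return 0;
    }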

As an aside, I was curious why you were asked not to release information
about the low utilization of the 300 workstations you mentioned. I can't think
of any reason Livermore wouldn't want this information publicly available,
since this is likely to be true of any organization using large numbers of
single user workstations. It would do a great service to people considering
lots of single-user killer micros to have this data publicly available.

Regards,
Ed Hamrick

Kian-Tat Lim

Mar 20, 1990, 12:58:06 PM
In article <100...@convex.convex.com>, hamrick@convex1 (Ed Hamrick) writes
[in reference to the Alliant FX/2800]:

>If you have a chance, ask the Alliant people what their Linpack 100x100
>performance is, and see how well it scales up to 28 processors. Try to
>get real runs, not estimates. I'd also be curious about main memory
>bandwidth (not crossbar bandwidth). Information like number of banks,
>number of bytes read per bank access, and bank cycle time would be
>particularly interesting.

From publicly-available Alliant literature:

MEMORY SYSTEM
Cache Size: 512KB per module, 4MB max
Processor to Cache Bandwidth: 1.28GB/sec [through the crossbar]
Maximum Physical Memory: 1GB
Interleaving: 16-way on a single board
Memory Bus Bandwidth: 640MB/sec

I believe that Alliant has run 100x100 Linpack on a 28 processor
system, but I'm not sure if that figure has been made public. It's
probably obvious that it won't be 28 times the raw i860 number (11
MFLOPS).
--
Kian-Tat Lim (k...@wagvax.caltech.edu, KTL @ CITCHEM.BITNET, GEnie: K.LIM1)
Perl is the Swiss Army chainsaw [of Unix programming]. -- Dave Platt's friend

Jack McClurg

Mar 20, 1990, 12:58:32 PM
jkru...@dgis.dtic.dla.mil (Jon) writes:

>>You will be completely shocked to see how
>>low the processor utilization of single user workstations is. The
>>small size of the utilization factor completely negates the cost performance
>>edge of the Killer Micro inside it.
>
>No question about it, you waste a lot of resources by keeping them
>isolated and idle. The point here is that this isn't a technology
>decision, it's a policy decision. The ability for the individual
>to have 100% of his local computational power available to him
>on demand is a policy widely favored by individuals. The ability
>to get the most computation per dollar is a policy widely favored
>by central planners.
>
>No one argues that these policies are in any way compatible. They both
>exist, and each drives a different kind of purchase decision. Neither
>has anything to do with how you build technology. Both have much to do
>with you how you buy it, and rather little to do with computer
>architecture, at this late date.
>
>-- Jon

I am about to break net protocol by mentioning a product, but I am sure that
there are other products from different vendors with similar functionality
which could be substituted for my company's product.

HP has a product called Task Broker which addresses the problem mentioned
above. It can select an appropriate machine to run a task based on which
machine on the network makes the highest bid to run the task. The mechanism
used to bid is very general and allows the owner of a workstation to have a
dedicated machine during working hours and make the machine available to others
at other times.
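
To make the idea concrete, here is a generic sketch of bid-based
dispatch in C; this illustrates the concept only, with invented names
and formulas, and is not Task Broker's actual interface:

    /* Each host computes a bid from its load and local policy; the
     * task goes to the highest bidder. All names here are invented. */
    #include <stdio.h>

    struct host {
        const char *name;
        double load;        /* current load average */
        int owner_present;  /* policy: bid low while the owner works */
    };

    static double bid(const struct host *h)
    {
        double b = 100.0 / (1.0 + h->load);
        return h->owner_present ? b * 0.1 : b;
    }

    int main(void)
    {
        struct host hosts[] = { { "wks1", 0.1, 1 },
                                { "wks2", 0.0, 0 },
                                { "wks3", 2.5, 0 } };
        int i, best = 0;

        for (i = 1; i < 3; i++)
            if (bid(&hosts[i]) > bid(&hosts[best]))
                best = i;
        printf("dispatch to %s (bid %.1f)\n",
               hosts[best].name, bid(&hosts[best]));
        return 0;
    }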

I only mention this because of Jon's and Eugene's statements above. I think
that you can have a very cost effective environment with distributed
workstations.

Jack McClurg

Ian Dall

Mar 20, 1990, 7:44:50 PM
In article <1990Mar19.2...@world.std.com> b...@world.std.com (Barry Shein) writes:
>
>The argument is that because a personal computer/wkstn is idle 99% of
>the time therefore it would be better shared.
>
>The problem is that although this argument seems great in theory, in
>practice it tends to have real problems.
>
>When people share a computer things go wrong, the biggest thing that
>goes wrong is that one cannot estimate, day to day, what to expect
>from the shared computer.
>
>One day it can look up 1000 queries in an hour, the next day you only
>get 10 per hour (oops, someone out there is running a CPU hog.)

It is not just the mips which are being shared, it is also the code.
With a central machine you only have to find memory for your kernel,
emacs etc. once. With N machines you have to find it N times. There are
significant technical advantages to a shared machine, as has already been
pointed out by others.

Some people complained about not being able to run the software they
want on a shared machine, but I don't buy that. So long as you have a
big enough disk quota you can run what you like. If you don't have
the disk space, buy more disks. I fail to see how attaching the disk
which would have gone with your workstation to the central machine could
fail to be a more effective way of getting the disk space for your
favorite application (at least you only need space for the extra
utilities, not the entire system). Instead of buying a workstation,
try offering to buy the central system a disk on the condition that
you are the only one with a quota on that disk.

The other problem seems to be that, sure people would like to be able
to use spare capacity, but they like to be guaranteed a certain
minimum number of cycles. Well, let me propose the guaranteed share
scheduler! I doubt if this is new but I'll propose it anyway! Suppose
the total number of cycles per unit time is T, the maximum number of
users is M and the number of active users is A. Every user should get
max(T/A, T/M) cycles per unit time. The T/M is guaranteed, the T/A -
T/M is the bonus for being shared. Of course, for the convenience to
approach that of your own workstation you need T/M to be reasonably
large. One killer micro's worth, maybe?
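
A minimal sketch of just the formula (deciding who counts as "active"
is the real scheduler's problem):

#include <stdio.h>

/* Guaranteed-share formula: T cycles per unit time, M possible
 * users, A active users (1 <= A <= M). Since A <= M, T/A is never
 * less than T/M; the guarantee is just the A == M worst case. */
static double share(double T, int M, int A)
{
    double guaranteed = T / M;
    double fair = T / A;
    return fair > guaranteed ? fair : guaranteed;
}

int main(void)
{
    /* e.g. 50 units of capacity among up to 50 users */
    printf("all 50 active: %4.1f each\n", share(50.0, 50, 50));
    printf("2 active:      %4.1f each\n", share(50.0, 50, 2));
    return 0;
}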

The final problem is the fascist system manager. I don't know how many
of these there really are, I suspect that most are only trying to make
best use of too limited resources and that buying more resources is
the best solution. If you really do have a fascist system manager,
then sack them.

>Why do you think so many frantic hackers became night-owls?

Maybe because they want *more than* 1 workstation worth of cpu? On a
single user machine they don't have that option.

--
Ian Dall life (n). A sexually transmitted disease which afflicts
some people more severely than others.

Craig Jackson drilex1

unread,
Mar 20, 1990, 8:57:02 AM3/20/90
to
Many of the participants in discussion of shared computers vs single-user
computers seem to believe that there is an absolute answer. However,
it's really a tradeoff involving the cost of redundant (possibly idle)
equipment, the cost of switching use, and the cost of being denied use
due to others using the equipment. "Expensive" things will be shared,
whether they are computers or pieces of lab equipment. Expensive can be
measured by comparing the cost of obtaining extra, possibly idle equipment
against the opportunity cost of waiting to use the shared equipment.
The ability of a computer to switch from one task to another tends
to reduce the opportunity cost of having to share it.

The amount of computing resources we have been willing to devote to a
single user has been rising for many years. At one time, even the I/O
devices were shared-use: printers, card readers, keypunches. Later,
it was found useful to devote a teletype to a user for extended periods
of time (time-sharing sessions). Still later, it became common to
give an 8080-class computer (buried in a terminal) to each user. Now,
we are discussing whether general-purpose workstations or X terminals
are the proper paradigm for users. Yet most X terminals have a good deal
more computing power than the single-user workstations of 5 years ago,
and the shared-use computers of 15 years ago. The difference is the cost.
--
Craig Jackson
dri...@drilex.dri.mgh.com
{bbn,axiom,redsox,atexnet,ka3ovk}!drilex!{dricej,dricejb}

Rob Peglar

unread,
Mar 20, 1990, 7:43:04 AM3/20/90
to
In article <7...@dgis.dtic.dla.mil>, jkru...@dgis.dtic.dla.mil (Jon) writes:
> bro...@maddog.llnl.gov (Eugene Brooks) writes:
>
> >You will be completely shocked to see how
> >low the processor utilization of single user work stations are. The
> >small size of the utilization factor completely negates the cost performance
> >edge of the Killer Micro inside it.
>
> This is quite correct, and therefore we should stop using personal
> automobiles, too. Instead we should use taxis, car pools, and
> other forms of better sharing the same basic hardware. This will
> increase the <10% utilization of most cars.

All depends on where you live. In New York or Tokyo, you should stop
using personal cars; in my experience, anyone in those two cities who
uses one is nuts. Take the public transport.

That aside, you're missing Eugene's point. I can't fathom your leap from
KM's to cars. The whole point here is that a powerful KM on someone's desk
is now becoming a wasted resource at times. This is evolution. One
wouldn't (never mind couldn't) have a CDC 6600 on the desktop; now we're
down to KM's. Again, it all depends on the situation. The scenario at
LLNL and many other R&D houses is this: lots (hundreds, thousands) of
engineers and programmers. It is economically infeasible to have each
person with a 10% utilized KM.

For those who can utilize at 50% or so, by all means buy it. However,
at 10%, it becomes much harder to justify single-user KM's.

>
> OK, ob. smiley. Yes, we like having our own cars, and we like having
> our own local source of computation, and we're going to continue
> to choose this whenever we have a choice. It's a fact of life.

True. But, many people don't have such choices.

(stuff deleted)

> decision, it's a policy decision. The ability for the individual
> to have 100% of his local computational power available to him
> on demand is a policy widely favored by individuals. The ability
> to get the most computation per dollar is a policy widely favored
> by central planners.
>
> No one argues that these policies are in any way compatible. They both
> exist, and each drives a different kind of purchase decision. Neither
> has anything to do with how you build technology. Both have much to do
> with how you buy it, and rather little to do with computer
> architecture, at this late date.

I disagree. Architecture is forcing the hand of many a buyer. KM's are
becoming easier and easier to justify (or should I say single-user WS :-)
But Eugene's point is still valid. Even just sharing with another person
("dual-user WS" ?) allows, at the asymptote, a factor of 2 savings for
the big R&D houses. To those with finite resources (dollars), this is
significant. Applying it across the board, however, is still a mistake,
one that Jon points out. One size does not fit all.

Rob
--
Rob Peglar Control Systems, Inc. 2675 Patton Rd., St. Paul MN 55113
...uunet!csinc!rpeglar 612-631-7800

The posting above does not necessarily represent the policies of my employer.

Stan Lackey

unread,
Mar 20, 1990, 10:04:40 AM3/20/90
to
In article <45...@ames.arc.nasa.gov> lama...@ames.arc.nasa.gov (Hugh LaMaster) writes:
>Speaking of architectural issues, how is the BBN TC 2000 working out?
>It should be a perfect example of Killer Micros in action. But,
>I was rather surprised that the TC 2000 Butterfly switch is only 8 bits (!)
>wide and only supports a maximum memory bandwidth of 2.4 GBytes/sec
>for a 63 processor system. A Cray Y-MP has about 40 GBytes/sec of total
>memory bandwidth, for reference.

The peak bandwidth of the 63-node TC2000 depends upon where you
measure it. The memory has a 3-level hierarchy: 1) cache, 2) local
memory, and 3) global memory. The Cray has no cache, but the 88000
chip set does; the appropriate place to measure would probably be at
the busses between the CPU chip and the cache chips. Combined
instruction cache and data cache busses peak at 160 MB/s, times
63 processors is 10 GB/s. Local memory speed is in the neighborhood
of 25 MB/s, times 63 or 1.5 GB/s. Global memory is 8 MB/s for an
aggregate of 500 MB/s. Your mileage will be somewhere between 10 GB/s
and 500 MB/s, depending upon cache hit rate and the mixture of
accesses between local and global memory.

The 8-bit switch path clocks at 38 MHz, so the raw bandwidth of the
media is 38 MB/s. Times 63 paths is peak media speed of 2.4 GB/s.
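
For anyone checking the arithmetic, the aggregate figures are simply
the per-node peaks quoted above scaled by 63 nodes:

#include <stdio.h>

int main(void)
{
    const int nodes = 63;
    const double cache_bus   = 160.0; /* MB/s per node, I+D cache busses */
    const double local_mem   =  25.0; /* MB/s per node */
    const double global_mem  =   8.0; /* MB/s per node, via the switch */
    const double switch_path =  38.0; /* MB/s, 8-bit path at 38 MHz */

    printf("cache:  %5.2f GB/s\n", nodes * cache_bus   / 1000.0);
    printf("local:  %5.2f GB/s\n", nodes * local_mem   / 1000.0);
    printf("global: %5.2f GB/s\n", nodes * global_mem  / 1000.0);
    printf("switch: %5.2f GB/s\n", nodes * switch_path / 1000.0);
    return 0;
}

This prints 10.08, 1.58, 0.50, and 2.39 GB/s, matching the rounded
numbers above.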

Not to mislead, the above describes more the performance model, with
the speed differential between local and global memory. The
programming model is a single globally addressed memory space.
-Stan

Wm E Davidsen Jr

unread,
Mar 20, 1990, 10:51:49 AM3/20/90
to
In article <00933EBB...@KING.ENG.UMD.EDU> sys...@KING.ENG.UMD.EDU (Doug Mohney) writes:

| If shared resources are such wonderful critters, how come multiuser Macs
| aren't popular? Or '386es? You could conceivably hang multiple terminals
| from a '386 or '486 box, but I haven't heard of people rushing out to do so.

You haven't been listening. A 386 box is about 3x the original VAX,
and will happily support 8 users with the response you would like, or 32
with response slightly better than the old VAX did under that load.
There are MANY of these systems sitting in offices running Xenix and
supporting 4-8 users.

Because they're so cheap people usually buy another rather than load
them to death, but a 386 will do reasonably well even with load average
up around six, providing you have enough memory.

Patrick H. McAllister

unread,
Mar 20, 1990, 6:32:32 AM3/20/90
to
It seems to me that an important consideration missing from the discussion up
to now is display I/O bandwidth. My workstation has its display controller
sitting in its backplane and can transfer graphics information to the display
at bus speeds. I can't imagine that several users' graphical interfaces can
be run across an Ethernet at what a Mac/PC/single-user workstation user would
consider to be an acceptable speed. (Of course I don't know this for sure--I
only know that the systems people here are recommending X terminals for users
who don't do much graphics and single-user workstations for those of us who
do.)

It seems to me that the two main advantages of a single user workstation are
predictable turnaround and high display bandwidth, and that users who currently
have their own machines are not going to be happy with a shared one instead
until these considerations are addressed. I can imagine an operating system
for a multi-user machine that maximized responsiveness for interactive users
instead of overall throughput (anyone remember VM/CMS :-), and it seems to me
that providing acceptable turnaround need not require a single user workstation.
Can anybody in netland speak to the other objective: can an X terminal talking
to a remote host provide acceptable performance in running a graphical user
interface like Motif or XView and running (moderately) graphics-intensive
applications under it? (I think we can all agree that CAD requires a
dedicated workstation, but how about 3D plotting of statistical data, WYSIWYG
word processing with multiple fonts, and so forth?)

Pat

gil...@p.cs.uiuc.edu

unread,
Mar 20, 1990, 12:31:33 PM3/20/90
to

Eugene Brooks writes:
> The analogy people use here is comparing their car to their personal
> computer. The price tags are even comparable in this case. The
> argument does not hold water. The car can't be switched between
> users in milliseconds. The computer is an entirely different animal.
> You CAN have exclusive access to a CPU in a suitably parallel resource
> composed of Killer Micros, yet efficiently share it with others.

Let me point out that --
(1) The price of a killer micro CPU is not much more than a decent
commercial electronic typewriter. And most secretaries get their
own typewriter... gee, I wonder why?

(2) X-windows is nowhere near the be-all and end-all of interactive
supercomputing

I like this argument a lot:

> Written 8:35 pm Mar 17, 1990 by s...@ultra.com in comp.arch


> If someone measured the time that I spend using the stapler, tape
> dispenser, or pocket calculator that I have in my office, they'd
> find that each sits idle 99.9...% of the time. Does this mean that
> I shouldn't have exclusive use of these items, and I should have to
> go to some central facility whenever I want to staple, tape, or
> calculate?

The high cost of computing in the middle of this century has done
everyone a great psychological disservice.

Killer micros of today are a lot like fluorescent lights -- cheap
to operate, prevalent, and expensive to turn off. To see a machine
standing idle, when you were raised as a child to "use cycles
efficiently" is a gut-wrenching experience. Just remember Alan Kay's
prediction: In the future, computers will come in cereal boxes and we
will throw them away.

Aluminum was once ten times more valuable than gold. Now we
use aluminum cans daily and discard (recycle) them without a second
thought. It looks like computer CPU's, even uniprocessor
supercomputer CPU's, will go the way of aluminum cans.

Don W. Gillies, Dept. of Computer Science, University of Illinois
1304 W. Springfield, Urbana, Ill 61801
ARPA: gil...@cs.uiuc.edu UUCP: {uunet,harvard}!uiucdcs!gillies

Stan Lackey

unread,
Mar 20, 1990, 2:36:17 PM3/20/90
to
In article <53...@bbn.COM> sla...@BBN.COM I responded to a posting
comparing TC2000 and Cray memory bandwidths:

>The peak bandwidth of the 63-node TC2000 depends upon where you
>measure it. The memory has a 3-level hierarchy: 1) cache, 2) local
>memory, and 3) global memory.
I included a set of approximate peak bandwidths at the various levels,
commenting on what I felt was an apples-to-oranges comparison with the
Cray. I erroneously left out the disclaimer: These are approximate
peak values given for comparison with other architectures only.
Although these values can be achieved under certain circumstances,
delivered averages will vary depending upon the application.
-Stan

Peter da Silva

unread,
Mar 20, 1990, 10:52:55 AM3/20/90
to
> Predictable response time...This is also (one of the reasons, anyway) why
> Apple does not support pre-emptive multi-tasking.

Pre-emptive multitasking has nothing to do with it. The Amiga O/S uses
pre-emptive multitasking, and I'll put it up for speed and efficiency against
the Mac's system software any day. The first time I used a Mac II with a color
screen it was distinctly less peppy than my Amiga 1000 with a 7.16 MHz 68000.

Well, this was a predictable response. :->

Henry Spencer

unread,
Mar 20, 1990, 12:49:31 PM3/20/90
to
In article <21...@crdos1.crd.ge.COM> davi...@crdos1.crd.ge.com (bill davidsen) writes:
> The problem with sharing a computer is that someone gets to be
>administrator. And that means making decisions about software and o/s
>versions which will impact users...

Yes, it's ever so much nicer to force every user to be a system administrator.
That way you get to see any particular mistake made over and over again,
instead of just once, which keeps life from getting dull. It's particularly
exciting when networks are involved, which means that one person's mistake
can foul up everyone else, or when security is involved, which means
that one person's mistake can lose you a lot of money and work.

I really don't understand this persistent myth that several dozen amateur
system administrators are better than one professional. If *only* the
user himself is affected, it doesn't make much difference, but that's
almost never the case in reality.

>... One of the nicest things about a system
>of your own, even if small, is that backups happen when you want,
>upgrades happen when you want (and more importantly don't happen when
>you don't want)...

No, sorry, these things don't happen when you want. They happen when
you have time -- which is usually long after you really want -- or when
external constraints force you into it -- which is usually just when you
don't want to be bothered. For example, few people run backups half as
often as a centrally-administered system run by professionals does.
A good many of them live to regret it.
--
MSDOS, abbrev: Maybe SomeDay | Henry Spencer at U of Toronto Zoology
an Operating System. | uunet!attcan!utzoo!henry he...@zoo.toronto.edu

Steve Jay

unread,
Mar 20, 1990, 2:42:51 PM3/20/90
to
m1p...@fed.frb.gov (Patrick H. McAllister) writes:

>I can't imagine that several users' graphical interfaces can
>be run across an Ethernet at what a Mac/PC/single-user workstation user would
>consider to be an acceptable speed.

For some applications, it will take more network bandwidth to move
the graphics images than to move the data & programs needed to generate
the images. For other applications, the opposite will be true. It won't
always be the case that it's less load on the network to compute the
images locally.

Roger B.A. Klorese

unread,
Mar 20, 1990, 3:58:34 PM3/20/90
to
>This is quite correct, and therefore we should stop using personal
>automobiles, too. Instead we should use taxis, car pools, and
>other forms of better sharing the same basic hardware. This will
>increase the <10% utilization of most cars.

If you follow the transportation debate, you will find that there are
many voices agreeing with your strawman. The difference is that, unlike
in the computing world, the networking, connectivity and flexibility of
mass transportation is unsatisfactory in most areas.
--
ROGER B.A. KLORESE MIPS Computer Systems, Inc. phone: +1 408 720-2939
MS 4-02 928 E. Arques Ave. Sunnyvale, CA 94086 rog...@mips.COM
{ames,decwrl,pyramid}!mips!rogerk "I'm the NLA"
"Two guys, one cart, fresh pasta... *you* figure it out." -- Suzanne Sugarbaker

Jon

unread,
Mar 20, 1990, 3:47:58 PM3/20/90
to
bw...@cbnewsc.ATT.COM (bruce.f.wong) writes:

>Sharing computing resources on a network should not be equated
>to the bad old days of timesharing.

It's been noted that the different goals implied by the two slogans
"mainframe on your desk" and "network at your service" represent more
differences of culture than technology. The technology is perfectly
capable of giving you both. The emphasis remains very different.
Both goals have merit, but the former gets more press.

Around the DC area I find the analogy compelling that a network of
roads that worked (e.g. could handle peak loads) would do more to
speed my trip from A to B than giving me a Maserati. The analogy
may be extended that even if both were made more powerful it
wouldn't do much good if there weren't interesting and useful
places accessible by car.

-- Jon
--
Jonathan Krueger jkru...@dtic.dla.mil uunet!dgis!jkrueger
The Philip Morris Companies, Inc: without question the strongest
and best argument for an anti-flag-waving amendment.

Roger B.A. Klorese

unread,
Mar 20, 1990, 3:54:22 PM3/20/90
to
In article <21...@crdos1.crd.ge.COM> davi...@crdos1.crd.ge.com (bill davidsen) writes:
> The problem with sharing a computer is that someone gets to be
>administrator. And that means making decisions about software and o/s
>versions which will impact users. One of the nicest things about a system
>of your own, even if small, is that backups happen when you want,
>upgrades happen when you want (and more importantly don't happen when
>you don't want), and the configuration is dedicated without compromise
>to the productivity of one user.

...all of which, of course, presumes no connection to a network, which
requires at least as much administration as each standalone system would.
It assumes that economies of scale with regard to shared costly resources
such as peripherals are unimportant. It assumes that site licenses and
other schemes which may be dependent on running like software revisions
are unimportant. Most important, it assumes that the productivity of one
user is more important than the total productivity of the organization.

Hugh LaMaster

unread,
Mar 20, 1990, 5:12:45 PM3/20/90
to
In article <M1PHM02.90...@mfsws6.fed.frb.gov> m1p...@fed.frb.gov (Patrick H. McAllister) writes:
>to now is display I/O bandwidth. My workstation has its display controller

This is quite correct, and an important consideration in what is
optimal.

>only know that the systems people here are recommending X terminals for users
>who don't do much graphics and single-user workstations for those of us who
>do.)

This is the basic rule of thumb, absent a closer look at your requirements.
I would state, though, that "Mac-like" can mean several things. The X Window
System can do most Mac-like things just fine, although particular X
terminals may not be fast enough for your particular application. But some
are fast enough for most such applications. When people say "graphics",
they usually mean image processing, rendering of 3-D objects in full color,
etc. If you need "graphics" in this sense, the X-Terminal approach doesn't
add up.

>Can anybody in netland speak to the other objective: can an X terminal talking
>to a remote host provide acceptable performance in running a graphical user
>interface like Motif or XView and running (moderately) graphics-intensive
>applications under it? (I think we can all agree that CAD requires a

In a word, yes. I run that way every day, and at this minute. The limiting
factor when I run applications like Framemaker is the speed of the server
I am using and the number of people I share it with. But, X is *not*
the problem.

**********************************

Comment: I think Eugene Brooks was trying to make a point, which got lost, that
"Killer Micro" meant the CPU, not whether the CPU was packaged for a desktop.

Perhaps we need a new term for 15-100 VUPS desktop systems, killer micro based.

"Killer Desktops" anyone?

Doug Mohney

unread,
Mar 20, 1990, 5:04:28 PM3/20/90
to
In article <21...@crdos1.crd.ge.COM>, davi...@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:
> .............. A 386 box is about 3x the original VAX,

>and will happily support 8 users with the response you would like, or 32
>with response slightly better than the old VAX did under that load.
>There are MANY of these systems sitting in offices running Xenix and
>supporting 4-8 users.
>
Sure. There are many more people who don't run Xenix and are running
Novell with Ethernet or Token Ring. Or even Banyan Vines, for that matter.

Doug

Doug Mohney

unread,
Mar 20, 1990, 5:07:20 PM3/20/90
to
>The high cost of computing in the middle of this century has done
>everyone a great psychological disservice.

I like this!

>Aluminum was once valued as ten times more valuable than gold. Now we
>use aluminum cans daily and discard (recycle) them without a second
>thought. It looks like computer CPU's, even uniprocessor
>supercomputer CPU's, will go the way of aluminum cans.

Yes, but can we recycle the chips to put into brilliant toasters?

Hugh LaMaster

unread,
Mar 20, 1990, 5:28:52 PM3/20/90
to
In article <53...@bbn.COM> sla...@BBN.COM (Stan Lackey) writes:
>In article <45...@ames.arc.nasa.gov> lama...@ames.arc.nasa.gov (Hugh LaMaster) writes:
>>Speaking of architectural issues, how is the BBN TC 2000 working out?

>The peak bandwidth of the 63-node TC2000 depends upon where you
>measure it.

I agree. Of course, Crays have no caches, but some Crays have local memory
and all Crays have vector registers and fairly numerous scalar registers.
You could call registers "programmable caches" to compare bandwidths :-)

My question was intentionally brief, but to be more specific: the architecture
obviously depends on the ability to parallelize in such a way that global
memory bandwidth is not the bottleneck. How well is this working out?
etc. etc. etc.

Dave Haynie

unread,
Mar 20, 1990, 3:08:19 PM3/20/90
to

>Predictable response time...This is also (one of the reasons, anyway) why
>Apple does not support pre-emptive multi-tasking.

Pre-emptive multitasking has nothing at all to do with predictable response time.

>I'm using a 16Mbyte DECstation 3100 and despite the faster processor, it doesn't compare
>with a 68030 Mac on user interface responsiveness. And the DECstation is hardly ever
>used by other users.

The way UNIX implements its multitasking has everything to do with the
unpredictable response time you get on UNIX workstations. Same reason the
NeXT box has a "jumpy" feel to the user.

I use two non-UNIX systems with pre-emptive multitasking -- Apollos (under
Aegis, or DomainOS, or whatever they call it these days) and Amigas. Both
of these systems, especially the Amiga, are extremely responsive. In fact,
moreso than the Mac. For example, on the Amiga, the main things governing
user-interaction, such as mouse and keyboard response, are interrupt driven
and managed by a high priority task. The user interface also runs at a
higher priority than the average user task. So when you set that 64k x
64k spreadsheet recalculating, you don't have the mouse drop dead, and
you can still move windows around.

What makes the difference is real time response, an operating systems
issue, but not the same thing as pre-emptive multitasking.

>Moral of the story? A multi-tasking OS with virtual memory etc. has its price.

The real moral of the story is that operating systems originally designed
for multi-user operation with users hooked in via serial line text
terminals may not provide the best feel when adapted for use as the
operating system for GUI based, single-user workstations. At least not
without a great deal of rethinking, which apparently hasn't yet been
completed by most of the folks building these systems.

>Philip Machanick
>phi...@pescadero.stanford.edu


--
Dave Haynie Commodore-Amiga (Systems Engineering) "The Crew That Never Rests"
{uunet|pyramid|rutgers}!cbmvax!daveh PLINK: hazy BIX: hazy
Too much of everything is just enough

Paul Graham

unread,
Mar 20, 1990, 6:12:00 PM3/20/90
to
gil...@p.cs.uiuc.edu writes:


|Let me point out that --
|(1) The price of a killer micro CPU is not much more than a decent
| commercial electronic typewriter. And most secretaries get their
| own typewriter... gee, I wonder why?

the same reason people who do data entry all day get their own terminal.

|(2) X-windows is nowhere near the be-all and end-all of interactive
| supercomputing

perhaps, but that doesn't mean that a nice terminal with a nice channel to
a room full of mips isn't a good way to go (rob pike makes this argument
better than i do).

|I like this argument a lot:

|> Written 8:35 pm Mar 17, 1990 by s...@ultra.com in comp.arch
|> If someone measured the time that I spend using the stapler, tape
|> dispenser, or pocket calculator that I have in my office, they'd
|> find that each sits idle 99.9...% of the time. Does this mean that
|> I shouldn't have exclusive use of these items, and I should have to
|> go to some central facility whenever I want to staple, tape, or
|> calculate?

|Killer micros of today are a lot like fluorescent lights -- cheap
|to operate, prevalent, and expensive to turn off. To see a machine
|standing idle, when you were raised as a child to "use cycles
|efficiently" is a gut-wrenching experience. Just remember Alan Kay's
|prediction: In the future, computers will come in cereal boxes and we
|will throw them away.

nice "systems", as opposed to the killer micro that drives them, are not
"cheap" just yet (but getting better every day). what i'd like to see is
a nice mechanism that lets the x terminal find an idle workstation and
attach to it while giving some notice to users who are selecting from
a pool of workstations that that workstation is now not idle. we have
"labs" with workstations and x terminals. people have (or soon will have)
better access to x terminals (for various reasons) but all the xterminal
users jump on the same backend while workstations stand idle.

it may be the case that the big step needs to be in the communication
channel. i can buy a 10 MIP cpu for my multi for 2K. an x terminal for
1.5k (a bit more for a NeWs terminal). my multi should soon have 128-256MB
of memory and in excess of 600MB of swap. i'm loath to build a facility
full of workstations so configured. of course i work at a university, so
maybe it's just a matter of budgets. (i've recently made a similar case but
including software expense in comp.unix.questions)

sorry this doesn't have much to do with computer architecture.

Hugh LaMaster

unread,
Mar 20, 1990, 6:20:56 PM3/20/90
to
In article <45...@ames.arc.nasa.gov> lama...@ames.arc.nasa.gov (Hugh LaMaster) writes:
>I agree. Of course, Crays have no caches, but some Crays have local memory

" " " " ^ DATA caches ^ I should have said.
Pardon.

Herman Rubin

unread,
Mar 20, 1990, 8:30:39 PM3/20/90
to
In article <37...@mips.mips.COM>, rog...@mips.COM (Roger B.A. Klorese) writes:
> In article <7...@dgis.dtic.dla.mil> jkru...@dgis.dtic.dla.mil (Jon) writes:
> >This is quite correct, and therefore we should stop using personal
> >automobiles, too. Instead we should use taxis, car pools, and
> >other forms of better sharing the same basic hardware. This will
> >increase the <10% utilization of most cars.

> If you will follow transportation debate, you will find that there are
> many voices agreeing with your strawman. The difference is that, unlike
> in the computing world, the networking, connectivity and flexibility of
> mass transportation is unsatisfactory in most areas.

I have opposed sharing in the transportation debate, and I oppose it here.

In the computing world, the networking, connectivity and flexibility of
sharing non-specific resources is unsatisfactory in most areas. Other than
such things as text files in ASCII, nothing is easily shared unless the same
machine, or at best the same type of machine, is used, and it may even be
necessary to use the same language. Even different compilers for the same
language can give problems. A Maserati and a Yugo are more alike than
two different computers are.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hru...@l.cc.purdue.edu (Internet, bitnet, UUCP)

Steve Jay

unread,
Mar 20, 1990, 8:03:46 PM3/20/90
to
he...@utzoo.uucp (Henry Spencer) writes:

> > The problem with sharing a computer is that someone gets to be
> >administrator. And that means making decisions about software and o/s
> >versions which will impact users...

> Yes, it's ever so much nicer to force every user to be a system administrator.

> I really don't understand this persistent myth that several dozen amateur
> system administrators are better than one professional.
> system administrators are better than one professional.

I think it's possible to have the best of both worlds...single user
workstations with the benefits of central administration, including
backups, network fiddling, etc. Not easy, but possible, to take
most of the burden of system administration off of most users, but
still leave each user with the warm and fuzzy feeling of having
his/her own machine.

Barry Margolin

unread,
Mar 20, 1990, 11:31:30 PM3/20/90
to
In <1990Mar21....@ultra.com>, Steve Jay (s...@ultra.com) writes:
>he...@utzoo.uucp (Henry Spencer) writes:
>> > The problem with sharing a computer is that someone gets to be
>> >administrator. And that means making decisions about software and o/s
>> >versions which will impact users...
>> I really don't understand this persistent myth that several dozen amateur
>> system administrators are better than one professional.
>I think it's possible to have the best of both worlds...single user
>workstations with the benefits of central administration, including
>backups, network fiddling, etc.

The original posting claimed that the benefit of single-user systems was
that system administrators don't bother the users by forcing upgrades at
inconvenient times, etc. How do you claim they can be administered
centrally without the users noticing? For instance, suppose there's an
automated network backup system (something we're planning on using for the
100 or so Macs on our network), but perhaps it requires a particular system
version. How do you ensure that backups are performed without forcing
every user to go through the hassle of upgrading their systems? What do
you do about the users who don't feel like upgrading just yet (perhaps
they haven't gotten around to getting the upgraded version of some
application so that it will work with the new system)?
--
Barry Margolin, Thinking Machines Corp.

bar...@think.com
{uunet,harvard}!think!barmar

Randell Jesup

unread,
Mar 20, 1990, 8:58:01 PM3/20/90
to
>In article <00933EBB...@KING.ENG.UMD.EDU>, sys...@KING.ENG.UMD.EDU
>(Doug Mohney) writes:
>
>> If shared resources are such wonderful critters, how come multiuser Macs
>> aren't popular? Or '386es? You could conceivably hang multiple terminals
>> from a '386 or '486 box, but I haven't heard of people rushing out to do so.
>
>Predictable response time...This is also (one of the reasons, anyway) why
>Apple does not support pre-emptive multi-tasking. I'm using a 16Mbyte
>DECstation 3100 and despite the faster processor, it doesn't compare with
>a 68030 Mac on user interface responsiveness. And the DECstation is hardly ever
>used by other users. Moral of the story? A multi-tasking OS with virtual memory
>etc. has its price.

You're arguing a deficiency of most Unixes, not of multi-tasking
per se. A good counter-example is the Amiga - preemptive multitasking but
provides excellent response time even on a lowly 68000. Most Unixes are
not optimized for user response time, their schedulers just weren't designed
with that as a major consideration. On an Amiga, the highest-priority task
gets 100% of available cycles, or round-robins with tasks of the same
priority, on a many-times-per-second basis. Combined with interrupt and
DMA driven IO, this produces very fast user response times. Light-weight
tasks (faster task-switching) help here also.

I suspect the main reason Apple hasn't gone preemptive is that their
system was designed so that preemption would be a massive problem, at best.
All those "low-memory-globals", etc that programs modify would cause major
havoc to support, or require massive changes of the "rules", making most
applications that had been written correctly become "broken".

Those of us in the micro market often have to bow and scrape to
the Great God of Compatibility. :-( We here at Commodore have been stuck
with our own early design decisions in some cases.

There are other ways to improve user response time, most of them
"classical". Stratus VOS (last I looked) bumped the priority of a task that
just got input from a user temporarily. This improves the "feel" of
responsiveness.
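
As a toy illustration of both mechanisms (strict priority with
round-robin among equals, plus a decaying priority bump for a task that
just got user input); this is invented for illustration, not AmigaOS or
VOS source:

#include <stdio.h>

#define NTASKS 3

struct task {
    const char *name;
    int priority;   /* higher runs first */
    int boost;      /* ticks of temporary bump left after user input */
    int runnable;
};

/* Pick the runnable task with the highest effective priority,
 * scanning from just past the last-run task so that tasks of equal
 * priority rotate round-robin. */
static int pick_next(struct task t[], int n, int last)
{
    int i, best = -1, bestp = -1;

    for (i = 0; i < n; i++) {
        int k = (last + 1 + i) % n;
        int p;

        if (!t[k].runnable)
            continue;
        p = t[k].priority + (t[k].boost > 0 ? 1 : 0);
        if (p > bestp) {
            bestp = p;
            best = k;
        }
    }
    if (best >= 0 && t[best].boost > 0)
        t[best].boost--;    /* the bump decays as the task runs */
    return best;
}

int main(void)
{
    struct task t[NTASKS] = {
        { "input-handler", 5, 0, 0 },  /* blocked, awaiting an interrupt */
        { "editor",        0, 2, 1 },  /* just got a keystroke: bumped */
        { "recalc",        0, 0, 1 },  /* the big spreadsheet job */
    };
    int tick, cur = -1;

    for (tick = 0; tick < 6; tick++) {
        cur = pick_next(t, NTASKS, cur);
        printf("tick %d: run %s\n", tick, t[cur].name);
    }
    return 0;
}

The editor runs first while its bump lasts, then settles into
round-robin with the recalculation; the input handler would preempt
both the moment it becomes runnable.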

--
Randell Jesup, Keeper of AmigaDos, Commodore Engineering.
{uunet|rutgers}!cbmvax!jesup, je...@cbmvax.cbm.commodore.com BIX: rjesup
Common phrase heard at Amiga Devcon '89: "It's in there!"

George.H.Harry.Rich

unread,
Mar 21, 1990, 9:53:06 AM3/21/90
to
Speaking as a user, I find that there are applications where it's really
important for me to be together with everyone else, and others where I
feel that I can do my best job, if I'm allowed to time upgrades, select
my own software, etc., etc. My own feeling is that the best approach is
using networked individual systems where the software and data that must be
synchronously updated is maintained on a network server, and the software which
does not need to be in synchronization sits on my workstation.

This has the advantage that the professional administrator can stick to
dealing with general needs of the organization without messing with
special requirements of individuals, and at the same time saves me the
problem of taking forced changes and upgrades when the organization doesn't
need them.

Of course this is expensive. But have you looked at the cost of the people
using these systems lately?

I don't think that the argument that most desktops have terribly low
average utilization levels amounts to a hill of beans. Expensive people
have been kept waiting for inexpensive computers for a decade by
others who have been trying to optimize computer utilization rather than
the overall operational cost and effectiveness of organizations.

Regards,

Harry Rich

Disclaimer: The ideas expressed here are my own and not necessarily those
of my employer (who would be glad to sell you either kind of system).

usenet news poster

unread,
Mar 20, 1990, 11:58:48 PM3/20/90
to
Let's view the net as an architecture for a moment. What is the most
cost effective way to provide computing/network access to a large number
of people?

More than a decade ago it was hardwired terminals and central minis.
For the past decade or so, it has been local processors (micros) with
loose network interconnections, or perhaps moderately coupled local
processors (diskless workstations). Now it looks like X-terminals.
They give a reasonably good quality text and 2D graphics interface,
don't flog the net like a diskless WS, and avoid the cost of duplicating
disk drives etc. for each individual processor/desktop.

Question: Is this a temporary aberration or the shape of the future?

What will happen when the nets are 10x faster, disks 10x cheaper, etc.?

David States, National Library of Medicine
(usual disclaimer, views my own only)

Steve Jay

unread,
Mar 21, 1990, 12:56:21 AM3/21/90
to
barmar@ (Barry Margolin) writes:

>How do you ensure that backups are performed without forcing
>every user to go through the hassle of upgrading their systems? What do
>you do about the users who don't feel like upgrading just yet (perhaps
>they haven't gotten around to getting the upgraded version of some
>application so that it will work with the new system)?

The next words in my previous article were "Not easy, but possible".

I don't claim to be able to answer these questions in all cases, but
I think "central administration, single user workstations" can be
handled most of the time. For the specific example, you can do network
backups without requiring the same OS version on all machines. In fact,
you can do network backups of machines from different vendors. The key
is probably mutual trust & cooperation between the user & administrator.
Oops, I just shot my original argument, which was that users want
single user systems because they can't get what they want from their
administrator.

The bottom line is that both central servers & single user worksations
are likely to be around for a long time. Tastes great, less filling.

Anyway, we've kind of drifted off the subject matter for comp.arch. It's
an interesting issue. Is there a more appropriate newsgroup for it?

Eugene Brooks

unread,
Mar 21, 1990, 1:17:55 AM3/21/90
to
In article <100...@convex.convex.com> ham...@convex1.convex.com (Ed Hamrick) writes:
>I'd be happy to run these programs on a C210. I think you'd find that
>the C210 does much better than the 25 MHz clock would otherwise lead
>you to predict.

A friendly fellow on the Internet has taken care of this for you.
I won't use his name to protect the innocent!

The score for the network simulator SIM was 31% of IBM 530 performance.
The score for the Monte Carlo was 46% of IBM 530 performance.
The Convex C2 looks pretty good relative to the XMP, given the price,
but its performance pales against any Killer Micro.

Both programs were compiled with -O2. The clock speed of the 530
is the same as that of the C210; I would say that the IBM is doing
something nice. The Convex compilers are nothing to sneeze at.

bro...@maddog.llnl.gov, bro...@maddog.uucp

K. Gopinath

unread,
Mar 21, 1990, 2:39:47 AM3/21/90
to
I want to get some details on the Cyber 992(esp. architectural
details that are imp. for a compiler writer). It is supposed to be a
vector machine but I do not know anything more. Any pointers, etc.
will be appreciated.
Thanks
Gopi

Kian-Tat Lim

unread,
Mar 20, 1990, 4:57:32 PM3/20/90
to
In article <14...@cit-vax.Caltech.Edu>, ktl@wag240 (Kian-Tat Lim) writes:
>I believe that Alliant has run 100x100 Linpack on a 28 processor
>system, but I'm not sure if that figure has been made public. It's
>probably obvious that it won't be 28 times the raw i860 number (11
>MFLOPS).

I've been told that the numbers are public:

Alliant FX/2808 (8 processors, 4 in one cluster):
LINPACK DP 100x100: 20
1000x1000: 220

Alliant FX/2828 (28 processors, 14 in one cluster):
LINPACK DP 100x100: 42
1000x1000: 720
--
Kian-Tat Lim (k...@wagvax.caltech.edu, KTL @ CITCHEM.BITNET, GEnie: K.LIM1)
Perl is the Swiss Army chainsaw [of Unix programming]. -- Dave Platt's friend

Joseph H Allen

unread,
Mar 21, 1990, 12:24:54 AM3/21/90
to
Sharing users is definitely more efficient. Four processors can easily share
one user. However, when you get up to eight processors significant thrashing
begins to occur.

:)

--
"Come on Duke, lets do those crimes" - Debbie
"Yeah... Yeah, lets go get sushi... and not pay" - Duke

Doug Mohney

unread,
Mar 21, 1990, 10:14:43 AM3/21/90
to
In article <34...@news.Think.COM>, barmar@ (Barry Margolin) writes:
>? What do
>you do about the users who don't feel like upgrading just yet (perhaps
>they haven't gotten around to getting the upgraded version of some
>application so that it will work with the new system)?

Or when the upgrades break the existing applications, and then the systems
manager doesn't have time to fix that user's problem?

"But it's only one person out of the company..." The greatest good for
the greatest number...? Euhhhhhh. Computing technology should be liberating,
not enslaving.

Doug

Stuart Lynne

unread,
Mar 21, 1990, 3:17:05 PM3/21/90
to
In article <5...@sibyl.eleceng.ua.OZ> i...@sibyl.OZ (Ian Dall) writes:
>In article <1990Mar19.2...@world.std.com> b...@world.std.com (Barry Shein) writes:
>>

>The other problem seems to be that, sure people would like to be able
>to use spare capacity, but they like to be guaranteed a certain
>minimum number of cycles. Well, let me propose the guaranteed share
>scheduler! I doubt if this is new but I'll propose it anyway! Suppose
>the total number of cycles per unit time is T, the maximum number of
>users is M and the number of active users is A. Every user should get
>max(T/A, T/M) cycles per unit time. The T/M is guaranteed, the T/A -
>T/M is the bonus for being shared. Of course, for the convenience to
>approach that of your own workstation you need T/M to be reasonably
>large. One killer micro worth maybe?

You have to have a scheduler that is aware of the number of users currently
requesting CPU cycles. When the number of cycles available is less than
requested, divide it up via a formula where all possible users are allocated
a fixed percentage of CPU cycles (such that the total of all users'
allocations adds up to 100 per cent).

When cycles are scarce you get at least your allocation. When cycles are
available because there is no one else around (at 3:00 AM for example) you
can get access to a MUCH larger amount of cycles.

For example if there are 50 people using a 50MIPS Killer Micro Mini
Mainframe (TM), each would be allocated 2%. During the day when *all* 50
people are in and pounding on the keyboard they would each get about 1MIPS
worth of CPU if they need it. At night two late night programmers doing big
makes could each get 50% or 25MIPS.

The scheduler will have to factor in system overheads as well of course.
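
One way to implement "your percentage guaranteed, more when it's free"
is a max-min style loop like this sketch (illustrative only; four users
instead of fifty, and the demand figures are invented):

#include <stdio.h>

#define M 4   /* possible users; each "owns" T/M of the machine */

int main(void)
{
    double T = 50.0;                          /* total MIPS */
    double demand[M] = { 30.0, 20.0, 5.0, 0.0 };
    double grant[M]  = { 0.0 };
    int satisfied[M] = { 0 };
    double spare = T;
    int left = M, i;

    /* Split the spare evenly among unsatisfied users each round; a
     * user wanting less than the slice caps at his demand and frees
     * the remainder for redistribution. */
    while (spare > 1e-9 && left > 0) {
        double slice = spare / left;
        for (i = 0; i < M; i++) {
            double give;
            if (satisfied[i])
                continue;
            give = demand[i] - grant[i];
            if (give > slice)
                give = slice;
            grant[i] += give;
            spare -= give;
            if (grant[i] >= demand[i] - 1e-9) {
                satisfied[i] = 1;
                left--;
            }
        }
    }
    for (i = 0; i < M; i++)
        printf("user %d: wanted %4.1f, granted %4.1f MIPS\n",
               i, demand[i], grant[i]);
    return 0;
}

No user ever gets less than min(demand, T/M), and idle allocations flow
to whoever can use them.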

Personally I'd much rather get a guaranteed 2% of a KMMM(TM) with the
potential of using it *all* when no one else is around than to get 100% of a
much smaller machine.

--
Stuart...@wimsey.bc.ca ubc-cs!van-bc!sl 604-937-7532(voice) 604-939-4768(fax)

Ross Alexander

unread,
Mar 21, 1990, 2:43:13 PM3/21/90
to
In article <1990Mar20....@utzoo.uucp>, he...@utzoo.uucp (Henry Spencer) writes:
> Yes, it's ever so much nicer to force every user to be a system administrator.
> That way you get to see any particular mistake made over and over again,
[ much totally correct observation edited for brevity ]

> don't want to be bothered. For example, few people run backups half as
> often as a centrally-administered system run by professionals does.
> A good many of them live to regret it.

Yes, yes, and yes. Amateurs, even extremely well-meaning and erudite
ones, are paid not to administer their workstations but to *get their
primary jobs done* be that what may. Backups are not their primary
job, and in the nature of things get pushed down the queue until they
fall off the bottom. Then cometh the day of reckoning, and Lo! there
is no backup, folks. Guess it's time to redo it. Sure glad we
remember everything we did :-(.

I might add operator time $ is < rocket scientist time $ by an
appreciable margin. And the central site gets the backups done. ( We
do a full backup of everything every working day. )

Either way you cut it (central server or distributed workstations),
you *must have* a professional administrator whose primary job is
administration, or the necessary just doesn't get done. Ad hoc
administration by uncoordinated part-timers is a recipe for chaos.

--
Ross Alexander (403) 675 6311 r...@aungbad.AthabascaU.CA VE6PDQ

Henry Spencer

unread,
Mar 21, 1990, 12:51:48 PM3/21/90
to
In article <7670...@p.cs.uiuc.edu> gil...@p.cs.uiuc.edu writes:
>(1) The price of a killer micro CPU is not much more than a decent
> commercial electronic typewriter. And most secretaries get their
> own typewriter... gee, I wonder why?

Probably because typewriters don't need sysadmins, and therefore it is
cheap to buy one for each heavy user. Computers are different.
--
Never recompute what you | Henry Spencer at U of Toronto Zoology
can precompute. | uunet!attcan!utzoo!henry he...@zoo.toronto.edu

Dan Hendrickson

unread,
Mar 21, 1990, 5:17:48 PM3/21/90
to
In article <2...@van-bc.UUCP> s...@van-bc.UUCP (Stuart Lynne) writes:
}In article <5...@sibyl.eleceng.ua.OZ> i...@sibyl.OZ (Ian Dall) writes:
}}In article <1990Mar19.2...@world.std.com> b...@world.std.com (Barry Shein) writes:
}}The other problem seems to be that, sure people would like to be able
}}to use spare capacity, but they like to be guaranteed a certain
}}minimum number of cycles. Well, let me propose the guaranteed share
}}scheduler! I doubt if this is new but I'll propose it anyway! Suppose
}}the total number of cycles per unit time is T, the maximum number of
}}users is M and the number of active users is A. Every user should get
}}max(T/A, T/M) cycles per unit time. The T/M is guaranteed, the T/A -
}}T/M is the bonus for being shared.
[other stuff]

}You have to have a scheduler that is aware of the number of users currently
}requesting CPU cycles. When the number of cycles available is less than
}requested divide it up via a formula where all possible users are allocated
}a fixed percentage of CPU cycles (such that the total of all users
}allocations add's up to 100 per cent).
}
}When cycles are scarce you get at least your allocation. When cycles are
}available because there is no one else around (at 3:00 AM for example) you
}can get access to a MUCH larger amount of cycles.
}
}For example if there are 50 people using a 50MIPS Killer Micro Mini
}Mainframe (TM), each would be allocated 2%. During the day when *all* 50
}people are in and pounding on the keyboard they would each get about 1MIPS
}worth of CPU if they need it. At night two late night programmers doing big
}make's could each get 50% or 25MIPS.
[other stuff]

While at Prisma, Inc (may it rest in peace), the sw group implemented a
fair-share scheduler on our local Sun network. Certain groups were given
a guaranteed percent of the machine if they needed it. Each group could
be given different fair-share percentages which were enforced on a fully-
utilized machine. I am not sure how the extra cycles were split up, if
it was based on the fair-share percentage, or on the # of users.
I don't have any more details on the scheduler. Perhaps some
of the ex-Prismoids out there could give more details.

(To throw this discussion into another thread, this would give a system
administrator with his, not your, interests at heart the ability to get
on your machine when he needed to and steal YOUR cpu cycles to his heart's
content!)


Dan Hendrickson, Tandem Computers, Inc.
Austin, TX

Anders Wallgren

unread,
Mar 21, 1990, 5:25:53 PM3/21/90
to
In article <2...@van-bc.UUCP>, sl@van-bc (Stuart Lynne) writes:
>
>You have to have a scheduler that is aware of the number of users currently
>requesting CPU cycles. When the number of cycles available is less than
>requested divide it up via a formula where all possible users are allocated
>a fixed percentage of CPU cycles (such that the total of all users
>allocations add's up to 100 per cent).
>
>When cycles are scarce you get at least your allocation. When cycles are
>available because there is no one else around (at 3:00 AM for example) you
>can get access to a MUCH larger amount of cycles.
>
>For example if there are 50 people using a 50MIPS Killer Micro Mini
>Mainframe (TM), each would be allocated 2%. During the day when *all* 50
>people are in and pounding on the keyboard they would each get about 1MIPS
>worth of CPU if they need it. At night two late night programmers doing big
>make's could each get 50% or 25MIPS.
>

Karl Marx would be proud...

Jack McClurg

unread,
Mar 22, 1990, 12:28:11 PM3/22/90
to
/ hpfcda:comp.arch / use...@nlm-mcs.arpa (usenet news poster) / 9:58 pm Mar 20, 1990 /

don't flog the net like a diskless WS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

David States, National Library of Medicine
(usual disclaimer, views my own only)

----------
What is the justification for this statement? Different implementations
of diskless workstations seem to load the net very differently. If you
are interested in this, maybe we should start a new basenote.

Mark Linimon

unread,
Mar 22, 1990, 3:23:06 AM3/22/90
to
In article <1990Mar20....@utzoo.uucp>, he...@utzoo.uucp (Henry Spencer) writes:
> I really don't understand this persistent myth that several dozen amateur
> system administrators are better than one professional. If *only* the
> user himself is affected, it doesn't make much difference, but that's
> almost never the case in reality.

I'll have to disagree with one of Henry's implicit assumptions here, which
is that most organizations will supply such a "professional." In my
experience with small and medium-size [engineering] companies, management
does not feel that system administration is an undertaking that requires
either time or personnel. Given that one has some knowledge of system
administration, one will be 'volunteered' to do it. With a centralized
system, one gets to do a whole group's worth of system administration.
With a decentralized system, one gets to do one system's worth.

Assuming that management feels that it's a zero-effort activity, one is not
going to get brownie points, extra credit, overtime, or even a thank-you
for either; in fact, may be criticized for "wasting time". So how much free
time would one like to spend on it?

I'm not saying this is right, just common, and I speak from repeated
experience. Make mine decentralized.

Mark
--
Mark Linimon / Lonesome Dove Computing Services / Southlake, Texas
lin...@nominil.lonestar.org || "I'm getting too old for this..."
{mic, texbell}!nominil!linimon || -- Guy Clark (ain't we all, Guy...)

Wm E Davidsen Jr

unread,
Mar 22, 1990, 8:51:27 AM3/22/90
to
In article <17...@aurora.AthabascaU.CA> r...@cs.AthabascaU.CA (Ross Alexander) writes:

| Yes, yes, and yes. Amateurs, even extremely well-meaning and erudite
| ones, are paid not to adminstrate their workstations but to *get their
| primary jobs done* be that what may. Backups are not their primary
| job, and in the nature of things get pushed down the queue until they
| fall off the bottom. Then cometh the day of reckoning, and Lo! there
| is no backup, folks.

This is true of PCs but not of workstations. The workstation may very
well have most of its filesystems NFS mounted on a large machine anyway,
keeping only the system files and temp local, and in any case can be
backed up by a script run from cron on a regular basis.

We do that for 400 workstations here, and it just works. Daily
incrementals, weekly full dumps (staggered to spread load), one operator
mounting tapes on the drives. If a sysmgr sets up a good crontab once,
the system will take care of itself for the most part, and detect most
problems and send mail to the professional manager. This leaves the user
to make the choice of when (if) the o/s gets upgraded, etc. Other stuff
like updating the alias and sendmail files gets done by cron, too.
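
The staggering can be as simple as hashing the host number into a
weekday (illustrative only; the real cron scripts are site-specific and
not shown here):

#include <stdio.h>

/* dump level for a given host on a given weekday (0..4, Mon..Fri) */
static int dump_level(int host_id, int weekday)
{
    return (host_id % 5 == weekday) ? 0 /* full */ : 1 /* incremental */;
}

int main(void)
{
    int host, fulls = 0;

    for (host = 0; host < 400; host++)
        if (dump_level(host, 2 /* Wednesday */) == 0)
            fulls++;
    printf("%d full dumps tonight, %d incrementals\n", fulls, 400 - fulls);
    return 0;
}

So on any given night only about a fifth of the 400 machines take a
level-0 dump, which is what spreads the load.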

By giving the user a modest system of his/her own, things like
mail/news/editing run at constant and predictable speed, while file,
compute, print, and {n,t,e}roff servers provide cheap shared power to
keep the cost of computing down.
--
bill davidsen (davi...@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
"Stupidity, like virtue, is its own reward" -me

Ed Basart

unread,
Mar 22, 1990, 1:40:27 AM3/22/90
to
Obviously I am biased (due to my employment situation), but I believe
X terminals have a semi-permanent advantage over workstations.
The real difference between a workstation and an X terminal (or in our
argot, network display station) is the fact that one can build a
point product that purely runs X, avoids all the trash, and is
consequently highly integrated and "cheaper" and sometimes "better".
Workstations are low cost (read microprocessor-based) computers with
a display plopped on top. Network display stations go one step further by
first starting with the display, and wrapping "just enough" hardware
around it to get it to work effectively.

In the case of diskless workstations versus network display stations,
I think many observers would agree that battles rage on, but that
network display stations have won the war. Diskless workstations are
an abomination and aberration that were contrived to reduce cost. The
result is an amputated system sliced through some rather important
arteries. Just put your sniffer on a network of diskless nodes and
watch the flow of blood that is paging traffic.

So, when one views a workstation as a general purpose platform that has
to keep getting faster and faster, with ever more memory, disc, and
floating point (remember many resources must be added to feed the
ravenous appetite of Un*x), it will cost more than
"display-only" network display with its relatively simple operating
environment. If workstations evolve to become more like network
display stations, then we will be back to the CISC versus RISC
arguments that have given us all such entertaining reading here in this
forum.

As long as the workstation is a pile of boxes and add-in boards, they
will always cost more than the corresponding X terminal. And suppose
the soothsayers are right, costs plummet, and a diskful workstation
comes to cost $499 and an X terminal $475; who cares? As long as the
X terminal remains dedicated to a single, simple function it will work
better and be the desktop device of choice because it has 1 million
less lines of code to worry about.

Borrowing a line from Eugene Brooks:

THERE IS NO ESCAPE FROM THE ATTACK OF THE KILLER X TERMINALS

(I had to put that in caps so that folks can hear me over the roar
of the fans in their workstations.)

--

Ed Basart, 350 N. Bernardo Ave., Mountain View, CA 94043, (415)694-0650
uunet!lupine!ed

WALLWEY DEAN WILLIAM

unread,
Mar 22, 1990, 11:36:45 AM3/22/90
to
As far as the attack of the killer X-terminals, there really is only one
reason why there even are such things---

X-Windows SUCKS---

It's so SLOW and such a computer HOG in terms of both memory
and computer cycles. The Unix community has found that in order to
get performance greater than a PC-XT out of most "workstations" running
X, they have to off-load much of the work to an X-terminal--which
easily increases the cost per user another $1500 or so!

What you are actually going to see in the 90's is the attack of the
KILLER PCs running OS/2. PC's running OS/2 have all the benefits that
people have been talking about in single user systems, and beat the
performance/$$$ ratio of any other computing system out there!
Where else can I spend $5000, and get a machine that compares in pure
performance to the top quality Suns and other workstations that sell for
up to $30,000? For that price it even includes a LARGE hard drive that
doesn't have to be shared, a co-processor based graphics system that
compares to the best workstations, and has the largest base of
programs in the world?

A side note:
Motif, probably the X windowing system that will become the true
standard for UNIX, was actually a copy of MicroSoft Windows Look And
Feel with the added 3-d effect. It was proposed by MicroSoft and HP (I think?)
to the OSF! My own wimpy 10 MHz 286 running MSWindows is "snappier" than
the $25,000 workstations running Motif that I use here at the
University. I can't wait until I get OS/2. OS/2 is supposed to be
faster than even MSWindows. You can just imagine how fast it would be
on a 386 machine compared to running UNIX and Motif on a 386 machine!
Another feature of OS/2 that will come out next year is that it will have
a "Page Desciption Language" built in that compares with PostScript
--(why do you think APPLE corp dropped its Adobe stock and formed a
co-development agreement with MicroSoft----they saw the "writing on the wall").
This feature is completely device independent and is also used in displaying
graphics to get the highest quality "WYSIWYG" like NeXT's Display
Postscript. This means also that an OS/2 shop can use $1000 laser
printers rather than $6000 ones that a Unix shop would be required to
use to get the same quality of output!

Dean Wallwey

Mike Bolotski

unread,
Mar 22, 1990, 1:00:14 PM3/22/90
to

In article <18...@boulder.Colorado.EDU>, wal...@boulder.Colorado.EDU
(WALLWEY DEAN WILLIAM) writes:

A whole bunch of stuff about the imminent death of UNIX workstations.

Superb satire. Thank you.

But just in case that message was real..



|> Postscript. This means that also an OS/2 shop can use $1000 laser
|> printers rather than $6000 ones that a Unix shop would be required to
|> use to get the same quality of output!

Our Sun cluster uses an Apple LaserWriter, an HP LaserJet, and a TI printer.
Identical to those used on PCs. A serial port is a serial port.

Now can we get back to architecture discussions?

------
Mike Bolotski, Department of Electrical Engineering,
University of British Columbia, Vancouver, Canada
mi...@salmon.ee.ubc.ca | mikeb%salmon.e...@relay.ubc.ca
salmon.ee.ubc.ca!mi...@uunet.uu.net| uunet!ubc-cs!salmon.ee.ubc.ca!mikeb

WALLWEY DEAN WILLIAM

unread,
Mar 22, 1990, 2:24:30 PM3/22/90
to
In article <11...@fs1.ee.ubc.ca> mi...@salmon.ee.ubc.ca writes:
>
>In article <18...@boulder.Colorado.EDU>, wal...@boulder.Colorado.EDU
>(WALLWEY DEAN WILLIAM) writes:
>
>A whole bunch of stuff about the imminent death of UNIX workstations.
>
I don't think there will be a death of UNIX workstations, but I think
that in the 90's you will see people buying PCs in places that UNIX
workstations commonly fill. Unix workstations, I hope, will move up to
a higher plane! X is slow and clunky, to say the least! You can get
decent performance on X terminals, or on an expensive workstation pretty
much dedicated to a single user, but that is expensive! How much does it
cost to set up a Sun lab with 20 workstations and good printing
facilities?

By the way, MicroSoft has recently been reporting that they are selling
more copies of little o' MSWindows than Apple is selling Macintoshes!
And I can guarantee APPLE corp is selling more Macintoshes than Sun
is selling Sun Workstations!! Granted, MSWindows is not a real
operating system, but most of the people running MSWindows will be
capable of running OS/2 (a real operating system by most people's
standards) with more memory!


>Superb satire. Thank you.

If you really think that is satire, prove me wrong point by point----
Also, when I say Attack of the Killer PC, I'm talking about just pure
numbers.

>
>But just in case that message was real..
>
>|> Postscript. This means that also an OS/2 shop can use $1000 laser
>|> printers rather than $6000 ones that a Unix shop would be required to
>|> use to get the same quality of output!
>
>Our Sun cluster uses an Apple LaserWriter, an HP LaserJet, and a TI printer.
>Identical those used on PC's. A serial port is a serial port.

Do they all produce the same quality output from all the programs you
can run on your Sun cluster? Here at CU, all of our workstations and
our VAX cluster use Postscript to get good quality output. If you can
really get output as good on the HP LaserJet, doing graphics and
scalable fonts, as you can on a Postscript printer, then I, and I am
sure others, would like to know how.

>
>Now can we get back to architecture discussions?

I agree this is not the place for discussions of operating systems, or
X or even shared vs single user systems except in the context of arch.
Let's move this discussion to mail or another News Group.

>
>------
>Mike Bolotski, Department of Electrical Engineering,
> University of British Columbia, Vancouver, Canada
>mi...@salmon.ee.ubc.ca | mikeb%salmon.e...@relay.ubc.ca
>salmon.ee.ubc.ca!mi...@uunet.uu.net| uunet!ubc-cs!salmon.ee.ubc.ca!mikeb

The above is not a flame--and no solution is perfect or acceptable in
all situations, but I think "workstation" environments built from PCs
are going to be seen a lot more!

Dean Wallwey

Mark Moraes

unread,
Mar 22, 1990, 3:13:06 PM3/22/90
to
The last time this thread came up (under the titles "Fad computing"
and "X-terminals vs workstations"), some of us went offline and
had our little private flame war, er, rational discussion. Some points
that were brought up:

- The problem with single-user vs shared is mainly political, partly
emotional, almost never technical. (So let's get this debate out of
comp.arch, please -- it has almost nothing to do with architecture
other than the fact that computers are involved peripherally in the
debate)

- There's a warm fuzzy feeling of "knowing this machine is MINE, all
MINE, and you can get your grubby paws off it, thank you." By the
same token, there's this warm fuzzy feeling of having the latest, most
sexy hardware on your desk, often giving you the same effective
throughput as the last generation...

- Some shared computing facilities (SCFs) start to dictate what
hardware their users run. `No, you can't buy machines from vendor X
even if they're more cost-effective because we don't like them, or
because it'll spoil our "special relationship" with vendor Y.'

- Some shared computing facilities (SCFs) start to dictate what
software their users run. (In particular, many sysadmins let their
personal idiosyncrasies get in the way of user support -- things like
sh vs csh vs ksh, vi vs jove vs emacs, suntools vs X10 vs X11, troff
vs TeX vs *[Ww]ord* etc)

- SCFs usually involve charging or cost sharing of some form, which is
always a minefield of political problems. Accusations that one group
of users is subsidizing another, that charges are biased, etc creep in.
People end up preferring to pay larger amounts of money just to get
free of the strings.

- Some SCFs start setting arbitrary resource limits, even if users are
willing to pay for more. (Anecdotes about the computer centres that
wouldn't let you print more than N pages per month go here)

- Most SCFs hate networks of workstations, especially diskless
workstations since they're a pain to administer.

- Some people believe that running your own workstation is a piece of
cake. After all, the vendors ship systems that can be used straight
out of the box.

- Sysadmins? What sysadmins? That's what grad students are for!

- Disk disasters? What disk disasters?

- Networks are a snap!

- Security is too expensive; no one will break into our machines!

- Vendors ship secure systems that casual hackers can't break into
easily!

- Why bother upgrading the operating system?

- Having multiple servers and machines makes your environment more
tolerant of faults. (memories of 45 machines symlinking to a
single shared news partition, mail partition, or /etc/motd go here)

- Having Unix get between your window system and your display can be a
real drag; typically, a cheap windowing terminal gets better
responsiveness than your expensive workstation/cruncher because it
doesn't have to do system calls just to process mouse movement.

- People who want single-user machines are best advised to go and buy
some brand of personal computer, unless they're sufficiently
Unix-savvy, or willing to learn a fair bit of arcania.

Mark.

PS: Oh yeah, for the humour impaired, some of the items above should
probably have smileys after them...

Peter da Silva

unread,
Mar 22, 1990, 2:21:48 PM3/22/90
to
> Where else can I spend $5000, and get a machine that compares in pure
> performance to the top quality Suns and other workstations that sell for
> up to $30,000.

Well, you can spend half that and get an Amiga... and given the incredible
thawball [1] of OS/2 applications and the fact that you can run DOS apps
on the Bridge card at least as well as you can under OS/2, it's probably
got it beat on applications as well.

> OS/2 is supposed to be faster than even MSWindows.

I've seen OS/2. It's no faster than MS Windows, and Windows is a dog.

[1] Thawball (n): Opposite of a Snowball, indicates a shortage. First used
in Shockwave Rider by John Brunner.
--
_--_|\ `-_-' Peter da Silva. +1 713 274 5180. <pe...@ficc.uu.net>.
/ \ 'U`
\_.--._/
v

WALLWEY DEAN WILLIAM

unread,
Mar 22, 1990, 3:28:02 PM3/22/90
to

In article <18...@boulder.Colorado.EDU> wal...@boulder.Colorado.EDU
(WALLWEY DEAN WILLIAM) I write:
>Where else can I spend $5000, and get a machine that compares in pure
>performance to the top quality Suns and other workstations that sell for
>up to $30,000.

I should have said:
Where else can I spend $6500 and get a machine, at street prices, that
compares about the same in performance benchmarks as medium-quality
stand-alone Suns and other workstations that you are likely to find on
a desktop?

(Look at Byte's latest Unix benchmarks---primarily the Everex system----
You can get clone systems that give 98% of the speed of the
Everex and cost $4000. All that need be added are the co-processor
graphics system, the hard drive and the operating system----easily
done for under $2500.)

I admit I did get a little carried away in my original posting--but I do
still stand by my general view of X running at least Motif.... I was
actually really looking forward to seeing Motif on our machines, until
I saw its performance. Only when I saw Motif run on a DecStation
3100, configured almost identically to the ~$39,000 model reviewed in Byte
a couple of months ago, did I see what I consider an acceptable
(in this case much better than acceptable---blinding) implementation.
Yet the point remains: X running Motif is expensive for the performance
that it actually yields!

Dean Wallwey

George.H.Harry.Rich

unread,
Mar 22, 1990, 8:31:40 AM3/22/90
to
In article <2...@van-bc.UUCP> s...@van-bc.UUCP (Stuart Lynne) writes:
>In article <5...@sibyl.eleceng.ua.OZ> i...@sibyl.OZ (Ian Dall) writes:
>>In article <1990Mar19.2...@world.std.com> b...@world.std.com (Barry Shein) writes:
>>>
>
...

>Personally I'd much rather get a guaranteed 2% of a KMMM(TM) with the
>potential of using it *all* when no one else is around than to get 100% of a
>much smaller machine.
>
>--
>Stuart...@wimsey.bc.ca ubc-cs!van-bc!sl 604-937-7532(voice) 604-939-4768(fax)
My experience in shared environments is that I can't get that guaranteed
2%, no matter how good the scheduler is. There is always maintenance, system
failure, etc., etc.

My relatively slow desktop workstation takes care of the small job I have
to get done in the next 10 minutes much more reliably than any shared system;
in the event of a system failure, or maintenance, both of which occur on
the desktop, I have redundancy -- i.e. borrow the desktop on the next desk.

I'll have to admit that with a different kind of work pattern, I might prefer
the really fast shared system, but for most environments availability
rather than compute power is the issue.

Regards,

Harry Rich

Disclaimer: Again, my ideas on this subject are my own, and not necessarily
those of my employer.

Lawrence Crowl

unread,
Mar 22, 1990, 5:15:25 PM3/22/90
to
In article <45...@ames.arc.nasa.gov>
lama...@ames.arc.nasa.gov (Hugh LaMaster) writes:
>My question was intentionally brief, but to be more specific: the [BBN TC
>2000 Multiprocessor] architecture obviously depends on the ability to
>parallelize in such a way that global memory bandwidth is not the bottleneck.
>How well is this working out?

My experience has been with the first Butterfly, based on the 68000. On this
system, contention for the "inter-node" communication network was negligible.
You are far more likely to limit performance because of contention for a
specific memory module than the communication network. I expect (but do not
know) that the same is true for the TC 2000.
--
Lawrence Crowl 716-275-9499 University of Rochester
cr...@cs.rochester.edu Computer Science Department
...!{ames,rutgers}!rochester!crowl Rochester, New York, 14627

Stuart Lynne

unread,
Mar 22, 1990, 7:20:06 PM3/22/90
to

I have two different patterns of use. The first which is also the most usual
is pretty typical, reading news :-), reading mail, editing, running small
jobs, doing miscellanous odd jobs. For this I want my guaranteed response,
but am not worried if some of them take a couple of minutes or so.

The second pattern is to consume a great deal of CPU/IO resources. For
example checking out a very large source tree, doing a complete make,
generating a release, running test suites, etc.

While the first type of use can be handled on virtually any environment (>80286)
the second can't unless I'm willing to wait several hours. I don't mind
scheduling it for times I know there are not too many users around. But I'd
much rather that it can be done in minutes than hours.

So I stand by my statement. For my use, 2% is great for daily use. When I
really need to get a lot of work done I'll come in evenings when I can get
greater than 50% of the KMMM's resources for my own use.

Anyway it will be interesting to see how well this all will work. We're
getting a MIPS R3000 based machine in the next month or so. It's a tad
faster than Unix on a 25MHz 386 box. Maybe I'll even try X windows finally.

Charles Simmons

unread,
Mar 22, 1990, 11:45:34 PM3/22/90
to
In article <1990Mar20....@utzoo.uucp>, he...@utzoo.uucp (Henry
Spencer) writes:
> Yes, it's ever so much nicer to force every user to be a system
administrator.
> That way you get to see any particular mistake made over and over again,
> instead of just once, which keeps life from getting dull. It's particularly
> exciting when networks are involved, which means that one person's mistake
> can foul up everyone else, or when security is involved, which means
> that one person's mistake can lose you a lot of money and work.

>
> I really don't understand this persistent myth that several dozen amateur
> system administrators are better than one professional. If *only* the
> user himself is affected, it doesn't make much difference, but that's
> almost never the case in reality.
>
> >... On of the nicest things about a system
> >of your own, even is small, is that backups happen when you want,
> >upgrades happen when you want (and more importantly don't happen when
> >you don't want)...
>
> No, sorry, these things don't happen when you want. They happen when
> you have time -- which is usually long after you really want -- or when
> external constraints force you into it -- which is usually just when you
> don't want to be bothered. For example, few people run backups half as
> often as a centrally-administered system run by professionals does.
> A good many of them live to regret it.
> --
> MSDOS, abbrev: Maybe SomeDay | Henry Spencer at U of Toronto Zoology
> an Operating System. | uunet!attcan!utzoo!henry he...@zoo.toronto.edu

I don't quite understand why people believe that a centralized compute
server will be serviced by a single administrator (or administrative
facility), but that individual workstations on a network will have
to be administered by individual users.

Here at Oracle, each programmer has their own workstation. Programmers
mount file systems across the network as needed, so sharing resources
is trivial. File system backups are done by a central administrative
facility, so programmers don't have to worry about backups. The central
administrative facility is also responsible for updating software on
the various systems.

This technique works fairly well despite the extreme heterogeneity of our
network.

-- Chuck

Eric S. Raymond

unread,
Mar 23, 1990, 9:36:45 AM3/23/90
to
In <11...@nlm-mcs.arpa> David States wrote:
> What will happen when the nets are 10x faster, disks 10x cheaper etc.

Psychologically, what people want is to own lots of resources. What they
can have is constrained by economics. X terminals won't cut it long-term
because when disks get 10x cheaper the pressure for lots of `owned' local
storage will mount. With fiber optics coming in, net bandwidth should cease
to be a concern long before that.
--
Eric S. Raymond = er...@snark.uu.net (mad mastermind of TMN-Netnews)

M.R.Murphy

unread,
Mar 23, 1990, 10:16:54 AM3/23/90
to
In article <2...@emdeng.Dayton.NCR.COM> hr...@emdeng.UUCP (George.H.Harry.Rich) writes:
>In article <2...@van-bc.UUCP> s...@van-bc.UUCP (Stuart Lynne) writes:
>>In article <5...@sibyl.eleceng.ua.OZ> i...@sibyl.OZ (Ian Dall) writes:
>>>In article <1990Mar19.2...@world.std.com> b...@world.std.com (Barry Shein) writes:
>>>>
>>
>...
>I'll have to admit that with a different kind of work pattern, I might prefer
>the really fast shared system, but for most environments availability
>rather than compute power is the issue.

What I'd like to do is, see, take 200 16' outboard boats, and like, tie 'em
together with cable, and like use 'em to carry cargo across the Atlantic...

Or maybe, next time I want to fish in a local lake, why, what I'd like to
do is use a tanker like say the size of the Valdez, so I wouldn't have to like
row from spot to spot to try my luck and skill. I could just walk down the
deck...

Sheesh.

One would think that this subject has been beaten to death, and that it is
probably more suitable in comp.misc.
--
Mike Murphy Sceard Systems, Inc. 544 South Pacific St. San Marcos, CA 92069
m...@Sceard.COM {hp-sdd,nosc,ucsd,uunet}!sceard!mrm +1 619 471 0655

Dirk Grunwald

unread,
Mar 23, 1990, 4:10:51 PM3/23/90
to

I believe that the fair-share scheduler was written by Ray Essick, now
at Motorola in Schaumburg (hi Ray) & will appear in the USENIX
proceedings.

He had some graphs showing the percentage of CPU allotted over time, both
``before'' and ``after'' fair share. The ``before'' looked like
something from a chaos theory text, while the ``after'' looked like a flat
line.
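
For readers who haven't met one: the core of a fair-share scheduler can
be sketched in a few lines of C. This is a hypothetical illustration of
the usage-decay idea, not Essick's actual code; the names and the decay
constant are made up.

#include <stdio.h>

struct group {
    double usage;   /* decayed, accumulated CPU time */
    double share;   /* fraction of the machine the group is entitled to */
};

/* Called once per scheduling interval: gradually forget old usage. */
void decay_usage(struct group g[], int n, double decay)
{
    for (int i = 0; i < n; i++)
        g[i].usage *= decay;
}

/* Larger value == scheduled later; heavy groups get pushed back. */
double sched_penalty(const struct group *g, double base)
{
    return base + g->usage / g->share;
}

int main(void)
{
    struct group g[2] = { { 100.0, 0.5 }, { 0.0, 0.5 } };  /* group 0 hogged */
    decay_usage(g, 2, 0.9);
    printf("penalty g0 = %.1f, g1 = %.1f\n",
           sched_penalty(&g[0], 10.0), sched_penalty(&g[1], 10.0));
    return 0;
}

Over successive intervals the hog's penalty decays back toward the base,
which is what flattens the "after" graph.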

MINICH ROBERT JOHN

unread,
Mar 23, 1990, 5:15:25 PM3/23/90
to
From article <18...@boulder.Colorado.EDU>, by wal...@boulder.Colorado.EDU (WALLWEY DEAN WILLIAM):

> As far as the attack of the killer X-terminals, there really is only one
> reason why there even are such things---
> X-Windows SUCKS---
>
> University. I can't wait until I get OS/2. OS/2 is supposed to be
> faster than even MSWindows. You can just imagine how fast it would be

From what I've seen, Windows is such a dog on just about anything but
a 486 that I would not consider it "responsive" in any sense. That's
one thing Apple has managed to do that I've not seen anyone else come
close to. It's responsive. With a GUI, if it's even a tad bit slow it's
maddening!

> Another feature of OS/2 that will come out next year is that it will have
> a "Page Desciption Language" built in that compares with PostScript
> --(why do you think APPLE corp dropped its Adobe stock and formed a

> co-development agreement with MicroSoft---they saw the "writing on the wall").

Ah, here's that performance thing again. PS is not the most responsive
thing around. Ever play with a NexT (NEXT nEXT nExT ???)? A good example
of a pretty darn good idea that (IMHO) is a big, lazy dawg. Mostly
unusable. Sure, you say, but "PS will be better when we have CPUs with
enough horsepower to MAKE it fast!" Yeah, and my Mac that is already
fast will still be orders of magnitude faster. (Ever do Display PS in 24
bit color? Care to think what it would be like if you could?) Apple
decided to buck Adobe for two reasons: 1) they were pressured too hard
by Adobe to use DPS and 2) Adobe was too tight with PS technology.
Notice any coincidence in Adobe's release of the Type 1 font formats and
the MS/Apple alliance? Adobe just happens to be the first game in town to
gain wide acceptance. They also got cocky with licensing. Now they're
getting competition and they have to rework their entire role as a
company. (Lower prices, more openness. Wow: Apple gets someone to be
more open! :-)

> This feature is completly device independent and is also used in displaying
> graphics to get the the highest quality "WYSIWYG" like Next's Display
> Postscript. This means that also an OS/2 shop can use $1000 laser

> Dean Wallwey

Unfortunately, just being device-independent is NOT necessarily the
best thing for a display model. Especially with GUIs, response is
probably MORE important than 1:1 WYSIWYG. PS is great and all, but it's
too damn slow for graphics-intensive work. Great for page display with a
printer, and maybe even as a _preview_ capability for on-screen work,
but no good (right now) for interactive work.

Ah, and now for OS/2: the beast that/will?/may-well be pretty good.
Right now, PC hardware is too slow (for my tastes) to run OS/2 at an
acceptable level. Maybe in a couple of years, but people aren't sitting
on their duffs in the meantime. I think IBM may get a shock as the UNIX
world spreads higher and lower at the same time. (Maybe they'll work
more with stuff like their new RISC machines instead!)
So you got me to ramble on. Good job, not everyone can do that. But
call me back when Windows and OS/2 become usable beasts. Then I'll have
to check what everyone else has come up with in the meantime. Somehow I
see OS/2 in a VERY precarious position. Maybe it's time to open up.
(UNIX...about as open as one gets.)

Robert Minich
min...@a.cs.okstate.edu

My hands and my mind aren't speaking today, so cut off my fingers
instead of suing me. :-)

Joshua Osborne

unread,
Mar 24, 1990, 2:00:41 AM3/24/90
to
In article <10...@cbmvax.commodore.com> daveh@cbmvax (Dave Haynie) writes:
[...]
>I use two non-UNIX systems with pre-emptive multitasking -- Apollos (under
>Aegis, or DomainOS, or whatever they call it these days) and Amigas. Both
>of these systems, especially the Amiga, are extremely responsive. In fact,
>moreso than the Mac. For example, on the Amiga, the main things governing
>user-interaction, such as mouse and keyboard response, are interrupt driven
>and managed by a high priority task. The user interface also runs at a
>higher priority than the average user task. So when you start that 64k x
>64k spreadsheet to recalculating, you don't have the mouse drop dead, and
>you can still move windows around.
[...]
>The real moral of the story is that operating systems originally designed
>for multi-user operation with users hooked in via serial line text
>terminals may not provide the best feel when adapted for use as the
>operating system for GUI based, single-user workstations. At least not
>without a great deal of rethinking, which apparently hasn't yet been
>completed by most of the folks building these systems.

Have you ever run X on a Sun with the server's priority jacked up a bit?
We do it all the time here (well, whenever we run X at least :-), and it
*feels* much faster. I'm sure it's not faster on a 3/60, or a 4/60. I'm
sure it's *slower* on a 3/50 (memory is tight, and the swapper may really
want to swap entire processes, but the priority of the large X program
gets calculated in and it doesn't get swapped out real often, paged yes,
swapped no...).

More thought may help, but just a little thought will go a long way :-)
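
For the curious, "jacking up the priority" is just a renice. A minimal C
equivalent, assuming a BSD-style setpriority(2) as found on SunOS; the
-5 boost is an arbitrary choice, and negative nice values need root:

#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    pid_t pid;

    if (argc != 2) {
        fprintf(stderr, "usage: %s server-pid\n", argv[0]);
        return 1;
    }
    pid = (pid_t)atoi(argv[1]);
    /* boost the X server: lower "nice" value == higher priority */
    if (setpriority(PRIO_PROCESS, pid, -5) < 0) {
        perror("setpriority");
        return 1;
    }
    return 0;
}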
--
str...@eng.umd.edu "Security for Unix is like
Josh_Osborne@Real_World,The Mutitasking for MS-DOS"
"The dyslexic porgramer" - Kevin Lockwood
Real Programs don't use shared text. Otherwise, how can they use
functions for scratch space after they are finished calling them?

Barry Shein

unread,
Mar 24, 1990, 7:16:37 PM3/24/90
to

I think if you read over my comments and Ian Dall's well-put
counter-comments the issue becomes clear:

Shared resources are computer-efficient

Single-user systems are people-efficient

So you make your choices, and that's not as snide as it may look,
sometimes one or the other does take precedence in a trade-off, for
good reasons.

I personally wouldn't want to try to run MasterCard's database on a
bunch of killer micros, for example (tho they might act as
workstations to a killer mainframe.)

I think the paradigm is somewhat broken anyhow. What most of us
*really* need are capable personal machines attached cleverly into
network webs containing hot-spots of centralized resources,
information servers, super-computers, databases etc.

But balancing these will be the artform of the 90's. That's what this
conversation is really about.

In ten years fascination with computers will be akin to being
fascinated by the telephone on your desk while ignoring that its main
utility is its ability to attach to the little wire that runs to the
wall.
--
-Barry Shein

Software Tool & Die | {xylogics,uunet}!world!bzs | b...@world.std.com
Purveyors to the Trade | Voice: 617-739-0202 | Login: 617-739-WRLD

Barry Shein

unread,
Mar 24, 1990, 7:34:40 PM3/24/90
to

Although X terminals are wonderful things I suspect they are doomed to
Sutherland's Cycle of Reincarnation.

Who will be the first X terminal manufacturer to put a local disk on
their terminal? Disks are getting cheap as heck.

Who will be the first to then run some "small" applications locally?

Who will be the first to realize you can run more than one diskless X
terminal off of a diskful one? Well, only if you put a little more
memory and disk on it...maybe remove the screen on the server.

Etc.

Barry Shein

unread,
Mar 25, 1990, 4:01:45 PM3/25/90
to

From: s...@van-bc.UUCP (Stuart Lynne)
>You seem to be contradicting yourself. If net bandwidth ceases to be a
>concern then it reduces the need for having a local disk. (I couldn't
>imagine using NFS over Ethernet as swap space, much nicer to have a cheap
>local SCSI! But at fiber optic speeds? )

Why not? You do realize that a faster, remote disk can be *faster*
over ethernet?

Cheap SCSI disks often get (much) less than 100KB/sec, well within
ethernet specs (a 10Mbit/s ethernet is roughly 1.25MB/s raw, so that's
less than 10% of the wire, max., and it's not like you should be doing
that constantly.) Remote disks, say IPI or SMD2, get around
500...1000KB/s or more.

Which means it's not at all hard for a remote disk to be faster than a
local SCSI. All this was measured and reported by Sun back when they
first did their ND stuff. Granted SCSI has gotten a little faster, but
not enough to flip that equation, other things being equal.

Now, if you're talking about very fast, expensive SCSI disks well
sure, or if you're talking about overloaded ethernets then sure.

But you didn't say any of that. A reasonably loaded ethernet (say, ten
diskless workstations with sufficient memory and a fast server with
fast disks) can perform better than a bunch of local, inexpensive scsi
disks.

So there's no hard truth in what you say, it all depends on
configuration and workload.

Beyond swapping, other disk operations can be significantly faster if
remote when we assume similar constraints (that remote, server systems
are faster.) Part of this is also due to the inherent parallelism in
the two systems (client/server), e.g., the remote system is looking up
name strings while you switch back to another process locally; both
systems are now computing on your workload.

Yes, this has been measured, no, it's not a universal truth and your
mileage may vary. But you might ask someone from Sun for the papers on
all this.

They didn't come up with these systems and then rationalize them, they
measured the possibility and then decided they'd work well enough and
went ahead and built them.

Times have changed a little, but diskless systems still have their
economies, particularly if you can keep your ethernets lightly loaded
(which is not hard to do in a lab environment, for example.)

Jack Jansen

unread,
Mar 25, 1990, 4:39:37 PM3/25/90
to
In article <10...@lupine.UUCP> e...@lupine.UUCP (Ed Basart)
gives some advantages of X terminals over diskless workstations.

There's one *very important* advantage he misses, though:
the expected useful lifetime of an X terminal is about 2-3 times
longer (unless serious changes take place in the user interface
business).

If you replace one person's Sun N by a Sun N+1, everybody will be
wanting one, and you'll find yourself buying everyone a new machine
every two or three years. However, for the last ten years a monochrome
1000x800 screen was what everybody had on their desk, and even
though colour was available on the very first workstations, hardly
anyone used it except for the people who had a real need for it (like
the CAD folks). Only now is colour becoming more-or-less standard.

But then, I'm not unbiased either: X terminals fit beautifully in
the Amoeba view of the world. Now if only they were easily programmable...

--
Een volk dat voor tirannen zwicht | Oral: Jack Jansen
zal meer dan lijf en goed verliezen | Internet: ja...@cwi.nl
dan dooft het licht | Uucp: hp4nl!piring!jack

Jon

unread,
Mar 25, 1990, 8:52:34 PM3/25/90
to
b...@world.std.com (Barry Shein) writes:

> Shared resources are computer-efficient
> Single-user systems are people-efficient

Perhaps if we ignore all collaborative work it really is that
simple. But most people don't do work separately and independently
from all other work done by all other people, any more than
most programs can be decomposed into arbitrarily small units of
work that can execute in parallel without waiting for previous
results. Computers make great computation machines, but are also
commonly used as tools to enter, store, and distribute information.
This is *not* sharing hardware for economic reasons. It's sharing
information because that *is* the function to be performed. The
resources used are almost certainly going to be a collection of
privately and collectively owned hardware and software.

-- Jon
--
Jonathan Krueger jkru...@dtic.dla.mil uunet!dgis!jkrueger
The Philip Morris Companies, Inc: without question the strongest
and best argument for an anti-flag-waving amendment.

Charlie Sauer

unread,
Mar 25, 1990, 10:51:27 AM3/25/90
to
In article <1990Mar25....@world.std.com> b...@world.std.com (Barry Shein) writes:
>Who will be the first X terminal manufacturer to put a local disk on
>their terminal? Disks are getting cheap as heck.
>
>Who will be the first to then run some "small" applications locally?

Since PCs configured to be X terminals are now being offered, it could be
said that such beasts already exist. I'm referring to the announcement of
the "Dell Station Partner" (AKA "pardner" in these environs), but other
PCs can be configured to do the same thing. I'm assuming here that the
server and applications run under DOS, but there's nothing to say that
the same hardware (an inexpensive 386SX) couldn't run Unix with enough
disk and memory added. (A 20MB disk and 2MB of RAM are probably typical
for a PC used as an X terminal.)
--
Charlie Sauer Dell Computer Corp. !'s:uunet!dell!sauer
9505 Arboretum Blvd @'s:sa...@dell.com
Austin, TX 78759-7299
(512) 343-3310

Gregory G. Woodbury

unread,
Mar 25, 1990, 12:55:08 AM3/25/90
to
In article <14...@cbnewsc.ATT.COM> bw...@cbnewsc.ATT.COM <Bruce.F.Wong> writes:
>Staplers and tape dispensers can't be pushed across a wire or fiber
>so sharing is very inconvenient, but computing power can.
:
>The only pieces of equipment that
>a computer user should be allowed to physically abuse are those that are
>needed for interaction with the computing network: display, keyboard,
>mouse; essentially human I/O devices.

Actually, computing power which *can* be shared *isn't*.

To bring in harsh reality, our network of killers has an average
utilization on the order of 2% over any given 48-hour period. On the
shorter scale of 24 hours, the utilization can approach 90% from time
to time.

I would practically *kill* to get my hands on an implementation of
Linda or some means of transparently sharing the computing resources that
my site has available in abundance. (We have 5 88Ks, a Clipper, a 32332
and a whole passel of Macs.) We have figured that most of our main
number-crunching applications could be fruitfully recast into some form
of tuple space or modularly parallel coding; I just can't locate a good
source for attempting to implement a Linda-like environment. Any
pointers appreciated.
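
In the meantime, the flavor of Linda is easy to convey. Below is a toy,
single-process sketch of a tuple space in C: out() deposits a tuple,
in() withdraws one by matching. It is illustrative only -- the names and
layout are made up, not from any real Linda kernel, and a real one would
distribute the space across machines and block in() until a match
arrives.

#include <stdio.h>
#include <string.h>

#define MAXTUPLES 128

struct tuple { char tag[16]; int value; int live; };
static struct tuple space[MAXTUPLES];   /* zeroed at program start */

/* deposit ("out") a tuple into the space; 0 on success */
int out(const char *tag, int value)
{
    for (int i = 0; i < MAXTUPLES; i++)
        if (!space[i].live) {
            strncpy(space[i].tag, tag, sizeof space[i].tag - 1);
            space[i].value = value;
            space[i].live = 1;
            return 0;
        }
    return -1;  /* space full */
}

/* withdraw ("in") a tuple matching tag; 0 on success */
int in(const char *tag, int *value)
{
    for (int i = 0; i < MAXTUPLES; i++)
        if (space[i].live && strcmp(space[i].tag, tag) == 0) {
            *value = space[i].value;
            space[i].live = 0;   /* in() removes the tuple */
            return 0;
        }
    return -1;  /* no match; a real Linda would block here */
}

int main(void)
{
    int v;
    out("work", 42);             /* a master deposits work */
    if (in("work", &v) == 0)     /* a worker withdraws it */
        printf("got %d\n", v);
    return 0;
}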

(BTW: I'd share some of my extra cycles with the Internet but the
University is being real backward about getting my disconnected internet
connected to the real internet!)
--
Pre-signature Work identification for informational purposes only:
System Programmer/System Manager
Center for Demographic Studies, Duke University
--
Gregory G. Woodbury
Sysop/owner Wolves Den UNIX BBS, Durham NC
UUCP: ...dukcds!wolves!ggw ...dukeac!wolves!ggw [use the maps!]
Domain: g...@cds.duke.edu g...@ac.duke.edu ggw%wol...@ac.duke.edu
Phone: +1 919 493 1998 (Home) +1 919 684 6126 (Work)
[The line eater is a boojum snark! ] <standard disclaimers apply>

Joshua Osborne

unread,
Mar 26, 1990, 1:09:34 AM3/26/90
to
In article <1990Mar25....@world.std.com> b...@world.std.com (Barry Shein) writes:
>Although X terminals are wonderful things I suspect they are doomed to
>Sutherland's Cycle of Reincarnation.
>[...]

>Who will be the first to then run some "small" applications locally?
DEC, I think. I heard that it can run an xterm with an rlogin or telnet in it
(or a dxterm), and that there might be a few other X things it can run by itself
(hopefully an xlock, maybe an xclock...)
>[...]

--
str...@eng.umd.edu "Security for Unix is like
Josh_Osborne@Real_World,The Mutitasking for MS-DOS"
"The dyslexic porgramer" - Kevin Lockwood
"Don't try to change C into some nice, safe, portable programming language
with all sharp edges removed, pick another language." - John Limpert

Ronald G Minnich

unread,
Mar 26, 1990, 3:21:41 PM3/26/90
to
In article <1990Mar22.2...@cs.rochester.edu> cr...@cs.rochester.edu (Lawrence Crowl) writes:
>My experience has been with the first Butterfly, based on the 68000. On this
>system, contention for the "inter-node" communication network was negligible.
>You are far more likely to limit performance because of contention for a
>specific memory module than the communication network. I expect (but do not
>know) that the same is true for the TC 2000.
I would be interested if anyone can expand on this.
I was talking to someone from RP3-land a few months back.
He said that they rarely if ever saw contention for the same
memory BANK, much less the same memory location.
This seems directly contradictory to what you are saying about the
Butterfly. Anybody wanna guess if the difference is in:
1) RP3 and Butterfly programming styles
2) ???
I am stumped by this one. Anybody have some thoughts?
thanks,
ron
--
rmin...@super.org

Donald Lindsay

unread,
Mar 26, 1990, 6:38:25 PM3/26/90
to
In article <22...@metropolis.super.ORG> rmin...@metropolis.UUCP (Ronald G Minnich) writes:
>In article <1990Mar22.2...@cs.rochester.edu> cr...@cs.rochester.edu (Lawrence Crowl) writes:
>>My experience has been with the first Butterfly, based on the 68000. On this
>>system, contention for the "inter-node" communication network was negligible.
>>You are far more likely to limit performance because of contention for a
>>specific memory module than the communication network.

>I was talking to someone from RP3-land a few months back.

>He said that they rarely if ever saw contention for the same
>memory BANK, much less the same memory location.

Memory contention on these machines is controlled by programming and
algorithm. The RP3 now maxes at 64 nodes: the BBNs somewhat higher.
If all N processors decide to refer to the same byte at once, then
there _will_ be contention. Hardware ("combining networks") to solve
this has been proposed, but to my knowledge never built.

So, the programmers try to avoid contention. Data is deliberately
spread out. If some things have to remain centralized, then the
programs try to refer to them as rarely as possible. (This is
connected to the so-called "grain size": interactions become rarer if
one can find "coarse grained parallelism".)
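
A concrete example of "spreading the data out", as a hedged sketch in C
(hypothetical code, but a standard trick): rather than N processors
bumping one shared counter, so that one memory module soaks up all the
traffic, give each processor its own padded slot and combine rarely.

#include <stdio.h>

#define NPROC   64
#define PADDING 16            /* words of padding to separate the slots */

long counts[NPROC][PADDING];  /* each processor touches only counts[p][0] */

void tally(int p)             /* hot path: traffic stays in p's own slot */
{
    counts[p][0]++;
}

long total(void)              /* rare, centralized combining step */
{
    long sum = 0;
    for (int p = 0; p < NPROC; p++)
        sum += counts[p][0];
    return sum;
}

int main(void)
{
    for (int p = 0; p < NPROC; p++)
        tally(p);
    printf("total = %ld\n", total());
    return 0;
}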

I'm glad to hear that the RP3 people have been successful at this.
Of course, there may be a certain amount of selection here: programs
which didn't easily parallelize would migrate elsewhere.
--
Don D.C.Lindsay Carnegie Mellon Computer Science

Craig Hughes

unread,
Mar 26, 1990, 10:13:40 PM3/26/90
to

In article <86...@pt.cs.cmu.edu>, lin...@MATHOM.GANDALF.CS.CMU.EDU
(Donald Lindsay) writes:
..........

|> So, the programmers try to avoid contention. Data is deliberately
|> spread out. If some things have to remain centralized, then the
|> programs try to refer to them as rarely as possible. (This is
|> connected to the so-called "grain size": interactions become rarer if
|> one can find "coarse grained parallelism".)

Well, what about fine grain parallelism? If you can get lots of
processes 'doing their thing' on a small amount of memory that only they
need be concerned with, then contention is minimal. I don't think the
grain size has much to do with how orthogonal your data references end
up being.

--------------------------------------------------------------------------------
Craig S. Hughes           UUCP: ...bbn!li...@hri.com
Horizon Research, Inc.    INET: li...@hri.com
Waltham, MA 02154
                 <- ------------- ->
--------------------------------------------------------------------------------

pfi...@pfister.austin.ibm.com

unread,
Mar 27, 1990, 10:56:06 AM3/27/90
to
There's a good reason why you can see memory contention on the Butterfly
and not on RP3. The Butterfly does not interleave memory banks:
addresses start at location 0 in bank 0, proceed through location N in
bank 0, then location N+1 is in bank 1, etc. RP3 interleaves: location 0
is in bank 0, location 1 is in bank 1, etc. If you allocate an array in
the normal way, it all ends up in one bank on the Butterfly; N
processors getting the first N elements then collide. That doesn't
happen on RP3.
Actually, it's not quite that simple...
First, BBN has a software package allowing you to inject
interleaving (semi-?) transparently, using an indirection table in a
bank of local memory. Works, but has obvious memory overhead - a
pointer per array element per processor accessing the array.
Second, RP3 really doesn't do a straight interleave; it applies a
"hash function" before interleaving (within each page) so that common
power-of-two strides (and many others) can't cause bad bank conflicts.
Third, RP3 actually lets you pick what you want: You can get pages
of memory either interleaved (for access by many processors) or
non-interleaved (hence can be all allocated close to one processor for
its private use without memory traffic).
Fourth, everywhere I said "location" in the first paragraph, read
"cache line."

Greg
-------------------------
Mine, not my employer's opinions.
I *think* my net address is
@cs.utexas.edu:ibmchs!auschs!pfister.austin.ibm.com!pfister
but am not sure.

Henry Spencer

unread,
Mar 27, 1990, 3:48:50 PM3/27/90
to
In article <1990Mar25....@world.std.com> b...@world.std.com (Barry Shein) writes:
> Shared resources are computer-efficient
> Single-user systems are people-efficient

What some of the shared advocates, including me, have been pointing out is
that the second assertion is not self-evidently true. Single-user systems
still need sysadmin effort, and that scales much more strongly with the
number of systems than with their size. How people-efficient is it to have
your brightest creative people running backups and installing software?
--
Apollo @ 8yrs: one small step.| Henry Spencer at U of Toronto Zoology
Space station @ 8yrs: .| uunet!attcan!utzoo!henry he...@zoo.toronto.edu

Chris Shaw

unread,
Mar 27, 1990, 9:42:15 PM3/27/90
to
In article lin...@nominil.lonestar.org (Mark Linimon) writes:

>In article he...@utzoo.uucp (Henry Spencer) writes:
>> I really don't understand this persistent myth that several dozen amateur
>> system administrators are better than one professional. If *only* the
>> user himself is affected, it doesn't make much difference, but that's
>> almost never the case in reality.
>
>I'll have to disagree with one of Henry's implicit assumptions here, which
>is that most organizations will supply such a "professional." In my
>experience with small and medium-size [engineering] companies, management
>does not feel that system administration is an undertaking that requires
>either time or personnel.

This is bad management if the number of machines is large enough. I suspect
that such management thinks that secretaries are unnecessary. In some sense
a small engineering firm doesn't need a secretary -- anybody can answer the
phone and collect the mail. But the problem isn't the straight cost of
engineers doing this, it's the opportunity cost. Most small engineering firms
probably wouldn't buy insurance under the same logic. Cost accounting for
computer system management hasn't kept up with reality, I expect.

>With a centralized system, one gets to do a whole group's worth of system
>administration. With a decentralized system, one gets to do one system's worth.

There's generally not too much difference between 2 and (say) 8 networked
suns. Half the problem is the silly attitude on the part of engineering
management at such places that system management scales linearly with the
number of CPU's. My experience with distributed amateur management is that
each individual faces the same set of system problems and solves them their
own way badly (1 "Effort Unit" each). If the problems were solved correctly,
maybe two to three times the effort of one person would be spent (2-3 EU's).
However, if there are 8 people for 8 suns, then that's a savings of 5
"Effort Units". Systems run by part-time managers are usually unreliable
on numerous counts.

The benefits of "my own workstation" are purely psychological.
One doesn't have to unexpectedly share the cpu resource with other people when
a critical job is being done. But this is clearly a waste of resources from
a purely bean-counting point of view. The question is "what's the value
(in $) of my unexpectedly taking twice as long to complete a compute job?"
In other words, if I submit a job to CPU C that takes 1 hour, and it ends up
taking 2 hours due to CPU contention, what's it worth to me to guarantee
that 1 hour jobs take exactly 1 hour? The point I'm trying to make is that
job completion unpredictability exacts a cost, and for large sun-style
networks, that cost would have to work out to be gigantic in order to
justify the investment.

What Eugene Brooks is saying is that compared to a central Killer Micro with
bags of I/O, the distributed solution is becoming more and more outrageous
cost-wise as the days go by.

>Assuming that management feels that it's a zero-effort activity, ....
>..you may be criticized for "wasting time".

I would say that the right answer ought to be "convince management that
system management is not a free-time activity", in the same way that
accounting, insurance, depreciation and secretaries are not free.

>I'm not saying this is right, just common, and I speak from repeated
>experience. Make mine decentralized.
>Mark Linimon / Lonesome Dove Computing Services / Southlake, Texas


--
Chris Shaw University of Alberta
cds...@cs.UAlberta.ca Now with new, minty Internet flavour!
CatchPhrase: Bogus as HELL !

Eugene Brooks

unread,
Mar 28, 1990, 12:36:47 AM3/28/90
to
In article <1990Mar28.0...@cs.UAlberta.CA> cds...@cs.UAlberta.CA (Chris Shaw) writes:
>What Eugene Brooks is saying is that compared to a central Killer Micro with
>bags of I/O, the distributed solution is becoming more and more outrageous
>cost-wise as the days go by.
Compared to a central box of Killer MicroS the distributed solution is becoming
more and more outrageous. The plurality is important here. One SYSTEM but
lots of Killer Micros in it.


bro...@maddog.llnl.gov, bro...@maddog.uucp

Randell Jesup

unread,
Mar 28, 1990, 1:10:31 AM3/28/90
to
In article <1990Mar25....@world.std.com> b...@world.std.com (Barry Shein) writes:
>
>Although X terminals are wonderful things I suspect they are doomed to
>Sutherland's Cycle of Reincarnation.
>
>Who will be the first X terminal manufacturer to put a local disk on
>their terminal? Disks are getting cheap as heck.
>
>Who will be the first to then run some "small" applications locally?

Or the first to realize that X runs faster on something like a
68000-based Amiga than on a 68020-based Sun, and is one h*ll of a lot
cheaper, is in color, and is more useful for non-X things than an X
terminal?

:-)

Disclaimer: I work for Commodore.
--
Randell Jesup, Keeper of AmigaDos, Commodore Engineering.
{uunet|rutgers}!cbmvax!jesup, je...@cbmvax.cbm.commodore.com BIX: rjesup
Common phrase heard at Amiga Devcon '89: "It's in there!"

Randell Jesup

unread,
Mar 28, 1990, 1:19:05 AM3/28/90
to
In article <1990Mar25.2...@world.std.com> b...@world.std.com (Barry Shein) writes:
>Why not? You do realize that a faster, remote disk can be *faster*
>over ethernet?
>
>Cheap SCSI disks often get (much) less than 100KB/sec, well within
>ethernet specs (less than 10% of an ethernet, max., not like you
>should be doing that constantly.) Remote disks, say IPI or SMD2, get
>around 500...1000KB/s or more.

The only disks even close to that slow are ancient 20-meg drives, and
some drives hooked up to protocol converters (Adaptec/Omti/etc). Fairly
cheap drives (like Quantum 40S's) get ~400K/s write, ~700K/s read (admittedly
under AmigaDos, but the drive is NOT a limiting factor). This is a 40Meg
drive, street price (I think) in the 350-450 range.

>Which means it's not at all hard for a remote disk to be faster than a
>local SCSI. All this was measured and reported by Sun back when they
>first did their ND stuff. Granted SCSI has gotten a little faster, but
>not enough to flip that equation, other things being equal.

That was _ages_ ago. SCSI hasn't gotten too much faster, but the
things attached to SCSI have gotten a LOT faster. Part of Sun's problem
is their SCSI drivers, the other part is the Unix FS/disk buffering schemes
(see the old discussion here about FS speeds).

John Mellor-Crummey

unread,
Mar 27, 1990, 11:14:19 AM3/27/90
to
In article <22...@metropolis.super.ORG> rmin...@metropolis.UUCP (Ronald G Minnich) writes:
>In article <1990Mar22.2...@cs.rochester.edu> cr...@cs.rochester.edu (Lawrence Crowl) writes:
>>My experience has been with the first Butterfly, based on the 68000. On this
>>system, contention for the "inter-node" communication network was negligible.
>>You are far more likely to limit performance because of contention for a
>>specific memory module than the communication network.

>I was talking to someone from RP3-land a few months back.


>He said that they rarely if ever saw contention for the same
>memory BANK, much less the same memory location.

The RP3 possesses several features (that the 68000-based Butterfly lacks)
that serve to reduce memory bank contention:

1) caches -- these can be used to cache data in both local and global memory
(coherence must be maintained in software). Simple caching of read-only data
reduces the amount of memory and switching network traffic/contention.

2) hardware support for global memory interleaving. in the 68000-based
Butterfly, a programmer must scatter data manually in software. without
interleaving, parts of important data structures tend to occupy the same
memory bank, and thus attract many accesses which result in contention.

To relate this all to the TC2000: the TC2000 has both of these features, so
memory bank contention should be less of a problem than in the original
68000-based Butterfly.
--
John Mellor-Crummey Center for Research on Parallel Computation
joh...@rice.edu Rice University, P.O. Box 1892
713-285-5179 Houston, TX 77251

Larry Kaplan

unread,
Mar 27, 1990, 11:36:41 AM3/27/90
to
In article <86...@pt.cs.cmu.edu> lin...@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay) writes:
>In article <22...@metropolis.super.ORG> rmin...@metropolis.UUCP (Ronald G Minnich) writes:
>>In article <1990Mar22.2...@cs.rochester.edu> cr...@cs.rochester.edu (Lawrence Crowl) writes:
>>>My experience has been with the first Butterfly, based on the 68000. On this
>>>system, contention for the "inter-node" communication network was negligible.
>>>You are far more likely to limit performance because of contention for a
>>>specific memory module than the communication network.
>
>>I was talking to someone from RP3-land a few months back.
>>He said that they rarely if ever saw contention for the same
>>memory BANK, much less the same memory location.
>
>Memory contention on these machines is controlled by programming and
>algorithm. The RP3 now maxes at 64 nodes: the BBNs somewhat higher.
>If all N processors decide to refer to the same byte at once, then
>there _will_ be contention...

>...
>So, the programmers try to avoid contention. Data is deliberately
>spread out.

I would strongly agree with this last comment about the layout of the data.

Note that the TC2000 has a very advanced switching network that attempts
to deal with contention in the network in various ways. First, some
configurations of the machines contain alternate paths through the network
so that different requests can use different switching elements to reach
the destination memories. This does not help if the destination memory
is actually busy, though it generally increases the throughput of the network
by avoiding contention within it. The GP1000, or Butterfly-1, also has
this feature.

Next, various retry strategies exist (selectable by the O/S) to reduce the
effect of contention. The current favorite method consists of retrying
requests using a "random exponential backoff" strategy. This means that the
nth retry is made after a random delay of 1 to 2^n cycles. Different
retry strategies are available and used for "locked accesses".
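
The backoff arithmetic itself is tiny. Here is a sketch of the policy as
described above (illustrative C, not BBN's code -- the real policy lives
in the switch hardware and O/S):

#include <stdio.h>
#include <stdlib.h>

/* Delay, in cycles, to wait before retry number n: uniform in 1..2^n. */
unsigned backoff_delay(unsigned n)
{
    unsigned window = 1u << (n < 31 ? n : 31);   /* 2^n, clamped */
    return 1 + ((unsigned)rand() % window);
}

int main(void)
{
    srand(1);
    for (unsigned n = 1; n <= 5; n++)
        printf("retry %u: wait %u cycles\n", n, backoff_delay(n));
    return 0;
}

The first retry waits 1..2 cycles, the fifth 1..32, and so on;
randomizing the window keeps colliding processors from retrying in
lock-step against the same memory.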

I have done some informal studies of hot spot behavior and have found that in
the case of 32 nodes trying to read one node's memory, over 55% of the requests
are satisfied with virtually no contention. Of the rest, only about 3%
required more than a couple of retries. No request required more than about 10
retries.

To reiterate, the programmer has significant control over how often such a bad
memory hot spot occurs by laying out the data structures properly. The
switching network takes care of switch contention fairly well by itself.

#include <std_disclaimer>
_______________________________________________________________________________
____ \ / ____
Laurence S. Kaplan | \ 0 / | BBN Advanced Computers
lka...@bbn.com \____|||____/ 10 Fawcett St.
(617) 873-2431 /__/ | \__\ Cambridge, MA 02238
