
Have we learned anything in the last 20 years?


Andy Tanenbaum

Apr 3, 1992, 2:45:33 PM

I was wondering if we have learned anything about distributed systems in the
last 20 years. I mean, are there any statements we can make that are largely
accepted in the computer science research community?

Before going further, let me say what I mean by a distributed system. It is
a collection of independent computers that do not share primary memory (i.e.,
NOT a shared memory multiprocessor) but which act to the user like a single
computer (single system image). By this definition, NFS, Andrew, the Sequent,
and a lot of things are not distributed systems. Only a few such systems
exist, and they are largely research prototypes.

I would venture to state that the following statements are now considered
true by the majority of researchers in this area. My question is, can people
think of any more? For fun, I have a second category of statements that I
consider controversial rather than accepted as true. Suggestions here, too,
are welcome.

GENERALLY ACCEPTED AS TRUE BY RESEARCHERS IN DISTRIBUTED SYSTEMS
- The client-server paradigm is a good one
- Microkernels are the way to go
- UNIX can be successfully run as an application program
- RPC is a good idea to base your system on
- Atomic group communication (broadcast) is highly useful
- Caching at the file server is definitely worth doing
- File server replication is an idea whose time has come
- Message passing is too primitive for application programmers to use
- Synchronous (blocking) communication is easier to use than asynchronous
- New languages are needed for writing distributed/parallel applications
- Distributed shared memory in one form or another is a convenient model


STILL HIGHLY CONTROVERSIAL
- Client caching is a good idea in a system where there are many more
nodes than users, and users do not have a "home" machine (e.g., hypercubes)
- Atomic transactions are worth the overhead
- Causal ordering for group communication is good enough
- Threads should be managed by the kernel, not in user space

Please post replies rather than sending them to me. It should make for an
interesting discussion. (I bet if someone did this for high energy physics
or DNA research, there would be a lot more agreement than among computer
scientists.)

Andy Tanenbaum (a...@cs.vu.nl)

Martin Fouts

Apr 3, 1992, 4:14:47 PM
In article <32...@darkstar.ucsc.edu>, a...@cs.vu.nl (Andy Tanenbaum) writes:
|>
|> I was wondering if we have learned anything about distributed systems in the
|> last 20 years. [...]
|>
|> It [a distributed system] is

|> a collection of independent computers that do not share primary memory
|> [...] but which act to the user like a single computer (single system image
|>
|> [...]

|>
|> I would venture to state that the following statements are now considered
|> true by the majority of researchers in this area. My question is,
|> can people think of any more?

I would like to suggest that although you might obtain "general
acceptance" of the following observations, there is certainly a vocal
minority opinion about some of them which suggests they might not be
completely accurate:

|>
|> GENERALLY ACCEPTED AS TRUE BY RESEARCHERS IN DISTRIBUTED SYSTEMS
|> - The client-server paradigm is a good one

For certain computations and certain applications, but not as the only
paradigm available.

|> - Microkernels are the way to go

The only thing we all agree about microkernels is that they should
have a small set of semantics. We don't even really have a consensus
on what belongs in that set, and we certainly don't have an extensive
enough experience base from which to derive generalizations.

|> - UNIX can be successfully run as an application program

Yes, but so what? Can it successfully be run as an application
program in a way which makes it appear to be the single system image
of a distributed system?

|> - RPC is a good idea to base your system on

Unless you have a lot of peer to peer communication inherent in the
problem you are trying to solve.

--
Martin Fouts | M/S 1U-14
EMAIL: fo...@hpl.hp.com | HP Laboratories
PHONE: (415) 857-2971 | 1501 Page Mill Road
FAX: (415) 857-8526 | Palo Alto, CA 94304-1126

Nothing Left to Say (c)

Ken Birman

Apr 3, 1992, 4:26:42 PM
In article <32...@darkstar.ucsc.edu> a...@cs.vu.nl (Andy Tanenbaum) writes:
>
>I was wondering if we have learned anything about distributed systems in the
>last 20 years. I mean, are there any statements we can make that are largely
>accepted in the computer science research community?
>

How about:

Generally accepted:
Distributed programming support requires some form of process group
mechanism.


--
Kenneth P. Birman E-mail: k...@cs.cornell.edu
4105 Upson Hall, Dept. of Computer Science TEL: 607 255-9199 (office)
Cornell University Ithaca, NY 14853 (USA) FAX: 607 255-4428


Stephen P Spackman

Apr 3, 1992, 8:16:13 PM
In article <32...@darkstar.ucsc.edu> a...@cs.vu.nl (Andy Tanenbaum) writes:

|GENERALLY ACCEPTED AS TRUE BY RESEARCHERS IN DISTRIBUTED SYSTEMS
| - The client-server paradigm is a good one

| - Microkernels are the way to go

| - UNIX can be successfully run as an application program

| - RPC is a good idea to base your system on

| - Atomic group communication (broadcast) is highly useful
| - Caching at the file server is definitely worth doing
| - File server replication is an idea whose time has come
| - Message passing is too primitive for application programmers to use
| - Synchronous (blocking) communication is easier to use than asynchronous
| - New languages are needed for writing distributed/parallel applications
| - Distributed shared memory in one form or another is a convenient model

The interesting thing about this list is that it is almost exactly the
operating systems image of the direction of the languages community:
we (some of us) aren't clear if we want lazy functional or OO yet, but
we want clear encapsulation, small objects, optimisation at the server
end, and client-invisible synchronisation.

The only quibble I have is with "new languages": I think a better
position is that "new languages are needed", stop, and it's a great
pity that the goal of a really generally useful language seems to have
died.

Questions, however, about the second group - more on the nature of my
not understanding them, I think.

|STILL HIGHLY CONTROVERSIAL
| - Client caching is a good idea in a system where there are many more
| nodes than users, and users do not have a "home" machine (e.g., hypercubes)

Wouldn't client caching ideally be unavoidable, transparent, and just
part of the "way of things"? Once you have a copy of an object, isn't
the decision to keep it or lose it best based on the ratio of the cost
to keep it to the cost to retrieve it again, and be done? I'm unable to
see a distinction between what "make" does and caching in its most
general sense, and if you actually went ahead and unified them (with
suitable models for communications costs, local storage and CPU demand,
and synchronisation overhead, of course) wouldn't the whole issue just
quietly evaporate?
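
A minimal sketch, in C, of the cost-ratio rule being suggested here: keep a
cached copy only while re-fetching it would be expected to cost more than
holding on to it. The cost model, the constants, and the names are invented
purely for illustration; a real client cache would measure these.

/* Keep a cached object while the expected cost of fetching it again
 * exceeds the cost of continuing to hold it locally. */
#include <stdio.h>

struct cache_entry {
    const char *name;
    long   size_bytes;        /* local storage the copy consumes           */
    double reuse_probability; /* estimated chance of another access soon   */
    double refetch_ms;        /* cost of getting it from the server again  */
};

/* Hypothetical per-byte cost of keeping data cached (storage pressure). */
#define KEEP_COST_PER_BYTE_MS 0.00005

static int worth_keeping(const struct cache_entry *e)
{
    double keep_cost    = e->size_bytes * KEEP_COST_PER_BYTE_MS;
    double refetch_cost = e->reuse_probability * e->refetch_ms;
    return refetch_cost > keep_cost;  /* keep iff re-fetching looks dearer */
}

int main(void)
{
    struct cache_entry hot  = { "libc.a",   500000,  0.90,  80.0 };
    struct cache_entry cold = { "core.old", 8000000, 0.01, 120.0 };

    printf("%s: %s\n", hot.name,  worth_keeping(&hot)  ? "keep" : "evict");
    printf("%s: %s\n", cold.name, worth_keeping(&cold) ? "keep" : "evict");
    return 0;
}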

| - Atomic transactions are worth the overhead

Hm. One certainly wishes they were, anyway! :-)

| - Causal ordering for group communication is good enough

Wait a minute; causal ordering is the one actually provided by
physics; clocks drift. Aren't the sensible alternatives sub-causal? Or
did you mean by "group communication" to limit causality enforcement
to declared domains, and allow causal violation between "groups"?

| - Threads should be managed by the kernel, not in user space

Threads are the virtual CPUs, and if they aren't properly
encapsulated, poof!, there goes your cross-architectural application
support!
----------------------------------------------------------------------
stephen p spackman Center for Information and Language Studies
ste...@estragon.uchicago.edu University of Chicago
----------------------------------------------------------------------

Ronald G Minnich

Apr 3, 1992, 11:43:04 PM
In article <32...@darkstar.ucsc.edu> a...@cs.vu.nl (Andy Tanenbaum) writes:
>GENERALLY ACCEPTED AS TRUE BY RESEARCHERS IN DISTRIBUTED SYSTEMS
> - The client-server paradigm is a good one
But not by all. Many think it is a headache. In fact, if any of you want
to ftp the OS white paper we have here at super.org, you will find that
we say a thing or two about the problems that the client/server model
imposes.

> - Microkernels are the way to go
We shall see. So far, many claims, but I still can't get a useful
OS based on a microkernel, despite many attempts.
A couple of random tidbits:
I still hear many horrifying stories about performance (not, of course,
from the advocates of uKernels, just from average users).
I should note that
I am hearing rumblings from various microkernel types about how
"maybe we need to slip <X> back into the kernel ..."
The Unix server process on OSF is, I have been told, now slated
to be a library. Seems the trampoline was expensive ...
One of the main features of the Chorus microkernel, to me, is that I
can bind something into the microkernel address space for performance
if needed.
Are the Sprite people still in business? They call their
system a "maxikernel". They are not believers in microkernels.
"Kernelizing didn't solve anything in the 1960s, and it won't solve
anything now"- My memory of a quote by Jim Gray, but I can look it up.

So I would say, for many reasons, the jury is out on uKernels.

> - UNIX can be successfully run as an application program

well, you didn't say fast :-)

> - RPC is a good idea to base your system on

Can you turn an RPC around in < 10 microseconds? If so, I will believe it.
If not, well, I could turn synchronization around on MemNet in about
that time. I bet it takes you 10 us just to marshal the arguments.
RPC may well strangle us as we go to gigabit/second networks.

> - Caching at the file server is definitely worth doing

Why not?

> - File server replication is an idea whose time has come

It came at least 3 years ago. That's how long we have been using it ...

> - Synchronous (blocking) communication is easier to use than asynchronous

Maybe this is accepted by some people some where, but is it harder
or just unfamiliar to people? NLTSS, a Cray OS, and AMIGADOS, an Amiga OS,
both have async I/O, and people seem to get along just fine.

> - New languages are needed for writing distributed/parallel applications

Absolutely. But new languages are needed for other reasons too :-)

> - Distributed shared memory in one form or another is a convenient model

Well, I say yes, for some applications. I can port shared-memory programs
to my cycle farm and eat Crays, performance-wise. And I don't even
need all the complexity of Ivy-style cache coherent memory (i.e. my consistency
is application-controlled, and has some other simplifying assumptions).
So, even in a simple form, it is worth it.


>STILL HIGHLY CONTROVERSIAL
> - Client caching is a good idea in a system where there are many more
> nodes than users, and users do not have a "home"
> machine (e.g., hypercubes)

Don't get the point here. Caching of files on nodes in a hypercube? There
aren't any disks? ???

> - Causal ordering for group communication is good enough

Well, I am told that commercial ISIS took out the causal part, ...
So maybe it is not even needed to that extent?

ron


Niranjan G Shivarat

Apr 4, 1992, 2:54:21 PM
In article <32...@darkstar.ucsc.edu> a...@cs.vu.nl (Andy Tanenbaum) writes:
>
>I was wondering if we have learned anything about distributed systems in the
>last 20 years. I mean, are there any statements we can make that are largely
>accepted in the computer science research community?
>
>STILL HIGHLY CONTROVERSIAL
> - Client caching is a good idea in a system where there are many more
> nodes than users, and users do not have a "home" machine (e.g., hypercubes)
> - Atomic transactions are worth the overhead
> - Causal ordering for group communication is good enough
> - Threads should be managed by the kernel, not in user space
>

>Andy Tanenbaum (a...@cs.vu.nl)

Maybe the following is also suitable for the above category.

There are many experimental distributed systems which are object-oriented.

Is there any consensus on whether object-oriented is the way or not the way to
develop distributed operating systems in the future?

On the other hand, is it the case that object-oriented is good under certain
conditions and non object-oriented is better under some others?

Dick Dunn

Apr 4, 1992, 11:04:21 PM
fo...@hplmf.hpl.hp.com (Martin Fouts) responds to a...@cs.vu.nl (Andy
Tanenbaum):
[ GENERALLY ACCEPTED AS TRUE BY RESEARCHERS IN DISTRIBUTED SYSTEMS ]
...

>|> - Microkernels are the way to go
>The only thing we all agree about microkernels is that they should
>have a small set of semantics...

That seems to be agreed in theory but not in practice...it doesn't even
seem to be agreed that microkernels should be small!

It is hard to tell whether the phenomenon of the "overweight microkernel"
results from bad design/implementation, or from some more theoretical flaw.
I do feel that there was a fad-like excess popularity of microkernels, now
swinging through the middle (and, who knows, perhaps destined for a period
of unpopularity).
--
Dick Dunn r...@raven.eklektix.com -or- raven!rcd Boulder, Colorado
...Simpler is better.

Stephen P Spackman

Apr 5, 1992, 4:22:56 AM
In article <32...@darkstar.ucsc.edu> ni...@laurel.cis.ohio-state.edu (Niranjan G Shivarat) writes:
|Maybe the following is also suitable for the above category.
|
|There are many experimental distributed systems which are object-oriented.
|
|Is there any consensus on whether object-oriented is the way or not the way to
|develop distributed operating systems in the future?
|
|On the other hand, is it the case that object-oriented is good under certain
|conditions and non object-oriented is better under some others?

Actually, I don't think this DOES belong as a question, because OO is
IMHO a slogan more than an idea. From a programming language
perspective, the thing that is wrong with Lisp is that it is untyped;
and the thing that is wrong with Smalltalk is that it is
"un-interfaced" - things can inherit from each other all they want,
but nobody is keeping track of the MEANING of a message selector
anymore, not even to the point of documenting it.

Operating systems work certainly needs encapsulated data that can
locate its own implementation (notions such as "file" and "driver"
come to mind) but we've ALWAYS had that. It ALSO needs - and this is
slow arriving - inheritance-based, centrally specified, interface
control (the thing the OO types are calling "types" apparently - in
the face of all reason, since they are classes as opposed to their
classes which are types, or something :-). But THAT is a development
that comes originally from the types community and not the OO
community at all, since it actually INHIBITS inheritance in
implementation by pinning everything down to a fixed interface in a
way that is only reasonable in the presence of parametric polymorphism
which for some reason is anathema to half the world.....

In sum, operating systems have ALWAYS been OO up to a point, and the
ideas for progressing beyond that point are ones that are coming INTO
and not OUT OF the OO community. IMHO, always IMHO.

Arindam Banerji

Apr 5, 1992, 5:33:04 AM

Although it is important to consider whether a micro-kernel or a
macro-kernel is the way to go, our research has consistently shown us
that an architectural view of abstractions is essential for any
distributed system.

What exactly do I mean by an architectural view?

It is important to consider the nature and choice of abstractions for a
distributed system, without worrying about implementation strategies
(at least up to a certain stage). This allows for various
implementations of the same abstractions, including micro-kernels
or "add-ins" to existing operating systems. Moreover, it allows multiple
implementations to co-exist, any of which may be used for a particular
application, e.g., a DSM abstraction that uses full replication or
partial replication depending upon the specific nature of the
application. Another similar example is the use of different coherency
mechanisms for different access patterns within the same application.

In addition, if multiple realizations of a set of abstractions can
interact, true heterogeneity can be achieved. For example, we have
demonstrated the ability of microkernel realizations (currently i386
based) of a DSM abstraction to co-operate with "add-in" realizations
of the same abstractions on OS/2 and Mach.

On the nature of such abstractions, we have also learnt that following
programming languages (NOT copying them) is usually a good idea.
Since most application and system software is written in high-level
languages, why not provide direct support for language-level
facilities? Whatever support is usually not provided directly by
languages (such as exception handling and synchronization in most
languages) should be provided in a form that can be easily
incorporated into a high-level programming model by the application
programmer. For example, distributed shared data may be a better scheme
than DSM, since the semantics of coherency in such a model allow for
both hardware-based pages and application-based data objects to be used
as units of coherency. Similarly, exception handling facilities that
could be used to support those needed by programming languages (I would
also include asynchronous exceptions here) are a better idea than
low-level signalling mechanisms.

This idea of taking a closer look at languages has also pushed some of
the state-of-the-art technology in distributed systems. Meta-objects
were first devised for programming languages, and support for reflection
was provided in a procedural language in the early 80's. This technology
has been applied very effectively by projects such as Muse to vastly
increase the flexibility of their abstractions.

Our approaches have thus taught us to design realization-independent
abstractions that closely model the support provided by high-level
languages.

-axb
Arindam Banerji
(a...@irishvm.pcl.nd.edu)
(219)-239-5273


Ken Birman

Apr 5, 1992, 1:56:32 PM
In article <32...@darkstar.ucsc.edu> rmin...@super.super.org (Ronald G Minnich) writes:
>
>> - Causal ordering for group communication is good enough
>Well, I am told that commercial ISIS took out the causal part, ...
>So maybe it is not even needed to that extent?
>

Actually, the commercial ISIS is still based on cbcast, although the main
user interface is the safer, slower abcast primitive. Internally, cbcast is
all we use; indeed, the new system we are developing at Cornell (==research)
only has direct support for causal multicast, and puts everything else
at the user level.

If it seems like you hear less from my group on this lately, it is because
we started to get the sense that the crowd that understands this point
has pretty much accepted it, and that the crowd that doesn't is actually
getting sort of hostile on the issue. So, we play it down.

I think this whole issue is tightly linked to one's attitude on RPC and on
asynchronous communication. If you believe in RPC and are willing to
pay OS overhead twice on every interaction (once for the message and
once for the reply, even if the procedure was of type "void"), you
probably won't see the merit of asynchronous communication. If you
would prefer to see your communication system as a sort of message
buffer pool, which might collect multiple messages into one packet and
otherwise amortize costs over many messages, you can drive the effective
cost of communication far lower than for RPC. For example, the current
commercial version of ISIS can send more than 1000 cbcasts per second
point to point over a UNIX release from SUN that supports about 300 null
RPC's per second. The system we are doing at Cornell now will be
quite a bit faster, but it uses a microkernel approach and won't run
directly on UNIX at these higher speeds.

A good analogy is with raw file systems (think "RPC") versus buffered file
systems (think "causal, asynchronous"). In fact, there is even a good
analogy for the causality issue there: remember when UNIX file systems
used to get scrambled on every crash? The problem was basically a
disk IO subsystem that didn't do physical disk writes in an order
consistent with causality. When they added ordering dependencies
to the buffered IO subsystem in UNIX, it became possible to recover
file systems automatically after crashes: an illustration that if you
believe in asynchronous IO, you had better preserve potentially causal
event orderings.

I tend to agree with the individual who saw RPC going under as networks
get faster. Actually, though, I think the key issue is the ratio of
processor speed to communication latency, or perhaps the processor
speed compared to the product of latency and throughput (a rough measure
of how many bytes away your destination is). The more "distant" your
destination, the better off you do with asynchronous communication and
some sort of asynchronous failure notification scheme. If the destination
seems close, on the other hand, shared memory makes more sense.

I guess this means that in the long run, if current hardware trends
continue, shared memory may not make much sense at all. In fact, if
you look at projects like DASH or MUNIN, with release consistency and
explicit mutex primitives wired to the cache, I think that shared memory
is getting more and more explicitly message-like. If anything, I would
suggest that shared memory is basically not a very good idea: it is too
different from the underlying hardware and hence you can't get adequate
performance without knowing exactly when messages will get transmitted.

On the other hand if you do go with asynchronous communication, you
pretty much have to worry about causal message ordering. Which
introduces a fair amount of complexity to your system.
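
A minimal sketch of the causal-delivery test that a cbcast-style layer has to
make, using vector clocks; this is the textbook mechanism, not ISIS's actual
protocol, and the message layout below is invented for illustration.

/* Deliver a message from a site only when every message that causally
 * precedes it has already been delivered here. */
#include <stdio.h>

#define NSITES 3

struct msg {
    int sender;              /* index of the sending site                 */
    int vc[NSITES];          /* sender's vector clock when it sent        */
};

/* Can 'm' be delivered at a site whose delivered-count vector is 'local'? */
static int deliverable(const struct msg *m, const int local[NSITES])
{
    int j;
    if (m->vc[m->sender] != local[m->sender] + 1)
        return 0;            /* not the next message from that sender     */
    for (j = 0; j < NSITES; j++)
        if (j != m->sender && m->vc[j] > local[j])
            return 0;        /* depends on something not yet seen here    */
    return 1;
}

int main(void)
{
    int local[NSITES] = { 0, 0, 0 };   /* messages delivered so far, per site */
    /* Site 1 sends m1; site 2 sends m2 after receiving m1, so m1 -> m2.   */
    struct msg m1 = { 1, { 0, 1, 0 } };
    struct msg m2 = { 2, { 0, 1, 1 } };

    /* If the network hands us m2 first, it must be delayed.               */
    printf("m2 before m1: %s\n", deliverable(&m2, local) ? "deliver" : "delay");

    if (deliverable(&m1, local))
        local[m1.sender]++;            /* deliver m1                       */
    printf("m2 after m1:  %s\n", deliverable(&m2, local) ? "deliver" : "delay");
    return 0;
}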

Robbert van Renesse discussed some of these issues in his thesis, and
he and I have been working on a short paper on the question. We aren't sure
what we will do with it (if anything) but you could get a copy from him
by email to r...@cs.cornell.edu -- he had the write token on it last.

J.P.K...@lut.ac.uk

Apr 5, 1992, 5:50:44 PM
In article <32...@darkstar.ucsc.edu> a...@cs.vu.nl (Andy Tanenbaum) writes:
>
>I was wondering if we have learned anything about distributed systems in the
>last 20 years. I mean, are there any statements we can make that are largely
>accepted in the computer science research community?
[...]

>
>GENERALLY ACCEPTED AS TRUE BY RESEARCHERS IN DISTRIBUTED SYSTEMS
> - The client-server paradigm is a good one

Currently, yes, especially in the workstation/PC area.

> - Microkernels are the way to go

Back to the beauty of UNIX in its early days, eh? A jolly good idea
IMHO. I'm surprised no vendor has yet replaced their UNIX kernel with
a souped up EMACS so that everything you could ever want and a few more
bits besides reside in the kernel!

> - UNIX can be successfully run as an application program

Yep, and so can MULTICS, VMS, CMS, etc., most probably. It's just a
question of what sort of performance you'd be prepared to put up with.
I wonder if UNIX would be where it is today if it started out in life
with a MULTICS environment running in the user space under which the
users ran all their applications? No, of course not. So why do we try
so hard to make every new distributed system look like UNIX if at all
possible? Don't get me wrong; I love UNIX and use it every day. But I
don't see why new distributed systems have to look like UNIX to the
programmers. When they go commercial then I can see that compatibility
might well swing the balance for them, but that's some way off. Let's
not lumber our babies with the trappings of middle age at birth, eh?

> - RPC is a good idea to base your system on

I'm afraid I can't agree with this. In fact I'm surprised at Andy
Tanenbaum for including it in his list of generally agreed upon
principles in distributed systems, especially in view of the fact that
I have before me a copy of the EUTECO '88 Proceedings in which he
co-authored a paper entitled "A Critique of the Remote Procedure Call
Paradigm". In it, he points out a number of flaws with RPC paradigm
and points out that there's still much to do before it is suitable for
fully tranparent distributed systems (though to be fair, he does end by
saying that initial work on RPC is promising).

As a researcher in the Gigabit/sec networking field I'm well aware that
when extremely high bandwidth WANs hit the scene, RPC as we know it
will be in trouble due to the latency gobbling up millions of precious
CPU cycles on the high end systems that we can expect to see connected
to them. I also know that many teams are looking into alternative
paradigms and enhancements to RPC for such environments. I feel this
is vital as so much of the current research into distributed systems
_is_ based on RPC.

> - Atomic group communication (broadcast) is highly useful

> - Caching at the file server is definitely worth doing

> - File server replication is an idea whose time has come

Yes, I agree with these (see, I do agree sometimes... 8-) ).

> - Message passing is too primitive for application programmers to use

Well, I think we'd all possibly agree that ideally, application
programmers shouldn't have to worry about comms at all. Everything
should be transparent to them. But then again, there are application
programmers and there are application programmers. Some will always
want just a little more than you offer in your clean, transparent
distributed systems and will resort to message passing and the like.
So I'd go along with this as long as provision is made in distributed
systems for the application programmers to access low level primitives
_should_they_want_to_.

> - Synchronous (blocking) communication is easier to use than asynchronous

Depends what you've been brought up on and also each has their own place.

> - New languages are needed for writing distributed/parallel applications

In the long term, yes. However I still feel that we've yet to really
get our acts together on building distributed systems with the tools
we've got. Hopefully systems development and language development can
proceed in parallel.

> - Distributed shared memory in one form or another is a convenient model

Yes, definitely. And one I feel we'll be seeing more of in the future.

>
>
>STILL HIGHLY CONTROVERSIAL
> - Client caching is a good idea in a system where there are many more
> nodes than users, and users do not have a "home" machine (e.g.,
>hypercubes)

I don't really understand this, but if what you're saying is cache the
files/segments/whatevers that the user is currently using on whatever
nodes the user is currently utilising then in general this seems like a
pretty good idea, as long as an efficient cache coherency algorithm is
enforced (and the caches don't start thrashing and wasting loads of
bandwidth & time!). Still, as I say, I could really do with a bit of
an explanation of the meaning of this one!

> - Atomic transactions are worth the overhead

Well, depends what you want to do with your transaction and what your
environment is. Atomicity is a nice idea in general, I'll give you, but
sometimes you can get away without it and really make a killing on
performance (which probably means the ISO will enforce atomic
transactions if they ever produce a standardised distributed system 8-)
).

> - Causal ordering for group communication is good enough

I'll pass on this one for the moment ;-)

> - Threads should be managed by the kernel, not in user space

I think this should be in section 1. If your threads are in user
space then there's a lot more chance that they'll interfere and do
goodness knows what.

>Please post replies rather than sending them to me. It should make for an
>interesting discussion. (I bet if someone did this for high energy physics
>or DNA research, there would be a lot more agreement than among computer
>scientists.)

Hmm, I don't know much about HEP or DNA research but I'd be willing to
wager a few pennies that you could root out some disagreement by
presenting the right set of 'for granted' statements. Part of research
IMHO is disagreement as it forces us to consider carefully why we
support certain viewpoints and then defend them. A field with no
disagreements is a stagnant research area, and one ripe for some
upstart to come along and turn everything on its head (just look at
Physics before Einstein and Maths before Godel!). It's nice to know
that distributed systems certainly doesn't seem stagnant yet (now
that's my suggestion for a section 1 entry! 8-) )

>
>Andy Tanenbaum (a...@cs.vu.nl)

Jon

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Jon P. Knight, Ph.D. Student, Part-time Research Assistant & Subwarden.
JANET: J.P.K...@uk.ac.lut Tel: (+44) (0509) 22-2298
Dept. Comp. Studies, LUT, Ashby Road, Loughborough, Leics., UK. LE11 3TU.

Mark Day

Apr 5, 1992, 9:00:16 PM

ni...@laurel.cis.ohio-state.edu (Niranjan G Shivarat) writes:

Is there any consensus on whether object-oriented is the way or not
the way to develop distributed operating systems in the future?

I suspect that we will first have to develop a consensus on what it
means for an operating system to be "object-oriented". We still seem
to be pretty far from that.

--Mark Day

md...@lcs.mit.edu

Mark Day

Apr 5, 1992, 9:06:00 PM

k...@cs.cornell.edu (Ken Birman) writes:
How about:

Generally accepted:
Distributed programming support requires some form of process group
mechanism.

For replication, yes. But is this true for distributed programming in
general? I'm not sure where the "process group mechanism" is in
Argus, even though it supports distributed programming pretty well.
Do transactions count as a "process group mechanism" in this context?

--Mark Day

md...@lcs.mit.edu

Ronald G Minnich

Apr 5, 1992, 9:53:40 PM
In article <32...@darkstar.ucsc.edu> rmin...@super.super.org (yours truly) writes:
>to ftp the OS white paper we have here at super.org, you will find that
^^^^^^^^^^^^^^^^^^^^^^^^

Folks, sorry about this, the paper is at
met.super.org

Don't ask me why not super.org. This is for hysterical reasons.
You will find in the pub directory
os-workshop (which is where the white paper is)
(note that this white paper mentions supercomputers, but in fact the
issues discussed are in most sections farther-reaching)
mether (which has papers on the DSM i mentioned)
rs6000 (if any of you need a /dev/klog for your AIX systems :-))
Among other things.

>We shall see. So far, many claims, but I still can't get a useful
>OS based on a microkernel, despite many attempts.

Now, I really hate to bring this up, but I realized that there
is a very successful microkernel in use on a "popular" workstation.
I.e. you boot a kernel, it successively loads in components
as required to build you a functional operating environment.
Components include networking, enet interface,
nfs, keyboard, mouse, console device, etc. Sounds pretty micro-kernel-like,
huh? BUT, none of you are going to like the answer, so I will make
it possible for you to avoid seeing it :-)

AIX on the rs/6000. Yes, I knew you would not like this answer. I
don't either. BUT, AIX does have most if not all of the attributes
that people see as desirable in a microkernel.
Now, if only I could get it to DO something useful.
Note that my above caveat about getting a USEFUL uKernel implementation
still holds.

BTW, if there are any AIX wizards reading this,
anybody out there understand how mmap() in AIX 3.2 maps to device driver
operations at page fault time (i.e. NOT map time, I have that figured out)?
Any examples of a device driver that supports mmap available?
ron

Marcus J. will do TCP/IP for food Ranum

Apr 6, 1992, 12:46:45 AM
ste...@estragon.uchicago.edu (Stephen P Spackman) writes:

>In sum, operating systems have ALWAYS been OO up to a point, and the
>ideas for progressing beyond that point are ones that are coming INTO
>and not OUT OF the OO community.

Before OO was hip, we called this "modular programming" and it
was a sign of competence, not trendiness.

Operating systems tend to be modular if they are designed by
anyone with any kind of a clue. Yes, an operating system can be said
to be "object oriented", but let's not be any more faddish than we have
to be.

mjr.
--
"Sometimes if you have a cappuccino and then try again it will work OK."
- Dr. Brian Reid, 1992
"Sometimes one cappucino isn't enough."
- Me

Stephen P Spackman

Apr 6, 1992, 2:52:57 AM
In article <32...@darkstar.ucsc.edu> k...@cs.cornell.edu (Ken Birman) writes:
|I think this whole issue is tightly linked to one's attitude on RPC and on
|asynchronous communication. If you believe in RPC and are willing to
|pay OS overhead twice on every interaction (once for the message and
|once for the reply, even if the procedure was of type "void"), you
|probably won't see the merit of asynchronous communication. If you
|would prefer to see your communication system as a sort of message
|buffer pool, which might collect multiple messages into one packet and
|otherwise amortize costs over many messages, you can drive the effective
|cost of communicate far lower than for RPC. For example, the current
|commercial version of ISIS can send more than 1000 cbcasts per second
|point to point over a UNIX release from SUN that support about 300 null
|RPC's per second. The system we are doing at Cornell now will be
|quite a bit faster, but it uses a microkernel approach and won't run
|directly on UNIX at these higher speeds.

But I wouldn't want to say that a compiler/operating system that
understood that void returns, or calls that return right into a "select
any three items from this list" operation, are special cases would NOT
be implementing RPC. (Why do I feel like this life is a constant
struggle to get more work for compilers? :-). When you compile a
function for use locally you provide pragma settings for speed, size,
safety and so forth, and you give a type signature. It seems eminently
sensible that when compiling an RPC interface you should use the same
information, maybe with a couple of new dimensions. But maybe this is
just semantic....

|A good analogy is with raw file systems (think "RPC") versus buffered file
|systems (think "causal, asynchronous"). In fact, there is even a good
|analogy for the causality issue there: remember when UNIX file systems
|used to get scrambled on every crash? The problem was basically a
|disk IO subsystem that didn't do physical disk writes in an order
|consistent with causality. When they added ordering dependencies
|to the buffered IO subsystem in UNIX, it became possible to recover
|file systems automatically after crashes: an illustration that if you
|believe in asynchronous IO, you had better preserve potentially causal
|event orderings.

Except, the user-perceived difference in the two is just that one
works better. So are you really providing an argument against RPC, or
just against naivete in its implementation?

|I tend to agree with the individual who saw RPC going under as networks
|get faster. Actually, though, I think the key issue is the ratio of
|processor speed to communication latency, or perhaps the processor
|speed compared to the product of latency and throughput (a rough measure
|of how many bytes away your destination is). The more "distant" your
|destination, the better off you do with asynchronous communication and
|some sort of asynchronous failure notification scheme. If the destination
|seems close, on the other hand, shared memory makes more sense.

Asynchronous call failure happens also in the local case. Stack
overflow is the usual cause, and you don't want to report it to the
failing thread, leastwise not until after it's recovered (which may be
more than one frame down, since the notification process will need
some stack)....

|I guess this means that in the long run, if current hardware trends
|continue, shared memory may not make much sense at all. In fact, if
|you look at projects like DASH or MUNIN, with release consistency and
|explicit mutex primitives wired to the cache, I think that shared memory
|is getting more and more explicitly message-like. If anything, I would
|suggest that shared memory is basically not a very good idea: it is too
|different from the underlying hardware and hence you can't get adequate
|performance without knowing exactly when messages will get transmitted.

Shared memory is a tiny special case of argument passing: it's what
happens when you pass a pointer to "untyped" memory across the link.
We don't want to eliminate the possibility, just realise that it's
another case of thinking in assembly language out of habit.

Werner Vogels

Apr 6, 1992, 3:01:14 AM
In article <32...@darkstar.ucsc.edu>, a...@cs.vu.nl (Andy Tanenbaum) writes:

|> - Causal ordering for group communication is good enough

If this line means that there can be a need to order messages regarding
causality, I think that certainly in the light of asynchronous
communication this is true. There is a need to present events as they
have happened in real life.

If it means that causal ordering can replace all other types of
orderings (read atomic) I do not think this can stand. Causal ordering
can be sufficient in those types of groups where the application
provides the serialization needed; for the types of applications that
lack this type of support and that need stronger guarantees on how all
participants receive messages, other orderings than causal are needed.

--
Werner Vogels

INESC - Distributed Systems and Industrial Automation Group
Tel: +351 1 3100316, Fax: +351 1 525843
e-mail: wer...@inesc.pt / C=pt;A= ;P=inesc;S=werner

Andy Lowry

Apr 6, 1992, 6:13:03 AM
In article <32...@darkstar.ucsc.edu> a...@cs.vu.nl (Andy Tanenbaum) writes:
GENERALLY ACCEPTED AS TRUE BY RESEARCHERS IN DISTRIBUTED SYSTEMS
- The client-server paradigm is a good one

It's useful if it fits your application. If your application's
communication patterns don't fit the client/server model, it becomes
an obstacle. Much better to base your systems on a process dynamics
and communication model which encompasses client/server as a special
case. I need to be able to write programs that can accept calls from
other processes, but nevertheless have their own thread of control,
can call other processes to get services, can create new processes and
reconfigure communications channels, and can decide when and from whom
to accept calls for services. This program is neither client nor
server.

- Microkernels are the way to go

As I understand it, there's a fair amount of reactionary sentiment
growing in the uKernel community, as things that were taken out of the
kernel are thrown back in for performance reasons. It seems to me
that uKernels *are* the way to go, but we have to be careful what we
mean by that. Specifically, uKernels (indeed, any software system)
must be designed in a highly modular fashion. They should be
*conceived* of and *designed* as multiple plug-compatible units that
interoperate to provide the needed services. It's probably more
evident in the case of an OS than in other systems that these units
should be largely autonomous and equipped with very flexible but
highly structured facilities for interacting with each other. When
you actually produce a running OS from such a modular specification,
you (the compiler and/or run-time system, perhaps with your helpful
advice) may well choose to combine several units into a single
executing image, but this would be a *transparent* implementation
choice. Inter-module calls could then be highly optimized to perform
as quickly as conventional procedure calls. The high-level design
would allow a wide variety of implementation choices, from fully
physically distributed (with some resulting performance hits) to
physically monolithic. Still, it's the same easily maintained, easily
reconfigured, nicely modularized system from a logical point of view.

Secure languages like Hermes can help in providing alternatives like
this, since in a secure language one need not be wary of combining
modules in this fashion. Each module is known to be incapable of
causing another module to crash, whether or not they're separated by
address-space barriers and the like.

- UNIX can be successfully run as an application program

Not all that controversial (except from a performance point of view,
perhaps), but it's certainly not something I dream of achieving. I
hope the distributed systems we'll be using down the road will be far
less strewn with inflexibilities and unnatural models than Unix.

- RPC is a good idea to base your system on

A lot of people have argued with this point, implying that RPC implies
round-trip communication and operating system overhead. But let's
think in terms of a separation between logical and physical execution.
If I want to access a service provided by some other process,
procedure call (which may or may not be remote, I don't care) is a
convenient model for me to use. It's an easy model to design into a
language, and people have been pretty happy with it for
non-distributed programming for years. But maybe my compiler is smart
enough to turn my void-returning procedure call into a one-way
message, or buffer multiple successive calls into one with provisions
for rolling back in the unlikely event that an intermediate call
fails. It's still RPC from a logical point of view, but it's
physically implemented for higher performance than some say RPC
implies. The point is, physical platforms should not be of concern to
the vast majority of programmers. Transparent program transformations
need to be developed to map simple, coherent concepts into efficient
implementations on a variety of platforms.
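
A small sketch of the stub-level choice being described: a call to a
void-returning procedure can be compiled into a one-way message, where a
value-returning stub would have to wait for the reply. The procedure, the
opcode, and the net_send() transport are all stand-ins invented for
illustration, not any particular RPC package's interface.

/* Client stub for a hypothetical "void set_load(int node, double load)". */
#include <stdio.h>
#include <string.h>

static void net_send(const void *buf, size_t len)
{
    /* Stand-in for the real transport: just report what would be sent.   */
    printf("sending %lu-byte one-way message, no reply awaited\n",
           (unsigned long)len);
    (void)buf;
}

static void set_load_stub(int node, double load)
{
    unsigned char msg[64];
    size_t off = 0;
    const int opcode = 7;                     /* hypothetical procedure id */

    memcpy(msg + off, &opcode, sizeof opcode); off += sizeof opcode;
    memcpy(msg + off, &node,   sizeof node);   off += sizeof node;
    memcpy(msg + off, &load,   sizeof load);   off += sizeof load;

    net_send(msg, off);   /* fire and forget: the caller continues at once */
    /* A non-void stub would block here until the reply message arrived.   */
}

int main(void)
{
    set_load_stub(12, 0.75);
    return 0;
}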

- Atomic group communication (broadcast) is highly useful

I can conceive of applications where atomic group communication is
part of logical behavior of my program. But I think most of the push
for this facility comes from its use in providing replication for
either availability, performance, or fault-tolerance. In these cases
I'd like the compiler and/or run-time (possibly at my urging) to
arrange for replication and perform the necessary transformations to
achieve atomic group communication among the replicas, if that's
required in order to preserve the semantics of my program. In other
words, I don't want to know that my program is going to run on
machines that can crash and over networks that can fail. I'd rather
program in an ideal world where those things don't happen, and have
the failures transparently masked by the system. So as a mechanism
for providing fault tolerance or boosting performance transparently,
I'd agree with the above.

- Caching at the file server is definitely worth doing

Caching all over the place is worth doing when the cost analysis says
it is. But again this should be transparent to the programs that
benefit from it.

- File server replication is an idea whose time has come

True, but not just for file servers.

- Message passing is too primitive for application programmers to use

Do you mean opening sockets and doing selects on file descriptors and
that sort of thing? Then yes, you are right. But if there's language
support for communicating typed data reliably across logical channels
between processes, then it's not primitive and it's not difficult to
do. And RPC can (and should, in my opinion) be logically viewed as a
pair of message communications. Viewing it this way, and integrating
it into your language model as such, allows for the sort of flexible
but structured process interaction model that I argued above is not
offered by client/server.

- Synchronous (blocking) communication is easier to use than asynchronous

I generally agree with this statement, but noting that in a synchronous
world it's probably easier to develop systems that are prone to
deadlock.

- New languages are needed for writing distributed/parallel applications

New languages are needed that allow programs with complex interactions
to be developed, without regard for whether they will run distributed
or parallel or whatever.

- Distributed shared memory in one form or another is a convenient model

I prefer to view physical shared memory as an extremely
high-performance medium for program interactions. I don't like shared
memory as programming model, as it *forces* the programmer to write
excess code (locks, or whatever) to make sure concurrent accesses are
properly managed. I much prefer a logical model that does not expose
sharing, and a compiler that can make use of shared memory to boost
performance when it's available. I don't much like the idea of
distributed shared memory, because, biased as I am against shared
memory as a programming model, I don't see the point. I've sometimes
thought that the message-passing people and the distributed
shared-memory people are both trying to do the same thing: make the
distributed programming model and the non-distributed programming
model look the same. Distributed shared memory takes the model we've
been using for standalone processing and tries to extend it to the
distributed world, so programmers can continue to use that paradigm.
The message passing people, it seems to me, take a model that is
inherently better adapted to a distributed world, and say that this is
how programs interact, whether or not the underlying platform is
distributed. In either case there's a single model, which is a
simplification from where we started (shared memory for local
interactions, explicit network communications for remote
interactions). I believe that the message-passing view gives a far
preferable logical model, as it promotes modularity and all its
benefits, while shared memory messes with module boundaries and
therefore complicates the world.

STILL HIGHLY CONTROVERSIAL
- Client caching is a good idea in a system where there are many more
nodes than users, and users do not have a "home" machine (e.g., hypercubes)

I'm not sure what you mean by "client caching." But on basic
principles I'll disagree anyway!!! :-)

- Atomic transactions are worth the overhead

If the semantics of the program require atomic transactions, then
they're indispensable. If the program doesn't require them, then
they're not worth even the tiniest overhead.

- Causal ordering for group communication is good enough

I think this is probably true. The problem is that there are ways for
information about causal dependencies to propagate that are not going
to be tracked by the software (e.g. I can tell you that my program
just printed a message, thereby *causing* you to perform a certain
operation that causes your program to send a message; the two messages
are causally related, but the software doesn't know that). The
totally ordered atomic broadcast of ISIS (abcast) will correctly
reflect the world despite such "side-band" communication, but it's at
a rather high cost. I believe that a model like cbcast is probably
sufficient for developing logically consistent and robust software.



- Threads should be managed by the kernel, not in user space

I think there's room for both. The key is that the programmer should
not have to be concerned with it.
--
Andy Lowry, lo...@watson.ibm.com, (914) 784-7925
IBM Research, P.O. Box 704, Yorktown Heights, NY 10598

Steve Chapin

Apr 6, 1992, 9:24:57 AM

}} In article <32...@darkstar.ucsc.edu> J.P.K...@lut.ac.uk writes:
}}
}} > - Threads should be managed by the kernel, not in user space
}}
}} I think this should be in section 1. If your threads are in user
}} space then there's a lot more chance that they'll interfere and do
}} goodness knows what.

I'd say this definitely does *not* belong in section 1. At SEDMS III
this year, Lazowska's keynote address was on the beauty of user-level
threads, when done properly. In fact, the general consensus among
those that discussed them seemed to be that user-level threads were
better than kernel threads.
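
A tiny illustration of why a user-level thread switch does not involve the
kernel's scheduler at all, using the POSIX ucontext calls; real thread
packages add a scheduler, preemption, and handling of blocking system calls
on top of this, and this is not how any particular package is implemented.

/* Two contexts hand control back and forth entirely in user space; the
 * kernel never knows these "threads" exist. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thr_ctx;

static void worker(void)
{
    printf("worker: running in a user-level thread\n");
    swapcontext(&thr_ctx, &main_ctx);   /* yield without a scheduling call */
    printf("worker: resumed, now finishing\n");
}                                       /* returning resumes uc_link (main) */

int main(void)
{
    char stack[64 * 1024];              /* stack for the user-level thread */

    getcontext(&thr_ctx);
    thr_ctx.uc_stack.ss_sp   = stack;
    thr_ctx.uc_stack.ss_size = sizeof stack;
    thr_ctx.uc_link          = &main_ctx;
    makecontext(&thr_ctx, worker, 0);

    printf("main: switching to worker\n");
    swapcontext(&main_ctx, &thr_ctx);
    printf("main: worker yielded, switching back\n");
    swapcontext(&main_ctx, &thr_ctx);
    printf("main: worker finished\n");
    return 0;
}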

}} >Andy Tanenbaum (a...@cs.vu.nl)


}} Jon P. Knight, Ph.D. Student, Part-time Research Assistant & Subwarden.

s...@cs.purdue.edu Steve Chapin Today's Grammar Lesson:
Let's hope the usher lets us in.

Fred Douglis

Apr 6, 1992, 11:39:10 AM
A couple of comments on the thread (wow, Andy, what a way to get this
group going!)

rmin...@super.super.org (Ronald G Minnich) said:

Are the Sprite people still in business? They call their
system a "maxikernel". They are not believers in microkernels.
"Kernelizing didn't solve anything in the 1960s, and it won't solve
anything now"- My memory of a quote by Jim Gray, but I can look it up.

As a former member of the Sprite project, and a coauthor of a paper
that considered just this issue, I should say something here. First
of all, Sprite is still in business, but just barely. The number of
Sprite users at Berkeley has dropped, and Sprite has been used by only
a few users outside Berkeley. The distribution of Sprite to the
outside world ended when the staff member responsible for the
distribution was called up for military reserves and then left the
group a few months after returning. Perhaps someone at Berkeley would
like to add something here?

Getting back to the thread, though, I disagree with the claim that the
Sprite people aren't believers in microkernels. It's more a question
that when Sprite was first designed, microkernels were not a
widely-accepted technology, and the Sprite researchers chose to go
with a more traditional structure and devote their research efforts in
other areas. In a sense, I agree with Andy's claim that microkernels
are generally accepted as a good idea, though I'm not sure that the
idea of what exactly a microkernel *is* has been generally agreed
upon. (Other messages in this thread already made this point, of
course.) Perhaps we'll all be enlightened later this month at the
microkernel workshop.

Secondly, I second the comments of ste...@estragon.uchicago.edu
(Stephen P Spackman) on the subject of client caching. In his
original message, Andy qualified his comment on client caching to
address certain environments, like that of Amoeba:

|STILL HIGHLY CONTROVERSIAL
| - Client caching is a good idea in a system where there are many more
| nodes than users, and users do not have a "home" machine (e.g., hypercubes)


As Stephen said,

Wouldn't client caching ideally be unavoidable, transparent, and just
part of the "way of things"?

*Prohibiting* client caching seems like a bad idea, since it's
unnecessarily restrictive. If the same node uses a file again and
again, shouldn't it be cached? I think that this topic is highly
controversial only in the sense that Andy thinks one thing and the
rest of the world thinks another! So far, the only comments have
taken issue with Andy's claim. Does anyone think that client caching
is a bad idea, even in an environment without "home machines"?

================
By the way, for more on the Sprite position on microkernels (as well
as other issues, including the general directions of Sprite and
Amoeba), see

@ARTICLE{douglis:amoeba-sprite,
AUTHOR = {F. Douglis and J. K. Ousterhout and M. F. Kaashoek and A. S. Tanenbaum},
TITLE = {A Comparison of Two Distributed Systems: {A}moeba and {S}prite},
YEAR = {1991},
JOURNAL = {Computing Systems},
PAGES = {353-384},
NUMBER={4},
VOLUME={4}
}

[An aside: I am not responsible for typos in this article, which was
apparently retypeset and printed without giving the authors an
opportunity to see proofs! Speaking of which, even the authorship was
printed out of order.]
================

Fred Douglis
Matsushita Information Technology Laboratory | Email: dou...@mitl.com
182 Nassau Street, | Phone: +1 609 497-4600
Princeton, NJ 08542 USA | Fax: +1 609 497-4013

Crispin Cowan

Apr 6, 1992, 11:50:59 AM
It seems clear to me from this discussion that in order to have a
successful distributed system, it is essential to plug your own system
in response to questions like this one. :-) :-)

Crispin
-----
Crispin Cowan, CS grad student, University of Western Ontario
Phyz-mail: Middlesex College, MC28-C, N6A 5B7
E-mail: cri...@csd.uwo.ca Voice: 519-661-3342
"If you want an operating system that is full of vitality and has a
great future, use OS/2." --Andy Tanenbaum

Thomas Page

Apr 6, 1992, 12:39:38 PM
To the list of what we have learned, add...

Network Transparency.
Ten years ago, there was considerable debate on this topic (see for
example the debate held at the 1983 SOSP conference). The debate
strongly paralleled that held in the '60s over virtual memory. On
the one side, people said that humans could manage the limited main
storage via overlays much more efficiently than could the OS. On the
other side, people said that the simpler programmer model was worth
the slight penalty and that as programs got bigger, the LRU algorithms
might do better than hand overlays anyway. Well, time has ruled pretty
much in favor of virtual memory. Similarly for network transparency.
While few systems (other than perhaps Sprite) have embraced transparency
to the extent Locus did in the late 70s/early 80s, most distributed
file systems and operating systems from NFS on have opted for a high
degree of network transparency. In fact, most people's complaints
about systems like NFS concern precisely those points where network
transparency breaks down, rather than where it hides machine boundaries.


To the list of controversial topics, add...

Optimistic vs. Pessimistic replica consistency control.
Pessimism says, "inconsistency is so intolerable that we are willing
to take expensive measures (like obtaining remote locks) to prevent it
from occurring." Optimism says, "inconsistent concurrent update is so
rare in practice that it makes more sense to detect it and fix it up
when it does occur than it does to go to great expense to prevent it."
It sort of parallels deadlock detection vs. prevention.

It seems that there is no right answer, but rather situations where one
is best and situations where the other is best. So far, however,
the conservative (pessimistic) approach (primarily inherited from
the distributed database arena) has been most common. Coda from CMU
and Ficus from UCLA are examples of optimism in the replicated filing
area.
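
A small sketch of the detection side of optimism: each replica keeps a
version vector counting the updates it has seen from every replica, and two
copies conflict exactly when neither vector dominates the other. This is the
generic version-vector idea; the concrete formats used by systems like Ficus
or Coda differ, and the numbers below are invented.

/* Compare two version vectors to classify an optimistic reconciliation. */
#include <stdio.h>

#define NREP 3

enum relation { SAME, FIRST_NEWER, SECOND_NEWER, CONFLICT };

static enum relation compare_vv(const int a[NREP], const int b[NREP])
{
    int a_ahead = 0, b_ahead = 0, i;
    for (i = 0; i < NREP; i++) {
        if (a[i] > b[i]) a_ahead = 1;
        if (b[i] > a[i]) b_ahead = 1;
    }
    if (a_ahead && b_ahead) return CONFLICT;     /* concurrent updates     */
    if (a_ahead)            return FIRST_NEWER;  /* second copy is stale   */
    if (b_ahead)            return SECOND_NEWER; /* first copy is stale    */
    return SAME;
}

int main(void)
{
    /* Replica 0 and replica 2 each updated the file while partitioned.    */
    int at_rep0[NREP] = { 3, 1, 0 };
    int at_rep2[NREP] = { 2, 1, 1 };
    static const char *verdict[] = { "identical", "first newer",
                                     "second newer", "conflict: reconcile" };

    printf("%s\n", verdict[compare_vv(at_rep0, at_rep2)]);
    return 0;
}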


Tom Page
pa...@ficus.cs.ucla.edu

Husam Kinawi

Apr 6, 1992, 1:30:24 PM
In article <32...@darkstar.ucsc.edu> a...@cs.vu.nl (Andy Tanenbaum) writes:

>GENERALLY ACCEPTED AS TRUE BY RESEARCHERS IN DISTRIBUTED SYSTEMS
> - The client-server paradigm is a good one

Good, okay, but not the only one. Some applications might need other
paradigms.

> - Microkernels are the way to go

Yep, and these kernels should also support grouping and multicast
primitives within. Lightweight threads should also be supported. An
OO support layer could be included within also, or run on top of
the kernel.

> - UNIX can be successfully run as an application program

Well, it is the same love-hate episode again... one has to say that
the UNIX momentum is too high to be stopped now, let alone changed.
So, I think that binary compatibility with UNIX is all we need,
and a UNIX interface running on top of a microkernel.

> - RPC is a good idea to base your system on

Um... don't know. But I think RPCs are still very slow.

> - Atomic group communication (broadcast) is highly useful

Not only so, but I feel it should be a part of any distributed
programming environment, that is if any serious distributed
application is to be developed from within it!

> - Caching at the file server is definitely worth doing

> - File server replication is an idea whose time has come

Agree, and replication should of course be transparent!

> - Message passing is too primitive for application programmers to use

> - Synchronous (blocking) communication is easier to use than asynchronous

> - New languages are needed for writing distributed/parallel applications

What about OO libraries instead? I would like to see something
like Arjuna implemented on top of ISIS, providing a class library
for a distributed applications developer, and all the grouping and
multicasting primitives supported by ISIS.
But then maybe Horus will supply these... so let us see!

> - Distributed shared memory in one form or another is a convenient model

It is, and should be exploited in more detail. The thing we have
to face is that not everyone is ready to think in parallel or
write parallel programs right now, and hence we need some sort
of implicit parallelization technique. A distributed shared memory
could be the way to provide for this.

>STILL HIGHLY CONTROVERSIAL
> - Client caching is a good idea in a system where there are many more
> nodes than users, and users do not have a "home" machine (e.g., hypercubes)

Hmmm... don't know...

> - Atomic transactions are worth the overhead

Well, have to say yes...

> - Causal ordering for group communication is good enough

I think there was a technical report by Birman et al. that proved
that causal ordering is all that is needed... I forget which report
it was by now though :-(

> - Threads should be managed by the kernel, not in user space

Well, yes and no! Yes, because then threads can communicate with
each other via shared memory, but then if you have a homogeneous
system and want to do some load-balancing, i.e. migrate some threads
around the place, then no!
I would again prefer an OO approach, where threads are encapsulated
within objects, and hence could migrate with an object instead!
If finer grain parallelism is needed, we can have composite objects,
with each object consisting of a number of smaller objects, each of
these containing a thread. These smaller objects could be spread to
different machines on the net, executed, then collected back after
they finish execution.

>Please post replies rather than sending them to me. It should make for an
>interesting discussion. (I bet if someone did this for high energy physics
>or DNA research, there would be a lot more agreement than among computer
>scientists.)

Well, maybe we need some more communication primitives :-)
Cheers,

Husam Kinawi
================================================================================
Husam Kinawi (Phd student) e_mail: kin...@cpsc.ucalgary.ca
The Dept. of Computer Science Internet Talk: kin...@fsc.cpsc.ucalgary.ca
The University of Calgary Tel.(Voice): (403) 220-5105 (0900-2200 MDT)
2500 University Drive N.W. Tel.(Voice): (403) 284-3570 (after 2200 MDT)
Calgary, Alta., Canada T2N 1N4 Tel.(Fax.) : (403) 284-4707

Andrew Mullhaupt

Apr 6, 1992, 1:47:09 PM

In article <32...@darkstar.ucsc.edu> rmin...@super.super.org (Ronald G Minnich) writes:
>In article <32...@darkstar.ucsc.edu> a...@cs.vu.nl (Andy Tanenbaum) writes:
>>GENERALLY ACCEPTED AS TRUE BY RESEARCHERS IN DISTRIBUTED SYSTEMS
>> - New languages are needed for writing distributed/parallel applications
>Absolutely. But new languages are needed for other reasons too :-)

I believe that the requirements of applications programming languages are
not the same as those of systems programming languages. How do OS people
feel about this?

Later,
Andrew Mullhaupt

Craig Partridge

Apr 6, 1992, 1:55:41 PM

Well, I'm a networking person who dabbles in systems rather than
a systems person who dabbles in networking, so perhaps I've got
a different perspective. (I recall a comment by a prominent researcher
to the effect that "you can make a systems person into a networking person
but it takes several years" -- the perspectives are very different).

> GENERALLY ACCEPTED AS TRUE BY RESEARCHERS IN DISTRIBUTED SYSTEMS

> - RPC is a good idea to base your system on

I think there has been consensus that the RPC interface is one that
application writers like. However, I think there's also an emerging
consensus that classic RPC (Birrell-Nelson plus obvious extensions) has
about reached the end of its useful life -- gigabit networks and
billion-instruction per second computers mean that a single local
RPC call will take thousands or millions of instruction cycles to
complete. (Put another way, if you do one RPC call, there's a good chance
you'll spend more time waiting for that RPC call than all the time you'll
spend executing instructions). Now if you really needed to get a particular
item of data from the remote system, you have to eat the time to cross the
network. However, classic RPC may require you to make multiple
requests of the same machine to get the data -- and that's *clearly* busted.

However, all that being said, one can retain the RPC interface and
change the protocols underneath and get optimal performance in a gigabit
environment. (I.e. You only cross the network when you have to).
I showed this in my doctoral dissertation, building on existing work
like REV. (If you can be patient and not ask for a copy of the thesis, I'd
appreciate it -- I've got a paper summary in the works).
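
A minimal sketch of the "cross the network only when you have to" idea: the
client side queues small requests and ships them to the server in a single
message, so several logical calls share one round trip. The request format
and the flush interface are invented for illustration and are not taken from
any real RPC system.

/* Batch several requests into one network message instead of one trip each. */
#include <stdio.h>

#define MAX_BATCH 16

struct request { int opcode; int object_id; };

static struct request batch[MAX_BATCH];
static int batch_len = 0;

static void flush_batch(void)
{
    if (batch_len == 0)
        return;
    /* One message and one round trip, however many requests were queued.  */
    printf("network: sending %d requests in a single message\n", batch_len);
    batch_len = 0;
}

static void queue_request(int opcode, int object_id)
{
    batch[batch_len].opcode    = opcode;
    batch[batch_len].object_id = object_id;
    if (++batch_len == MAX_BATCH)
        flush_batch();
}

int main(void)
{
    int i;
    for (i = 0; i < 5; i++)
        queue_request(1 /* read */, 100 + i);
    flush_batch();     /* the caller forces the trip when it needs results */
    return 0;
}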

> - Atomic group communication (broadcast) is highly useful

Well, if you'd said "multicast" instead of broadcast I'd agree.
There's a considerable part of the data networking community that believes
broadcasting is a horrible idea whose time has passed. (Shouting at everyone
in your favorite protocol to reach only some people -- at minimum you ought
to only shout at people who understand the protocol -- better yet is
communicating only with those folks who are interested).

> - Distributed shared memory in one form or another is a convenient model

I can't agree with this view. Distributed shared memory has the same
latency problems as RPC and while I've seen some innovative ideas about
relaxing consistency rules, I've yet to see work that suggests that
one doesn't at least sometimes get badly delayed by consistency requirements.

Craig Partridge
Editor, IEEE Network Magazine
Research Scientist, BBN
Visiting Lecturer, Stanford
(yes I'm too busy...:-)
cr...@aland.bbn.com
