
Why is OS Research Dead?


Frank D. Greco

Jan 2, 2003, 12:31:11 AM
I haven't seen anything in the past few years that resembles
bonafide OS research. Is it dead?

Frank G.

Robby

Jan 2, 2003, 12:54:28 PM

In article <3e13ce9f$1...@news.ucsc.edu>,
"Frank D. Greco" <fgr...@crossroadstechNOSPAM.com> writes:
> I haven't seen anything in the past few years that resembles
> bonafide OS research. Is it dead?

You might find Rob Pike's views on this interesting:
http://cm.bell-labs.com/cm/cs/who/rob/utah2000.pdf

Robby


Christopher Browne

Jan 2, 2003, 12:54:30 PM

A long time ago, in a galaxy far, far away, "Frank D. Greco" <fgr...@crossroadstechNOSPAM.com> wrote:
> I haven't seen anything in the past few years that resembles
> bonafide OS research. Is it dead?

Largely, yes.

In the '90s, Microsoft skimmed a number of the top researchers off,
whilst at the same time essentially sabotaging one of the biggest OS
projects out there that you've probably never heard of (WorkPlace OS),
and this had the net effect of undermining interest in the area.

Between the growth of "Windows Everywhere," and consolidation of the
second choice ("make something that looks like Unix"), there's not
terribly much room for original alternatives.

Also consider that creating a totally new OS that /isn't/ modelled
pretty much after Unix may mean having to create the full tool set from
file management to text editors to compilers yourself, and that's a
daunting effort to have as a prerequisite.

The flip side, then, is: if you have created Yet Another Unix-Like
OS, is it truly of any great value?
--
If this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me
http://www.ntlug.org/~cbbrowne/oses.html
"Python's minimalism is attractive to people who like minimalism. It
is decidedly unattractive to people who see Python's minimalism as an
exercise in masochism." -- Peter Hickman, comp.lang.ruby


News Admin

Jan 4, 2003, 12:52:23 AM
Robby wrote:

I have to agree with that presentation, although I'm confused as to why he's
not blaming Microsoft for the problems in the industry; they caused them
all, e.g. by binding everyone to one OS, forcing competition out of business,
etc.

Overall, he is very correct in his observations.

Luke.


Luke A. Guest

Jan 4, 2003, 12:52:25 AM
I looked at your page and here's a paper I found on Hydra:

http://www.cs.utexas.edu/users/dahlin/Classes/GradOS/papers/p337-wulf.pdf

Luke.


Isaac Stern

Jan 4, 2003, 12:52:27 AM
> I haven't seen anything in the past few years that resembles
> bonafide OS research. Is it dead?

Well, you may have been looking in the wrong place.
[like the NY Java users group?]

There was an OSDI '02 in Boston just recently. You may want to check out
the papers and reports presented. (www.usenix.org)

In fact, OS research is experiencing a kind of renaissance. Microkernels
are coming back into fashion and there is progress in the file-system and
virtual-machine areas.

What's bona fide to you? If that is something that could be sold on the
street, you are probably better off sticking to older stuff.

Describe what area interests you and I will post a link.

Later,
IS.


David Moore

Jan 15, 2003, 3:01:48 PM
OS Research is Dead because people like me cannot get work.

I have been trying for years to get an OS Kernel R&D job -
I have 11 years experience doing everything from working on
the VMS Kernel on VAX then Alpha at DEC, doing advanced OS
Microkernels for Chorus Systems for leading European
microelectronics companies and advanced OS Kernel Technology
for Unix Systems Labs (the microkernelized Amadeus OS).

More recently because I could not get a Corporate OS R&D
job I set up my own business and did my own OS Kernel and
C Compiler for Intel IA32, 64 bit DEC Alpha and MIPSCO MIPS.
I tried to get software distributors, who take a sizeable
percentage commission off the list price for every sale,
but they didn't want to know.

Then I came up with one of the best OS ideas in 40 years - a
project I call PALOS. I still cannot get hired, get work or get
funded for PALOS. Magically my idea starts cropping
up in all sorts of ways and all sorts of places which vindicates
how great my idea is even if I cannot get hired to do it - meanwhile
I watch complete idiots ride the gravy train - all they have to do
over the years I've been on bread and water is show up to get
paid megabucks.

At DEC I worked on a project where an R&D engineer asked me how to
write a C switch() statement. He ended up in the Digital Technical
Journal and I got moved off the project.


Yoann Padioleau

Jan 16, 2003, 2:00:53 PM
marlin...@aol.com (David Moore) writes:

> OS Research is Dead because people like me cannot get work.

Perhaps OS research is dead because it is no longer an important question.
Do you really see problems with current kernels?
Whether the kernel is monolithic or micro is not that important
to the user, and what I think is the most important question for an OS
is what it brings to the user. A microkernel brings nothing to the user.

>
> I have been trying for years to get an OS Kernel R&D job -
> I have 11 years experience doing everything from working on
> the VMS Kernel on VAX then Alpha at DEC, doing advanced OS
> Microkernels for Chorus Systems for leading european
> microelectronics companies and advanced OS Kernel Technology
> for Unix Systems Labs (the microkernelized Amadeus OS).
>
> More recently because I could not get a Corporate OS R&D
> job I set up my own business and did my own OS Kernel and
> C Compiler for Intel IA32, 64 bit DEC Alpha and MIPSCO MIPS.
> I tried to get software distributors who take a sizeable
> percentage commision off the list price for every sale
> but they didn't want to know.
>
> Then I came up with one of the best OS ideas in 40 years - a

what is it ?

> project I call PALOS. I still cannot get hired, get work or get
> funded for PALOS. Magically my idea starts cropping
> up in all sorts of ways and all sorts of places which vindicates
> how great my idea is even if I cannot get hired to do it - meanwhile
> I watch complete idiots ride the gravy train - all they have to do
> over the years I've been on bread and water is show up to get
> paid megabucks.
>
> At DEC I worked on a project where an R&D engineer asked me how to
> write a C switch() statement. He ended up in the Digital Technical
> Journal and I got moved of the project.
>
>

--
Yoann Padioleau, INSA de Rennes, France,
Opinions expressed here are only mine. I write only in a personal capacity.
**____ Get Free. Be Smart. Simply use Linux and Free Software. ____**


Luke A. Guest

Jan 17, 2003, 5:44:19 PM
David Moore wrote:

These are just typical examples of why this industry sucks now. It's not
you, I have actually been trying to get into the computer industry for
years (I'm now in it, but it's too late really). I could always do the
work, but I had to do a degree (which I did as a mature student), but by
the time I finished, the industry was going downhill and all the people in
it were brought up on M$ crap, and these are the idiots who run the
industry.

Sad but true.

Luke.


Isaac Stern

Jan 17, 2003, 5:44:17 PM
> Micro kernel brings nothing to the user.

Really, and who are you?

How do you create a secure OS without the microkernel approach?

A microkernel is the only way to build a secure operating system, and
multiserver is probably the most logical way to build a true network OS.

While the second point could be argued, the first statement is a fact.

The reason is very simple: all software that runs in ring 0 must be
audited to guarantee security. Auditing a monolithic OS is not possible.

Where did you go to school?


Robert Kaiser

Jan 17, 2003, 5:44:39 PM
In article <3e27...@news.ucsc.edu>,
Yoann Padioleau <Yoann.P...@irisa.fr> writes:
> perhaps OS research is dead because it is no more an important question.
> Do you see really problems with current kernel ?

Most of them are not very reliable and this is getting worse, not better.
This is IMHO a direct consequence of the monolithic approach.

We've come to take it as a given that a computer needs a daily reboot
in order to operate (more or less) reliably, when this is in fact a
consequence of the attempt to solve increasingly complex problems with
ancient methods that are simply no longer up to the task.


> the fact to do a monolythic or micro kernel is not that important
> for the user, and what i think is the most important question for an OS
> is what it brings to the user. Micro kernel brings nothing to the user.

Hmm, modularity, scalability, better stability, more safety, security,
portability, openness for new concepts, coexistence of different OS paradigms?

However, to appreciate such benefits, "the user" needs to have
some requirements that would be considered "unusual" in today's
mainstream market: Joe Blow surfing the 'net won't care.

I believe there are still lots of things to be researched in the
areas of embedded systems, real-time systems and (especially)
safety-critical systems. Many of the problems in these areas
are emerging just now and I hope we will see OS research being
revisited in the coming years.

Rob

--
Robert Kaiser email: rkaiser AT sysgo DOT de
SYSGO AG http://www.elinos.com
Klein-Winternheim / Germany http://www.sysgo.de


Sotiris Ioannidis

Jan 18, 2003, 9:56:57 AM
Isaac Stern wrote:

> > Micro kernel brings nothing to the user.
>
> Really, and who are you?
>
> How do you create a secure OS without microkernel approach?
>

openbsd is not a microkernel and it's probably the most secure os out there
(any *bsd and linux can be made equally secure, just to avoid flame wars
:) )

also think SPIN, SOSP'95 for a different approach

>
> Microkernel is the only to build secure operating system and

no, see above

>
> Multiserver is probably most logical way to build a true network OS.
>

amoeba...

>
> Why second point could be argued, however first statement is a fact.

nope, see above

>
>
> Reason being is very simple all software that runs in ring 0 must be
> audited to guarantee security. Auditing monolithic OS is not possible.

wtf??

>
>
> Where did you go to school?

see below, where did you go to school?
&si


--
Sotiris Ioannidis
Ph.D. candidate, Distributed Systems Lab, UPenn
mailto:sot...@dsl.cis.upenn.edu


Casper H.S. Dik

Jan 21, 2003, 9:49:57 AM
drm...@techie.com (Isaac Stern) writes:

>How do you create a secure OS without microkernel approach?

>Microkernel is the only to build secure operating system and

>Reason being is very simple all software that runs in ring 0 must be
>audited to guarantee security. Auditing monolithic OS is not possible.

You do not provide proof that you cannot audit a monolithic
kernel (harder != can not).

But I also think that this is a common microkernel fallacy;
there will be requirements on the services that run on the
microkernel to provide security services as well, and
these services need to be audited too.

Casper


Julian Squires

Jan 21, 2003, 9:49:59 AM
In article <3e296b39$1...@news.ucsc.edu>, Sotiris Ioannidis wrote:

> Isaac Stern wrote:
>> How do you create a secure OS without microkernel approach?
>>
>
> openbsd is not microkernel and its probably the most secure os out there
> (any *bsd and linux can be made equally secure, just to avoid flame wars
>:) )

OpenBSD is probably the free OS that _tries_ the most to be secure.
It's arguable whether it is really that secure at the kernel level. At
a user level of course, it is mostly saved by very good choices of
defaults and a paranoid attitude.
(Isaac's auditing comment rings very true when you consider some of the
bugs in OpenBSD in the last year or so, and no doubt, more waiting to be
discovered)

[note: I use openbsd on many of my computers, and I think it's a great
OS relative to many other free UNIXes... but it's not fair to say that
it's a secure OS in the context that Isaac was discussing.]

> also think SPIN, SOSP'95 for a different approach

But this isn't really a different approach. The most important parts to
audit are the core system and the type-safety system, everything else is
a kernel extension which /should/ be safe if those parts are safe. (of
course, if no one audited, say, the file system, I'd be very worried)

>> Reason being is very simple all software that runs in ring 0 must be
>> audited to guarantee security. Auditing monolithic OS is not possible.
>
> wtf??

I'm not sure that I agree that microkernel is absolutely the only way to
build a secure OS, but look at the case of OpenBSD -- the code is
constantly being audited, and yet there are still bugs which are legacy
from NetBSD which turn up.

Would you rather audit the ~1 million lines of code in the OpenBSD
kernel, or the ~150 thousand lines of code in Mach?

Cheers.

--
Julian Squires


Francois-Rene Rideau

Jan 21, 2003, 9:50:01 AM
bitb...@invalid-domain-see-sig.nil (Robert Kaiser) writes:

> Yoann Padioleau <Yoann.P...@irisa.fr> writes:
>> Micro kernel brings nothing to the user.
>
> Hmm, modularity, scalability, better stability, more safety, security,
> portability, openness for new concepts,
> coexistence of different OS paradigms ?
Microkernels bring NONE of these.

* Modularity: uK's put it upside down.
Modularity is useful as a source-level concept, to master complexity.
The way to achieve it is to use a modular language:
LISP, Modula-3, Oberon, SML, OCaml, MzScheme, Erlang, etc.
Microkernels do it the other way round:
they make the *runtime* modular at the expense of making the source
much more spaghetti-like, with lots of shared .h files
and manually-enforced server event polling and data marshalling.

* Scalability: uK's do not in any way reduce the overhead needed to
write code that works in a large range of situations. On the contrary,
it *increases* the overhead, due to the need to manually enforce
all the uK-enforced communication protocols. If you want a tool that
does help for scalability, try a concurrent language like Erlang,
that helps build and manage huge distributed infrastructure.

* Better stability: just how so? Once again, stability is an overall system
property. There is no use in the system being able to kill a wild device
driver, if killing said wild device driver stops the whole system just as
surely as the "Oops" with which "monolithic" systems stop.
Robustness does not lie in pseudo-modularity, but in
automatically-enforced (or painfully manually-audited) code invariants,
and the ability to dynamically cope with failures. In other words,
with high-level and dynamic languages. All the opposite of C/C++ uKs.

* safety and security: safety and security are high-level concepts
that do not correspond in any direct way with the low-level protection
provided by either kernels or microkernels. Once again, it comes to
either using a high-level language that can directly express resources
and capabilities, or painstakingly emulating one with C/C++ and your
kernel, using a very rigid two- or three- level programming model
(in-process C/C++ pointer sharing, cross-process data marshalling,
and control through string-processing script interpreters and GUI horrors).

* portability: weird how linux is much more portable than any microkernel.
Weird how the latest trend in microkernels is precisely in very unportable,
processor-specific things like L4.

* openness to new concepts: just how so? On top of a high-level language
like LISP, I can build logic programming (Screamer), orthogonal
object-oriented database persistence (Statice, PLOB!), distributed
programming, etc. -- thanks to the metaprogramming features of it.
How does a uK architecture help me write those?

* coexistence of different OS paradigms: just how is making linux on top
of L4 or BSD on top of Mach something for which to praise L4 or Mach,
instead of linux and BSD's superior portability? High-level languages
can adapt to OS paradigms. Extensible high-level languages can help
provide new ones. A uK is part of the implementation infrastructure,
not of the user-visible programming paradigm. microkernels are part
of the problem to be coped with, not of the solution.

Microkernels are an abstraction inversion.
http://cliki.tunes.org/microkernel
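The "manually-enforced data marshalling" objection above can be made concrete. Here is a minimal C sketch of what a cross-server call costs in a message-passing design, for a hypothetical file-server read request (all names invented for illustration): what would be one direct function call in a monolithic kernel becomes a pack/unpack pair that the compiler cannot check for consistency.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical request to a user-level file server.  In a monolithic
 * kernel this would be a direct function call; across an IPC boundary
 * the arguments must be flattened into a message buffer by hand. */
struct read_req {
    uint32_t fd;      /* file handle at the server */
    uint32_t len;     /* bytes requested */
    uint64_t offset;  /* file offset */
};

/* Pack the request into a wire buffer, field by field.
 * Returns the number of bytes written. */
size_t marshal_read_req(const struct read_req *r, uint8_t *buf)
{
    size_t off = 0;
    memcpy(buf + off, &r->fd, sizeof r->fd);         off += sizeof r->fd;
    memcpy(buf + off, &r->len, sizeof r->len);       off += sizeof r->len;
    memcpy(buf + off, &r->offset, sizeof r->offset); off += sizeof r->offset;
    return off;
}

/* The server must undo it in exactly the same order -- any drift
 * between the two sides is a protocol bug the compiler cannot see. */
void unmarshal_read_req(const uint8_t *buf, struct read_req *r)
{
    size_t off = 0;
    memcpy(&r->fd, buf + off, sizeof r->fd);   off += sizeof r->fd;
    memcpy(&r->len, buf + off, sizeof r->len); off += sizeof r->len;
    memcpy(&r->offset, buf + off, sizeof r->offset);
}
```

Real microkernels generate this boilerplate with IDL compilers, but the point stands: the modularity lives in the runtime protocol, not in the source.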

> However, to appreciate such benefits, "the user" needs to have
> some requirements that would be considered "unusual" in today's
> mainstream market: Joe Blow surfing the 'net won't care.

Yes, the requirements being "refusing to put one's head out of one's ass":
once one has committed one's mental sanity to the orthodox belief in the
greatness of microkernels, one will not want to admit to having been a fool
for over one decade, together with a vast number of academic luminaries.
After all, if professors like Andy Tanenbaum say it, together with
well-funded industrial researchers, whereas only degree-less students
like Linus Torvalds counter-argue, then it must be true.
Yeah. And for all the hype around it, Java must be one hell of a great
and innovative language!

> I believe there is still lots of things to be researched in the
> area of embedded systems, real-time systems and (especially)
> safety-critical systems. Many of the problems in these areas
> are emerging just now and I hope we will see OS research being
> revisited in the next years.

There is a lot of research to be done in a lot of areas, including these.
And microkernels are a stupid clogging burden in all of these areas.

By the way, improving on the concept of Xok and L4, that make microkernels
smaller by having them do less, in a processor-dependent way, here's the
full source code of my newfangled micro-kernel. It's so small and relies
so much on processor-dependent things that it's back to being
processor-independent:
---CODE BEGINS AFTER THIS LINE---
---CODE ENDS BEFORE THIS LINE---
It has the advantage that now, I can patent it, and demand royalties from
all those intellectual property trespassers who will develop code derived
from it.

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[ TUNES project for a Free Reflective Computing System | http://tunes.org ]
The worst thing that can happen to a good cause is, not to be
skillfully attacked, but to be ineptly defended. -- F. Bastiat


Patrick Bridges

Jan 21, 2003, 9:50:02 AM
>>>>> "IS" == Isaac Stern <drm...@techie.com> writes:

>> Micro kernel brings nothing to the user.

IS> Really, and who are you?

IS> How do you create a secure OS without microkernel approach?

IS> Microkernel is the only to build secure operating system and
IS> Multiserver is probably most logical way to build a true
IS> network OS.

IS> Why second point could be argued, however first statement is a
IS> fact.

Microkernels, especially the second generation microkernels like
L4/K42, are certainly good work, but this does not mean that they are
the be-all and end-all of operating system design. As other posters
have pointed out, there are a number of different possible approaches
to building systems, even secure systems. Oliver Spatscheck's work on
securing paths in Scout (OSDI '99), for example, shows another viable
approach to constructing configurable, secure operating
systems. Others have also cited interesting work, such as
language-based security techniques (e.g., SPIN) and software fault
isolation techniques (e.g. VINO's transaction-based approach and Wahbe
and Lucco's sandboxing work).

-Patrick Bridges
University of New Mexico


Patrick Bridges

Jan 21, 2003, 9:50:04 AM
>>>>> "RK" == Robert Kaiser <bitb...@invalid-domain-see-sig.nil> writes:

RK> In article <3e27...@news.ucsc.edu>, Yoann Padioleau
RK> <Yoann.P...@irisa.fr> writes:
>> perhaps OS research is dead because it is no more an important
>> question. Do you see really problems with current kernel ?

RK> Most of them are not very reliable and this is getting worse,
RK> not better. This is IMHO a direct consequence of the
RK> monolitic approach.

I don't see that as necessarily following. A bug in any critical
software component, whether it runs in protected mode or not, is going
to result in system problems. The simple fact is that we're asking
system software to do more, work with more complicated devices, larger
banks of memory that may not contain any ECC, and to provide more
sophisticated services. Complexity isn't free, monolithic system or
not. This is one reason why principled methods for dealing with
complexity and faults are increasingly important, especially compared
to revisiting old religious wars.

I'm surprised that a number of posters seem to be making the
assumption that microkernels are the only way to build modular
operating systems, despite the fact that a wide range of groups have
shown other approaches to structuring modular kernels. Even most modern
"monolithic" kernels are well-structured and pretty modular. Oh, and
for the record, I've regularly used (monolithic) Linux boxes with
consistent uptimes in the weeks or months range.

Isaac Stern

Jan 21, 2003, 9:50:07 AM
> openbsd is not microkernel and its probably the most secure os out there
> (any *bsd and linux can be made equally secure, just to avoid flame wars
> :) )
>
> also think SPIN, SOSP'95 for a different approach

that's not correct. you don't know what a secure os is.

> > Multiserver is probably most logical way to build a true network OS.

> amoeba...

amoeba is a microkernel.

> > Why second point could be argued, however first statement is a fact.
>
> nope, see above

below

> > Reason being is very simple all software that runs in ring 0 must be
> > audited to guarantee security. Auditing monolithic OS is not possible.
>
> wtf??

wtf?

> > Where did you go to school?
>
> see below, where did you go to school?

UCB.

I don't know what your speciality is, but I hope it's not related to
either oses or security.

Ask your teachers to give you some pointers such as what capabilities
are and what is a secure or trusted os.

[hint: it has little to do with default configuration.]

too bad. :o(


David Moore

Jan 21, 2003, 9:50:09 AM
Yoann,

Yoann Padioleau <Yoann.P...@irisa.fr> wrote in message news:<3e27...@news.ucsc.edu>...

> marlin...@aol.com (David Moore) writes:

> perhaps OS research is dead because it is no more an important question.

Wrong. Silicon Design can take place ahead of Kernel or Compiler
design but in many instances Kernel and Compiler design can directly
affect elements of CPU design - so at very least Kernel and Compiler
design are on the same level as CPU design. But perhaps you think CPU
design is 'no more an important question'?

> Do you see really problems with current kernel ?

Sure - progress always obsoletes old technology.

> the fact to do a monolythic or micro kernel is not that important
> for the user,

This is extremely convoluted logic and basically it is an ignorant
argument.

>and what i think is the most important question for an OS
> is what it brings to the user.

ditto.

>Micro kernel brings nothing to the user.

Then I assume, as well as silicon, kernel and compiler design being "no
longer an important question", you also think that the existence of
distributed UNIX is "no longer an important question"; because
microkernels helped USL turn UNIX into a distributed system.


David Moore
Chief Engineer R&D for OS Kernel and Compiler technology
Incantation Systems
Belfast
U.K.
marlin...@aol.com


Joe Pfeiffer

Jan 21, 2003, 11:26:47 PM
drm...@techie.com (Isaac Stern) writes:

> > openbsd is not microkernel and its probably the most secure os out there
> > (any *bsd and linux can be made equally secure, just to avoid flame wars
> > :) )
> >
> > also think SPIN, SOSP'95 for a different approach
>
> that's not correct. you don't know what secure os is.

It's not enough to make sweeping claims, and insult people who don't
agree with you. Why is SPIN's approach not a viable approach to
security?

> > > Multiserver is probably most logical way to build a true network OS.
>
> > amoeba...
>
> amoeba is a microkernel.

Is it a multiserver? Or is the term multiserver broad (or vague)
enough that any microkernel on a network would be a multiserver?

> > > Reason being is very simple all software that runs in ring 0 must be
> > > audited to guarantee security. Auditing monolithic OS is not possible.
> >
> > wtf??
>
> wtf?

I don't know your actual background... but you sound like you've
learned one particular set of terminology (and one particular OS
religion), and aren't familiar with other approaches. Using terms
like "ring 0" gives that distinct impression (not that there's
anything wrong with saying "ring 0" instead of "kernel mode," it's
just a flag). Blanket statements about whether it's possible to audit
a monolithic kernel, and no discussion of whether it's also necessary
to audit the device drivers if they're built as servers, *really*
gives that impression.

Knowledgeable people can (and do) disagree about these things. You
can't just repeat claims as if that were an argument.

> Ask your teachers to give you some pointers such as what capabilities
> are and what is a secure or trusted os.

Again -- capabilities are just one approach to one part of the problem
of security.
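For readers following along: a capability, in the classic sense being argued about here, is an unforgeable token that both names an object and carries the rights its holder has over it. A minimal in-memory C sketch of the semantics (all names hypothetical; a real kernel keeps the token out of the holder's reach so it cannot be forged or upgraded):

```c
#include <stdint.h>

/* Rights bits a capability can carry. */
enum { CAP_READ = 1, CAP_WRITE = 2 };

/* A capability pairs an object identifier with the rights the holder
 * has over it.  Here we only model the checking logic. */
struct cap {
    uint32_t object;  /* which object this capability names */
    uint32_t rights;  /* what the holder may do to it */
};

/* Derive a weaker capability: rights can only be dropped, never added. */
struct cap cap_restrict(struct cap c, uint32_t keep)
{
    c.rights &= keep;
    return c;
}

/* The check the kernel performs on every invocation. */
int cap_allows(struct cap c, uint32_t want)
{
    return (c.rights & want) == want;
}
```

POSIX file descriptors are the everyday approximation: you can pass one to another process, but you cannot manufacture one out of thin air.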
--
Joseph J. Pfeiffer, Jr., Ph.D. Phone -- (505) 646-1605
Department of Computer Science FAX -- (505) 646-1002
New Mexico State University http://www.cs.nmsu.edu/~pfeiffer
Southwestern NM Regional Science and Engr Fair: http://www.nmsu.edu/~scifair


Sotiris Ioannidis

Jan 21, 2003, 11:26:49 PM
Isaac Stern wrote:

> > openbsd is not microkernel and its probably the most secure os out there
> > (any *bsd and linux can be made equally secure, just to avoid flame wars
> > :) )
> >
> > also think SPIN, SOSP'95 for a different approach
>
> that's not correct. you don't know what secure os is.
>

it uses a language-based approach to security, that is the language
guarantees a lot of the security/safety features

>
> > > Multiserver is probably most logical way to build a true network OS.
>
> > amoeba...
>
> amoeba is a microkernel.

yes it is, by multiserver you meant it has servers running on top of the
microkernel, i misunderstood i guess

>
>
> > > Why second point could be argued, however first statement is a fact.
> >
> > nope, see above
>
> below
>
> > > Reason being is very simple all software that runs in ring 0 must be
> > > audited to guarantee security. Auditing monolithic OS is not possible.
> >
> > wtf??
>
> wtf?
>

wtf?

>
> > > Where did you go to school?
> >
> > see below, where did you go to school?
>
> UCB.

>
> I don't know what your speciality is, but I hope it's not related to
> either oses or security.
>

my specialty is actually both os and network security,
you can get all my publications online

>
> Ask your teachers to give you some pointers such as what capabilities
> are and what is a secure or trusted os.

i have written papers on capabilities and secure/trusted operating systems;
check my pubs

>
>
> [hint, it has a little to do with default configuration.]
>
> too bad. :o(

indeed

Scott Schwartz

Jan 22, 2003, 12:39:04 PM
Julian Squires <t...@balance.wiw.org> writes:
> Would you rather audit the ~1 million lines of code in the OpenBSD
> kernel, or the ~150 thousand lines of code in Mach?

How about the ~30 thousand lines in Plan 9?

Of course, in practice Mach means Mach+BSD, or maybe Mach+Hurd, or
something like that.

How big is QNX?


Sotiris Ioannidis

Jan 22, 2003, 12:39:05 PM
hi joe,

i think you are wasting bits, he clearly doesn't understand even the basics
and he is confusing Multics terminology with microkernels

&si


Joe Pfeiffer wrote:


Scott Schwartz

Jan 22, 2003, 12:39:08 PM
Francois-Rene Rideau <fa...@tunes.org> writes:
> Yes, the requirements being "refusing to put one's head out of one's ass":
> once one has committed one's mental sanity to the orthodox belief in the
> greatness of microkernels, one will not want to admit to having been a fool
> for over one decade, together with a vast number of academic summities.
> After all, if professors like Andy Tanenbaum say it, together with
> well-funded industrial researchers, whereas only degree-less students
> like Linus Torvalds counter-argument, then it must be true.


Does no one recall this historic thread?

From: research!rob (r...@alice.att.com)
Subject: Andy Tanenbaum hasn't learned anything
Date: 1992-04-06 13:06:28 PST

http://groups.google.com/groups?threadm=32234%40darkstar.ucsc.edu


Isaac Stern

Jan 22, 2003, 12:39:10 PM
> You do not provide proof that you cannot audit a monolithic
> kernel (harder != can not).

That cannot be proven. On the contrary, any computer system could be audited
(as a finite state machine). However, this would take forever.
Therefore, it's not practically useful.

> But I also think that this is a common microkernel falacy;
> there will be requirements on the services that run on the
> the microkernel to provide security services as well and
> these services need to be audited too.

Therefore?

You cannot seriously be trying to argue that auditing a microkernel and
security manager is in the same order of magnitude of difficulty as
auditing every driver + the above-mentioned components + personality API
(such as POSIX or Win32) code.


Robert Kaiser

Jan 22, 2003, 12:39:12 PM
In article <3e2d5e19$1...@news.ucsc.edu>,
Francois-Rene Rideau <fa...@tunes.org> writes:

> bitb...@invalid-domain-see-sig.nil (Robert Kaiser) writes:
>> Hmm, modularity, scalability, better stability, more safety, security,
>> portability, openness for new concepts,
>> coexistence of different OS paradigms ?
> Microkernels bring NONE of these.
> ....

Look, I could rebut all the points you made. In fact I had a response
along those lines half-written when I decided to discard it because I
realized I was just contributing to yet another silly "microkernel vs.
monolithic kernel" thread. Usenet has had enough of these and none of
them prove anything.

The subject of this thread is "Why is OS Research Dead?". My opinion
on that is: OS research is not dead, but is in a state of hibernation
for as long as it takes until the traditional OSes fail to solve the
problems at hand. I believe we are seeing first signs of this failure
just now, manifested by the increasing unreliability of today's systems,
while at the same time there is an increasing call for more reliable
systems as computers are being used more and more in safety-critical
applications. So there is hope that we will see a revival of OS research
in the near future.

I happen to believe that microkernels are part of the solution to these
problems (I am not as naive as to think that they are *the* one and
only solution to all of the problems -- sorry if I sounded like that).
I also believe that the microkernel approach has been discarded all too
quickly by many, mainly because of Mach's lack of success. Even today,
Mach's weaknesses are being used as counter-arguments against microkernels
in general, ignoring the fact that there are a host of new approaches in
this area that have successfully avoided these weaknesses.

But all that is just my opinion, and it is pointless to argue about
opinions, so please, don't.

Robert Kaiser

Jan 22, 2003, 12:39:14 PM
In article <3e2d5e1c$1...@news.ucsc.edu>,
Patrick Bridges <bri...@CS.Arizona.EDU> writes:
>>>>>> "RK" == Robert Kaiser <bitb...@invalid-domain-see-sig.nil> writes:
>
> RK> In article <3e27...@news.ucsc.edu>, Yoann Padioleau
> RK> <Yoann.P...@irisa.fr> writes:
> >> perhaps OS research is dead because it is no more an important
> >> question. Do you see really problems with current kernel ?
>
> RK> Most of them are not very reliable and this is getting worse,
> RK> not better. This is IMHO a direct consequence of the
> RK> monolitic approach.
>
> I don't see that as necessarily following. A bug in any critical
> software component, whether it runs in protected mode or not, is going
> to result in system problems.

If you look at a typical monolithic kernel, I believe you will agree
that there are a lot of things in there that are not critical components
because the service they implement is not vital to the system. Yet they
all have the potential to cause a total system failure because they run
in kernel mode.

And even if they do implement a critical service: running each of them
in a private address space causes bugs (e.g. bad pointer dereference)
to manifest themselves as they happen, rather than through some obscure
side effect they might have. So there is a much better chance to debug
and thoroughly validate a critical component, before deploying it.
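Robert's point, that a fault inside a private address space surfaces where it happens instead of through an obscure side effect, can be illustrated with a small sketch (Python on a POSIX system; the use of os.fork and a deliberate null dereference is my illustration, not anything from the thread):

```python
import ctypes
import os

# A bad pointer dereference in a private address space faults at the
# point where it happens (the process is killed by a signal), instead
# of silently corrupting unrelated state elsewhere in the system.
pid = os.fork()
if pid == 0:                 # child: the 'buggy component'
    ctypes.string_at(0)      # dereference address 0: immediate fault
    os._exit(0)              # never reached
_, status = os.waitpid(pid, 0)
print(os.WIFSIGNALED(status))  # True: the child died from its own fault
```

The parent survives and can report exactly which component failed, which is the debugging advantage being argued for.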

> The simple fact is that we're asking
> system software to do more, work with more complicated devices, larger
> banks of memory that may not contain any ECC, and to provide more
> sophisticated services. Complexity isn't free, monolithic system or
> not.

Yes, exactly.

> This is one reason why principled methods for dealing with
> complexity and faults are increasingly important, ..

A proven, successful method of tackling increasing complexity is to
split the system into subsystems, each of which can be validated
properly and subsequently treated as a reliable black box. This
modular approach is encouraged (but not enforced) by microkernels while
monolithic kernels tend to discourage (though not prohibit) it.
Therefore I think the microkernel approach has some advantage, but
it is certainly not the one and only solution.
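The black-box idea can be sketched in a few lines (Python with POSIX pipes; the "upcase server" is an invented stand-in for a real service, not anything from the thread): the service lives in its own process and is reachable only through a narrow message channel, so it can be validated in isolation.

```python
import os

# Toy 'multiserver' pattern: a service runs in its own process (its
# own address space) and is reached only through a narrow IPC channel,
# so it can be validated separately and treated as a black box.
req_r, req_w = os.pipe()
rep_r, rep_w = os.pipe()

if os.fork() == 0:                   # the 'server' process
    data = os.read(req_r, 64)
    os.write(rep_w, data.upper())    # the service: upcase the bytes
    os._exit(0)

os.write(req_w, b"hello")            # the 'client' makes a request
print(os.read(rep_r, 64))            # b'HELLO'
os.wait()
```

A monolithic design would instead call the service as an ordinary function in the same address space: faster, but nothing stops it from scribbling on the caller's data.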

> ... especially compared
> to revisiting old religous wars.

I don't want to revisit old religious wars. Sorry if it looked like
that. However I do think that the microkernel approach has been
discarded way too quickly in the past (mainly due to the failure
of Mach) and it might be worthwhile to take a second look.

>
> Oh, and
> for the record, I've regularly used (monolithic) Linux boxes with
> consistent uptimes in the weeks or months range.

Uptime is only a weak indication of reliability (*). Would you dare to
fly in an airplane knowing that its steer-by-wire system is built
on Linux (or *BSD, Windows NT, ...)?

Rob


(*) The fact that Microsoft at some point even managed to break uptime
by delivering a system that could be crashed simply by leaving it
alone doing nothing for ~50 days is just a depressing anecdote that
shows how low our standards are these days.
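For what it's worth, the "~50 days" figure matches the widely reported cause of that bug, a 32-bit counter of elapsed milliseconds overflowing; a quick check of the arithmetic:

```python
# A 32-bit counter of elapsed milliseconds wraps after 2**32 ms,
# which is just under 50 days (matching the '~50 days' above).
wrap_days = 2 ** 32 / (1000 * 60 * 60 * 24)
print(f"{wrap_days:.2f}")  # 49.71
```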

Isaac Stern

unread,
Jan 24, 2003, 10:50:45 AM1/24/03
to
> > > openbsd is not microkernel and its probably the most secure os out there
> > > (any *bsd and linux can be made equally secure, just to avoid flame wars

You are missing the point. There is no such thing as a more secure OS.
A system is either secure or it is not. That's why OpenBSD got hacked
after all.

> it uses a language approach to security, that is the language guarantees a
> lot
> of the security/safety features

Old news. Confinement is the property that must be guaranteed.

> > > > Multiserver is probably most logical way to build a true network OS.
>
> > > amoeba...
> >
> > amoeba is a microkernel.
>
> yes it its, by multiserver you meant it has servers running on top of the
> microkernel
> i missunderstood i guess

You agree then?!

> > > > Reason being is very simple all software that runs in ring 0 must be
> > > > audited to guarantee security. Auditing monolithic OS is not possible.

> > I don't know what your speciality is, but I hope it's not related to
> > either oses or security.
> >
>
> my specialty is actually both os and network security,
> you can get all my publications online

Thank you, but no thank you.

> > Ask your teachers to give you some pointers such as what capabilities
> > are and what is a secure or trusted os.
>
> i have written papers on capabilities and secure/trusted operating systems
> check my pubs

Then you should understand that neither Linux, BSD nor Windows is a
secure system.

> > [hint, it has a little to do with default configuration.]
> >
> > too bad. :o(
>
> indeed

Then OpenBSD is NOT "the most secure os out there"?

TY.


Isaac Stern

unread,
Jan 24, 2003, 10:50:51 AM1/24/03
to
> > > openbsd is not microkernel and its probably the most secure os out there
> > > (any *bsd and linux can be made equally secure, just to avoid flame wars
> > > :) )
> > >
> > > also think SPIN, SOSP'95 for a different approach
> >
> > that's not correct. you don't know what secure os is.
>
> It's not enough to make sweeping claims, and insult people who don't
> agree with you. Why is SPIN's approach not a viable approach to
> security.

Because SPIN does not guarantee confinement.

If you think I'm wrong, prove it. [I'll leave the insults issue alone.]

If you read my post, you'll see I never claimed SPIN is not viable,
but that it does not guarantee security.

> > > > Multiserver is probably most logical way to build a true network OS.
>
> > > amoeba...
> >
> > amoeba is a microkernel.
>
> Is it a multiserver? Or is the term multiserver broad (or vague)
> enough that any microkernel on a network would be a multiserver?

It is a multiserver.

> > > > Reason being is very simple all software that runs in ring 0 must be
> > > > audited to guarantee security. Auditing monolithic OS is not possible.
> > >
> > > wtf??
> >
> > wtf?
>
> I don't know your actual background... but you sound like you've
> learned one particular set of terminology (and one particular OS
> religion), and aren't familiar with other approaches. Using terms
> like "ring 0" gives that distinct impression (not that there's
> anything wrong with saying "ring 0" instead of "kernel mode," it's
> just a flag). Blanket statements about whether it's possible to audit
> a monolitic kernel, and no discussion of whether it's also necessary
> to audit the device drivers if they're built as servers, *really*
> gives that impression
>
> Knowledgeable people can (and do) disagree about these things. You
> can't just repeat claims as if that were an argument.

Who disagrees? (I suppose you think that people like me invented the
graph-explosion issue.) Or do you maybe have an argument against the
proof that capabilities provide a way to enforce information-transfer
security while your regular system does not?

> Again -- capabilities are just one approach to one part of the problem
> of security.

Yes, but at present there are no alternatives that provide
confinement.


Patrick Bridges

unread,
Jan 24, 2003, 10:50:48 AM1/24/03
to
[I'm limiting my reply here, as most of the actual issues here have
been well-hashed out in prior threads]

>>>>> "RK" == Robert Kaiser <bitb...@invalid-domain-see-sig.nil> writes:

RK> [A] modular approach is encouraged (but not
RK> enforced) by microkernels while monolithic kernels tend to
RK> discourage (though not prohibit) it.

While I agree with the first half of your statement, I disagree with
the second half. Monolithic kernels do not *discourage* modularity.
Every successful monolithic system that I'm aware of has well-defined
modular boundaries, for example the VFS interface, the network
protocol interface, and the device-driver interface. Those that don't
will collapse under their own weight so quickly that they're not even
worth considering.

RK> Uptime is only a weak indication of reliability .

Oh, I know. I was simply responding to your statement that: "We've
come to take it as a god-given that a computer needs a daily reboot in
order to operate (more or less) reliably."

In general, the whole monolithic vs. microkernel approach is, IMO,
simply a false choice. There is no reason to limit the choice for any
system to either minimal microkernels or "everything-in-kernel"
monolithic systems. It makes a lot of sense in desktop systems to put
performance-critical components that are used by most every
application in the kernel. On the other hand, requiring all of these
services on embedded systems doesn't necessarily make sense.
--
Patrick G. Bridges bri...@cs.unm.edu GPG ID = CB074C71
GPG fingerprint = FEEA ECFF 1E23 148C 2804 FDD9 DB63 6993 CB07 4C71

"Anyone that can't make money on Sports Night should get out of the
money-making business" - Calvin, on the last episode of Sports Night


Isaac Stern

unread,
Jan 24, 2003, 10:50:53 AM1/24/03
to
> i think you are wasting bits, he clearly doesnt understang even the basics
> and he is confusing Multics terminology with microkernels

Well, you've heard of Multics, that's a surprise. If you'd heard of
OS/2 (the implementation, not the name alone) you'd know that there
are systems that use more than just 2 (kernel and user) rings.

Since you are so knowledgeable, why don't you come up with some kind
of argument to disprove ANYTHING I said?


Christopher Browne

unread,
Jan 24, 2003, 10:50:57 AM1/24/03
to
After takin a swig o' Arrakan spice grog, Scott Schwartz <"schwartz+@usenet "@bio.cse.psu.edu> belched out...:

> Does no one recall this historic thread?
>
> From: research!rob (r...@alice.att.com)
> Subject: Andy Tanenbaum hasn't learned anything
> Date: 1992-04-06 13:06:28 PST
>
> http://groups.google.com/groups?threadm=32234%40darkstar.ucsc.edu

Yes, that's a well-known thread of discussion.

It pointed out nicely that, in 1992, computer science researchers
looking for funding for OS projects could get funding for microkernel
work, but little else.

Since then, things have changed, so that there is virtually no funding
for OS projects irrespective of their degree of conformance to
"microkernel dogma."

Many researchers thought they might create something new, interesting,
and perhaps even commercially viable.

Today? Microsoft bought out a bunch of the creative researchers,
eliminating a number of research groups, and the "practical" research
involves patching things onto Linux. The alternatives require so much
up-front work to create an environment (you need hardware, compilers,
and other tools to get an OS hosted) that only a few persistent
institutions have the resources to persist with projects that aren't
either:

a) Minor tweaks to something looking like Unix, or

b) Tweaks to Windows NT that could turn out to lead to someone
offering the researcher $lot$ of buck$.
--
(concatenate 'string "cbbrowne" "@ntlug.org")
http://www.ntlug.org/~cbbrowne/oses.html
"I'd crawl over an acre of 'Visual This++' and 'Integrated Development
That' to get to gcc, Emacs, and gdb. Thank you."
-- Vance Petree, Virginia Power


Bryan

unread,
Jan 24, 2003, 10:51:00 AM1/24/03
to
I know C++ and lots of other stuff. What's the difference between a
monolithic kernel and a microkernel?


Alex Colvin

unread,
Jan 25, 2003, 8:05:48 PM1/25/03
to
>I know c++ and lots of other stuff. What's the difference between a
>monolithic kernel and a microkernal?

The number and size of the liths, obviously.
Your posting points out that these are not opposites, but orthogonal.

                 monolithic   polylithic
                 ----------   ----------
  microkernel:   Plan 9?      QNX
  macrokernel:   BSD          Windows

A monolithic microkernel is small and self-contained, and not necessarily
extensible.

A polylithic microkernel is built out of separable parts, all small.

A monolithic macrokernel is a large integrated program.

A polylithic macrokernel is a collection of large unintegrated programs.


--
mac the naïf


Steinar Haug

unread,
Jan 25, 2003, 8:05:50 PM1/25/03
to
[Isaac Stern]

| You are missing the point. There is no such thing as more secure OS.
| System is either secure or not.

Maybe that's the case in your world. In *my* world there's a dramatic
difference in security between a properly configured FreeBSD system
and, say, Windows 98 out of the box. I call the FreeBSD box more
secure - and I couldn't care less whether you agree with my choice of
language or not.

Steinar Haug, Nethelp consulting, sth...@nethelp.no


Robert Kaiser

unread,
Jan 25, 2003, 8:05:55 PM1/25/03
to
In article <3e3160d8$1...@news.ucsc.edu>,

Patrick Bridges <bri...@cs.unm.edu> writes:
> In every successful monolithic system that I'm aware of has well-defined
> modular boundaries, for example the VFS interface, the network
> protocol interface, and the device-driver interface.

Hmm, you have a different perception of well-defined boundaries then.

Try to run (or even compile) a Linux 2.2.xx driver in a 2.4.xx kernel
environment to see what I mean.

>
> RK> Uptime is only a weak indication of reliability .
>
> Oh, I know. I was simply responding to your statement that: "We've
> come to take it as a god-given that a computer needs a daily reboot in
> order to operate (more or less) reliably."

Ah, OK.

(BTW, The "we" in this sarcastic remark was meant to address the
majority of computer users who don't know (and don't care) about
the inner workings of the machine. Neither you nor I belong to
that group ;-)


> It makes a lot of sense in desktop systems to put
> performance-critical components that are used by most every
> application in the kernel.

Why? Because having them in user-space would make them slow?
This may be the case with first generation microkernels such
as Mach, but the L4 people would tend to disagree (and they
have presented some quite impressive measurements to back their
claims).

Rob

Sotiris Ioannidis

unread,
Jan 25, 2003, 8:05:58 PM1/25/03
to
ok, this is my last post on this subject because you clearly don't understand:
1. there is _nothing_ that can guarantee security
(at least until humans learn how to write bug-free s/w)
2. there are a number of approaches to security
(compartments, languages, auditing, firewalls, etc.)
3. no approach fits all
4. often you combine methods

also:
1. i do recommend you go out and read some security papers; good sources are:
USENIX technical conference, USENIX Security, LISA, CCS, SNDSS, SOSP, SIGOPS, OSDI
2. learn the right terminology and what systems it refers to, to avoid confusion
3. learn to listen to other people who might have more expertise in certain areas

&si

Isaac Stern wrote:

--

Yoann Padioleau

unread,
Jan 27, 2003, 4:35:52 PM1/27/03
to
drm...@techie.com (Isaac Stern) writes:

> > Micro kernel brings nothing to the user.
>

> Really, and who are you?

I am a computer scientist stating my opinion, that's all.

>
> How do you create a secure OS without microkernel approach?

But who cares? Not the millions of Windows users.

>
> Microkernel is the only to build secure operating system and
> Multiserver is probably most logical way to build a true network OS.

Perhaps, but this is stuff that interests only designers of operating
systems, not users. The only thing a user wants is an OS that lets him
manage his hardware, with multitasking and some virtual memory; that's
all. So the microkernel does not bring anything to the user.

Then we can ask whether microkernels are easier to build from a
programmer's point of view. I don't see the microkernel as such a big
advance for the programmer. The Linux core (and, it seems, Plan 9) are
quite small operating systems and are not microkernels. They too use
module technology (with clean APIs for drivers, file systems, ...)
via the compiler.

Microkernels are, for me, clean modules made in C; that's OK, but I
would prefer a big kernel made in a higher-level language.


>
> Why second point could be argued, however first statement is a fact.
>

> Reason being is very simple all software that runs in ring 0 must be
> audited to guarantee security. Auditing monolithic OS is not possible.
>

> Where did you go to school?

In France, in a school where we are taught that you can have your own opinion.
>
>

--
Yoann Padioleau, INSA de Rennes, France,
Opinions expressed here are only mine. Je n'écris qu'à titre personnel.
**____ Get Free. Be Smart. Simply use Linux and Free Software. ____**


Patrick Bridges

unread,
Jan 27, 2003, 4:35:50 PM1/27/03
to
>>>>> "RK" == Robert Kaiser <bitb...@invalid-domain-see-sig.nil> writes:

RK> Why? Because having them in user-space would make them slow?
RK> This may be the case with first generation microkernels such
RK> as Mach, but the L4 people would tend to disagree (and they
RK> have presented some quite impressive measurements to back
RK> their claims).

For the record, I've made it a point *not* to cite Mach, as I'm well
aware of its limitations. Doing so would be about as specious as
citing Windows as evidence for the instability of all monolithic
systems. It's worth noting, however, that microkernels must always
perform *strictly more work* to provide the same services as a
monolithic kernel. There are other reasons why microkernels might
still be a good idea, such as flexibility, customizability, etc., but
performance isn't it.

How big is the performance hit? The 1997 L4 SOSP paper by Härtig
et al. showed about an 8% performance hit in a monolithic L4 Linux
server implementation running various macrobenchmarks. lmbench and
hbench:OS microbenchmark numbers were generally a good bit worse than
that, ranging from minimal slowdown for the TCP benchmark, to 50%
slower for creating /bin/sh, and more than a factor of 2 slower in
writing to /dev/null and a context switch test. To quote that paper:
"Both versions of MkLinux have a much higher penalty than
L4Linux. However, even the L4Linux penalties are not as low as we
hoped." Do you have newer numbers than this? If so, I'd be interested
in seeing them.

These penalties would presumably be even higher in a multiserver
implementation, since there will be more traps and address space
transitions going on; without a multiserver implementation, any
supposed stability gains will be minimal, since all of the services in
the monolithic kernel *are still running in the same protection
domain*. In addition, trap times are, IIRC, also becoming *slower* in
terms of the number of cycles required (staying roughly the same in terms of
real time), meaning that this performance hit is only increasing in
percentage terms.

Don't get me wrong - L4 is *very* good. I am in *no way* putting down
the excellent work they've done. If the modularity or lightweight
structure it gives is right for your application (and it is for some),
then it's great. I'm aware of a number of areas where this tradeoff is
worth making. In particular, in areas that don't need the generality
of Linux, the low-level interface provided by L4 can actually make
some applications perform much faster than they would in a
general-purpose monolithic system like Linux. The 1997 L4 paper showed
this as well.

It is worth noting, however, that in this case L4 would still be
slower than a customized lightweight monolithic kernel that performed
exactly the necessary services in-kernel. This is one of the reasons
that work on composable and configurable kernels is interesting, and
there has been a decent amount of work done on that recently, too
(e.g. Pebble from AT&T presented at USENIX'99). Microkernels are near
one end of the design spectrum, monolithic kernels the other. There is
no reason, however, to limit ourselves to staying just at the
extremities.

-Patrick

Isaac Stern

unread,
Jan 27, 2003, 4:35:47 PM1/27/03
to
> Microkernels, especially the second generation microkernels like
> L4/K42, are certainly good work, but this does not mean that they are
> the be-all and end-all of operating system design. As other posters
> have pointed out, there are a nuumber of different possible approaches
> to building systems, even secure systems. Oliver Spaatscheck's work on
> securing paths in Scout (OSDI '99), for example, shows another viable
> approach to constructing configurable, secure operating
> systems. Others have also cited interesting work, such as
> language-based security techniques (e.g., SPIN) and software fault
> isolation techniques (e.g. VINO's transaction-based approach and Wahbe
> and Lucco's sandboxing work).

I absolutely agree that not every OS needs to be a microkernel, the
same as not every system needs virtual memory. The target use of a
product dictates the requirements for every particular OS. Security
does not come free. If the requirement is a DoS-resistant OS,
fine-grained resource accounting is enough. If the system is to work
in an appliance, hard real time is far more important.

Some instances require a secure OS; for that, the system must be
audited and must provide confinement. I don't think any of the above
OSes fits. Therefore a microkernel *as of today* is the only way to
build a secure system.


Joe Pfeiffer

unread,
Jan 27, 2003, 4:35:49 PM1/27/03
to
bitb...@invalid-domain-see-sig.nil (Robert Kaiser) writes:

> In article <3e3160d8$1...@news.ucsc.edu>,
> Patrick Bridges <bri...@cs.unm.edu> writes:
> > In every successful monolithic system that I'm aware of has well-defined
> > modular boundaries, for example the VFS interface, the network
> > protocol interface, and the device-driver interface.
>
> Hmm, you have a different perception of well-defined boundaries then.
>
> Try to run (or even compile) a Linux 2.2.xx driver in a 2.4.xx kernel
> environment to see what I mean.

Not a fair test. Linux 2.2 had very well-defined boundaries;
unfortunately, they were very poorly documented. 2.4 also has very
well-defined boundaries; unfortunately, also very poorly documented.
But the reason the module-compile test fails is that the definition is
different between the two families, not that it doesn't exist.

Yoann Padioleau

unread,
Jan 27, 2003, 4:35:56 PM1/27/03
to
drm...@techie.com (Isaac Stern) writes:

> > i think you are wasting bits, he clearly doesnt understang even the basics
> > and he is confusing Multics terminology with microkernels
>
> Well, you heard of multics, that's a surprise. If you'd heard of OS/2
> (implementaion not the name alone) you'd know that there are systems
> that use more than just 2 (kernel and user) rings.

But again, what does it bring to the user? Nothing; this is
something useful only from an OS programmer's point of view,
and even from this point of view, I don't find having more than 2
rings so useful.

OK, you can place some daemon in, say, ring 1bis, but what does that
bring? Is it really useful? Do you have some facts that prove it shows
an improvement? Because I have facts that show that users really
appreciate multitasking (a truly innovative operating system concept)
and really appreciate virtual memory, ...


>
> Since you are so knowledgable. Why don't come up with some kind of
> argument to disprove ANYTHING I said?
>
>

--

David Moore

unread,
Jan 27, 2003, 4:35:53 PM1/27/03
to
>>Robert Kaiser (bitb...@invalid-domain-see-sig.nil)

>The subject of this thread is "Why is OS Research Dead?".
>I believe we are seeing first signs of this failure
>just now, manifested by the increasing unreliability of today's
>systems,

Really? All the data I have seen suggests that Operating Systems are
becoming more reliable.

For example, Gordon Bell somewhere states that in the early 70s the
uptime for a VAX 780 was measured in hours. Nowadays a VAX Cluster
based system (i.e. a system supporting disk striping, disk
mirroring, redundant disk controllers, a careful-write filesystem,
redundant CI busses, redundant compute nodes, and compute and storage
bridged by fibre optics to secondary geographic sites) -- all
supported by the Operating System -- provides as close to 24x7
operation as anyone can get.

And I am not aware of any research OS that took reliability as its
main theme, and indeed I doubt that they could produce the engineering
effort required to quantitatively improve on the above-quoted
production system.

David Moore
Chief Engineer OS Kernel & Compiler R&D
Incantation Systems Ltd.
//members.aol.com/marlinsmeadow


Isaac Stern

unread,
Jan 28, 2003, 11:09:09 AM1/28/03
to
Yoann Padioleau <Yoann.P...@irisa.fr> wrote in message news:<3e35a63c$1...@news.ucsc.edu>...

> drm...@techie.com (Isaac Stern) writes:
>
> > > i think you are wasting bits, he clearly doesnt understang even the basics
> > > and he is confusing Multics terminology with microkernels
> >
> > Well, you heard of multics, that's a surprise. If you'd heard of OS/2
> > (implementaion not the name alone) you'd know that there are systems
> > that use more than just 2 (kernel and user) rings.
>
> But again, what does it brings to the user ? nothing, this
> is something useful only from a programmer of os point of view,
> and even from this point of view, i dont find that having more that 2 rings
> so useful.

Probably not very useful (otherwise you would see it everywhere). I was
simply explaining terminology to our "educated" friend.

> Ok, you can place some deameon in say ring 1bis, but what it brings ?
> Is it really useful ? have some facts that prove that its show
> an improvement ?
> Because i have facts that show that users really appreciate multi-tasking
> (a truly innovative operating system concept) and really
> appreciate virtual memory, ...

Virtual memory is not needed for many embedded systems, but you give
it value. Why are microkernels an exception?


Robin Fairbairns

unread,
Jan 28, 2003, 11:09:15 AM1/28/03
to
marlin...@aol.com (David Moore) writes:
>>>Robert Kaiser (bitb...@invalid-domain-see-sig.nil)
>>The subject of this thread is "Why is OS Research Dead?".
>>I believe we are seeing first signs of this failure
>>just now, manifested by the increasing unreliability of today's
>>systems,
>
>Really? All the data I have seen suggests that Operating Systems are
>becoming more reliable.
>
>For example Gordon Bell somewhere states that in the early 70s the
>uptime for a VAX 780 was measured in hours.

and when was the vax 780 released to the public? i first saw one in
the showrooms in 1977; i would be surprised if it was available to
americans earlier than 1976. so an "early 70s" instance would have
been in a development lab; i've often encountered unreliable operating
systems in that sort of environment.

>Nowadays a VAX Cluster based system [... runs 24/7]

if you can find one. i thought they stopped making vax-based systems
_ages_ ago. however, on the assumption you meant vms rather than vax:
that's what you get from having an operating system that's been on the
road for nearly 30 years. (and it's a nice os: at least, _i_ enjoyed
system programming using it.)

however, i suspect robert was thinking of the mass market for windows
systems; the proportion of the market that runs systems that i would
classify as "reliable" is pretty small.
--
Robin Fairbairns, Cambridge -- voice mending ... I _think_


Yoann Padioleau

unread,
Jan 28, 2003, 11:31:36 PM1/28/03
to
drm...@techie.com (Isaac Stern) writes:

>
> > Ok, you can place some deameon in say ring 1bis, but what it brings ?
> > Is it really useful ? have some facts that prove that its show
> > an improvement ?
> > Because i have facts that show that users really appreciate multi-tasking
> > (a truly innovative operating system concept) and really
> > appreciate virtual memory, ...
>
> virtual memory is not needed for many embedded systems, but you give
> it value. Why are microkernels an exception?

Because virtual memory is a useful concept for the mass of Windows
users. When you have only 64 MB of memory but a big disk, you like
being able to run Word (which needs 100 MB). This is what I call
something useful. A kernel made via the monolithic technique or the
microkernel technique makes no difference from the point of view of
the user (that's why I don't see the microkernel as a big advance in
the operating system research domain).

u130...@mail.ru

unread,
Jan 30, 2003, 10:17:41 AM1/30/03
to
Frank D. Greco <fgr...@crossroadstechnospam.com> wrote:
> I haven't seen anything in the past few years that resembles
> bonafide OS research. Is it dead?

It's possible that research has just moved to areas without 90%
legacy APIs and protocols. One such area is J2EE application server
development. The structure of the JBoss server is very much like an
OS kernel, but there are no TCP stack headaches.

--
Alexandr Konovalov avkon...@imm.uran.ru


Christopher Browne

unread,
Jan 30, 2003, 10:17:42 AM1/30/03
to
A long time ago, in a galaxy far, far away, Yoann Padioleau <Yoann.P...@irisa.fr> wrote:
> drm...@techie.com (Isaac Stern) writes:
>> virtual memory is not needed for many embedded systems, but you give
>> it value. Why are microkernels an exception?
>
> Because virtual memory are a useful concept for the mass of windows
> user. When you have only 64Mb of memory, but a big disk, you like
> to be able to run Word (that needs 100Mb). This is what i call
> something useful.

.. But now that systems typically get deployed with lots of memory,
this is no longer the prime reason for VM to be useful.

VM is also useful for two other notable reasons:

-> It provides a memory model allowing running multiple applications
at once;

-> It provides security so that those multiple applications can be
assured of not trampling on one another.

Those uses of VM are still relevant today even though it's easy and
cheap to have plenty of memory.
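Both points can be seen in a toy sketch (Python on a POSIX system; the fork/pipe mechanics are my illustration, not anything from the post): after a fork, both processes use the same virtual address for the same variable, yet neither can trample the other's copy.

```python
import os

# Parent and child see the same *virtual* address for `data`, but
# virtual memory gives each process its own physical copy: the
# child's write is invisible to the parent.
data = bytearray(b"parent")
r, w = os.pipe()
pid = os.fork()
if pid == 0:                        # child
    data[:] = b"child!"             # modifies only the child's copy
    os.write(w, bytes(data))
    os._exit(0)
os.waitpid(pid, 0)
print(os.read(r, 6), bytes(data))   # b'child!' b'parent'
```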

> A kernel made via monolithic technique or micro kernel technique
> make no difference from the point of view of the user (that's why i
> dont see micro kernel as a big advance in the operating system
> research domain).

In most cases, microkernels haven't been used to do things that
fundamentally couldn't be done using monolithic kernels. People
haven't gone all that far in making use of multiserver systems.

For the most part, MK has gotten used to create systems with the same
sets of user-visible abstractions as there were with monolithic
kernels.

The big user-visible differences have been that since the MK systems
are less mature, they have typically been _less_ robust, and since
they have more IPC work to do, they have typically been slower than
the monolithic systems.

The result is not too surprising: People interested in "making a
real-world difference" have found it generally more worthwhile to put
efforts into monolithic systems.
--
If this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me
http://www.ntlug.org/~cbbrowne/oses.html
"The Unix philosophy is to provide some scraps of metal and an
enormous roll of duct tape. With those -- and possibly some scraps of
your own -- you can conquer the world." -- G. Sumner Hayes


Robert Kaiser

unread,
Jan 30, 2003, 10:17:44 AM1/30/03
to
In article <3e35a636$1...@news.ucsc.edu>,

Patrick Bridges <bri...@cs.unm.edu> writes:
> It's worth noting, however, that microkernels must always
> perform *strictly more work* to provide the same services as a
> monolithic kernel. There are other reasons why microkernels might
> still be a good idea, such as flexibility, customizability, etc., but
> performance isn't it.

We agree on this.

Another way to look at it is: any operating system puts up barriers for
the applications. To traverse these barriers costs some performance, but
it brings benefits (such as safety or a portable means of accessing devices,
etc.). By selecting an OS (if you decide to choose one at all), you accept
that these benefits are worth the cost.

The difference with a microkernel in this picture is simply that it imposes
more barriers at a finer granularity. It is only logical that this is more
costly, but it also opens ways to implement more security.

>
> How big is the performance hit? The 1997 L4 SOSP paper by Hartig,
> et. al showed about a 8% performance hit in a monolithic L4 Linux
> server implementation running various macrobenchmarks.

.. , which is IMO a pretty good result ...

> OS microbenchmark numbers were generally a good bit worse than
> that, ranging from minimal slowdown for the TCP benchmark, to 50%
> slower for creating /bin/sh,

Micro benchmarks should be taken with a grain of salt. They concentrate
on measuring a particular feature and they are useful to judge the
implementation of that particular feature, but they don't tell much
about the effect on the overall system performance.

> and more than a factor of 2 slower in
> writing to /dev/null and a context switch test.

While this looks like a particularly bad result at first glance, it is
actually very interesting as it proves my point: This particular test is
designed to (ideally) measure pure system call overhead. To do this,
it invokes functions in the kernel that are known to do little or
nothing at all, and it does so in a tight loop. Note that this is a
very pathological case: normally, applications make calls to the kernel
because they want it to do some useful work. Such a load pattern only
makes sense in the context of this micro benchmark.

System call overhead is caused mainly by the cost of crossing the
user-kernel threshold. Since the L4Linux system has to cross that
threshold twice as many times as native Linux, I would actually
*expect* a result in that ballpark, anything different would be
an indication of a flaw either in L4, in Linux or in the benchmark
itself.

However, to realistically judge the performance hit, it is important to
keep an eye on the absolute numbers here: In the SOSP paper you cited,
the system call overhead was reported to be 3.95 microseconds for
L4Linux versus 1.68 microseconds for native Linux on a 133MHz Pentium.
So, the cost of the microkernel overhead would be 2.27 microseconds per
system call. Yes, this is more than a factor of two as you said, but,
in order for this to have a visible impact on system performance (say
a 10% performance loss), that system would have to make roughly 44,000
system calls per second. I'm not sure if this kind of load is practically
realistic. Anyway, even assuming it is, the total performance hit caused
by the system call overhead in this load situation would be 17%, versus
7% for native Linux, which is not exactly negligible either.
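The arithmetic behind this estimate can be sketched in a few lines of Python
(the per-call costs are the SOSP figures quoted above; the break-even rate
and the percentage shares are derived from them, not measured):

```python
# Per-syscall costs from the 1997 SOSP paper (133MHz Pentium), microseconds.
L4LINUX_US = 3.95   # L4Linux syscall cost
NATIVE_US = 1.68    # native Linux syscall cost
overhead_us = L4LINUX_US - NATIVE_US  # extra cost of the microkernel path

# Syscall rate at which the extra overhead alone eats 10% of one CPU second:
rate = 0.10 * 1e6 / overhead_us
print(f"extra overhead: {overhead_us:.2f} us/call")
print(f"break-even rate for a 10% loss: {rate:.0f} calls/s")

# At that rate, the share of CPU time spent in syscall handling:
print(f"L4Linux share: {rate * L4LINUX_US / 1e6:.0%}")   # ~17%
print(f"native share:  {rate * NATIVE_US / 1e6:.0%}")    # ~7%
```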

Let me propose another benchmark: count the total number of system calls
handled by a Linux kernel during a normal busy day. That number times
2.27 microseconds gives the total amount of CPU time that would be wasted
in an L4Linux system. Compare that to the amount of time spent by the
CPU doing real, useful work, regardless of whether it does that in user mode
or kernel mode, during that same normal busy day. Unfortunately, I haven't
done such a benchmark, but I'd venture to say that the latter amount would
exceed the first by several orders of magnitude.

Lesson learned from that: Lack of CPU horsepower is not a problem in
today's systems. We can obviously afford to even burn far more of it
in idle loops and screen savers than any microkernel implementation
would cost us. But lack of safety, security and reliability *is* a
problem. I'd say that investing some of that excess CPU power into
concepts that promise better solutions to these problems is well
worthwhile.


>
> These penalties would presumably be even higher in a multiserver
> implementation, since there will be more traps and address space
> transitions going on;

... yes, probably by another ~2.27 microseconds per transition. Whether
or not this matters depends on the functionality implemented in the
servers vs. the number of transitions that are necessary to reach them.

> without a multiserver implementation, any
> supposed stability gains will be minimal, since all of the services in
> the monolithic kernel *are still running in the same protection
> domain*.

Yes, although even that approach may make sense if you want to have multiple
OSes coexist on a single machine: Consider, e.g. a system with a safety-
critical, real-time Ada program running alongside a non safety-critical,
non real-time Linux system ...

> In addition, trap times are, IIRC, also becoming *slower* in
> terms of numbers cycles required (staying roughly the same in terms of
> real time), meaning that this performance hit is only increasing in
> percentage terms.

Sorry, I'm not sure I understand what you mean. What are traps in this
context (segfaults, system calls)? Could you explain this a little further?

> This is one of the reasons
> that work on composable and configurable kernels is interesting, and
> there has been a decent amount of work done on that recently, too
> (e.g. Pebble from AT&T presented at USENIX'99).

To be honest, I'm not really familiar with these (guess I should do some
reading), so I can't really comment.

> Microkernels are near
> one end of the design spectrum, monolithic kernels the other. There is
> no reason, however, to limit ourselves to staying just at the
> extremities.

Agreed.

But, (returning to topic) I believe we both agree that there is even
less reason to conclude that "OS research is dead" because there is
allegedly "nothing to be gained from it".

Isaac Stern

Jan 30, 2003, 10:17:48 AM
> > virtual memory is not needed for many embedded systems, but you give
> > it value. Why are microkernels an exception?
>
> Because virtual memory is a useful concept for the mass of Windows users.
> When you have only 64MB of memory but a big disk, you like to be able
> to run Word (which needs 100MB). This is what I call something useful.
> A kernel made via monolithic techniques or microkernel techniques
> makes no difference from the point of view of the user
> (that's why I don't see microkernels as a big advance in the operating
> system research domain).

Most systems have no virtual memory or disk. Why Windows users? Why not
people who require security?

Maybe microkernels are not "what you call useful"; does that mean they
brought nothing to others?


Robert Kaiser

Jan 30, 2003, 10:17:50 AM
In article <3e35...@news.ucsc.edu>,

Joe Pfeiffer <pfei...@cs.nmsu.edu> writes:
> bitb...@invalid-domain-see-sig.nil (Robert Kaiser) writes:
>
>> Try to run (or even compile) a Linux 2.2.xx driver in a 2.4.xx kernel
>> environment to see what I mean.
>
> Not a fair test. Linux 2.2 had very well-defined boundaries;
> unfortunately, they were very poorly documented. 2.4 also has very
> well-defined boundaries; unfortunately, also very poorly documented.

What good is a definition if:

1) it is not documented
2) there is no way to enforce it upon the programmers who are
supposed to program to it?

> But the reason the module-compile test fails is that the definition is
> different between the two families, not that it doesn't exist.

FWIW, I seem to remember some changes to the driver-kernel interface
(if such a thing exists at all) in the middle of the 2.2.x kernel. You
might argue that those were minor changes only.

In my experience, building a driver in a kernel environment different
from the one it was written for (even if both are in the same family)
is usually a PITA. It rarely works out of the box. (And don't get me
started about using binary modules without recompilation!)

Not my idea of modularity.

Yoann Padioleau

Jan 31, 2003, 9:08:05 AM
drm...@techie.com (Isaac Stern) writes:

> > > virtual memory is not needed for many embedded systems, but you give
> > > it value. Why are microkernels an exception?
> >
> > Because virtual memory is a useful concept for the mass of Windows users.
> > When you have only 64MB of memory but a big disk, you like to be able
> > to run Word (which needs 100MB). This is what I call something useful.
> > A kernel made via monolithic techniques or microkernel techniques
> > makes no difference from the point of view of the user
> > (that's why I don't see microkernels as a big advance in the operating
> > system research domain).
>
> Most systems have no virtual memory or disk. Why Windows users? Why not
> people who require security?

Because there are far fewer of them.

>
> Maybe microkernels are not "what you call useful"; does that mean they
> brought nothing to others?

How many are those "others"? I mean, if a concept such as the microkernel
is only useful for 100 people, I will not claim that microkernels are
a big advance in the operating systems research area.
I don't see how microkernels help make a system more secure.
I would prefer to rely on a better language to ensure security.
Modules (even small and well-designed ones) made in C are not what I call
a secure system. A monolithic kernel made with the same modules
will have the same level of security.

Patrick Bridges

Jan 31, 2003, 9:08:08 AM
>>>>> "RK" == Robert Kaiser <bitb...@invalid-domain-see-sig.nil> writes:

>> In addition, trap times are, IIRC, also becoming *slower* in
>> terms of numbers cycles required (staying roughly the same in
>> terms of real time), meaning that this performance hit is only
>> increasing in percentage terms.

RK> Sorry, I'm not sure I understand what you mean. What are traps
RK> in this context (segfaults, system calls)? Could you explain
RK> this a little further?

Cycle times are decreasing. Trap handling times, however (for system
calls, for example), are not, IIRC. So, a program that used to take 100
seconds on a 133MHz machine and had 8 seconds of that as system
overhead (8%) might now take 40 seconds to execute, with 5 seconds
(12.5%) of that still being system overhead.
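That arithmetic can be sketched directly (these are the hypothetical
figures from the example above, not measurements):

```python
# Hypothetical runtimes: compute time shrinks with faster clocks,
# but trap/system-overhead time shrinks much less.
old_total, old_sys = 100.0, 8.0   # seconds: old 133MHz machine
new_total, new_sys = 40.0, 5.0    # seconds: faster machine

print(f"old overhead share: {old_sys / old_total:.1%}")  # 8.0%
print(f"new overhead share: {new_sys / new_total:.1%}")  # 12.5%
```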

RK> But, (returning to topic) I believe we both agree that there
RK> is even less reason to conclude that "OS research is dead"
RK> because there is allegedly "nothing to be gained from it".

Oh, I completely agree. OS research is by no means dead. It's just
been going in more directions than just kernel design. There's been
interesting work on fault tolerance, peer-to-peer systems,
configurable systems, adaptation in OSes, and a variety of other
things.

Isaac Stern

Jan 31, 2003, 9:08:10 AM
> The big user-visible differences have been that since the MK systems
> are less mature, they have typically been _less_ robust, and since
> they have more IPC work to do, they have typically been slower than
> the monolithic systems.
>
> The result is not too surprising: People interested in "making a
> real-world difference" have found it generally more worthwhile to put
> efforts into monolithic systems.

That's not accurate. QNX is quite robust and popular.


Isaac Stern

Jan 31, 2003, 9:08:12 AM
> > > Micro kernel brings nothing to the user.
> >
> > Really, and who are you?
>
> I am a computer scientist who is giving his opinion, that's all.
>
> >
> > How do you create a secure OS without microkernel approach?
>
> but who cares? Not the millions of Windows users.

How about corporations that have trade secrets, governments, and the many
people who don't want trojans stealing their private data?

> perhaps, but this is stuff that interests only designers of operating
> systems, not users. The only thing a user wants is an OS that allows him
> to manage his hardware, that has multitasking, and some virtual memory,
> that's all. So microkernels do not bring anything to the user.

Do Windows users these days know what virtual memory is? I doubt it.

> Microkernels are for me clean modules made in C; that's OK, but I
> would prefer a big kernel made in a higher-level language.

Microkernels, for most of us, are kernels with a smaller set of APIs.

> in France, in a school where we are taught that you can have your own opinion.

I could have an opinion that 2+3=8. It's nice to have data to support
yours before invalidating research performed by others.

IS.


Isaac Stern

Jan 31, 2003, 9:08:14 AM
> RK> Why? Because having them in user-space would make them slow?
> RK> This may be the case with first generation microkernels such
> RK> as Mach, but the L4 people would tend to disagree (and they
> RK> have presented some quite impressive measurements to back
> RK> their claims).
>
> For the record, I've made it a point *not* to cite Mach, as I'm well
> aware of its limitations. Doing so would be about as specious as
> citing Windows as evidence for the instability of all monolithic
> systems. It's worth noting, however, that microkernels must always
> perform *strictly more work* to provide the same services as a
> monolithic kernel. There are other reasons why microkernels might
> still be a good idea, such as flexibility, customizability, etc., but
> performance isn't it.

Mach has two serious problems: complex IPC semantics and a lack of
common optimizations. The second has already been addressed. [OSDI '02 WIP]

> How big is the performance hit? The 1997 L4 SOSP paper by Hartig,
> et al. showed about an 8% performance hit in a monolithic L4 Linux
> server implementation running various macrobenchmarks. lmbench and
> hbench:OS microbenchmark numbers were generally a good bit worse than
> that, ranging from minimal slowdown for the TCP benchmark to 50%
> slower for creating /bin/sh, and more than a factor of 2 slower in
> writing to /dev/null and a context switch test. To quote that paper:
> "Both versions of MkLinux have a much higher penalty than
> L4Linux. However, even the L4Linux penalties are not as low as we
> hoped." Do you have newer numbers than this? If so, I'd be interested
> in seeing them.

Partially that's because L4[/Mk]Linux emulates another system and the
entire port was a fast hack. That guy (see above) has newer data for
MkLinux, and you could download L4Linux and run your own lmbench.


Robert Kaiser

Feb 3, 2003, 3:16:45 PM
In article <3e3a8348$1...@news.ucsc.edu>,

Patrick Bridges <bri...@cs.unm.edu> writes:
> Cycle times are decreasing. Trap handling times, however (for system
> calls, for example), are not, IIRC. So, a program that used to take 100
> seconds on a 133MHz machine and had 8 seconds of that as system
> overhead (8%) might now take 40 seconds to execute, with 5 seconds
> (12.5%) of that still being system overhead.

Ah, OK. Are you sure this is true? I do not have any data to back or
refute this (anyone?). I could understand how that could be true for
interrupts, because they involve I/O (especially on PCs with that obsolete
interrupt controller), but does it also apply to system calls and segfaults?

Gilles Maigne

Feb 3, 2003, 3:16:49 PM
Yoann Padioleau <Yoann.P...@irisa.fr> wrote in message news:<3e27...@news.ucsc.edu>...
> marlin...@aol.com (David Moore) writes:
>
> > OS Research is Dead because people like me cannot get work.
>
> perhaps OS research is dead because it is no longer an important question.
> Do you really see problems with current kernels?
> Whether you build a monolithic or micro kernel is not that important
> for the user, and what I think is the most important question for an OS
> is what it brings to the user. Microkernels bring nothing to the user.
>

Hello Yoann,


I think you underestimate the usefulness of the micro-kernel approach.


I see a micro-kernel as a toolbox for writing kernel-related
applications, and for this it is a powerful tool. It allows one to develop
kernel applications quickly, and I think in that sense it is useful for
the user.

I have been working for ten years in a company which makes micro-kernels,
and I can give you many useful examples of micro-kernel use:


1/ Making subsystems for proprietary OSes.

Some telecom companies have proprietary OSes, with big applications
(millions of lines of code) running on top of these OSes. When they want
to migrate to a Unix-like OS, they do not want to rewrite all the
applications. In that case micro-kernels are useful, because they offer
a good environment to write a "subsystem" implementing the old OS services
on top of a micro-kernel. The approach allows one to have a system which
runs both a Unix personality and a proprietary OS personality. I have a
few examples of success stories.

2/ An OS for a secure environment

For instance, bank applications require a highly secure execution
environment. Some of our customers have written a sub-system providing
a secure API for bank applications. In that case Unix was deemed too
insecure.

3/ A micro-kernel provides a programming environment for kernel
applications.

For instance, it is possible to write a kernel application (think
something like a supervisor Unix process) which manages a device (ATM,
for instance). This application can export an API to user applications
for accessing the device. This kernel application can use the POSIX API,
and can be debugged and stopped like a Unix process. If you want to do
similar things with Unix, you must write a kernel module and it becomes
much more difficult:
- it is not easy to debug
- it is difficult (or impossible) to shut down the application and
to restart it
- you will have to use internal APIs to communicate
- you do not have a lot of flexibility to export services to user
applications (processes).

4/ Writing a single-system-image system
The Chorus micro-kernel was used with success to write single-system-image
systems (the Amadeus project was one branch of this). In that case I think
the distributed nature of the micro-kernel interface eased the development.
SSI is really something useful for the "user".

Gilles.


Yoann Padioleau

Feb 5, 2003, 9:41:19 AM
gilles...@jaluna.com (Gilles Maigne) writes:

>
> 1/ Making subsystems for proprietary OSes.
>
> Some telecom companies have proprietary OSes, with big applications
> (millions of lines of code) running on top of these OSes. When they want
> to migrate to a Unix-like OS, they do not want to rewrite all the
> applications. In that case micro-kernels are useful, because they offer
> a good environment to write a "subsystem" implementing the old OS services
> on top of a micro-kernel. The approach allows one to have a system which
> runs both a Unix personality and a proprietary OS personality. I have a
> few examples of success stories.

Good point indeed.
But do you agree that micro-kernels are good only for a "niche"?
For the mass of users, do micro-kernels really have a benefit?

>
> 2/ An OS for a secure environment
>
> For instance, bank applications require a highly secure execution
> environment. Some of our customers have written a sub-system providing
> a secure API for bank applications. In that case Unix was deemed too
> insecure.
>
>
>
> 3/ Micro-kernel provides a programming environment for kernel application.
>
> For instance it is possible to write a kernel application (think
> something like a supervisor unix process) which manages a device (ATM
> for instance). This application can export some API to user application
> to access the device. This kernel application can use Posix API, can be
> debugged, stopped like a Unix process. If you want to do similar things
> with Unix, you must write a kernel module and it becomes much more
> difficult :
> - it is not easy to debug

I thought Linux provides some facilities to make debugging easy.

> - it is difficult (or impossible) to shutdown the application and
> to restart it

What about loadable kernel modules, as in Linux?

> - you will have to use internal API to communicate

and ??

> - you do not have a lot of flexibility to export service to user
> application (process).

and ??

>
> 4/ Writing a single-system-image system
> The Chorus micro-kernel was used with success to write single-system-image
> systems (the Amadeus project was one branch of this). In that case I think
> the distributed nature of the micro-kernel interface eased the development.
> SSI is really something useful for the "user".
>
> Gilles.
>
>

--

Espen Skoglund

Feb 5, 2003, 9:41:20 AM
[Robert Kaiser]

> In article <3e3a8348$1...@news.ucsc.edu>,
> Patrick Bridges <bri...@cs.unm.edu> writes:
>> Cycle times are decreasing. Trap handling times, however, (for
>> system calls, for example) are not IIRC. So, a program that used to
>> take 100 seconds on a 133MHz machine and had 8 seconds of that as
>> system overhead (8%) might now take 40 seconds to execute, with 5
>> seconds (12.5%) of that still being system overhead.

> Ah, OK. Are you sure this is true ? I do not have any data to back
> nor refute this (anyone?) I could understand how that could be true
> for interrupts because they involve I/O (especially on PCs with that
> obsolete interrupt controller), but does it also apply to system
> calls and segfaults ?

System calls tend to use software interrupts to trap into privileged
mode, so yes, this does apply. On many architectures there are,
however, other means to enter privileged mode. On ia32 one can use
the sysenter/sysexit instructions instead of software interrupts. On
ia64 one can use the epc instruction. The problem is that using these
schemes tends to break the existing syscall ABI (e.g., the syscall ABI
for ia32 platforms uses "int 0x80" in many UNIXes). For this reason we
always use a kernel-provided syscall trampoline page in the new L4 ABI.
The kernel can then decide on the optimal way to perform system calls.
Some system calls might even be performed entirely at user level.

Using the alternative syscall mechanisms can really speed things up.
An int/iret (i.e., software interrupt and return) on a 1.5GHz P4 will
for instance take ~1900 cycles. A sysenter/sysexit on the same
machine takes only ~160 cycles---one order of magnitude faster!

For ia64 and the epc instruction, the potential speedup is even
greater. (Unfortunately I haven't got any numbers as of now.)
Basically, the epc instruction does not actually trap and (should)
require no pipeline stalls or flushes.

On a side note, I should mention that the syscall times (and other
micro-benchmarks) you referred to earlier in this thread are a bit
outdated. The following TR:

http://i30www.ira.uka.de/research/documents/l4ka/smallspaces.pdf

gives more updated numbers (on a 1.5GHz P4) for some microbenchmarks.
In particular, the getpid() syscall takes 2200 cycles on L4Linux
compared to 1540 cycles on native Linux. The extra overhead (660
cycles) is mainly due to the extra context switches to and from the
L4Linux server.
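To put those cycle counts in perspective, here is a small sketch (the
counts are the figures quoted above; the nanosecond conversion assumes
the same 1.5GHz clock):

```python
# Cycle counts quoted in this thread for a 1.5GHz P4; illustrative only.
CLOCK_HZ = 1.5e9

int_iret = 1900        # cycles: int/iret software-interrupt round trip
sysenter = 160         # cycles: sysenter/sysexit round trip
print(f"int/iret vs sysenter: {int_iret / sysenter:.1f}x")     # ~11.9x

getpid_l4linux = 2200  # cycles: getpid() via the L4Linux server
getpid_native = 1540   # cycles: getpid() on native Linux
extra = getpid_l4linux - getpid_native
print(f"extra cycles per getpid(): {extra}")                    # 660
print(f"extra wall time: {extra / CLOCK_HZ * 1e9:.0f} ns")      # 440 ns
```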

eSk


Burton Samograd

Feb 5, 2003, 9:41:23 AM

On 2003-02-03, Gilles Maigne <gilles...@jaluna.com> wrote:
> 3/ Micro-kernel provides a programming environment for kernel application.
>
> For instance it is possible to write a kernel application (think
> something like a supervisor unix process) which manages a device (ATM
> for instance). This application can export some API to user application
> to access the device. This kernel application can use Posix API, can be
> debugged, stopped like a Unix process. If you want to do similar things
> with Unix, you must write a kernel module and it becomes much more
> difficult :
> - it is not easy to debug

> - it is difficult (or impossible) to shutdown the application and
> to restart it

> - you will have to use internal API to communicate

> - you do not have a lot of flexibility to export service to user
> application (process).

I think this is the most important benefit of a micro kernel, and one
that is often overlooked. User-space application development is generally
much easier than equivalent kernel development, especially on the
debugging side (i.e. running gdb on your server at startup, versus looking
through a random collection of unfinished how-tos trying to figure out
how to set up a serial-port debugger, then rebuilding with a debug kernel
and rebooting, etc.).

--
burton samograd
kru...@hotmail.com
http://kruhftwerk.dydns.org


Peter da Silva

Feb 10, 2003, 2:45:46 PM
In article <3e3ece31$1...@news.ucsc.edu>,
Gilles Maigne <gilles...@jaluna.com> wrote:

Pardon if I "me-too" you here with a "micro-summary"?

> Yoann Padioleau <Yoann.P...@irisa.fr> wrote in message news:<3e27...@news.ucsc.edu>...

> > perhaps OS research is dead because it is no more an important question.
> > Do you see really problems with current kernel ?

Not just "yes" but "hell yes".

> > the fact to do a monolythic or micro kernel is not that important
> > for the user, and what i think is the most important question for an OS
> > is what it brings to the user. Micro kernel brings nothing to the user.

Most of the *real* microkernels are in embedded systems, and while
the user isn't aware of their presence they certainly benefit from
them. :)

--
I've seen things you people can't imagine. Chimneysweeps on fire over the roofs
of London. I've watched kite-strings glitter in the sun at Hyde Park Gate. All
these things will be lost in time, like chalk-paintings in the rain. `-_-'
Time for your nap. | Peter da Silva | Har du kramat din varg, idag? 'U`


Peter da Silva

Feb 10, 2003, 2:45:48 PM
In article <3e41228f$1...@news.ucsc.edu>,

Yoann Padioleau <Yoann.P...@irisa.fr> wrote:
> But do you agree that micro kernel are good only for a "niche" ?

Most computers in the world is a pretty big niche. You do realise that most
computers in the world are embedded systems you never even know about, right?
Desktop and server computing is very visible, but in terms of sheer numbers
it is a niche so small it's almost negligible.

Gilles Maigne

Feb 18, 2003, 9:46:18 AM
Yoann Padioleau <Yoann.P...@irisa.fr> wrote in message news:<3e41228f$1...@news.ucsc.edu>...

Yoann Padioleau wrote:

> Good point indeed.
> But do you agree that micro-kernels are good only for a "niche"?
> For the mass of users, do micro-kernels really have a benefit?

I think micro-kernels bring flexibility and modularity, and there are
many places where flexibility and modularity are a plus. In the embedded
world, usually you sell a hardware device and you have to shorten
development time. In that case micro-kernels fit well. In the embedded
market, I think micro-kernels are relatively successful (QNX and
ChorusOS have significant market share).


>
>
> I thought Linux provides some facilities to make debugging easy.

Yes. But if a kernel module crashes, you have to reboot Linux. In the
case of Chorus (or Jaluna-1), if a supervisor application crashes, it
is usually possible to start a new one.

>
>
>> - it is difficult (or impossible) to shut down the application
>> and to restart it
>
>
> What about loadable kernel modules, as in Linux?

It is much more difficult to make a module which frees its resources
correctly when it is unloaded.


Gilles.

