
Microkernels vs. Modular Monolithic kernels


Vassilis Pandis

Aug 27, 2001, 5:29:03 PM

First of all I would like to apologise to everyone who has had enough
of Microkernels vs. Modular Monolithic kernels, since countless
debates have been held over this matter. I do understand how modular
monolithic kernels are superior to normal monolithic kernels. However,
in spite of having read many newsgroup posts concerning this matter,
I don't understand what microkernels have to offer us:

Better RAM usage? Oh please! The kernel doesn't change in size; it
is still a binary chunk of data, but it is simply split up. You cannot
unload/page parts of it to disk any more than you can in the modular
monolithic design. How could it be possible to drop the filesystem? Or
the memory management? Or the scheduler? All these are constantly
needed. Microkernels ARE slower than monolithic kernels, supposing
that all else is equal, and this is due to the message passing system.
To everyone who argues that with today's hardware speed all this is
not noticeable, I will say that yes, it is noticeable. If it weren't,
why would all microkernel developers spend hours figuring out how to
reduce the messages? Microkernels are also a more complicated design,
and the total output is a bigger kernel (assuming that the kernel
consists of all the separate programs that form it). Therefore, the
RAM usage is worse! Modular monolithic kernels unload everything they
do not need from RAM, and they are simpler and faster because they
don't use messages. The only area in which I could find microkernels
to be superior to monolithic kernels is multiprocessing, because (I
believe) both CPUs could be running the kernel at the same time.

PS: Yes, I know that the speed, complexity and clarity of code are
very implementation specific. However, for fairness's sake I suppose
that all these parameters are (somehow magically) equal.


Espen Skoglund

Aug 29, 2001, 2:43:08 PM

Bugger, being pretty much involved in microkernel work myself, I feel
sort of obliged to post a followup here.

[Vassilis Pandis]


> Better RAM usage? Oh please! The kernel doesn't change in size; it
> is still a binary chunk of data, but it is simply split up. You
> cannot unload/page parts of it to disk any more than you can in the
> modular monolithic design. How could it be possible to drop the
> filesystem? Or the memory management? Or the scheduler? All these
> are constantly needed.

Changing, e.g., a global memory management scheme in a running system
may of course prove difficult. It is, however, possible to layer
memory management systems, providing specialized services to certain
applications.
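
(A sketch in C of what layering can mean in practice; every name here
is invented, though Mach's external pagers and L4-style user-level
pagers work roughly along these lines. Each manager resolves the
faults it knows about and defers the rest to the manager above it.)

/* Invented interface sketch: each memory manager is a task that
 * handles page faults for the regions delegated to it, and can
 * delegate sub-regions to more specialized managers below it. */

typedef unsigned long vaddr_t;

struct pager {
    /* Resolve a fault at addr; return the backing frame or -1. */
    long (*fault)(struct pager *self, vaddr_t addr);
    struct pager *parent;   /* ask the manager above on a miss */
};

/* A specialized pager (say, one that pins pages for a database)
 * handles what it knows and passes the rest up the chain. */
long pager_fault(struct pager *p, vaddr_t addr)
{
    long frame = p->fault(p, addr);
    if (frame < 0 && p->parent)
        return pager_fault(p->parent, addr);
    return frame;
}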

> Microkernels ARE slower than monolithic kernels, supposing that all
> else is equal, and this is due to the message passing system.

Modular monolithic kernels also conceptually use a message passing
system. The only difference between the two is that where the
monolithic kernel uses a regular procedure call, the microkernel
system uses a remote procedure call (i.e., including a context
switch). But of course, you are right, this message passing is more
costly than in a monolithic system.
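
(To make the cost difference concrete, a tiny illustrative C sketch;
all names are invented, and the ipc_call stub only shows the extra
indirection where a real microkernel would trap and context-switch.)

/* Illustrative sketch only: the same request as an ordinary procedure
 * call (monolithic) versus a message to a server (microkernel).
 * Every name here is invented for the example. */

enum { OP_READ = 1, FS_SERVER = 2 };

typedef struct {
    int op;       /* requested operation       */
    int arg;      /* operation argument        */
    int result;   /* filled in by the "server" */
} msg_t;

/* Stub filesystem so the sketch is self-contained. */
static int fs_read(int block) { return block * 2; }

/* Monolithic: the subsystem is reached with one call instruction,
 * in the same address space. */
static int read_mono(int block)
{
    return fs_read(block);
}

/* Microkernel: the request becomes a message. A real ipc_call traps
 * into the kernel, context-switches to the server, and back again;
 * this stub only shows the extra indirection. */
static int ipc_call(int server, msg_t *m)
{
    (void)server;
    m->result = fs_read(m->arg);
    return 0;
}

static int read_micro(int block)
{
    msg_t m = { .op = OP_READ, .arg = block };
    ipc_call(FS_SERVER, &m);
    return m.result;
}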

> Microkernels are also a more complicated design, [...]

The design of the microkernel itself bears very little complexity, and
as for the complete system (i.e., the kernel *and* the operating
system services running at user level) I don't see how the complexity
of a microkernel system differs much from that of a monolithic one.
Sure, a microkernel system needs to have well defined boundaries
between the different system services, but the same design principles
should also be applied to monolithic systems. Not having a well
designed infrastructure for, e.g., creating new device drivers will
hurt you badly as your system evolves.

I could fire a counter-argument here, saying that microkernels
*force* system designers to define well-specified interfaces to their
subsystems, thereby preventing a subsystem from being used in an
undefined manner. Surely this must be a good thing. It allows you to
make certain assumptions about how your subsystem is used, and the
underlying architecture of your system ensures that these assumptions
will always hold. In fact, the way that microkernels force system
designers to structure their system is perhaps *the* major advantage
of microkernels. It offers a compelling way for us to deal with the
ever-increasing complexity of constructing operating systems.

> [...] and the total output is a bigger kernel (assuming that the
> kernel consists of all the separate programs that form
> it). Therefore, the RAM usage is worse! Modular monolithic kernels
> unload everything they do not need from RAM, [...]

This argument is moot. A microkernel system can also kill/delete the
operating system services (i.e., user-level tasks) which are not
currently used.

> The only area in which I could find microkernels to be superior to
> monolithic kernels is multiprocessing, because (I believe) both
> CPUs could be running the kernel at the same time.

A properly designed monolithic kernel can also run simultaneously on
multiple CPUs, so this pro-argument for microkernels is not really
valid.

eSk


Edd Doutre

Sep 4, 2001, 3:49:48 PM

-
Having worked on both types of systems in a pragmatic sort of way,
"I feel obliged" to contribute as well...
-Edd
-
Espen Skoglund wrote:

>Bugger, being pretty much involved in microkernel work myself, I feel
>sort of obliged to post a followup here.
>
>[Vassilis Pandis]
>
>>Better RAM usage? Oh please! The kernel doesn't change in size; it
>>is still a binary chunk of data, but it is simply split up. You
>>cannot unload/page parts of it to disk any more than you can in the
>>modular monolithic design. How could it be possible to drop the
>>filesystem? Or the memory management? Or the scheduler? All these
>>are constantly needed.
>>
>
>Changing, e.g., a global memory management scheme in a running system
>may of course prove difficult. It is, however, possible to layer
>memory management systems, providing specialized services to certain
>applications.
>

-
No argument. But to what end? Adds considerable complexity. Need
to be clear on the benefit(s).
-

>
>>Microkernels ARE slower than monolithic kernels, supposing that all
>>else is equal, and this is due to the message passing system.
>>
>
>Modular monolithic kernels also conceptually use a message passing
>system. The only difference between the two is that where the
>monolithic kernel uses a regular procedure call, the microkernel
>system uses a remote procedure call (i.e., including a context
>switch). But of course, you are right, this message passing is more
>costly than in a monolithic system.
>

-
Things get a bit more complicated in implementation. Would be an
advantage to have both facilities available.
-

>
>>Microkernels are also a more complicated design, [...]
>>
>
>The design of the microkernel itself bears very little complexity, and
>as for the complete system (i.e., the kernel *and* the operating
>system services running at user level) I don't see how the complexity
>of a microkernel system differs much from that of a monolithic one.
>Sure, a microkernel system needs to have well defined boundaries
>between the different system services, but the same design principles
>should also be applied to monolithic systems. Not having a well
>designed infrastructure for, e.g., creating new device drivers will
>hurt you badly as your system evolves.
>

-
Been there, seen that. However...
-

>
>
>I could fire a counter-argument here, saying that microkernels
>*force* system designers to define well-specified interfaces to their
>subsystems, thereby preventing a subsystem from being used in an
>undefined manner. Surely this must be a good thing. It allows you to
>make certain assumptions about how your subsystem is used, and the
>underlying architecture of your system ensures that these assumptions
>will always hold. In fact, the way that microkernels force system
>designers to structure their system is perhaps *the* major advantage
>of microkernels. It offers a compelling way for us to deal with the
>ever-increasing complexity of constructing operating systems.
>

-
(This is the 'However...') There are many areas in an OS where the
boundaries blur, and the '*enforcing*' becomes a problem. There are
several examples I could draw upon, but I will pick on an area that
seems to get ignored in many cases.

The Loader for the system (whether it is the application loader or
the system loader is immaterial, assuming there is any difference) is
involved in several related areas in a traditional OS: tasking;
(initial) scheduling setup; memory management; and, horror of horrors,
it uses the FileSystem! In some systems the Loader is involved in
handling certain types of page/segment not-present faults as well.

Since the Loader needs to set things up for an application, it
usually needs to be in kernel mode (access to OS data
structures/hardware registers/etc.) and either needs to have special
interfaces (which cause all sorts of problems: security, performance,
etc.) or needs to be able to directly access lower-level OS routines
to implement the needed function. This is difficult to implement in
a 'traditional' micro-kernel design.

One example: when an executable (in certain OS designs) is running
and accesses a page/segment of code for the first time, the system
can handle the page/segment not-present fault by going to the
executable file and reading in the code (and applying relocation
info, etc.). Remember, this happens during an exception: page/segment
not-present. In some MK designs the FileSystems are in user mode.
The Loader needs to do a special read operation before returning from
the not-present exception. There are ways to get to the correct
routines, but there needs to be some sort of special hook to tell the
FS not to block and to do special priority 'handling' so as not to
foul up things for the system (to flush/not flush buffers; to place
not-present requests ahead of 'normal' requests; other odd things).
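
(A rough C sketch of the shape of this path; it is not from any real
kernel, every name is invented, and the externs stand for services
the kernel would have to provide.)

/* Shape sketch only: the not-present fault path described above,
 * where the read must be serviced by a user-mode filesystem server. */

typedef unsigned long vaddr_t;

struct fault_info {
    vaddr_t addr;       /* faulting virtual address         */
    int     exec_file;  /* handle on the backing executable */
};

/* The awkward one is fs_priority_read(): a request to the user-mode
 * FS made while we are still inside an exception, so it must not
 * block normally and must jump ahead of 'normal' requests. */
extern void *alloc_page(void);
extern int   fs_priority_read(int file, vaddr_t addr, void *page);
extern void  apply_relocations(void *page, vaddr_t addr);
extern void  map_page(vaddr_t addr, void *page);

int handle_not_present(struct fault_info *f)
{
    void *page = alloc_page();
    if (!page)
        return -1;
    if (fs_priority_read(f->exec_file, f->addr, page) < 0)
        return -1;                    /* fault becomes a fatal error */
    apply_relocations(page, f->addr); /* fix up freshly loaded code  */
    map_page(f->addr, page);          /* make the page present       */
    return 0;                         /* retry the faulting access   */
}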

In short, while the separation of subsystems is generally a good
thing, either ALL possible cases need to be provided for (difficult)
and/or special interfaces need to be allowed for certain 'privileged'
parts of the system. Otherwise, '*enforcing*' separation can cause
considerable duplication of code and many, many complications. I
_know_ whereof I speak... experience can be an exacting and painful
teacher. That said, I would love to learn about a way to _avoid_
these problems.
-Edd
-

Vassilis Pandis

Sep 4, 2001, 3:49:56 PM

Espen Skoglund <e...@ira.uka.de> wrote in message news:<3b8d29ac$1...@news.ucsc.edu>...

In modular monolithic kernels you wouldn't have the system task. You
wouldn't need to bother with messages, whether they were blocked,
whether they were received, etc. Microkernels are usually (usually,
because it depends on the implementation) bigger than monolithic
kernels.

> > [...] and the total output is a bigger kernel (assuming that the
> > kernel consists of all the separate programs that form it).
> > Therefore, the RAM usage is worse! Modular monolithic kernels
> > unload everything they do not need from RAM, [...]
>
> This argument is moot. A microkernel system can also kill/delete the
> operating system services (i.e., user-level tasks) which are not
> currently used.

Yes. But the same thing happens in the modular monolithic kernel.
However, some services should *always* be loaded. Examples are the
memory management, the scheduler, drivers and others. You cannot
unload the memory manager.
In modular monolithic kernels, everything that cannot be unloaded is
part of the kernel, and all the rest are modules. However, the
modules are not processes. They are part of the kernel. So the
communication is faster, whereas in the microkernel implementation it
is slower (as you agreed), and the modules can still be un/loaded at
run time. The System Task is also an added overhead in microkernels;
it is one more process, which in monolithic kernels wouldn't even be
used (this is an example of why microkernels are more complex).
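
(For illustration, a minimal Linux-style loadable module; the message
strings are made up, but module_init/module_exit and printk are the
real mechanism. Loading this creates no process; the code simply
becomes part of the running kernel.)

#include <linux/init.h>
#include <linux/module.h>

/* Minimal loadable-module skeleton: the module runs in kernel mode
 * and in the kernel's address space; loading and unloading it does
 * not create or destroy a process. */

static int __init hello_init(void)
{
    printk(KERN_INFO "module loaded: same address space as the kernel\n");
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");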



> > The only area in which I could find microkernels to be superior to
> > monolithic kernels is multiprocessing, because (I believe) both
> > CPUs could be running the kernel at the same time.
>
> A properly designed monolithic kernel can also run simultaneously on
> multiple CPUs, so this pro-argument for microkernels is not really
> valid.
>
> eSk

Still the problem remains. Two microkernel system services can run at
the same time on different CPUs without any serious impact on the
design of the kernel. But when it comes to monolithic kernels, the
kernel cannot be in kernel mode on two different CPUs (except for the
interrupt/exception handlers and the system calls). It is forced to
use some kind of lock (the big kernel lock in Linux 1.3, for example).
This lock is not exactly *the* most efficient solution. Another
option is coding with multiple CPUs in mind, and this is what kernel
developers do now, but still there are bugs due to the many
processors.
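
(A toy user-space C sketch of the big-kernel-lock idea, with pthreads
standing in for CPUs; the names are invented and this is not the
actual Linux code.)

#include <pthread.h>

/* Toy illustration of a "big kernel lock": one global lock serializes
 * all entries into "kernel mode", so only one CPU runs kernel code at
 * a time; a second CPU simply waits at the entry point. */

static pthread_mutex_t big_kernel_lock = PTHREAD_MUTEX_INITIALIZER;

static long do_syscall(int nr)      /* stand-in for the real work */
{
    return nr;
}

long syscall_entry(int nr)
{
    pthread_mutex_lock(&big_kernel_lock);   /* every CPU contends here */
    long ret = do_syscall(nr);
    pthread_mutex_unlock(&big_kernel_lock);
    return ret;
}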


Sotiris Ioannidis

Sep 6, 2001, 7:18:32 PM

I sorta followed this thread, since comp.os.research doesn't have much
traffic anyway, and I have a question: who cares?
Read the proceedings of SOSP and OSDI for the last 15 years and you
will see the advantages and disadvantages of both, and you don't even
have to read the whole papers; just browse the related work sections,
where all the comparisons are.

&si


--
Sotiris Ioannidis
Ph.D. candidate, Distributed Systems Lab, UPenn
mailto:sot...@dsl.cis.upenn.edu


Chris Calabrese

Sep 6, 2001, 7:18:38 PM

Guys, the war's over, and monolithic kernels have won by co-opting
many of the features of microkernels (think loadable drivers and
streams modules).

Yes, microkernels are easier to mess around with as a programmer and
therefore better for research.

But monolithic kernels are easier to tune for size and performance,
and therefore better in production usage.

Starting about 15 years ago, monolithic kernels began evolving to
include some features of microkernels, including the ability to load
(and sometimes unload) modules. In some cases the modules were
specialized (e.g., streams modules) and in others they were more
general (i.e., dynamic driver loading). The specialized modules also
usually have some kind of structured message passing to make it easier
to program them. I'm even aware of some monolithic kernels that do
MMU tricks so that different modules can run in their own memory
space.

Most successful kernel designs these days are monolithic kernels with
heavy microkernel capabilities, though one or two started life as
microkernel designs, unlike the others, which started as monolithic
kernels.


Greg Law

Sep 12, 2001, 5:28:21 PM
chris_c...@yahoo.com (Chris Calabrese) wrote in message news:<3b97f63e$1...@news.ucsc.edu>...

> Guys, the war's over, and monolithic kernels have won

Hmm, this is too simplistic a view IMHO. The guys at QNX and Symbian
wouldn't agree with this analysis. People usually argue that the war
is over because there are three desktop operating systems in
mainstream use:

Win 9x -- doesn't count since it has evolved rather than been designed :-)
NT -- was a microkernel, now not
UNIX -- much older than microkernels.

People say "look, MS tried to build a MK and they resorted to a
monolithic one, ergo, microkernels don't work".

Just because one company decided microkernels weren't such a good
idea doesn't make that gospel.

Then they say "Ah, but Mach sucks". I think it is important to draw
the distinction between first and second generation microkernels. I'm
not saying the second generation microkernels are necessarily the way
forward, but I do think it's far too early to declare the war over.

Of all the new operating systems designed since 1990 (UNIX flavours
don't count since we're talking about designs, not implementations),
how many are monolithic kernels? NT, depending on your view, and what
others? I'm sure someone can mention some, but right now, the only
one I can think of (research or industrial) is BeOS, where, again, the
jury is kinda out. A few post-1990 ones that come immediately to mind:

Amoeba, QNX, Symbian, NT, BeOS, L4, EROS, Nemesis, Spring, Mach (incl
MacOS X).

[maybe Mach and Amoeba were just pre-1990, not sure]

Two of these (NT and BeOS) seem to be sort of half-way between micro
and monolithic. The rest are clearly microkernels. So to say "the
war's over, microkernels lost" is at best a hasty conclusion.

> Yes, microkernels are easier to mess around with as a programmer and
> therefore better for research.

And development.

>
> But monolithic kernels are easier to tune for size and performance,
> and therefore better in production usage.

This is probably becoming less of an issue though. We know how to
make microkernels perform well now (see L4 and compare to Mach). Plus
hardware is now so powerful and cheap.

I think we must consider how best to use all that computing power we
now have. What is the biggest problem we face as an industry? One
candidate is that (systems) software is too hard to write, thus too
buggy, and thus we have these myriad security issues of late. So, why
not use the power to structure things better and have more solid and
robust systems?

>
> ... I'm even aware of some monolithic kernels that do
> MMU tricks so that different modules can run in their own memory
> space.

I think that such systems would be better described as
micro-monolithic hybrid, rather than monolithic.

>
> Most successful kernel designs these days are monolithic kernels

Please can you give some examples? NT and Linux spring to mind, and
sure, they are the most successful desktop OSs (apart from Win9x,
which shouldn't really figure in a "what's the best way to design an
OS" debate :-). But oranges are not the only fruit.

> heavy microkernel capabilities, though one or two started life as
> microkernel designs

Which ones are you thinking of? NT is the most obvious. Are there
others?

Greg

Igor Shmukler

Sep 12, 2001, 5:28:27 PM

> Guys, the war's over, and monolithic kernels have won by co-opting
> many of the features of microkernels (think loadable drivers and
> streams modules).

I agree with some of the statements below, but monolithic didn't win.
As NUMA architectures become more and more common,
MKs will start coming into fashion.

> But monolithic kernels are easier to tune for size and performance,
> and therefore better in production usage.

Not the size, but maybe performance.

> Starting about 15 years ago, monolithic kernels began evolving to
> include some features of microkernels, including the ability to load
> (and sometimes unload) modules. In some cases the modules were
> specialized (e.g., streams modules) and in others they were more
> general (i.e., dynamic driver loading). The specialized modules also
> usually have some kind of structured message passing to make it easier
> to program them. I'm even aware of some monolithic kernels that do
> MMU tricks so that different modules can run in their own memory
> space.

What does any of this have to do with the issue at hand?
Mono/micro kernel has little to do with either.


> Most successful kernel designs these days are monolithic kernels with
> heavy microkernel capabilities, though one or two started life as
> microkernel designs, unlike the others, which started as monolithic
> kernels.

What are the most successful designs?
How do you even define success? If by # of boxes installed, W9X is the
MOST successful design ever.
It's 16-bit with 32-bit extensions and crashes every few minutes.

No point in arguments? Maybe, but there's no point in telling people
to stop in an unmoderated newsgroup.

Vassilis Pandis

Sep 12, 2001, 5:28:41 PM
chris_c...@yahoo.com (Chris Calabrese) wrote in message news:<3b97f63e$1...@news.ucsc.edu>...
> Guys, the war's over, and monolithic kernels have won by co-opting
> many of the features of microkernels (think loadable drivers and
> streams modules).

Tanenbaum, 10 years ago, was happy about microkernels winning the
"war" :-). Seems like they didn't after all :-).



Edd Doutre

Sep 12, 2001, 5:28:53 PM

Chris Calabrese wrote:

-
You are making the point, in much terser language, that I made in my
first post on this topic. It's better to have 'both' types of
capabilities, msg passing and typical microkernel-style features, but
in a monolithic (hate that word) kernel with dynamic loading. And as
someone else said, why not? It works. They won.

Now, as to the specifics of which features and how they play in this
arena, that could further extend this thread and possibly make some
contributions to general knowledge.
-Edd
-


Igor Shmukler

Sep 18, 2001, 12:38:37 PM

> UNIX -- much older than microkernels.

Why do posts appear in this fashion?
UNIX is not an OS anymore; it is only a copyright plus POSIX compliance.

> Then they say "Ah, but Mach sucks". I think it is important to draw
> the distinction between first and second generation microkernels. I'm
> not saying the second generation microkernels are necessarily the way
> forward, but I do think it's far too early to declare the war over.

How about asking them why Mach sucks?
Very few really know the answer.

> Of all the new operating systems designed since 1990 (UNIX flavours
> don't count since we're talking about designs, not implementations),
> how many are monolithic kernels? NT, depending on your view, and what
> others? I'm sure someone can mention some, but right now, the only
> one I can think of (research or industrial) is BeOS, where, again, the
> jury is kinda out. A few post-1990 ones that come immediately to mind:
>
> Amoeba, QNX, Symbian, NT, BeOS, L4, EROS, Nemesis, Spring, Mach (incl
> MacOS X).

I think that L4 and EROS are really 2nd generation; the rest are more
of the first (of those I am familiar with).
I think that a kernel should expose IPC simple enough that every call
does not take forever.
However, not all the other 2nd-generation concepts are that clear of
an advantage.

> [maybe Mach and Amoeba were just pre 1990, not sure]

AFAIR Mach was introduced in '86;
Amoeba several years later.

> Two of these (NT, and BeOS), seem to be sort of half-way between micro
> and monolithic. The rest are clearly microkernels. So to say "the
> war's over, microkernels lost" is at best a hasty conclusion.

IBM has also played with Mach.

> This is probably becoming less of an issue though. We know how to
> make microkernels perform well now (see L4 and compare to Mach). Plus
> hardware is now so powerful and cheap.

See above about Mach and L4.

> I think we must consider how best to use all that computing power we
> now have. What is the biggest problem we face as an industry? One
> candidate is that (systems) software is too hard to write thus too
> buggy, and thus we have these myriad security issues of late. So, why
> not use the power to structure things better and have more solid and
> robust systems?

Also microkernels scale better.

Peter da Silva

Sep 18, 2001, 12:38:44 PM

In article <3b9fc565$1...@news.ucsc.edu>,

Greg Law <gl...@nexwave-solutions.com> wrote:
> Then they say "Ah, but Mach sucks".

There has been a lot of debate as to whether Mach qualifies as a
microkernel or not. Not to mention that at least one conference
changed its name to "Microkernels and other kernel architectures"
when it started presenting papers on NT.

> Amoeba, QNX, Symbian, NT, BeOS, L4, EROS, Nemesis, Spring, Mach (incl
> MacOS X).
>
> [maybe Mach and Amoeba were just pre 1990, not sure]

A lot of real-time operating systems from the '80s, including Amiga's Exec,
have most of the characteristics of microkernels.

--
`-_-' In hoc signo hack, Peter da Silva.
'U` "A well-rounded geek should be able to geek about anything."
-- nic...@esperi.org
Disclaimer: WWFD?


Greg Law

Sep 18, 2001, 10:38:09 PM

> > UNIX -- much older than microkernels.
>
> Why do posts appear in this fashion?
> UNIX is not an OS anymore, is only a copyright plus POSIX
> compliance.

Depends on your definition of what UNIX is, I guess. But pragmatically,
"UNIX" usually means that group of OSs: Linux, *BSD, Solaris, AIX, Tru64,
etc. It's pretty obvious whether most OSs are UNIX or not.

They all have an almost identical architecture; only implementation
details differ. They're certainly all monolithic kernels.

> How about asking them why Mach sucks?
> Very few really know answer.

Probably true. If I'm honest, I don't really know the answer. I've
never used it, never even seen any source code. From what I've read,
however, I believe that we've learnt a lot since Mach was designed.

>
> I think that L4 and EROS are really 2nd generation; the rest are more
> of the first (of those I am familiar with).

Yes. L4 and EROS are the two most interesting microkernels IMO.

> I think that a kernel should expose IPC simple enough that every
> call does not take forever.

Absolutely. A lot of credit is due to Jochen Liedtke here, RIP.

> Also microkernels scale better.

Indeed.

One could argue that performance is becoming less and less of an issue.
Many applications are so slow that OS performance doesn't seem likely to
make any difference. Have you used Mozilla lately? I can't believe how it
manages to sap all those cycles on my PIII at > 800 MHz.

But the OS must not make the system visibly slower. History shows
that if the OS doesn't perform, it won't be accepted, no matter what
other benefits are present. NT and UNIX only really caught on on the
desktop when PCs were meaty enough to run them (i.e., the Pentium
'classic' era). Before that it was Win 3.1, despite its myriad other
problems. And before that, DOS. I knew many people who ran DOS
despite Windows 3.1 or even NT being available, because it was snappy
on their 386.

Greg

Luke Hart

Sep 24, 2001, 2:35:21 PM

"Greg Law" <gl...@removethis.nexwave-solutions.com> wrote in message
news:3ba7f701$1...@news.ucsc.edu...


> > > UNIX -- much older than microkernels.
> >
> > Why do posts appear in this fashion?
> > UNIX is not an OS anymore; it is only a copyright plus POSIX
> > compliance.
>
> Depends on your definition of what UNIX is, I guess. But pragmatically,
> "UNIX" usually means that group of OSs: Linux, *BSD, Solaris, AIX, Tru64,
> etc. It's pretty obvious whether most OSs are UNIX or not.
>
> They all have an almost identical architecture; only implementation
> details differ. They're certainly all monolithic kernels.

Tru64 is Mach based, and hence not based on a monolithic kernel. I
think this nicely illustrates what Igor was saying: a 'Unix' can be
based on any kernel architecture.

Luke


Peter da Silva

Sep 24, 2001, 2:35:28 PM

In article <3ba76a7d$1...@news.ucsc.edu>,

Igor Shmukler <shmukle...@mailNO.SPAMru> wrote:
> > UNIX -- much older than microkernels.

> Why do posts appear in this fashion?
> UNIX is not an OS anymore; it is only a copyright plus POSIX compliance.

UNIX is a family of operating systems with a common API. UNIX is an
approach to OS design that involves an opaque raw stream object as the
fundamental communication channel between cooperating programs. UNIX
is many things, none of which have anything to do with POSIX or whoever
owns the copyright this week.

POSIX compliance is meaningless. NT claims POSIX compliance, but it took
an enormous effort from Interix to turn the POSIX subsystem into something
that was actually usable.

Igor Shmukler

Sep 25, 2001, 5:14:18 PM
> UNIX is a family of operating systems with a common API. UNIX is an
> approach to OS design that involves an opaque raw stream object as the
> fundamental communication channel between cooperating programs. UNIX
> is many things, none of which have anything to do with POSIX or whoever
> owns the copyright this week.

The very API an OS complies to is POSIX.
(UNIX is either AT&T-derived code or POSIX compliance; most people
agree to the latter.)
(I cannot seem to grasp your point.)

> POSIX compliance is meaningless. NT claims POSIX compliance, but it took
> an enormous effort from Interix to turn the POSIX subsystem into something
> that was actually usable.

What does this prove?
a) MS cannot figure out how to COW for fork()
b) MS wants people to use the Win32 API
c) POSIX is not UNIX

I would think that the second guess is the best, IMHO.

David B Terrell

Oct 1, 2001, 4:48:35 PM

Peter da Silva <pe...@abbnm.com> says:
> POSIX compliance is meaningless. NT claims POSIX compliance, but it took
> an enormous effort from Interix to turn the POSIX subsystem into something
> that was actually usable.

Microsoft paid POSIX a lot of money to get them to rewrite the POSIX spec
so they could comply without doing too much work.

--
David Terrell | "Instead of plodding through the equivalent of
Prime Minister, NebCorp | literary Xanax, the pregeeks go for sci-fi and
d...@meat.net | fantasy: LSD in book form." - Benjy Feen,
http://wwn.nebcorp.com | http://www.monkeybagel.com/ "Origins of Sysadmins"


Peter da Silva

Oct 1, 2001, 4:48:43 PM

In article <3baf6ed9$1...@news.ucsc.edu>,

Luke Hart <jl...@jlh92.freeserve.co.uk> wrote:
> Tru64 is Mach based, and hence not based on a monolithic kernel. I
> think this nicely illustrates what Igor was saying: a 'Unix' can be
> based on any kernel architecture.

Unless things have changed recently, Tru64 uses a "single server" model, which
is (as I understand it) not unlike running a monolithic kernel *under* Mach.

Peter da Silva

Oct 1, 2001, 4:48:50 PM

In article <3bb0e59a$1...@news.ucsc.edu>,

Igor Shmukler <shmukle...@mailNO.SPAMru> wrote:
> > UNIX is a family of operating systems with a common API. UNIX is an
> > approach to OS design that involves an opaque raw stream object as the
> > fundamental communication channel between cooperating programs. UNIX
> > is many things, none of which have anything to do with POSIX or whoever
> > owns the copyright this week.

> The very API an OS complies to is POSIX.
> (UNIX is either AT&T-derived code or POSIX compliance; most people
> agree to the latter.)
> (I cannot seem to grasp your point.)

I have used a number of systems I consider implementations of UNIX that are
not derived from AT&T code nor conform to POSIX (and in some cases predate
POSIX).

> > POSIX compliance is meaningless. NT claims POSIX compliance, but it took
> > an enormous effort from Interix to turn the POSIX subsystem into something
> > that was actually usable.

> What does this prove?
> a) MS cannot figure out how to COW for fork()
> b) MS wants people to use Win32 API
> c) POSIX is not UNIX

(b) and (c), since many historical UNIX systems didn't COW for fork()... in
fact that's why vfork() was created.
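
(For illustration, the classic vfork-and-exec pattern as a small
complete C program, using only standard UNIX calls; without
copy-on-write, a plain fork() would copy the whole address space just
for exec to discard it.)

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* The pattern vfork() was invented for: spawn-and-exec without
 * paying to copy the parent's address space. */
int main(void)
{
    pid_t pid = vfork();        /* child borrows parent's memory */

    if (pid == 0) {
        /* Child: only exec or _exit are safe after vfork(). */
        execlp("echo", "echo", "hello from the child", (char *)NULL);
        _exit(127);             /* exec failed */
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0);   /* parent resumes after exec/_exit */
    } else {
        perror("vfork");
        return 1;
    }
    return 0;
}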


Robin Fairbairns

Oct 8, 2001, 2:56:51 PM
David B Terrell <d...@meat.net> wrote:
>Peter da Silva <pe...@abbnm.com> says:
>> POSIX compliance is meaningless. NT claims POSIX compliance, but it took
>> an enormous effort from Interix to turn the POSIX subsystem into something
>> that was actually usable.
>
>Microsoft paid POSIX a lot of money to get them to rewrite the POSIX spec
>so they could comply without doing too much work.

Gosh. I was once on a standards committee which was being lobbied by
Microsoft for the semantics of some operations to be changed. Plainly
I was in the wrong sort of area: the Microsoft people didn't even pay
for my beer at lunch time. :-(

Seriously, though, are you suggesting that Microsoft bought up the
IEEE and ISO POSIX committees? It's a hell of a scandal if they did.
The nearest I can recall is a sleazy little American outfit lobbying
via ANSI for a change to the parameterisation of a function in the CGM
... because they held a patent [ha ha] on the alternative
parameterisation. When the ANSI committee realised the agenda they
told the outfit, whose name I've forgotten, to get lost.
--
Robin Fairbairns, Cambridge -- rf10 at cam dot ac dot uk


Igor Shmukler

Oct 8, 2001, 2:57:04 PM

> > Tru64 is Mach based, and hence not based on a monolithic kernel. I
> > think this nicely illustrates what Igor was saying: a 'Unix' can be
> > based on any kernel architecture.
>
> Unless things have changed recently, Tru64 uses a "single server" model,
> which is (as I understand it) not unlike running a monolithic kernel
> *under* Mach.

Things have not changed. You just don't understand what single-server is.

BTW, Tru64 is not 100%, but rather 99%, microkernel.
It originated as Mach 2.5, with some Mach 3 and Utah additions
(plus the Digital-written code).

Igor Shmukler

Oct 8, 2001, 2:57:09 PM

The first time, I didn't want to answer, but my newsgroup had it
posted twice. Must be for a reason.

I would love to know when the comp.os.research moderator will get
some help and/or time.

I don't see a point, just emotions.
You don't need to teach us about vfork. It won't achieve much.

I just expressed the most common opinion of what UNIX is today.
It can be defined in a way that nobody will argue about.
Like a checklist:
a) AT&T derived, y/n?
b) POSIX compliant, y/n?
A score of 0 says not UNIX; above, UNIX.

And no religious wars to be fought.

Feel free to think what you like. More power to you.

