
Monolithic vs Micro


Siva

Nov 21, 2007, 3:41:28 AM
Hi,
I'm Siva. I've been reading a lot of OS literature lately, and I can't
understand why monolithic kernels are preferred over microkernels.
My feeling is that a microkernel reduces the size of the kernel and so
reduces the chance of bugs, which means more security.
Can anybody tell me why Linux is monolithic?


Siva Prakash
svp....@gmail.com

Bjarni Juliusson

Nov 21, 2007, 5:36:26 AM

It doesn't reduce the size of anything; all the code is still there, just
divided up into modules. As I understand it, most current kernels are
monolithic for a few reasons:

1. There is a speed penalty for inter-module communication in
   microkernels.
2. Writing monolithic kernels seems easier and is a better-understood
   process. How much did Linus know about microkernels when he started
   Linux?
3. The most common sentiment appears to be that the advantages of
   microkernels are very small and not worth the work and the
   performance loss.

The literature, I would guess, teaches current practice, which is what a
student will get the most use out of understanding; thus it mostly
discusses monolithic kernels. When it comes to hobby projects, it seems
that most people are not interested in making a whole new operating
system, just another Unix clone, so they make the same implementation
choices everyone else has made. It's a good programming exercise, I
suppose.


Bjarni
--

INFORMATION WANTS TO BE FREE

Rod Pemberton

Nov 21, 2007, 6:18:28 AM

"Siva" <svp....@gmail.com> wrote in message
news:9f41c060-99e1-4b2b...@b15g2000hsa.googlegroups.com...

> Hi,
> I'm Siva. I've been reading a lot of OS literature lately, and I can't
> understand why monolithic kernels are preferred over microkernels.

Is it?

> My feeling is that a microkernel reduces the size of the kernel and so
> reduces the chance of bugs, which means more security.
> Can anybody tell me why Linux is monolithic?
>

Yes, but you can read about it for yourself. Here's a summary of the 1992
"LINUX is obsolete" debate between Andrew Tanenbaum (MINIX) and Linus
Torvalds (Linux) on comp.os.minix:
http://en.wikipedia.org/wiki/Tanenbaum-Torvalds_debate

Also read this, especially the Mach section. It's about the XNU hybrid
kernel used in Mac OS X. It covers some of the complexities that arise in
monolithic and microkernel designs and how they were solved:
http://en.wikipedia.org/wiki/XNU


Rod Pemberton

Alexei A. Frounze

Nov 21, 2007, 7:38:38 AM
On Nov 21, 12:41 am, Siva <svp.s...@gmail.com> wrote:
> My feeling is that a microkernel reduces the size of the kernel and so
> reduces the chance of bugs, which means more security.

I doubt there's any more security in microkernel-based OSes than in
monolithic ones. If one can crash or hang a module, make it execute some
other code, mangle its data, or steal information from it, it's still
insecure. There are other security bugs besides one module/application
damaging another module or the kernel by directly writing to its memory.
If a module doesn't check its parameters (messages and network data
packets), is susceptible to buffer overflows, fails miserably under big
loads, or is easy to break into, then there's no security. Read something
like the book Writing Secure Code by Howard and LeBlanc to get a better
idea of security-related bugs.

Alex

Maxim S. Shatskih

Nov 21, 2007, 10:46:17 AM
Traditionally, message passing carries high overhead, mainly due to the
need for TLB invalidation on address space switches.
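
A toy model of that cost (made-up cost units, purely illustrative): each
address-space switch reloads the page-table base register and, on classic
x86 without tagged TLBs, flushes the TLB, so one IPC round trip pays that
price twice, while an in-kernel function call pays it zero times.

#include <stdio.h>

enum { CALL_COST = 1, SWITCH_COST = 100 };   /* illustrative units */

static int monolithic_read(void)  { return CALL_COST; }

static int microkernel_read(void)
{
    return SWITCH_COST      /* client -> driver address space */
         + CALL_COST        /* the actual work                */
         + SWITCH_COST;     /* driver -> client address space */
}

int main(void)
{
    printf("monolithic:  %d units\n", monolithic_read());
    printf("microkernel: %d units\n", microkernel_read());
    return 0;
}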

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
ma...@storagecraft.com
http://www.storagecraft.com

"Siva" <svp....@gmail.com> wrote in message
news:9f41c060-99e1-4b2b...@b15g2000hsa.googlegroups.com...

Siva

Nov 21, 2007, 12:36:42 PM
On Nov 21, 4:18 pm, "Rod Pemberton" <do_not_h...@nohavenot.cmm> wrote:
> "Siva" <svp.s...@gmail.com> wrote in message

Thanks, the links were helpful.

kol...@gmail.com

Nov 21, 2007, 2:46:11 PM
On Nov 21, 4:38 am, "Alexei A. Frounze" <alexfrun...@gmail.com> wrote:
> I doubt there's any more security in microkernel-based OSes than in
> monolithic ones. If one can crash or hang a module, make it execute some
> other code, mangle its data, or steal information from it, it's still
> insecure.

That's a fair point, but a microkernel still does provide some coarse-
grained fault isolation. For instance, even if my buggy graphics
server crashes, it will not affect other processes that are not using
graphics. Additionally, a microkernel provides some hope of
recovering subsystems, if you can clean up the old crashed one.

Another possible approach to building more reliable or secure systems
is to put more code into user-level libraries. The difference from a
traditional microkernel approach is that instead of having one file
system or Unix-process server, that functionality is implemented by
each thread independently in its own protection domain. In this case,
the kernel just needs to provide abstractions for controlled sharing
of data between these protection domains.

In this case, even if an attacker compromises his library that
implements Unix process semantics or file system semantics, he will
not gain any additional privileges. The trick is to come up with
kernel abstractions that allow you to implement traditional
functionality in user-level libraries with minimal kernel support.
One example of this design is the HiStar operating system
(http://www.scs.stanford.edu/histar/).
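
As a rough sketch of what such kernel abstractions could look like (these
names are invented for illustration; this is not HiStar's actual
interface): the kernel knows only about protection domains and shared
segments with owner-controlled access, and file-system and process
semantics live in untrusted per-domain libraries on top.

#include <stddef.h>

typedef int domain_t;
typedef int segment_t;

/* Hypothetical kernel calls for controlled sharing: */
segment_t seg_create(size_t bytes);                   /* new shared segment  */
int       seg_grant(segment_t s, domain_t d, int rw); /* owner controls who  */
void     *seg_map(segment_t s);                       /* map into own domain */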

Nickolai.

Alexei A. Frounze

Nov 21, 2007, 5:26:12 PM
On Nov 21, 11:46 am, kol...@gmail.com wrote:
> On Nov 21, 4:38 am, "Alexei A. Frounze" <alexfrun...@gmail.com> wrote:
>
> > I doubt there's any more security in microkernel-based OSes than in
> > monolithic ones. If one can crash or hang a module, make it execute some
> > other code, mangle its data, or steal information from it, it's still
> > insecure.
>
> That's a fair point, but a microkernel still does provide some coarse-
> grained fault isolation. For instance, even if my buggy graphics
> server crashes, it will not affect other processes that are not using
> graphics.

But if it's something more serious, like the file system or the network,
it may be very hard to recover, or even to gain access to the system in
order to fix it.

> Additionally, a microkernel provides some hope of
> recovering subsystems, if you can clean up the old crashed one.

Hope is a good word. Unfortunately, it's about all we've got. The vast
majority of software is designed and implemented overly optimistically,
and it fails when things go wrong in ways that weren't expected. Unless a
particular piece of code is proven (formally, through rigorous testing,
or both) to work in such and such situations, or to be able to recover,
hope is all we have. It's damn hard to test code really, really well.
It's very hard to foresee all problems and to implement error checking
and recovery mechanisms that are good enough and broad enough. I don't
think formally proving correctness is any easier.

> Another possible approach to building more reliable or secure systems
> is to put more code into user-level libraries. The difference from a
> traditional microkernel approach is that instead of having one file
> system or Unix-process server, that functionality is implemented by
> each thread independently in its own protection domain. In this case,
> the kernel just needs to provide abstractions for controlled sharing
> of data between these protection domains.

I don't see how that helps with bugs like unchecked parameters, buffer
overflows, poor passwords, etc. A module doesn't need to overflow
somebody else's buffer; its own is good enough. If the file system or
network module gets compromised, then even if there is no direct impact
on other modules (which is doubtful), sensitive information may be
leaked from the system or damaged.

> In this case, even if an attacker compromises his library that
> implements Unix process semantics or file system semantics, he will
> not gain any additional privileges. The trick is to come up with
> kernel abstractions that allow you to implement traditional
> functionality in user-level libraries with minimal kernel support.
> One example of this design is the HiStar operating system (http://www.scs.stanford.edu/histar/).
>
> Nickolai.

The most secure computer is the one that's disconnected from all
networks, shut down, locked in a safe in a bunker, and guarded by
armed men. :) I mean, if there's a chance to misuse and abuse a
computer system, and that system doesn't have enough protection of its
own, then its days are numbered.

Alex

Rod Pemberton

Nov 21, 2007, 8:37:37 PM

<kol...@gmail.com> wrote in message
news:7b401927-b09d-4801...@w28g2000hsf.googlegroups.com...

> Another possible approach to building more reliable or secure systems
> is to put more code into user-level libraries.

The question is how far can one go? UML (User-mode Linux) abstracts the
kernel over the syscall interface. However, the syscall interface has
bloated from 40 (implemented) syscalls in Linux v0.01 to 290 syscalls in
Linux 2.6.17.

A secondary question is how far can one go without significantly impacting
performance?

> The difference from a
> traditional microkernel approach is that instead of having one file
> system or Unix-process server, that functionality is implemented by
> each thread independently in its own protection domain.

How is this different from UML, Adeos, etc.?

http://user-mode-linux.sourceforge.net/
http://home.gna.org/adeos/

> In this case,
> the kernel just needs to provide abstractions for controlled sharing
> of data between these protection domains.
>

Description of a microkernel...

> In this case, even if an attacker compromises his library that
> implements Unix process semantics or file system semantics, he will
> not gain any additional privileges.

By pushing as much code into user space as possible, doesn't that give
the user some additional privileges or functionality that the user
wouldn't have had if the code had remained in a privileged space? By
providing the user with access to a larger percentage of the compiled
code base, it exposes more potential coding errors that the user could
abuse, errors which were previously hidden behind the kernel's
protections.

> The trick is to come up with
> kernel abstractions that allow you to implement traditional
> functionality in user-level libraries with minimal kernel support.
> One example of this design is the HiStar operating system
> (http://www.scs.stanford.edu/histar/).
>

I didn't see a simple technical overview, say, one comparing it to UML,
CoLinux, and Adeos.


Rod Pemberton

kol...@gmail.com

Nov 21, 2007, 8:39:20 PM
On Nov 21, 2:26 pm, "Alexei A. Frounze" <alexfrun...@gmail.com> wrote:
> But if it's something more serious, like the file system or the network,
> it may be very hard to recover, or even to gain access to the system in
> order to fix it.

I suspect you're right in that it would be harder. But if it worked,
even in some cases (e.g. for fail-stop errors), it might result in a
more reliable system. For errors that are easily detectable, it might
not be necessary to gain access to the system, or to even have a file
system for recovering the component. The microkernel could keep a
copy of the subsystem's binary in memory, and reload it automatically
on certain errors (e.g. fatal page faults).
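
A user-space sketch of that supervisor idea (the "./net_driver" binary
name is made up, and a real implementation would need more policy than
blind restarts):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            execl("./net_driver", "net_driver", (char *)NULL);
            _exit(127);                  /* exec failed */
        }
        int status;
        waitpid(pid, &status, 0);        /* block until it dies */
        fprintf(stderr, "driver died (status %d); restarting\n", status);
        sleep(1);                        /* avoid a tight crash loop */
    }
}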

There's been some work on this idea in Minix3 (mostly restarting
device drivers) and at UIUC (e.g. http://choices.cs.uiuc.edu/selfhealing.pdf),
among others.

> > Another possible approach to building more reliable or secure systems
> > is to put more code into user-level libraries. The difference from a
> > traditional microkernel approach is that instead of having one file
> > system or Unix-process server, that functionality is implemented by
> > each thread independently in its own protection domain. In this case,
> > the kernel just needs to provide abstractions for controlled sharing
> > of data between these protection domains.
>
> I don't see how that helps with bugs like unchecked parameters, buffer
> overflows, poor passwords, etc. A module doesn't need to overflow
> somebody else's buffer; its own is good enough. If the file system or
> network module gets compromised, then even if there is no direct impact
> on other modules (which is doubtful), sensitive information may be
> leaked from the system or damaged.

At a high level, it helps in much the same way that it helps to have
any buggy code running in user-space with a different user-ID rather
than running in the kernel. Suppose that an application like the pine
mail reader has a buffer overflow in its email parsing code. If an
attacker sends a specially-crafted email message to some user, the
vulnerability will give the attacker access to one user account on
that machine, but if the OS implemented email parsing in the kernel,
the attacker would compromise the entire system.

In a complex application, security can be similarly improved by having
the kernel enforce security. For instance, in a web server, one could
assign different kernel protection domains (e.g. Unix user IDs) to
different web site users. If an attacker finds a bug in some server-
side PHP code, the code is still running with his user ID on the
server, and he can't read anyone else's files. Today's web servers
don't do this, unfortunately, and the effect is in some ways similar
to running email parsing in the kernel -- any bug in code that has no
business being security-critical results in a complete compromise.
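
A sketch of what that could look like (illustrative names; a real server
would need much more care): fork per request and drop to the site user's
UID before running any server-side code, so the kernel confines whatever
that code does.

#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

void serve_request_as(const char *site_user)
{
    struct passwd *pw = getpwnam(site_user);
    if (pw == NULL) { perror("getpwnam"); return; }

    if (fork() == 0) {
        /* Drop the group first, then the user; in the other order,
           setuid() would take away the right to call setgid(). */
        if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0)
            _exit(1);
        /* Untrusted PHP/CGI code runs here with site_user's
           kernel-enforced privileges only. */
        execl("./handle_request", "handle_request", (char *)NULL);
        _exit(127);
    }
}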

Moving code from the kernel to user-space makes it possible to enforce
the security of that code without having to rely on the code actually
being correct. One way to think of this is that in Unix systems, the
libc library, running in user-space, is not critical to enforcing
security. If an attacker overflows some buffer in his own libc, he can
take over his libc, but that doesn't gain him much in terms of
privileges, since it's the kernel that is enforcing security. On the
other hand, if that code were in the kernel and an attacker overflowed
a buffer there, he's won.

Nickolai.

kol...@gmail.com

Nov 21, 2007, 9:26:16 PM
On Nov 21, 5:37 pm, "Rod Pemberton" <do_not_h...@nohavenot.cmm> wrote:
>
> > Another possible approach to building more reliable or secure systems
> > is to put more code into user-level libraries.
>
> The question is how far can one go? UML (User-mode Linux) abstracts the
> kernel over the syscall interface. However, the syscall interface has
> bloated from 40 (implemented) syscalls in Linux v0.01 to 290 syscalls in
> Linux 2.6.17.

I think it's possible to go further than UML. One shortcoming of UML
is that it doesn't actually make the kernel any smaller, and thus any
less likely to contain bugs. The other issue with UML, to my
understanding (correct me if I'm wrong, I don't have intimate
knowledge of it), is that if you want to have sharing between processes
(e.g. pipes, shared memory, file system, etc), you need to share a UML
kernel as well. This UML kernel is about as large as the regular
Linux kernel running underneath, and if you find a bug in your UML
kernel, then you can compromise any other process that's using your
UML kernel as well. So, UML doesn't actually reduce your exposure to
bugs if you need to share state. (On the other hand, if you don't
need to share things, UML and many VM-like solutions are a good fit.)

> A secondary question is how far can one go without significantly impacting
> performance?

I think it's possible to engineer such systems to have low performance
overhead. Moreover, I think security is becoming more important over
time, so that it may make sense to trade off some performance for
better security (and even if not now, then perhaps later).

> > The difference from a
> > traditional microkernel approach is that instead of having one file
> > system or Unix-process server, that functionality is implemented by
> > each thread independently in its own protection domain.
>
> How is this different from UML, Adeos, etc.?

The difference is in the amount of sharing and the kinds of security
policies you can enforce on these shared objects. In UML and Adeos
there appears to be little sharing and little fine-grained security
policy enforcement between the different kernels. Within the
protection domain, each kernel is free to implement security policies
any way it wants, but then all processes running on top of that kernel
are vulnerable to bugs in that kernel.

> > In this case, even if an attacker compromises his library that
> > implements Unix process semantics or file system semantics, he will
> > not gain any additional privileges.
>
> By pushing as much code into user space as possible, doesn't that give
> the user some additional privileges or functionality that the user
> wouldn't have had if the code had remained in a privileged space? By
> providing the user with access to a larger percentage of the compiled
> code base, it exposes more potential coding errors that the user could
> abuse, errors which were previously hidden behind the kernel's
> protections.

You're right that it is not possible to blindly move any code from the
kernel to user-space. After all, the code was in the kernel for a
reason; usually the kernel relies on that code not being tampered with
to enforce security. (But then again, there's some code that already
has little reason to be in the kernel, such as all the Unix tty
handling gunk like line disciplines and so on.)

However, I believe it is possible to architect the system in a
different way, in which the enforcement of security need not be the
job of every line of code that's currently in a Unix kernel. For
example, to move file system code to user-space, the kernel could
enforce access control to disk blocks, and allocate disk blocks to
users. Bugs in file system code would then be about as dangerous as
bugs in libc. Admittedly, it may be difficult to enforce exactly the
same semantics as Unix (in this particular example, it may be tricky
to implement the sticky bit on directories), but the result is still a
system with less complex kernel code.
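
A hypothetical kernel interface for that file-system example (all names
made up; a sketch of the idea, not a worked-out design): the kernel
enforces only block ownership and quota, and inodes, directories, and the
tricky sticky-bit semantics are built by untrusted per-user libraries on
top.

#include <stddef.h>

typedef unsigned long block_t;
typedef int user_t;

block_t blk_alloc(user_t owner);                   /* charge owner's quota */
int     blk_free(block_t b);                       /* owner only           */
int     blk_read(block_t b, void *buf, size_t n);  /* kernel checks access */
int     blk_write(block_t b, const void *buf, size_t n);
int     blk_share(block_t b, user_t with, int rw); /* controlled sharing   */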

Nickolai.

Alexei A. Frounze

Nov 22, 2007, 3:00:21 AM
On Nov 21, 5:39 pm, kol...@gmail.com wrote:
> On Nov 21, 2:26 pm, "Alexei A. Frounze" <alexfrun...@gmail.com> wrote:
>
> > But if it's something more serious, like the file system or the network,
> > it may be very hard to recover, or even to gain access to the system in
> > order to fix it.
>
> I suspect you're right in that it would be harder. But if it worked,
> even in some cases (e.g. for fail-stop errors), it might result in a
> more reliable system. For errors that are easily detectable, it might
> not be necessary to gain access to the system, or to even have a file
> system for recovering the component. The microkernel could keep a
> copy of the subsystem's binary in memory, and reload it automatically
> on certain errors (e.g. fatal page faults).
>
> There's been some work on this idea in Minix3 (mostly restarting
> device drivers) and at UIUC (e.g. http://choices.cs.uiuc.edu/selfhealing.pdf),
> among others.

Yep, I've heard about such attempts. Now, what does one do if the
service/driver crashes (or is forcibly terminated) while in some
intermediate and not very well defined state (files, system
information, hardware state)? The next time it starts, it needs to find
out what the last state was and fix it, without crashing or being
terminated again by the same bug. Now, suppose it was sending a message
to some other computer/device asking it to do some work, and the
termination could occur right before sending the message or right
after. How does it know, the next time, whether the work has been done
or not? If the message is resent no matter what, the work can be
repeated. If it's not, the work may never get done. Either way may be
wrong (think of charging $20 to some bank account: you can charge
twice, or give something away for free; yes, I know, banking systems
must use transactions, but there are a lot of other applications that
need transactions, are poorly designed, don't have them, and therefore
have this problem).
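
The usual fix, which those poorly designed applications lack, is to make
the operation idempotent: tag each request with a unique ID so the
receiver can detect a duplicate, after which resending whenever in doubt
becomes safe. A sketch (the fixed-size table and all names are made up):

#include <stdbool.h>

#define SEEN_MAX 1024

struct request { unsigned long id; int account; int cents; };

static unsigned long seen[SEEN_MAX];
static int nseen;

bool apply_charge(const struct request *r)
{
    for (int i = 0; i < nseen; i++)
        if (seen[i] == r->id)
            return false;            /* duplicate: already charged */
    if (nseen < SEEN_MAX)
        seen[nseen++] = r->id;
    /* ... actually debit r->cents from r->account here ... */
    return true;
}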

Well, user IDs, access descriptors/tokens and whatnot can be used in
both monolithic and microkernel-based systems. And it's the right idea
to give as much access as necessary, for only as long as necessary, and
not a single bit more. We've heard for a long time that we shouldn't
normally be running our Windows OSes as administrators. Unfortunately,
we haven't had good support for that idea in the OS and in 3rd-party
applications; only now do we seem to be fixing the problem. For this to
be supported by the OS, pretty much every single resource and function
in the OS must ask for credentials and check whether the
access/operation is allowed for the user. Someone needs to take care of
storing the passwords and permissions securely, and to provide a way to
temporarily elevate process privileges for benign things (e.g. disk
defragmentation), and so on. Which means security can't be an
afterthought: it must be in the design from the very beginning, and it
must be thoroughly validated before the product is released. Simply
building a microkernel doesn't make an OS more secure; being a
microkernel alone is not enough. That's my point. Obviously, one can
and should combine several protection mechanisms.

> Moving code from the kernel to user-space makes it possible to enforce
> the security of that code without having to rely on the code actually
> being correct. One way to think of this is that in Unix systems, the
> libc library, running in user-space, is not critical to enforcing
> security. If an attacker overflows some buffer in his own libc, he can
> take over his libc, but that doesn't gain him much in terms of
> privileges, since it's the kernel that is enforcing security. On the
> other hand, if that code were in the kernel and an attacker overflowed
> a buffer there, he's won.

I agree on libc. However, it naturally owns no system resources; it
only gets some memory for itself and some CPU cycles. The rest is
elsewhere, so it's OK to have multiple copies of libc, buggy or not, as
long as the memory protection mechanisms guard the apps using it from
each other.

I'm a bit worried about one thing. We're putting more and more stuff
into our cellphones and PDAs and moving more and more sensitive
information there (e-mails, documents, browser cookies, etc.). At the
same time, we blindly accept that these devices are insecure, without
proper memory protection, without different accounts and privilege
levels. I don't know whether it's that nobody has yet seriously thought
of hacking these devices the way our PCs get hacked, or that there are
so many different devices that everyone thinks, "it's OK, nobody will
be hacking all of them at the same time (unlike hacking something like
Windows XP SP2)."

Alex

Rod Pemberton

Nov 22, 2007, 4:30:54 AM

<kol...@gmail.com> wrote in message
news:36230d9d-ea5c-4093...@e6g2000prf.googlegroups.com...

> One shortcoming of UML
> is that it doesn't actually make the kernel any smaller, and thus any
> less likely to contain bugs.

Would that imply that it's time to write a new, compact Linux kernel? I
believe the basics of it could be done using two other GPL'd projects:
"DrAcOnUx's Linux 0.01 remake" and LinuxBIOS' FILO project. DrAcOnUx's
project is an attempt to build Linux 0.01 on modern Linux. LinuxBIOS'
FILO project has extracted most of the filesystem-related code from
Linux to provide x86 protected-mode BIOS functions, i.e., IDE, USB,
SATA, ext2, FAT, ISO 9660, MultiBoot and ELF. A full Linux kernel,
perhaps based on UML, might be able to run on top of the combination.

http://draconux.free.fr/os_dev/linux0.01.html
http://www.linuxbios.org/FILO

> The other issue with UML, to my
> understanding (correct me if I'm wrong, I don't have intimate
> knowledge of it), is that if you want to have sharing between processes
> (e.g. pipes, shared memory, file system, etc), you need to share a UML
> kernel as well.

Sorry, I wouldn't know. I've just read about the basic technical aspects
to get an idea of its potential usefulness for some crazy ideas I've had
about the development of alternate OSes.

> > A secondary question is how far can one go without significantly
> > impacting performance?
>
> I think it's possible to engineer such systems to have low performance
> overhead. Moreover, I think security is becoming more important over
> time, so that it may make sense to trade off some performance for
> better security (and even if not now, then perhaps later).
>

True, but a large portion of the security of an OS isn't in the OS or
even the system hardware (yes, VMS had much, and x86 has a bit), but in
the application code generated by the C compiler. The most exploited
security problems have been due to the code the C compiler generates:
buffer underflows, buffer overflows, and unbounded string formats.
Fixing these requires a redesign of the code generated by the compiler,
e.g., more robust stack frames and multiple stacks for local data,
parameters or returns, and flow control. It's been almost twenty years
since the "Morris worm," yet we still suffer the same problems (buffer
overflows, denial of service).
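
The classic instance of the problem (fingerd's use of gets() is what the
Morris worm exploited): the compiler places line[] on the stack next to
the return address, and gets() has no idea how big line[] is. Compare:

#include <stdio.h>

void vulnerable(void)
{
    char line[512];
    gets(line);                        /* unbounded write: overflow */
}

void bounded(void)
{
    char line[512];
    fgets(line, sizeof line, stdin);   /* writes at most 511 chars + NUL */
}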

> However, I believe it is possible to architect the system in a
> different way, in which the enforcement of security need not be the
> job of every line of code that's currently in a Unix kernel.

I'd like to believe that too. In part, I'd like to believe it because of
the novelty and simplicity I've seen in small kernel designs. In part,
I'd like to believe it because I'm working on my own OS at a glacially
slow pace, not following any real design plan, and will need to patch in
some security later (basic working OS functionality, not security, is my
first concern).

However, the three years I worked on a 5Mloc real-time online transaction
processing system for a brokerage are always in the back of my mind. We
ran multiple processes, maybe three dozen total, which shared data on
fault-tolerant Stratus Continuums using a ramdisk software package that
implemented transaction protection. The transaction protection seemed
similar to the description in the "Software transactional memory" article
on Wikipedia. We also ran many simulations and tests per month for every
code change. This was all set up long before I was hired. Despite all
this, the system would typically crash once or twice a year for unknown
reasons. Yes, _crash_, on tens of millions of dollars worth of fault
tolerant hardware where _every_ piece of hardware in the miniframe was
duplicated, and while running software-based transaction protection.

As a secondary issue, there was a single 'if' conditional that had been
"cut 'n' pasted" well over 500 times to help prevent wrongful
interactions between the processes due to the sharing of data. This
conditional was needed in addition to the fault tolerance and transaction
protection, or else data would end up in the wrong process or the wrong
part of a process... So I have some serious doubts about maintaining
security when multitasking is involved with shared data, even though the
amount of this the Linux kernel would be doing is much smaller. That
system was very close to being fault tolerant and secure (from a data
perspective, not a privilege one), but it wasn't.

I'd think that keeping all processes completely separate and
reintegrating any shared data after the fact, as long as the changes are
independent of each other, would be better. Even if the changes are
dependent, it might be possible to find a method to map many processes
into a single process, i.e., the reverse of the parallel-computing
programming problem. In that case, the resulting artificially created
single process could be used to order the data events, allowing safe
reintegration of the multitasking data. I'm from a mostly EE background
and not too familiar with the CS theory on this.


Rod Pemberton

Philip Homburg

Nov 22, 2007, 4:16:15 AM
In article <4639819d-4bd4-4771...@s36g2000prg.googlegroups.com>,

Alexei A. Frounze <alexf...@gmail.com> wrote:
>Yep, I've heard about such attempts. Now, what does one do if the
>service/driver crashes (or is forcibly terminated) while in some
>intermediate and not very well defined state (files, system
>information, hardware state)?

Extensive fault injection tests suggest that for ethernet drivers:
- DP8390-based ISA cards and Realtek RTL8139 cards can always be restarted.
- RTL8029 cards can hang the PCI bus in some cases (so the whole system
  hangs).
- The Intel PRO/100 may get into a funny state where a hardware reset is
  needed. However, the rest of the system is unaffected.

Ethernet drivers are an easy case because the Internet is supposed to lose
packets anyhow.

Experiments with just killing hard disk drivers (but no fault injection)
suggest that hard disk driver restarts are completely recoverable.


--
That was it. Done. The faulty Monk was turned out into the desert where it
could believe what it liked, including the idea that it had been hard done
by. It was allowed to keep its horse, since horses were so cheap to make.
-- Douglas Adams in Dirk Gently's Holistic Detective Agency

Alexei A. Frounze

Nov 22, 2007, 6:08:12 AM
On Nov 22, 1:16 am, phi...@ue.aioy.eu (Philip Homburg) wrote:
> In article <4639819d-4bd4-4771-904a-db18f03a0...@s36g2000prg.googlegroups.com>,

> Alexei A. Frounze <alexfrun...@gmail.com> wrote:
>
> >Yep, I've heard about such attempts. Now, what does one do if the
> >service/driver crashes (or is forcibly terminated) while in some
> >intermediate and not very well defined state (files, system
> >information, hardware state)?
>
> Extensive fault injection tests suggest that for ethernet drivers:
> - DP8390-based ISA cards and Realtek RTL8139 cards can always be restarted.
> - RTL8029 cards can hang the PCI bus in some cases (so the whole system
>   hangs).
> - The Intel PRO/100 may get into a funny state where a hardware reset is
>   needed. However, the rest of the system is unaffected.
>
> Ethernet drivers are an easy case because the Internet is supposed to lose
> packets anyhow.
>
> Experiments with just killing hard disk drivers (but no fault injection)
> suggest that hard disk driver restarts are completely recoverable.

What experiments, in which OSes? Where, when? Most importantly, do a few
successful examples extend to the entire industry?

Alex

Alexei A. Frounze

Nov 22, 2007, 6:22:50 AM
On Nov 22, 1:30 am, "Rod Pemberton" <do_not_h...@nohavenot.cmm> wrote:
> True, but a large portion of the security of an OS isn't in the OS or
> even the system hardware (yes, VMS had much, and x86 has a bit), but in
> the application code generated by the C compiler. The most exploited
> security problems have been due to the code the C compiler generates:
> buffer underflows, buffer overflows, and unbounded string formats.
> Fixing these requires a redesign of the code generated by the compiler,
> e.g., more robust stack frames and multiple stacks for local data,
> parameters or returns, and flow control. It's been almost twenty years
> since the "Morris worm," yet we still suffer the same problems (buffer
> overflows, denial of service).

You know what? We simply need to fix our tools. Speaking of C,
outlawing certain unsafe functions (like scanf, sprintf, strtok and so
on), which some clever companies do, recognizing the potential
problems, isn't enough. There's a huge pain in the ass coming from
brain-dead type conversions nobody truly knows by heart. There's more
pain from the compiler reordering floating-point operations freely. If
someone fixed these and a few other things and promoted the new
language/compiler, everybody would benefit. Why should everyone keep
stepping on the same rake and hitting themselves on the head year after
year? We know what to fix (and even how), so why not fix it?
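
For example, the sort of substitutions those companies mandate (a sketch;
the bounded and reentrant forms can't silently overflow a buffer or
trample hidden shared state):

#include <stdio.h>
#include <string.h>

void examples(const char *name)
{
    char buf[32];

    /* sprintf(buf, "hello %s", name);     -- unbounded, can overflow */
    snprintf(buf, sizeof buf, "hello %s", name);

    char csv[] = "a,b,c";
    char *save;
    /* strtok(csv, ",");                   -- hidden static state */
    strtok_r(csv, ",", &save);             /* reentrant alternative */
}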

Alex

Philip Homburg

Nov 22, 2007, 7:16:46 AM
In article <7e8691fc-ee23-46cf...@e10g2000prf.googlegroups.com>,

Alexei A. Frounze <alexf...@gmail.com> wrote:
>What experiments, in which OSes?

Jorrit Herder's research in the context of Minix3. The fault injection
experiments are unpublished at the moment.

>Most importantly, do a few
>successful examples extend to the entire industry?

You have to create a user-mode driver framework that is strong enough to
support this. In theory, it should be doable.
