
Why is MS copying Sun???


Paul Wallich

Oct 8, 2000
In article <39DFF266...@cmc.com>, Lars Poulsen <la...@cmc.com> wrote:

>Lew Pitcher <lpit...@yesic.com> wrote:
>> >Actually, IBM went unannounced to Digital Research (the CP/M and CP/M
>> >86 guys), but Gary K. was out of the office (flying his plane, IIRC),
>> >and IBM got miffed.
>
>Paul Wallich wrote:
>> According to the DR folks I talked to at the time, that wasn't quite
>> accurate either. There was also the matter of an old-style IBM NDA
>> that Big Blue's lawyers wanted Kildall to sign (along the lines of "you
>> agree to an irrevocable option on your firstborn child before we even
>> tell you what our names are").
>
>My group once got a visit from IBM to discuss a product under
>development, and they pulled out that draft agreement. It was
>really gross; it basically boiled down to:
>
>- anything we tell you today, you must treat as our trade secrets.
> In particular, any information we disclose about how our
> products work, is probably incorrect, but if you use it
> in any way we will sue you and bankrupt you.
>
>- anything you tell us, we will NOT hold in confidence; you
> should assume we are planning to tell your competitor this
> afternoon.
>
>I was rather shocked, but my boss had seen it before and happily
>signed it. Later he explained the necessity of it:
>
>IBM is such a large and diverse company, that what ever you are
>doing, there's a good chance that someone, somewhere within IBM
>is working on the same thing, although the people we are talking
>to don't know of it and would have no way to find out. If they
>did not make us sign this, we could sue them later if some such
>group put out a product that happened to look a lot like ours.
>While such a suit would have no merit, it would be costly to
>defend, and some screwy jury might even award us a bunch of
>money. This way, IBM is protected.


Of course IBM is protected, but if you have significant intellectual
property, you're not. It's possible they might be independently developing
something and be innocent of taking your trade secrets, but it's also
possible that they could just take everything you've got...

The other part of it sounds like what I remember from some of the military
rules governing classified information, which is essentially "if your clearance
allowed you access to a particular fact, you're not allowed to talk about it
regardless of how you actually learned about it." So even if it's published in
a newspaper, if you have a clearance you have to behave publicly as if you
never heard about it.

(As a young reporter I had the dubious pleasure of telling Pentagon security
folks that one chunk of ostensibly classified information they wanted me to
destroy was contained in an undersecretary's congressional testimony, and
another had been obtained by taking DoD-published information and employing
the arcane art of division.)

paul

Al Kossow

Oct 8, 2000
In article <vr8E5.628$m75.1...@nnrp1.sbc.net>, pl...@NO.SPAM.PLEASE
(Caveman) wrote:

> In article <aek-081000...@haxrus.apple.com>,
> Al Kossow <a...@spies.com> wrote:
> >In article <%E7E5.342$D81.1...@nnrp2.sbc.net>, pl...@NO.SPAM.PLEASE
> >(Caveman) wrote:
> >> Linux got created, like PASCAL, as a teaching tool, not as
> >> a marketable product.
> >
> >No, that was Minix.
> >
> >Linus had a long-running 'discussion' over the pros and cons
of microkernels with Andy Tanenbaum, and implemented the Linux
> >kernel as a result.
>
> Excuse me while I gack at this revisionism, but as I recall
> Linus essentially reimplemented what Bach published.

The point I was making was that Minix was the teaching tool,
not Linux. It was sold by Prentice Hall to go along with
the Minix book, and Minix is not structured as a monolithic
kernel.

The arguments between Andy and Linus about the 'correct' way
to write a kernel were posted to comp.os.minix.

Followups trimmed to comp.arch and alt.folklore.computers.

--
The eBay Curse:
"May you find everything you're looking for.."

R.E.Ballard

Oct 9, 2000
In article <aek-081000...@haxrus.apple.com>,

a...@spies.com (Al Kossow) wrote:
> In article <vr8E5.628$m75.1...@nnrp1.sbc.net>, pl...@NO.SPAM.PLEASE
> (Caveman) wrote:
>
> > In article <aek-081000...@haxrus.apple.com>,
> > Al Kossow <a...@spies.com> wrote:
> > >In article <%E7E5.342$D81.1...@nnrp2.sbc.net>, pl...@NO.SPAM.PLEASE
> > >(Caveman) wrote:
> > >> Linux got created, like PASCAL, as a teaching tool, not as
> > >> a marketable product.
> > >
> > >No, that was Minix.
> > >
> > >Linus had a long-running 'discussion' over the pros and cons
> > >of microkernels with Andy Tannenbaum, and implemented the Linux
> > >kernel as a result.
> >
> > Excuse me while I gack at this revisionism, but as I recall
> > Linus essentially reimplemented what Bach published.

Linus originally just wanted to learn about the 80386 MMU, and figured
that he could use the principles he'd learned in the Minix class to
implement a kernel that supported the MMU.

> The point I was making was that Minix was the teaching tool,
> not Linux. It was sold by Prentice Hall to go along with
> the Minix book, and Minix is not structured as a monolythic
> kernel.

Correct. In fact, Minix was used for a course called "Operating
System Fundamentals" which was once a required course for all BSCS
and BSEE students.

> The arguments between Andy and Linus about the 'correct' way
> to write a kernel were posted comp.os.minix.

I was a participant in those early arguments (re...@panix.du.edu,
re...@softronics.com). I took Tanenbaum's side initially, having
been enchanted by the Mach Microkernel. Linus' best argument was
simply that he needed to be able to debug the thing without expensive
logic analysers. The bottom line is that Linus could compile using gcc,
and could debug using panic messages.

Much later, around the time of 0.90, there was less need for
in-kernel debugging tools. But it was an interesting approach
to operating system design.

> Followups trimmed to comp.arch and alt.folklore.computers.
>
> --
> The eBay Curse:
> "May you find everything you're looking for.."
>

--
Rex Ballard - I/T Architect, MIS Director
Linux Advocate, Internet Pioneer
http://www.open4success.com
Linux - 50 million satisfied users worldwide
and growing at over 5%/month! (recalibrated 8/2/00)


Sent via Deja.com http://www.deja.com/
Before you buy.

Ketil Z Malde

Oct 10, 2000
R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> writes:

>> The arguments between Andy and Linus about the 'correct' way
>> to write a kernel were posted comp.os.minix.

> I was a participant in those early arguments (re...@panix.du.edu,
> re...@softronics.com). I took Tannenbaum's side initially, having
> been enchanted by the Mach Microkernel. Linus' best argument was
> simply that he needed to be able to debug the thing without expensive
> logic analysers. The bottom line is that Linus could compile using gcc,
> and could debug using panics messages.

And now he seems to have completely changed his mind about debugging
the kernel (see e.g. recent discussions on Kernel Traffic;
http://kt.linuxcare.com/ should do).

Would Linus have written a Mach clone if he started out today? :-)

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants

R.E.Ballard

Oct 10, 2000
In article <KETIL-vk1d...@eris.bgo.nera.no>,

I don't think Linux would have been possible on a microkernel
implementation as the "initial" implementation. Remember, Linus was
writing Linux for an 80386/16 with about 2 meg of ram, and initially
only supported one IDE hard drive, one serial port, a keyboard and
an ANSII interface (using BIOS calls).

Now, you have enough horsepower in the AMD K7 Duron and T-bird
processors to provide your own logic analyser. Furthermore, much
of the code has been really hardened.

Ironically, Linus did end up implementing a form of threaded kernel
when he started supporting modules. The kernel was still pretty
sophisticated, and most of the drivers were still compiled in, but
eventually, nearly all of the drivers were implemented as modules,
complete with hardware detection logic and configuration logic to
reduce or eliminate conflicts between modules. In many cases,
multiple modules can even share the same interrupt. This has been
one of my chief frustrations with the earlier versions of Linux
(which allowed drivers to directly handle interrupts instead of having
the interrupt trigger a signal or semaphore to the driver(s) waiting for
that semaphore). Most modern Linux drivers now have this capability.
The only time things get dicey is when two "dumb" devices (which provide
no means of identifying which of the two triggered the interrupt) share
a line, requiring potentially disruptive interactions with the device chips.
Classic examples include sharing multiple serial ports on a single
interrupt, or sharing your IRQ3 between your cua1 and eth0 devices.
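
For anyone who hasn't written one, a 2.x-era driver module boils down to
two entry points that insmod and rmmod call. A rough sketch from memory
only -- the "mydrv" name and the messages are invented, and a real driver
would do its hardware detection and claim its resources in init_module():

#define MODULE
#include <linux/module.h>
#include <linux/kernel.h>          /* printk() */

int init_module(void)              /* run by insmod: detect hardware, claim resources */
{
    printk("<1>mydrv: loaded, hardware probing would go here\n");
    return 0;                      /* returning non-zero makes insmod fail */
}

void cleanup_module(void)          /* run by rmmod: release everything claimed above */
{
    printk("<1>mydrv: unloaded\n");
}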

Unfortunately, with mice, keyboards, sound cards, multiple IDE
controllers, and winmodems, it's not unusual to have only 2-3
interrupts available, and often only 2 that are
supported by that card.

> -kzm
> --
> If I haven't seen further, it is by standing in
> the footprints of giants

--

Larry Elmore

Oct 12, 2000
R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> wrote in message
news:8s0ai9$28h$1...@nnrp1.deja.com...

> In article <KETIL-vk1d...@eris.bgo.nera.no>,
> Ketil Z Malde <ke...@ii.uib.no> wrote:
> >
> > Would Linus have written a Mach clone if he started out today? :-)
>
> I don't think Linux would have been possible on a microckernel
> implementation as the "initial" implementation. Remember, Linux was
> writing Linux for an 80386/16 with about 2 meg of ram, and initially
> only supported one IDE hard drive, one serial port, a keyboard and
> an ANSII interface (using BIOS calls).

I remember reading a fairly lengthy interview with Linus where he went into
detail on why he believed microkernels to be a mistake. Has he changed his
mind recently?

> Unfortunately, with mice, keyboards, sound cards, multiple IDE
> controllers, and winmodems, it's not unusual to have only 2-3
> available interrupts available, and often only 2 that are
> supported by that card.
> >

> > If I haven't seen further, it is by standing in
> > the footprints of giants

The sig seems quite apt when paired with the preceding statement -- it's
ridiculous that we are still hobbled today by architectural decisions made
20 years ago for a machine that is quite primitive by today's standards (and
was only mediocre at best by contemporary standards).

Larry


Ben Hutchings

Oct 12, 2000
R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> writes:
<snip>

> I don't think Linux would have been possible on a microckernel
> implementation as the "initial" implementation. Remember, Linux was
> writing Linux for an 80386/16 with about 2 meg of ram, and initially
> only supported one IDE hard drive, one serial port, a keyboard and
> an ANSII interface

Yes, but Minix was written for a basic IBM PC with a floppy drive
and a minimum of, I think, 256K of RAM.

> (using BIOS calls).

Only CP/M and its knock-offs use BIOS calls for I/O.

<snip>

> Unfortunately, with mice, keyboards, sound cards, multiple IDE
> controllers, and winmodems, it's not unusual to have only 2-3
> available interrupts available, and often only 2 that are
> supported by that card.

PCI interrupts can be mapped to any available ISA interrupt, and can
easily be traced back to the originating device, so this is not much
of a problem any more. If you have an APIC (as Intel SMP systems do)
then you don't even have to worry about ISA interrupts.

--
Any opinions expressed are my own and not necessarily those of Roundpoint.

j1...@sfsu.redirect.edu

Oct 12, 2000
In alt.folklore.computers R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> wrote:

> I don't think Linux would have been possible on a microckernel
> implementation as the "initial" implementation. Remember, Linux was
> writing Linux for an 80386/16 with about 2 meg of ram, and initially
> only supported one IDE hard drive, one serial port, a keyboard and
> an ANSII interface (using BIOS calls).

< K7 paragraph cut >

> Ironically, Linux did end up implementing a form of threaded kernel
> when he started supporting modules. The kernel was still pretty
> sophisticated, and most of the drivers were still compiled in, but
> eventually, nearly all of the drivers were implemented as modules,

And thus the confusion is wrapped up. A micro kernel is not a modular
kernel. As I suspect most anyone reading this at least suspects, the
UNIX model has two modes of operation, user and kernel. Kernel mode
code has access to the full machine, user does not. (NT example upon
properly accompanied request)
Code executing in kernel mode is kernel code, regardless of whether
it was loaded as part of a file called /vmunix or not. Loading and
unloading bits and pieces does not make the kernel micro in the mach
sense, but merely, potentially, smaller on / .

The microkernel model exports items that one would expect to run in
kernel mode out to user mode. Substitute kernel/user space for
kernel/user mode if you like. Anyway, rather than, for example,
1) the system trapping a page fault and then looking up the info
to page in using kernel mode, swapping, and reinserting in the run queue
2) the kernel would trap, and then pass control to a user mode program
to do the lookup and perhaps even swap. Upon completion, the program
would indicate that it was done, and the kernel would reinsert the
faulting program into the run queue.

Hence the kernel is micro, handling only tasks which the designer could/
would not conceive of handling outside of kernel mode. Like 'fast', it's
a relative term.
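
To make case 2) concrete, here's a toy user-space mock-up of the idea --
not any real kernel's API, all names invented -- where the "kernel" only
packages the fault into a message and a user-mode pager does the work:

#include <stdio.h>

struct fault_msg { int pid; unsigned long addr; };

/* user-mode pager task: looks up the page and pretends to swap it in */
static void pager_handle(const struct fault_msg *m)
{
    printf("pager: pid %d faulted at 0x%lx, fetching page from disk\n",
           m->pid, m->addr);
}

/* microkernel side: trap the fault, hand it off, make the process runnable again */
static void kernel_page_fault(int pid, unsigned long addr)
{
    struct fault_msg m = { pid, addr };
    pager_handle(&m);              /* in a real microkernel: IPC to the pager task */
    printf("kernel: pid %d back on the run queue\n", pid);
}

int main(void)
{
    kernel_page_fault(42, 0xbffff000UL);   /* pretend process 42 touched a missing page */
    return 0;
}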

> Unfortunately, with mice, keyboards, sound cards, multiple IDE
> controllers, and winmodems, it's not unusual to have only 2-3
> available interrupts available, and often only 2 that are
> supported by that card.

Yup, I know the problem well. But decent hardware should indicate that
it really did trigger its interrupt line. And decent drivers should
check this rather than assume. The same thing happens with support
personnel. We don't say that the problem lies (at least primarily) with
support answering telephones to help users and (straining for an example)
confirm a pizza order. Now if the pizza company says 'I have a problem
with ...'
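
Back to the hardware for a second: in code, the "check, don't assume" rule
is just an early bail-out at the top of the handler. A sketch, with the
status register and its pending bit invented (real hardware puts them
wherever it likes):

#define STATUS_IRQ_PENDING 0x01          /* made-up "my card raised the line" bit */

static unsigned char read_status_reg(void) { return STATUS_IRQ_PENDING; } /* stub */
static void service_device(void) { /* acknowledge and drain the device */ }

void shared_line_isr(void)
{
    if (!(read_status_reg() & STATUS_IRQ_PENDING))
        return;                          /* not ours: let the next handler on the line look */
    service_device();                    /* ours: do the work */
}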

Insert 2 AM disclaimer to suit. Send me a copy if it's good.

jeremy

Alexander Viro

Oct 12, 2000
In article <lccF5.1729$%13.4...@dfiatx1-snr1.gtei.net>,

Larry Elmore <lj.e...@gte.net> wrote:
>R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> wrote in message
>news:8s0ai9$28h$1...@nnrp1.deja.com...
>> In article <KETIL-vk1d...@eris.bgo.nera.no>,
>> Ketil Z Malde <ke...@ii.uib.no> wrote:
>> >
>> > Would Linus have written a Mach clone if he started out today? :-)
>>
>> I don't think Linux would have been possible on a microckernel
>> implementation as the "initial" implementation. Remember, Linux was
>> writing Linux for an 80386/16 with about 2 meg of ram, and initially
>> only supported one IDE hard drive, one serial port, a keyboard and
>> an ANSII interface (using BIOS calls).
>
>I remember reading a fairly lengthy interview with Linus where he went into
>detail on why he believed microkernels to be a mistake. Has he changed his
>mind recently?

He didn't. Rex has no fscking idea about the things he's talking
about. Kernel is still monolithic - modules have nothing to do with microkernels.
Besides, whatever this "ANSII" thing is, kernel didn't use the BIOS for
very obvious reasons - too much pain setting the things up for calls of
real mode code.

Moreover, microkernels (their suckitude aside) could be (and had been - see
Minix for example) implemented on worse hardware - e.g. 8086 with 640Kb
and no hard disk at all. So it's pure bullshit.

--
"You're one of those condescending Unix computer users!"
"Here's a nickel, kid. Get yourself a better computer" - Dilbert.

Ketil Z Malde

Oct 12, 2000
vi...@weyl.math.psu.edu (Alexander Viro) writes:

> Kernel is still monolithic - modules have nothing to microkernels.

Well, yes, but a benefit of microkernels is supposedly modularity.
Linux achieves modularity in kernel mode, which I suppose you
could say makes it less monolithic, but not a microkernel.

> Besides, whatever this "ANSII" thing is

ASCII on an ANSI terminal, I suppose :-)

-kzm
--

Peter N. M. Hansteen

Oct 12, 2000
vi...@weyl.math.psu.edu (Alexander Viro) writes:

> Besides, whatever this "ANSII" thing is,

It's the sound of somebody sneezing, digitized.

--
SPECIAL OFFER! I proofread unsolicited commercial email sent to this
address at a rate of US $500.00 per incident! Include billing address
in your message and save US $500.00 per hour off ordinary address
resolution and tracking charge!

Martijn van Buul

Oct 12, 2000
It occurred to me that Ben Hutchings wrote in alt.folklore.computers:

> R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> writes:
> <snip>

> > I don't think Linux would have been possible on a microckernel
> > implementation as the "initial" implementation. Remember, Linux was
> > writing Linux for an 80386/16 with about 2 meg of ram, and initially
> > only supported one IDE hard drive, one serial port, a keyboard and
> > an ANSII interface
>
> Yes, but Minix was written for a basic IBM PC with a floppy drive
> and a minimum of, I think, 256K of RAM.

Current versions of Minix (2.0.2, 2.0.3 coming shortly) do use a microkernel-
ish approach (implementing device drivers and servers as separate tasks).
Unless I'm completely mistaken, Minix has done so from the beginning.

256K wasn't really useful, by the way - current releases require 640K for
a (comfortably) working system.

Ah well. The whole story about "Linux vs. Minix" (or to be more precise:
"Torvalds vs. Tanenbaum" has been discussed at length before.

See the Minix-L archives at http://listserv.nodak.edu/archives/minix-l.html

The "fun" starts at January 1992, week 5, by Tanenbaum starting the
"LINUX is obsolete" thread, which *still* pops up on comp.os.minix
every now and then.

--
Martijn van Buul - Pi...@dohd.org - http://www.stack.nl/~martijnb/
Geek code: G-- - Visit OuterSpace: mud.stack.nl 3333
Kees J. Bot: The sum of CPU power and user brain power is a constant.

Kees J Bot

Oct 12, 2000
In article <KETIL-vk1p...@eris.bgo.nera.no>,

Ketil Z Malde <ke...@ii.uib.no> wrote:
>vi...@weyl.math.psu.edu (Alexander Viro) writes:
>
>> Kernel is still monolithic - modules have nothing to microkernels.
>
>Well, yes, but a benefit of microkernels is supposedly modularity.
>Linux achieves modularity in kernel mode, which I suppose you
>could say makes it less monolithic, but not a microkernel.

Minix-vmd, an offshoot of Minix that has virtual memory, allows the
memory manager, file system server, and TCP/IP server to be paged out
to disk, just like any normal process. You do not really want this to
happen, and it doesn't happen in practice because that code is needed
constantly, but why make an exception?

(Ok, it wasn't entirely trivial. FS has to lock its buffers in
memory. The disk drivers don't appreciate it if the memory they have
to transfer a disk block to/from isn't there.)

This is just an example of how different a microkernel system is from
a monolithic one.
--
Kees J. Bot, Systems Programmer, Sciences dept., Vrije Universiteit Amsterdam

Jeff Sturm

Oct 12, 2000
Alexander Viro wrote:
> Moreover, microkernels (their suckitude aside) could be (and had been - see
> Minix for example) implemented on worse hardware - e.g. 8086 with 640Kb
> and no harddisk at all. So it's a pure bullshit.

Even worse. I had only 256KB to run Minix. But it worked, and could even
recompile itself (cc never needed more than 128KB with a separate I&D space).

--
Jeff Sturm
jeff....@commerceone.com

Jeff Sturm

Oct 12, 2000
Kees J Bot wrote:
> Minix-vmd, an offshoot of Minix that has virtual memory, allows that
> the memory manager, file system server, and TCP/IP server are paged out
> to disk, just like any normal process. You do not really want this to
> happen, and it doesn't happen in practice because that code is needed
> constantly, but why make an exception?

A microkernel isn't a necessary condition for a pageable kernel.
I remember an OS (AIX?) with a pageable sparse process table that
never needed to be dynamically resized.

Anyway, why would you try to second-guess the VM? If I am trying
to compile something on a 4MB machine (god forbid), I sure would like
that TCP code paged to disk. Even though e.g. the Linux kernel is
relatively small, you can bet it contains a goodly amount of
infrequently accessed code. On the other hand, if I am frequently
accessing the network an LRU policy should keep it resident.

This is actually something I considered this week, as I was trying
to trim down a kernel for an old machine with 8MB. The kernel uses
a precious 25% of real memory. Of course, on any reasonable hardware
the kernel footprint is such a small fraction that any potential
gain of paging the kernel probably isn't worthwhile.

--
Jeff Sturm
jeff....@commerceone.com

Alan Barclay

Oct 12, 2000
In article <15f4s8...@jetsam.cs.vu.nl>,

Kees J Bot <kjb=732...@cs.vu.nl> wrote:
>Minix-vmd, an offshoot of Minix that has virtual memory, allows that
>the memory manager, file system server, and TCP/IP server are paged out
>to disk, just like any normal process. You do not really want this to
>happen, and it doesn't happen in practice because that code is needed
>constantly, but why make an exception?

A long time ago, I was on a version of Unix which swapped processes,
and had a 'swapper' process which brought stuff back into memory when
it could. Worked great, until swapper was swapped.

Joe Pfeiffer

Oct 12, 2000
R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> writes:
>
> Correct. But the original Tannenbaum/Torvalds debates often focused
> on the issues and advantages of microkernels vs macrokernels. Over
> the following 2 years, many of the advantages of microkernel were
> integrated into the Linux infrastructure. The Linux kernel isn't
> a true microkernel, but Linus did take some of the best ideas and
> incorporated them into Linux.

This is a point that has to be emphasized -- clean programming
interfaces and modules give nearly all of the benefits of microkernels
without taking the efficiency hit of multiple context switches.

> Actually, Minux could be started from MS-DOS. I don't remember
> if it took over everything (wiping out DOS) or simply ran as the
> key process until it was terminated (it's been almost 10 years).

It takes over everything. It is a true OS, not a process running
under DOS.

> In many cases, Microkernels were the only means of dealing with
> deficient hardware. The extended memory models used on 80286 machines
> required software management of the memory management systems. This
> was where the microkernel (which managed the MMU and timeslices, but
> very little more) really had an advantage.

No. No. No. And again, no. Microkernels had nothing to do with
deficient hardware; they were a different way of constructing the OS.
I don't quite know what you mean when you say the software had to
manage the 286 memory management; that's true, but it's equally true
that software has to manage the 386 (and up) memory management.

> The more sophisticated hardware of the 80386, 68020, PPC, and SPArC
> chips made a microkernel less critical. In a sense, many of the
> functions the microkernel had been designed to implement were being
> implemented in hardware microcode. More important, the hardware could
> do it faster and in parallel.

I'm sorry, at this point you've lost me completely. What you say in
this paragraph has absolutely nothing to do with anything relevant to
microkernels or macrokernels.
--
Joseph J. Pfeiffer, Jr., Ph.D. Phone -- (505) 646-1605
Department of Computer Science FAX -- (505) 646-1002
New Mexico State University http://www.cs.nmsu.edu/~pfeiffer
VL 2000 Homepage: http://www.cs.orst.edu/~burnett/vl2000/

Ron Hunsinger

Oct 12, 2000
In article <8s425d$r...@weyl.math.psu.edu>, vi...@weyl.math.psu.edu
(Alexander Viro) wrote:

> Besides, whatever this "ANSII" thing is

Aboriginal Native Speech Interpreted Ideographically -- the written form of
an unwritten language. (Saves a lot of ink, but burns extra phosphors on
the normal black-on-white display.)

-Ron Hunsinger

R.E.Ballard

Oct 12, 2000
In article <8s425d$r...@weyl.math.psu.edu>,
vi...@weyl.math.psu.edu (Alexander Viro) wrote:
> In article <lccF5.1729$%13.4...@dfiatx1-snr1.gtei.net>,
> Larry Elmore <lj.e...@gte.net> wrote:
> >R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> wrote in message
> >news:8s0ai9$28h$1...@nnrp1.deja.com...
> >> In article <KETIL-vk1d...@eris.bgo.nera.no>,
> >> Ketil Z Malde <ke...@ii.uib.no> wrote:
> >> >
> >> > Would Linus have written a Mach clone if he started out today? :-)
> >>
> >> I don't think Linux would have been possible on a microckernel
> >> implementation as the "initial" implementation. Remember, Linux was
> >> writing Linux for an 80386/16 with about 2 meg of ram, and initially
> >> only supported one IDE hard drive, one serial port, a keyboard and
> >> an ANSII interface (using BIOS calls).
> >
> >I remember reading a fairly lengthy
> > interview with Linus where he went into
> > detail on why he believed microkernels
> > to be a mistake. Has he changed his
> > mind recently?
>
> He didn't. Rex has no fscking idea about the things he's talking
> about. Kernel is still monolithic - modules have nothing to microkernels.

Correct. But the original Tanenbaum/Torvalds debates often focused
on the issues and advantages of microkernels vs macrokernels. Over
the following 2 years, many of the advantages of a microkernel were
integrated into the Linux infrastructure. The Linux kernel isn't
a true microkernel, but Linus did take some of the best ideas and
incorporated them into Linux.

> Besides, whatever this "ANSII" thing is,
> kernel didn't use the BIOS for
> very obvious reasons - too much pain
> setting the things up for calls of
> real mode code.

You are correct. Thanks for correcting this. Linux 0.10 only
supported text mode using a simple VGA interface to character
mode. Even the different resolutions came after the initial
0.10 version. I was posting from memory again (2:00 A.M. and all
that).

> Moreover, microkernels (their suckitude aside)
> could be (and had been - see
> Minix for example) implemented on worse hardware
> - e.g. 8086 with 640Kb
> and no harddisk at all. So it's a pure bullshit.

Actually, Minix could be started from MS-DOS. I don't remember
if it took over everything (wiping out DOS) or simply ran as the
key process until it was terminated (it's been almost 10 years).

In many cases, microkernels were the only means of dealing with
deficient hardware. The extended memory models used on 80286 machines
required software management of the memory management systems. This
was where the microkernel (which managed the MMU and timeslices, but
very little more) really had an advantage.

The more sophisticated hardware of the 80386, 68020, PPC, and SPARC
chips made a microkernel less critical. In a sense, many of the
functions the microkernel had been designed to implement were being
implemented in hardware microcode. More important, the hardware could
do it faster and in parallel.

> --


> "You're one of those condescending Unix computer users!"
> "Here's a nickel, kid. Get yourself a better computer" - Dilbert.
>

--

R.E.Ballard

unread,
Oct 13, 2000, 1:09:14 AM10/13/00
to
In article <uhf6iv...@roundpoint.com>,

Ben Hutchings <ben.hu...@roundpoint.com> wrote:
> R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> writes:
> <snip>

> > I don't think Linux would have been possible on a microckernel
> > implementation as the "initial" implementation. Remember, Linux was
> > writing Linux for an 80386/16 with about 2 meg of ram, and initially
> > only supported one IDE hard drive, one serial port, a keyboard and
> > an ANSII interface
>
> Yes, but Minix was written for a basic IBM PC with a floppy drive
> and a minimum of, I think, 256K of RAM.
>
> > (using BIOS calls).

Oops. I was wrong about that. Sorry!
>
> <snip>


> > Unfortunately, with mice, keyboards, sound cards, multiple IDE
> > controllers, and winmodems, it's not unusual to have only 2-3
> > available interrupts available, and often only 2 that are
> > supported by that card.
>

> PCI interrupts can be mapped to any available ISA interrupt, and can
> easily be traced back to the originating device, so this is not much
> of a problem any more. If you have an APIC (as Intel SMP systems do)
> then you don't even have to worry about ISA interrupts.

Does this mean that I could put ALL of my PCI devices on a single
interrupt, like INT 11 and no longer have to worry about where to
put my two ethernet cards, two serial ports, ISA modem, and parallel
port?

Has anybody tried this with Windows? With Linux?

I've always said that it's in the drivers. It just seems like both
Windows and Linux drivers are very stupid about assuming that
their device is the only possible source of that interrupt.

Is this a new feature of 2.4?
Or has it just been around since 2.2 (or 2.0)
and I just wasn't paying attention :-)

> --
> Any opinions expressed are my own and not
> necessarily those of Roundpoint.

Likewise with IBM. They know that I post, but they don't tell
me what to say, or what not to say.

j1...@sfsu.redirect.edu

Oct 13, 2000
In alt.folklore.computers Ketil Z Malde <ke...@ii.uib.no> wrote:
> vi...@weyl.math.psu.edu (Alexander Viro) writes:

>> Kernel is still monolithic - modules have nothing to microkernels.

> Well, yes, but a benefit of microkernels is supposedly modularity.


> Linux achieves modularity in kernel mode, which I suppose you
> could say makes it less monolithic, but not a microkernel.

Probably a bigger benefit (no literature to back this up) is that a
network stack crash, or the like, does not equal a system crash.
Which I suppose is modularity, but might better be put as separation
of powers.

jeremy

jmfb...@aol.com

Oct 13, 2000
In article <wksnq2n...@datadok.no>,

Peter N. M. Hansteen <pe...@datadok.no> wrote:
>vi...@weyl.math.psu.edu (Alexander Viro) writes:
>
>> Besides, whatever this "ANSII" thing is,
>
>It's the sound of somebody sneezing, digitized.
>
Oh, I thought it was another type of sound effect. <grin>

/BAH

Subtract a hundred and four for e-mail.

Casper H.S. Dik - Network Security Engineer

Oct 13, 2000
[[ Reply by email or post, don't do both ]]

j1...@sfsu.redirect.edu writes:

>Probably a bigger benefit (no literature to back this up) is that a
>network stack crash, or the like, does not equal a system crash.
>Which I suppose is modularity, but might better be put as seperation
>of powers.

Which, of course, is completely untrue.

Typically, networks and filesystems keep state. If your network
crashes, everything that needs a bit of network state needs to
be restarted (e.g., established TCP connections, listening
sockets, etc)

Similarly, what can you continue to do when your filesystem has crashed?

Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.

jmfb...@aol.com

Oct 13, 2000
In article <8s6u4u$bbp$1...@dfw-ixnews3.ix.netcom.com>,

j1...@sfsu.redirect.edu wrote:
>In alt.folklore.computers Ketil Z Malde <ke...@ii.uib.no> wrote:
>> vi...@weyl.math.psu.edu (Alexander Viro) writes:
>
>>> Kernel is still monolithic - modules have nothing to microkernels.
>
>> Well, yes, but a benefit of microkernels is supposedly modularity.
>> Linux achieves modularity in kernel mode, which I suppose you
>> could say makes it less monolithic, but not a microkernel.
>
>Probably a bigger benefit (no literature to back this up) is that a
>network stack crash, or the like, does not equal a system crash.

If all users are connected via the network _and_ there is no
way to reattach to the job or processes that the users were
doing at the time of the network crash, it sure is equal to a
system crash _from the users' point of view_. And the users'
point of view w.r.t. the computer is all that counts....<grumble>.

/BAH

>Which I suppose is modularity, but might better be put as seperation
>of powers.
>

>jeremy

Joe Pfeiffer

Oct 13, 2000
R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> writes:
>
> Does this mean that I could put ALL of my PCI devices on a single
> interrupt, like INT 11 and no longer have to worry about where to
> put my two ethernet cards, two serial ports, ISA modem, and parallel
> port?

It ``oughta'' work in Linux.

> Has anybody tried this with Windows? With Linux?

No; I'd have to check my /proc/pci at home to see if my system even
wound up sharing any interrupts.

> I've always said that it's in the drivers. It just seems like both
> Windows and Linux drivers seem to be very stupid about assuming that
> their device the only possible source of that interrupt.
>
> Is this a new feature of 2.4?
> Or has it just been around since 2.2 (or 2.0)
> and I just wasn't paying attention :-)

At least as of 2.0 it was possible to write a driver that could do
interrupt sharing. I *think* it's now required.
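
From memory, registering a shared handler in the 2.2/2.4 era looks roughly
like the sketch below; details may be off, and the "mydev" name, IRQ 9 and
the device structure are invented. The SA_SHIRQ flag plus a unique dev_id
pointer is what lets several drivers hang handlers off one line; each
handler then has to check its own card before doing anything:

#include <linux/sched.h>          /* request_irq(), free_irq(), SA_SHIRQ (2.2/2.4-era headers) */

static struct mydev { int irq; } mydev = { 9 };

static void mydev_intr(int irq, void *dev_id, struct pt_regs *regs)
{
    struct mydev *dev = dev_id;
    /* read the card's status register here and return quietly if it
       shows nothing pending -- some other device on the line fired */
    (void)dev; (void)regs;
}

int mydev_attach(void)
{
    /* SA_SHIRQ: willing to share; &mydev identifies this handler later */
    return request_irq(mydev.irq, mydev_intr, SA_SHIRQ, "mydev", &mydev);
}

void mydev_detach(void)
{
    free_irq(mydev.irq, &mydev);  /* dev_id tells the kernel which handler to remove */
}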

Michael Wojcik

Oct 13, 2000

In article <39E5E9E1...@appnet.com>, Jeff Sturm <jeff....@appnet.com> writes:

> Kees J Bot wrote:

> > Minix-vmd, an offshoot of Minix that has virtual memory, allows that
> > the memory manager, file system server, and TCP/IP server are paged out
> > to disk, just like any normal process. You do not really want this to
> > happen, and it doesn't happen in practice because that code is needed
> > constantly, but why make an exception?

> A microkernel isn't a necessary condition for a pageable kernel.


> I remember an OS (AIX?) with a pageable sparse process table that
> never needed to be dynamically resized.

AIX 3 and later indeed has a monolithic kernel that is largely pageable.
It's largely pre-emptible, too. AIX 2 (for the PC RT) also had a
partially pageable kernel, because most of the kernel ran on top of an
abstraction layer called the VRM (Virtual Resource Manager). AIX 3
improved on that design significantly by incorporating the important
features of the VRM into the rest of the kernel, removing a lot of
overhead.

AIX 3 pinned interrupt handlers in physical memory, and allowed kernel
modules (AIX 3 and later does dynamic kernel module loading, too) to
pin additional text and/or data pages as needed. In practice, little
gets pinned.

I don't know if the AIX process table is pageable (I suspect it is),
but the file locks table, for example, definitely is; it's just a
great big sparse array of lock entries, nearly all of which is usually
unused and swapped out.

(ObRef: See the IBM _RISC System/6000 Technology_ collection, IBM doc
GA23-2619.)

> Anyway, why would you try to second-guess the VM?

Indeed. What I "don't want" is to fill physical memory with a bunch of
code and data I'm not using, if the system can dump it to disk and still
perform correctly and without incurring an unreasonable penalty. Which,
experience shows, it can.

> Of course, on any reasonable hardware
> the kernel footprint is such a small fraction that any potential
> gain of paging the kernel probably isn't worthwhile.

If it's not worthwhile, the VM won't do it. If there's room in physical
memory for those kernel pages, they'll stay around. If the VM has
something better to do with that RAM, they'll get the boot.

Anyway, as you pointed out yourself, pageable kernels let you use sparse
arrays rather than data structures with more overhead in some cases.


--
Michael Wojcik michael...@merant.com
AAI Development, MERANT (block capitals are a company mandate)
Department of English, Miami University

The penance was not building the field and bringing back Shoeless Joe
Jackson, but rather tossing on the field with his father. -- Kevin Aug

jmfb...@aol.com

Oct 13, 2000
In article <1b66mwn...@viper.cs.nmsu.edu>,

Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:
>R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> writes:
>>
>> Does this mean that I could put ALL of my PCI devices on a single
>> interrupt, like INT 11 and no longer have to worry about where to
>> put my two ethernet cards, two serial ports, ISA modem, and parallel
>> port?
>
>It ``oughta'' work in Linux.
>
>> Has anybody tried this with Windows? With Linux?
>
>No; I'd have to check my /proc/pci at home to see if my system even
>wound up sharing any interrupts.
>
>> I've always said that it's in the drivers. It just seems like both
>> Windows and Linux drivers seem to be very stupid about assuming that
>> their device the only possible source of that interrupt.
>>
>> Is this a new feature of 2.4?
>> Or has it just been around since 2.2 (or 2.0)
>> and I just wasn't paying attention :-)
>
>At least as of 2.0 it was possible to write a driver that could do
>interrupt sharing. I *think* it's now required.

I should hope so. Are you telling me that there are systems
that limit 1 interrupt type per channel???!!???? Don't
tell me...I don't wanna know. Actually, this PC configuration
is now beginning to make sense to me. I don't think I'm a
stoopid luser anymore <boo, hoo>

/BAH

sw...@nol.net

Oct 13, 2000
In alt.folklore.computers Casper H.S. Dik - Network Security Engineer <Caspe...@holland.sun.com> wrote:
> Similarly, what can you continue to do when your filesystem has crashed?

Had that happen on a diskless Sun 3 as an undergrad. (Server crashed
during crunch time.)

The clock still worked.

--
Mike Swaim, Avatar of Chaos: Disclaimer:I sometimes lie.
Home: swaim at nol * net Quote: "Boingie"^4 Y,W&D

R.E.Ballard

Oct 13, 2000
In article <8s75p6$csb$9...@bob.news.rcn.net>,
> Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:
> >R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> writes:
> >>
> >> Does this mean that I could put ALL of my PCI devices on a single
> >> interrupt, like INT 11 and no longer have to worry about where to
> >> put my two ethernet cards, two serial ports, ISA modem, and parallel
> >> port?
> >
> >It ``oughta'' work in Linux.
> >
> >> Has anybody tried this with Windows? With Linux?
> >
> >No; I'd have to check my /proc/pci at home to see if my system even
> >wound up sharing any interrupts.
> >
> >> I've always said that it's in the drivers. It just seems like both
> >> Windows and Linux drivers seem to be very stupid about assuming that
> >> their device the only possible source of that interrupt.
> >>
> >> Is this a new feature of 2.4?
> >> Or has it just been around since 2.2 (or 2.0)
> >> and I just wasn't paying attention :-)
> >
> >At least as of 2.0 it was possible to write a driver that could do
> >interrupt sharing. I *think* it's now required.

I guess I was unclear. I was asking if ALL DRIVERS had been converted
to support interrupt sharing. The only one that seems risky is UARTs.
The problem there is that if you put two of them on the same interrupt,
they could BOTH have outstanding data, which means you would have to
check each during each interrupt.
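
For 16550-style parts, at least, that check is cheap: the Interrupt
Identification Register says whether a particular UART has anything
pending, so the handler just polls each base address it owns. A sketch --
0x3f8/0x2f8 are the conventional COM1/COM2 bases; inb() and the service
routine are assumed to exist elsewhere:

#define UART_IIR         2               /* IIR offset from the port base */
#define UART_IIR_NO_INT  0x01            /* bit 0 set = nothing pending on this port */

static const unsigned short uart_base[] = { 0x3f8, 0x2f8 };

extern unsigned char inb(unsigned short port);     /* platform port-read primitive */
extern void uart_service(unsigned short base);     /* drain the FIFO, push to the tty, ... */

void shared_uart_isr(void)
{
    int i;
    for (i = 0; i < 2; i++)                         /* ask each port in turn */
        if (!(inb(uart_base[i] + UART_IIR) & UART_IIR_NO_INT))
            uart_service(uart_base[i]);
}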

> I should hope so. Are telling me that there are systems
> that limit 1 interrupt type per channel???!!???? Don't
> tell me...I don't wanna know. Actually, this PC configuration
> is now beginning to make sense to me. I don't think I'm a
> stoopid luser anymore <boo, hoo>
>
> /BAH
>
> Subtract a hundred and four for e-mail.
>

--

Charlie Gibbs

Oct 13, 2000
In article <hnsngr-ya0231800...@news.flash.net>
hns...@sirius.com (Ron Hunsinger) writes:

Is that anything like the silly little hieroglyphics that are
displacing real language these days? (They're intended to eliminate
discrimination by replacing words that are only understandable by
English speakers with symbols that are not understandable by anyone.)

ASCII stupid question, get a stupid ANSI.

--
cgi...@sky.bus.com (Charlie Gibbs)
Remove the first period after the "at" sign to reply.


Adam Sampson

Oct 13, 2000
Joe Pfeiffer <pfei...@cs.nmsu.edu> writes:

> R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> writes:
> >
> > Does this mean that I could put ALL of my PCI devices on a single
> > interrupt, like INT 11 and no longer have to worry about where to
> > put my two ethernet cards, two serial ports, ISA modem, and parallel
> > port?
>

> It ``oughta'' work in Linux.

It does. From my /proc/pci:

Bus 0, device 9, function 1:
Multimedia controller: Brooktree Corporation Bt878 (rev 2).
IRQ 5.
Master Capable. Latency=64. Min Gnt=4.Max Lat=255.
Prefetchable 32 bit memory at 0xed001000 [0xed001fff].
Bus 0, device 11, function 0:
Ethernet controller: 3Com Corporation 3c900 Combo [Boomerang] (rev 0).
IRQ 5.
Master Capable. Latency=64. Min Gnt=3.Max Lat=8.
I/O at 0xe000 [0xe03f].
Bus 1, device 0, function 0:
VGA compatible controller: Matrox Graphics, Inc. MGA G200 AGP (rev 3).
IRQ 5.
Master Capable. Latency=64. Min Gnt=16.Max Lat=32.
Prefetchable 32 bit memory at 0xeb000000 [0xebffffff].
Non-prefetchable 32 bit memory at 0xe7000000 [0xe7003fff].
Non-prefetchable 32 bit memory at 0xe8000000 [0xe87fffff].

--

Adam Sampson
at...@ukc.ac.uk

jmfb...@aol.com

Oct 14, 2000
In article <yhFF5.24751$UP5.4...@news6.giganews.com>,

sw...@nol.net wrote:
>In alt.folklore.computers Casper H.S. Dik - Network Security Engineer
<Caspe...@holland.sun.com> wrote:
>> Similarly, what can you continue to do when your filesystem has crashed?
>
> Had that happen on a diskless Sun 3 as an undergrad. (Server crashed
>during crunch time.)
>
> The clock still worked.

<grin>

jmfb...@aol.com

Oct 14, 2000
In article <836.321T8...@sky.bus.com>,

"Charlie Gibbs" <cgi...@sky.bus.com> wrote:
>In article <hnsngr-ya0231800...@news.flash.net>
>hns...@sirius.com (Ron Hunsinger) writes:
>
>Is that anything like the silly little hieroglyphics that are
>displacing real language these days? (They're intended to eliminate
>discrimination by replacing words that are only understandable by
>English speakers with symbols that are not understandable by anyone.)

Yup. Has to be. I'm still trying to figure out how to associate
on/off w.r.t. the power button on my terminal...thank some bright
kid that he included a power light next to the glyph. I just
bought a snow blower. It has a choke. I still haven't figured
out which way the choke is "on".

>
>ASCII stupid question, get a stupid ANSI.
>

Yeah. Boy! Did you hit a hot button with that one. ;-)

jmfb...@aol.com

Oct 14, 2000
In article <87k8bch...@cartman.azz.net>,
Adam Sampson <at...@ukc.ac.uk> wrote:

>Joe Pfeiffer <pfei...@cs.nmsu.edu> writes:
>
>> R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> writes:
>> >
>> > Does this mean that I could put ALL of my PCI devices on a single
>> > interrupt, like INT 11 and no longer have to worry about where to
>> > put my two ethernet cards, two serial ports, ISA modem, and parallel
>> > port?
>>
>> It ``oughta'' work in Linux.
>
>It does. From my /proc/pci:
>
> Bus 0, device 9, function 1:
> Multimedia controller: Brooktree Corporation Bt878 (rev 2).
> IRQ 5.
> Master Capable. Latency=64. Min Gnt=4.Max Lat=255.
> Prefetchable 32 bit memory at 0xed001000 [0xed001fff].
> Bus 0, device 11, function 0:
> Ethernet controller: 3Com Corporation 3c900 Combo [Boomerang] (rev 0).
> IRQ 5.
> Master Capable. Latency=64. Min Gnt=3.Max Lat=8.
> I/O at 0xe000 [0xe03f].
> Bus 1, device 0, function 0:
> VGA compatible controller: Matrox Graphics, Inc. MGA G200 AGP (rev 3).
> IRQ 5.
> Master Capable. Latency=64. Min Gnt=16.Max Lat=32.
> Prefetchable 32 bit memory at 0xeb000000 [0xebffffff].
> Non-prefetchable 32 bit memory at 0xe7000000 [0xe7003fff].
> Non-prefetchable 32 bit memory at 0xe8000000 [0xe87fffff].
>

Ah, but you missed his misunderstanding. An interrupt doesn't
call a device driver. An interrupt is handled by the
interrupt handler whose sole job is to squirrel away the
necessary information for that interrupt and then dismiss
it for the next interrupt. Part of the info storage is
to mark an attention so that the device handler knows it
has work to do. The device service routine doesn't
run at interrupt level.

And that's why one can service different devices on the
same interrupt level. The interrupt levels are supposed
to be a hierarchy of placing importance on servicing
interrupts. Some interrupt data won't hang around
very long, so it should be allowed to interrupt an
interrupt. (I think I've got that right--I'm not strong
on hardware stuff.)
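
In miniature, the split looks like the toy illustration below (one made-up
device, one flag): the interrupt-level routine only records what happened,
and the service routine does the slow part later.

#include <stdio.h>

static volatile int attention;               /* "work to do" mark set at interrupt level */
static volatile unsigned char saved_status;  /* squirrelled-away hardware status */

void interrupt_handler(unsigned char hw_status)   /* must be quick: store and dismiss */
{
    saved_status = hw_status;
    attention = 1;                                /* tell the device handler it has work */
}

void device_service_routine(void)                 /* runs later, not at interrupt level */
{
    if (!attention)
        return;
    attention = 0;
    printf("servicing device, status was 0x%02x\n", (unsigned)saved_status);
}

int main(void)
{
    interrupt_handler(0x42);                      /* pretend the device interrupted */
    device_service_routine();                     /* the slow work happens here */
    return 0;
}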

jmfb...@aol.com

Oct 14, 2000
In article <8s7pe6$6b1$1...@nnrp1.deja.com>,

R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> wrote:
>In article <8s75p6$csb$9...@bob.news.rcn.net>,
> jmfb...@aol.com wrote:
>> In article <1b66mwn...@viper.cs.nmsu.edu>,
>> Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:
>> >R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> writes:
>> >>
>> >> Does this mean that I could put ALL of my PCI devices on a single
>> >> interrupt, like INT 11 and no longer have to worry about where to
>> >> put my two ethernet cards, two serial ports, ISA modem, and parallel
>> >> port?
>> >
>> >It ``oughta'' work in Linux.
>> >
>> >> Has anybody tried this with Windows? With Linux?
>> >
>> >No; I'd have to check my /proc/pci at home to see if my system even
>> >wound up sharing any interrupts.
>> >
>> >> I've always said that it's in the drivers. It just seems like both
>> >> Windows and Linux drivers seem to be very stupid about assuming that
>> >> their device the only possible source of that interrupt.
>> >>
>> >> Is this a new feature of 2.4?
>> >> Or has it just been around since 2.2 (or 2.0)
>> >> and I just wasn't paying attention :-)
>> >
>> >At least as of 2.0 it was possible to write a driver that could do
>> >interrupt sharing. I *think* it's now required.
>
>I guess I was unclear. I was asking if ALL DRIVERS had been converted
>to support interrupt sharing.

It's not the drivers that should be doing the
interrupt handling.

> The only one that seems risky is UARTS.
>The problem there is that if you put two of them on the same interrupt,
>they could BOTH have outstanding data, which means you would have to
>check each during each interrupt.

Sure, that's the job of the interrupt handler, not the device
service routine. The device service routine will take a long
time. An interrupt service routine can't take a long time
because it has to be ready to get interrupted (a higher
interrupt happened) or handle the next interrupt.

<snip>

jmfb...@aol.com

Oct 14, 2000
In article <yhFF5.24751$UP5.4...@news6.giganews.com>,
sw...@nol.net wrote:
>In alt.folklore.computers Casper H.S. Dik - Network Security Engineer
<Caspe...@holland.sun.com> wrote:
>> Similarly, what can you continue to do when your filesystem has crashed?

Errrmmmm....it would be nice if you could restore the file system?

<snip reply>

Andrew Gabriel

Oct 14, 2000
In article <8s72hg$3rf$1...@new-usenet.uk.sun.com>,
Caspe...@Holland.Sun.Com (Casper H.S. Dik - Network Security Engineer) writes:

>j1...@sfsu.redirect.edu writes:
>
>>Probably a bigger benefit (no literature to back this up) is that a
>>network stack crash, or the like, does not equal a system crash.
>>Which I suppose is modularity, but might better be put as seperation
>>of powers.
>
>Which, of course, is completely untrue.
>
>Typically, networks and filesystems keep state. If your network
>crashes, everything that needs a bit of network state needs to
>be restarted (e.g., established TCP connections, listening
>sockets, etc)
>
>Similarly, what can you continue to do when your filesystem has crashed?

Reload it?
OK, Unix can't, but I spent many years working on an operating system
which could reload crashed modules such as network, filesystem, NFS,
etc. They ran with the same protection you have between processes in
Unix, so crashes were confined. Each module could be configured to
be system fatal or not, if it crashed. For different applications,
these settings could be set differently. For example, when the
application was a network switch/router, the filesystem was normally
configured non-system fatal, as the computer could carry on its main
role of switching packets even if the disks head-crashed, or the disk
driver or filesystem modules crashed. Each module could also be debugged,
deleted, and reloaded just like a unix process, which made development
quite easy. The communication between modules was almost exactly the
same as Solaris doors IPC mechanism (except it predated it by 25 years ;-).

--
Andrew Gabriel
Consultant Software Engineer


Ian Stirling

Oct 14, 2000
Andrew Gabriel <and...@cucumber.demon.co.uk> wrote:
>these settings could be set differently. For example, when the
>application was a network switch/router, the filesystem was normally
>configured non-system fatal, as the computer could carry on it's main
>role of switching packets even if the disks head-crashed, or the disk
>driver or filesystem modules crashed. Each module could also be debugged,

As can Linux. I've both had crashed disks on a firewall machine (that
kept firewalling, and masquerading, though not logging) and actually
swapped out the main hard drive (with mounted root filesystem) (It was
a laptop, and this was the easiest way to upgrade the disk.)
Note, it does work better, if you turn swapping off first :)

For a while some people were using routers made with Linux systems
that used a network configurator as init, and simply exited, to leave
the machine with no mounted filesystems, running no programs but still
routing.

--
http://inquisitor.i.am/ | mailto:inqui...@i.am | Ian Stirling.
---------------------------+-------------------------+--------------------------
To do is to be
To be is to do
Do be do be do do

Charlie Gibbs

Oct 14, 2000
In article <8s9g4b$jkd$1...@bob.news.rcn.net> jmfb...@aol.com
(jmfbahciv) writes:

>In article <836.321T8...@sky.bus.com>,
>"Charlie Gibbs" <cgi...@sky.bus.com> wrote:
>
>>Is that anything like the silly little hieroglyphics that are
>>displacing real language these days? (They're intended to eliminate
>>discrimination by replacing words that are only understandable by
>>English speakers with symbols that are not understandable by anyone.)
>
>Yup. Has to be. I'm still trying to figure out how to associate
>on/off w.r.t. the power button on my terminal...

That's easy. The O represents an open pipe through which current
can flow. The | represents a pipe which has been pinched closed,
stopping the flow of current.

Or at least that's what swam into my head late one night after
I had been hacking way too long.

>thank some bright kid that he included a power light next to the
>glyph. I just bought a snow blower. It has a choke. I still
>haven't figured out which way the choke is "on".

Does the throttle have the tortoise and hare glyphs next to it?
I always wondered how universal they'd be in a culture that didn't
have the legend of the tortoise and the hare - not to mention which
one actually made it across the finish line first.

Lars Poulsen

Oct 14, 2000
j1...@sfsu.redirect.edu wrote:
> ... decent hardware should indicate that it really did trigger
> it's interrupt line. And decent drivers should
> check this rather than assume.

Unfortunately the interrupt specification for the ISA bus is
indecent, using a feature called "edge-triggered interrupts",
even though the interrupt controller chip is perfectly capable
of doing the correct "level-triggered" interrupts.

The PCI bus is designed and implemented correctly.
--
/ Lars Poulsen - http://www.cmc.com/lars - la...@cmc.com
125 South Ontare Road, Santa Barbara, CA 93105 - +1-805-569-5277

Joe Pfeiffer

Oct 15, 2000
jmfb...@aol.com writes:
> >>
> >> It ``oughta'' work in Linux.
> >
> >It does. From my /proc/pci:
>
> Ah, but you missed his misunderstanding.

What misunderstanding? Mine? I said what he was suggesting would
work, not that it was a good idea.

> An interrupt doesn't
> call a device driver. And interrupt is handled by the
> interrupt handler whose sole job is to squirrel away the
> necessary information for that interrupt and then dismiss
> it for the next interrupt. Part of the info storage is
> to mark an attention so that the device handler knows it
> has work to do. The device service rountine doesn't
> run at interrupt level.

I'll take a guess that that's what the DEC-10 did -- in Unix, most
device drivers do all their device servicing within the interrupt
handler. A driver that needs to do a lot of processing for an
interrupt will be split up as you say, but with different
terminology.

> And that's why one can service different devices on the
> same interrupt level. The interrupt levels are supposed
> to be a heirarchy of placing importance of servicing
> interrupts. Some interrupt data wonn't hang around
> very long, so it should be allowed to interrupt an
> interrupt. (I think I've got that right--I'm not strong
> on hardware stuff.)

You can do it anyway, the low-priority device just has to make sure
it's read its stuff before it re-enables interrupts. The
high-priority interrupt is serviced, and returns to the low-priority
interrupt handler.

phil

Oct 15, 2000
On 14 Oct 00 15:54:35 -0800, "Charlie Gibbs" <cgi...@sky.bus.com>
wrote:

Interesting. I think that this applies to a lot of stuff. For example,
i'm currently working on a mapping system that allows previous
positions of a vehicle to be marked so that the route taken is
obvious. This feature is called breadcrumbs. No matter how it is
translated, i suspect that anyone who isn't aware of Hansel and Gretel
wouldn't understand the reference, especially since a limited number
of crumbs are maintained and these are "eaten up" as more positions
are added.

Once the connection is established, i suppose the moniker doesn't
matter; i click lots of buttons where i've no idea what the graphic
represents, just know what it does. Unix contractions probably work
this way too: For example, who in Spain cares what ls -l actually
represents (cue lots of Spanish people saying that ls -l is the
standard Spanish way of asking for a detailed directory listing, and
everybody knows that. All i can say is es de Basingstoke, no me
gusta).


Felipe

jmfb...@aol.com

Oct 16, 2000
In article <1b1yxhz...@viper.cs.nmsu.edu>,

Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:
>jmfb...@aol.com writes:
>> >>
>> >> It ``oughta'' work in Linux.
>> >
>> >It does. From my /proc/pci:
>>
>> Ah, but you missed his misunderstanding.
>
>What misunderstanding? Mine?

No. His (whoever he was :-)).

> I said what he was suggesting would
>work, not that it was a good idea.

But it won't work on systems that have hetero gear w.r.t.
speed.

>
>> An interrupt doesn't
>> call a device driver. And interrupt is handled by the
>> interrupt handler whose sole job is to squirrel away the
>> necessary information for that interrupt and then dismiss
>> it for the next interrupt. Part of the info storage is
>> to mark an attention so that the device handler knows it
>> has work to do. The device service rountine doesn't
>> run at interrupt level.
>

>I'll take a guess that that's what the DEC-10 did ...

Yup.

> ...-- in Unix, most
>device drivers do all their device servicing within the interrupt
>handler. A driver that needs to do a lot of processing for an
>interrupt will be split up as you say, but with different
>terminology.

The faster the machine is the less time one has to ensure
that those interrupts can be serviced. A device at a
higher interrupt level that goes berserk can't be dealt
with elegantly if it keeps interrupting itself before
the interrupt is dismissed. And I really don't like
an error handler to be at a higher interrupt level
than ...say... a realtime clock (I went to an extreme
with this choice of hardware).


>
>> And that's why one can service different devices on the
>> same interrupt level. The interrupt levels are supposed
>> to be a hierarchy of placing importance on servicing
>> interrupts. Some interrupt data won't hang around
>> very long, so it should be allowed to interrupt an
>> interrupt. (I think I've got that right--I'm not strong
>> on hardware stuff.)
>
>You can do it anyway, the low-priority device just has to make sure
>it's read its stuff before it re-enables interrupts. The
>high-priority interrupt is serviced, and returns to the low-priority
>interrupt handler.

Think about error handling or a device that's gone berserk or
the cases where there's more than one device assigned to an
interrupt level.
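
One crude way to cope with a berserk device is sketched below
(hypothetical names, hardware simulated by a plain loop): count the
interrupts in a burst and mask the device off at the controller once
it storms, rather than letting it hog the machine.

/* Sketch only: a "stuck interrupt" guard for a device gone berserk. */
#include <stdio.h>
#include <stdbool.h>

#define STORM_LIMIT 8           /* "too many" interrupts from one device */

struct device {
    const char *name;
    int  burst;                 /* interrupts seen in the current burst */
    bool masked;                /* true once we've shut the device up */
};

static void handle_interrupt(struct device *d)
{
    if (d->masked)
        return;                 /* already disabled at the controller */
    if (++d->burst > STORM_LIMIT) {
        d->masked = true;       /* mask it rather than let it hog the CPU */
        fprintf(stderr, "%s: interrupt storm, device disabled\n", d->name);
        return;
    }
    /* normal path: record status, mark attention, dismiss */
}

int main(void)
{
    struct device tty = { .name = "tty0" };
    for (int i = 0; i < 20; i++)        /* simulate a stuck interrupt line */
        handle_interrupt(&tty);
    printf("masked = %s\n", tty.masked ? "yes" : "no");
    return 0;
}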

/BAH

Kees J Bot

unread,
Oct 16, 2000, 3:00:00 AM10/16/00
to
In article <1bd7h51...@viper.cs.nmsu.edu>,
Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:

>R.E.Ballard ( Rex Ballard ) <r.e.b...@usa.net> writes:
>
>> Actually, Minux could be started from MS-DOS. I don't remember
>> if it took over everything (wiping out DOS) or simply ran as the
>> key process until it was terminated (it's been almost 10 years).
>
>It takes over everything. It is a true OS, not a process running
>under DOS>

The current version of Minix can be run from a large DOS file that is
seen by Minix as its disk. The bootstrap, a DOS program, claims all
the memory that DOS has available, puts Minix in it, switches to
protected mode and calls the Minix kernel.

The Minix "dosfile" disk driver makes a call to the bootstrap program
that switches back to real mode, makes a DOS lseek, read or write call
to move the data, switches back to protected mode and returns to Minix.
If you shut Minix down you return to the prompt of the bootstrap, from
where you can exit to DOS.

By implementing this I've allowed a UNIX-like system to be run on top
of DOS, like Windows 3.11. If I believed in hell I would be worried.
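
The core of the dosfile idea can be sketched in portable C (hypothetical
names, with ordinary file I/O standing in for the real-mode DOS calls the
real driver reaches through the bootstrap): the whole Minix disk is one
big file, and each block transfer turns into a seek plus a read or write
on it.

/* Minimal sketch, not the actual driver: a block device backed by one
 * big file on the host system. */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 1024

static FILE *disk_image;            /* stands in for the DOS file */

static int dosfile_rw(long block, void *buf, int write)
{
    /* seek to the block's offset in the backing file... */
    if (fseek(disk_image, block * BLOCK_SIZE, SEEK_SET) != 0)
        return -1;
    /* ...then move the data with an ordinary read or write call */
    size_t n = write ? fwrite(buf, 1, BLOCK_SIZE, disk_image)
                     : fread(buf, 1, BLOCK_SIZE, disk_image);
    return n == BLOCK_SIZE ? 0 : -1;
}

int main(void)
{
    char block[BLOCK_SIZE];

    disk_image = fopen("minix_disk.img", "w+b");   /* hypothetical image name */
    if (!disk_image) { perror("fopen"); return 1; }

    memset(block, 0xAA, sizeof block);
    if (dosfile_rw(3, block, 1) != 0) return 1;     /* write block 3 */

    memset(block, 0, sizeof block);
    if (dosfile_rw(3, block, 0) != 0) return 1;     /* read it back */
    printf("block 3 byte 0 = 0x%02X\n", (unsigned char)block[0]);

    fclose(disk_image);
    return 0;
}
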
--
Kees J. Bot, Systems Programmer, Sciences dept., Vrije Universiteit Amsterdam

Don Chiasson

unread,
Oct 16, 2000, 3:00:00 AM10/16/00
to

jmfb...@aol.com wrote in message
<8sekg4$ov9$3...@bob.news.rcn.net>...
>In article <1b1yxhz...@viper.cs.nmsu.edu>,
> Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:
>>jmfb...@aol.com writes:


<.............>


>Think about error handling or a device that's gone berserk
or
>the cases where there's more than one device assigned to an
>interrupt level.
>

The other one to watch out for is errors within errors.
These can lead to some race conditions. TOPS-20 had a bug
(it was fixed) that hung the system royally: detection of a
defective block on disk called a routine to move the block
to a special file, a list of bad blocks (the BAT block?).
Once, on our system, it happened that there was a defective
block; the error handler trapped and tried to add the block
to the BAT block list. Unfortunately, when it needed a new
block for the file directory information, the requested block
was also defective. This called the routine to move the block
to the defective list, ad infinitum. The system froze to users
and the console kept pounding out error messages.
Two defective blocks in a row is highly unlikely (unless the
drive is undergoing catastrophic failure), but not impossible.
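
A toy sketch of the obvious guard (hypothetical names, disk simulated by
an array of flags): the bad-block handler simply refuses to nest past one
level instead of recursing forever.

/* Sketch only: guarding a bad-block handler against the
 * "error within an error" loop described above. */
#include <stdio.h>
#include <stdbool.h>

#define NBLOCKS 16

static bool defective[NBLOCKS] = { [3] = true, [4] = true }; /* two in a row */
static int  bat_list[NBLOCKS];
static int  bat_count;

static int record_bad_block(int blk, int depth)
{
    /* Limit nesting: if adding a bad block itself hits a bad block,
     * give up instead of recursing forever. */
    if (depth > 1) {
        fprintf(stderr, "giving up: bad block while handling bad block %d\n", blk);
        return -1;
    }
    bat_list[bat_count++] = blk;

    /* Pretend the directory update needs the next block; if that one is
     * also defective, handle it -- but at increased depth. */
    int dir_blk = blk + 1;
    if (dir_blk < NBLOCKS && defective[dir_blk])
        return record_bad_block(dir_blk, depth + 1);
    return 0;
}

int main(void)
{
    record_bad_block(3, 0);
    printf("BAT list has %d entries\n", bat_count);
    return 0;
}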

Don


Heinz W. Wiggeshoff

unread,
Oct 16, 2000, 11:50:07 PM10/16/00
to
"Don Chiasson" (don_ch...@earthling.net) writes:

> The other one to watch out for is errors within errors.
> These can lead to some race conditions.

...
I wonder if the designers forgot transitive relations.

If a depends on b and b depends on c then a depends on c

or something along that line. Of course, this sort of analysis would
be impossible in the mad scramble to deliver product promised with
stupid deadlines, few resources and entry level (read: cheap) staff.
Oops, there I go showing my age again.

jmfb...@aol.com

unread,
Oct 17, 2000, 3:00:00 AM10/17/00
to
In article <8sgi9f$ipa$1...@freenet9.carleton.ca>,

ab...@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) wrote:
>"Don Chiasson" (don_ch...@earthling.net) writes:
>
>> The other one to watch out for is errors within errors.
>> These can lead to some race conditions.
>....

> I wonder if the designers forgot transitive relations.
>
> If a depends on b and b depends on c then a depends on c
> or something along that line.

Then add the little notion that c may happen before a and you
might get an idea of how a timesharing system runs.

>Of course, this sort of analysis would
> be impossible in the mad scramble to deliver product promised with
> stupid deadlines, few resources and entry level (read: cheap) staff.
> Oops, there I go showing my age again.

Nope. Not your age, just your inexperience. We managed.

Phil Gustafson

unread,
Oct 17, 2000, 3:00:00 AM10/17/00
to
Because it's a far, far, finer thing than Sun copying Microsoft.

Phil
--
))
(( Phil Gustafson Urban Legends FAQ: http://www.urbanlegends.com
C|~~| Java FAQ: http://www.afu.com
`--' <ph...@panix.com>

jmfb...@aol.com

unread,
Oct 18, 2000, 3:00:00 AM10/18/00
to
In article <sunfqv7...@corp.supernews.com>,

"Don Chiasson" <don_ch...@earthling.net> wrote:
>
>jmfb...@aol.com wrote in message
><8sekg4$ov9$3...@bob.news.rcn.net>...
>>In article <1b1yxhz...@viper.cs.nmsu.edu>,
>> Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:
>>>jmfb...@aol.com writes:
>
>
><.............>
>>Think about error handling or a device that's gone berserk or
>>the cases where there's more than one device assigned to an
>>interrupt level.
>>
>The other one to watch out for is errors within errors.
>These can lead to some race conditions. TOPS-20 had
>a bug (it was fixed) that hung the system royally: detection
>of a defective block on disk called a routine to move the
>block
>to the a special file, a list of bad blocks (the BAT
>block?).

That was TOPS-10's name for it. I don't remember if it
was the -20's.

> Once, our system it happened that there was a defective
>block, the error handler trapped and tried to add the block
>to the BAT block list. Unfortunately, when it needed a new
>block for the file directory information, the requested block
>was also defective. This called the routine to move the block
>to the defective list, ad infinitum. The system froze to users
>and the console kept pounding out error messages.
> Two defective blocks in a row is highly unlikely (unless
>the drive is undergoing catastrophic failure), but not
>impossible.

Handling catastrophes gracefully is the job of a good OS,
even when it is the BAT block that's going bad. It used
to be called reliability, even when Murphy is painting the
town red.

Jim Thomas

unread,
Oct 18, 2000, 3:00:00 AM10/18/00
to
>>>>> "/BAH" == jmfbahciv <jmfb...@aol.com> writes:

/BAH> Handling catastrophes gracefully is the job of a good OS,
/BAH> even when it is the BAT block that's going bad. It used
/BAH> to be called reliability, even when Murphy is painting the
/BAH> town red.

Murphy was too busy fixing bugs to paint any towns :-)

Nothead

Charlie Gibbs

unread,
Oct 18, 2000, 3:00:00 AM10/18/00
to
In article <8sk3o8$esa$1...@bob.news.rcn.net> jmfb...@aol.com
(jmfbahciv) writes:

>Handling catastrophes gracefully is the job of a good OS,
>even when it is the BAT block that's going bad. It used
>to be called reliability, even when Murphy is painting the
>town red.

I call it robustness. Maybe that's my problem - nobody else
seems to have heard of that word these days.

jmfb...@aol.com

unread,
Oct 20, 2000, 3:00:00 AM10/20/00
to
In article <471.326T25...@sky.bus.com>,

"Charlie Gibbs" <cgi...@sky.bus.com> wrote:
>In article <8sk3o8$esa$1...@bob.news.rcn.net> jmfb...@aol.com
>(jmfbahciv) writes:
>
>>Handling catastrophes gracefully is the job of a good OS,
>>even when it is the BAT block that's going bad. It used
>>to be called reliability, even when Murphy is painting the
>>town red.
>
>I call it robustness. Maybe that's my problem - nobody else
>seems to have heard of that word these days.
>
We used that word, too. Hmmm...now I'm wondering if there was
a difference in the meanings.

Charlie Gibbs

unread,
Oct 20, 2000, 3:00:00 AM10/20/00
to
In article <8spblg$rfu$2...@bob.news.rcn.net> jmfb...@aol.com (jmfbahciv)
writes:

I think they're pretty close. I pulled out my Funk & Wagnalls
and it fell open to the page containing "robust" (probably a good
omen): "Possessing or characterized by great strength or endurance;
rugged; healthy." The dictionary defines "reliable" as "that which
may be relied upon; worthy of confidence; trustworthy."

I like to think of a product as reliable if it can be counted on to
do the job for which it was intended. "Robust" suggests to me that
it's particularly resistant to bad input data or interference from
outside sources. For instance, I consider it unacceptable that a
program should crash due to invalid data or weird operator actions;
it might spit out nasty messages, but it should stay up. (Microsoft
obviously feels differently about these things. :-)

Perhaps we could say that a robust program resists what the Firesign
Theatre defined as Fudd's First Law of Opposition: "If you push
something hard enough, it will fall over."

jmfb...@aol.com

unread,
Oct 22, 2000, 3:00:00 AM10/22/00
to
In article <1725.328T1...@sky.bus.com>,

I think it's part of their business plan. Nobody could fuck up
that consistently by accident.


>
>Perhaps we could say that a robust program resists what the Firesign
>Theatre defined as Fudd's First Law of Opposition: "If you push
>something hard enough, it will fall over."
>

A robust program is something that gets back up after it
falls over and continues with no limp.

A reliable program always gets up and limps in the same way.

Robert Harley

unread,
Oct 23, 2000, 3:00:00 AM10/23/00
to

pl...@NO.SPAM.PLEASE (Caveman) writes:
> That's six months without a reboot.
>
> I'd say that's about as reliable as it gets these days, unless you
> demand Enterprise class reliability, and then you're in the multi
> $Million USD arena.

Get real. My three-year old workstation is worth under 1000 bucks.
It's running an ancient version of Alpha Linux (namely Redhat 4.1):

cor: uptime
5:18pm up 394 days, 23:03, 2 users, load average: 2.16, 2.09, 2.03


You should get out more often, Cavedude.

Rob.
.-. Robert...@polytechnique.org .-.
/ \ .-. .-. / \
/ \ / \ .-. _ .-. / \ / \
/ \ / \ / \ / \ / \ / \ / \
/ \ / \ / `-' `-' \ / \ / \
\ / `-' `-' \ /
`-' http://www.xent.com/~harley/ `-'

Jonathan Thornburg

unread,
Oct 23, 2000, 3:00:00 AM10/23/00
to
pl...@NO.SPAM.PLEASE (Caveman) writes:
[[on an old pentium 100]]
# That's six months without a reboot.
#
# I'd say that's about as reliable as it gets these days, unless you
# demand Enterprise class reliability, and then you're in the multi
# $Million USD arena.

No, that's about as reliable as ordinary off-the-shelf el-cheapo PCs
running ordinary off-the-shelf $50 Red Hat Linux get. We've got lots
of em, and they tend to go 6-12 months between reboots. About 40% of
the reboots are voluntary (kernel upgrades & suchlike), 40% are to
correct badly hung software (eg X or gdm) if a sysadmin with the root
password isn't handy, and maybe 20% are genuine crashes. Running any
of the major BSD variants (NetBSD, OpenBSD, FreeBSD, BSDI) you'd
probably do at least a factor of 2 better in reliability.


In article <rz7u2a3...@corton.inria.fr>,
Robert Harley <har...@corton.inria.fr> wrote:
[[3-year-old workstation]]


>cor: uptime
> 5:18pm up 394 days, 23:03, 2 users, load average: 2.16, 2.09, 2.03

Another similar report: My 7-year-old workstation (probably worth
around $100 by now, but I keep it for the nice monitor and some pricey
commercial software), currently has an uptime of only just over 6 months...
but that was because I manually shut it down as a precaution during a
severe thunderstorm this past spring. Apart from a disk that died
shortly before that thunderstorm, I haven't had any actual _crashes_
in a couple of years now.

--
-- Jonathan Thornburg <jth...@thp.univie.ac.at>
http://www.thp.univie.ac.at/~jthorn/home.html
Universitaet Wien (Vienna, Austria) / Institut fuer Theoretische Physik
Q: Only 7 countries have the death penalty for children. Which are they?
A: Congo, Iran, Nigeria, Pakistan[*], Saudi Arabia, United States, Yemen
[*] Pakistan moved to end this in July 2000. -- Amnesty International,
http://www.amnesty.org/ailib/aipub/2000/AMR/25113900.htm

Bill Broadley

unread,
Oct 23, 2000, 3:00:00 AM10/23/00
to
In comp.arch Caveman <pl...@no.spam.please> wrote:
> That's six months without a reboot.

> I'd say that's about as reliable as it gets these days, unless you
> demand Enterprise class reliability, and then you're in the multi
> $Million USD arena.

*chuckle*, that's the first time I've ever heard "reliable as it
gets" with Microsoft.

I have a server that does mostly raid-5 fileserving (in software btw),
as well as some cpu jobs.

[root@be /root]# uptime
10:00am up 387 days, 20:27, 3 users, load average: 1.17, 1.04, 1.01
[root@be /root]# uname -a
Linux cube.math.ucdavis.edu 2.2.12 #3 Fri Oct 1 13:57:15 PDT 1999 i686 unknown

Of course this wouldn't be possible around here without a UPS capable
of a > 1 hour uptime. Just barely squeaked through the last outage.

If I built the same thing today I would have bought a machine with a
redundant power supply, probably $300 more than usual. ECC memory,
quality components, and redundant power supplies seem to hit these
kinds of uptimes easily.

--
Bill

Heinz W. Wiggeshoff

unread,
Oct 23, 2000, 3:00:00 AM10/23/00
to
Jonathan Thornburg (jth...@mach.thp.univie.ac.at) writes:

> Q: Only 7 countries have the death penalty for children.

Come on, you other slacker countries - let's pick up the pace,
get with the program, etc.

Jay Maynard

unread,
Oct 23, 2000, 3:00:00 AM10/23/00
to
On 23 Oct 2000 18:02:23 +0200, Jonathan Thornburg
<jth...@mach.thp.univie.ac.at> wrote:
>pl...@NO.SPAM.PLEASE (Caveman) writes:
># I'd say that's about as reliable as it gets these days, unless you
># demand Enterprise class reliability, and then you're in the multi
># $Million USD arena.

>In article <rz7u2a3...@corton.inria.fr>,
>Robert Harley <har...@corton.inria.fr> wrote:
>>cor: uptime
>> 5:18pm up 394 days, 23:03, 2 users, load average: 2.16, 2.09, 2.03
>Another similar report: My 7-year-old workstation (probably worth
>around $100 by now, but I keep it for the nice monitor and some pricey
>commercial software), currently has an uptime of only just over 6 months...
>but that was because I manually shut it down as a precaution during a
>severe thunderstorm this past spring. Apart from a disk that died
>shortly before that thunderstorm, I haven't had any actual _crashes_
>in a couple of years now.

Yeah, and my Alpha has been running continuously since the last power hit
here.

Still, I won't argue with Caveman's statement.

Enterprise class reliability demands the same kind of uptimes as well as
withstanding a transaction load that would make my poor little Alpha 500
roll over and kick its pins in the air. Any system can last for long periods
of time servicing just one user. (That M$ systems typically don't is yet
another indictment.) Doing it while servicing thousands, if not millions, is
a problem of a whole nuther class.

Thomas Womack

unread,
Oct 23, 2000, 3:00:00 AM10/23/00
to
"Robert Harley" <har...@corton.inria.fr> wrote

>
> pl...@NO.SPAM.PLEASE (Caveman) writes:
> > That's six months without a reboot.
> >
> > I'd say that's about as reliable as it gets these days, unless you
> > demand Enterprise class reliability, and then you're in the multi
> > $Million USD arena.
>
> Get real. My three-year old workstation is worth under 1000 bucks.
> It's running an ancient version of Alpha Linux (namely Redhat 4.1):

> cor: uptime
> 5:18pm up 394 days, 23:03, 2 users, load average: 2.16, 2.09, 2.03

The difference between six months and just-over-a-year is small compared to
uptimes of <1 week, which are as much as I've managed from Win98 boxes; and
the NT box died of hardware failure.

Generally, I'd be startled if a system which stayed up six months died of
software failure. The characteristic feature I notice of uptime wars is that
the uptime is limited either by power failure (whether of the PSU or of the
line input), or by required upgrade of the kernel, or by hardware (generally
hard disc) collapse.

Any protected-memory OS (i.e. anything but Windows 9x and MacOS <X, really)
ought to stay up forever; that's what protected memory is *for*. If you
leave a machine on overnight and come back to a blue screen, it's either a
hardware or a driver problem, and you should give Dell a rude phone call ...
this laptop became vastly more reliable after the motherboard failed and got
replaced under warranty, without any change to the Windows installation. The
argument "oh, but Windows just crashes" has probably saved Dell enough money
that, were I of a conspiratorial frame of mind, I'd claim that they
propagated it.

[would be interesting to know how reliable laptops are seen to be, where
you've got a fixed hardware platform and less potential for driver chaos]

_Word_ has a nasty habit of crashing if you try to work with documents from
floppy disc; accordingly, I tell people not to do that.

Tom

Joe Sixpack

unread,
Oct 23, 2000, 3:00:00 AM10/23/00
to
In article
<1079066EC9D2343F.9CAE333A...@lp.airnews.net>,
jmay...@conmicro.cx wrote:

>On 23 Oct 2000 18:02:23 +0200, Jonathan Thornburg
><jth...@mach.thp.univie.ac.at> wrote:
>>pl...@NO.SPAM.PLEASE (Caveman) writes:
>># I'd say that's about as reliable as it gets these days, unless you
>># demand Enterprise class reliability, and then you're in the multi
>># $Million USD arena.
>>In article <rz7u2a3...@corton.inria.fr>,
>>Robert Harley <har...@corton.inria.fr> wrote:

>>>cor: uptime
>>> 5:18pm up 394 days, 23:03, 2 users, load average: 2.16, 2.09, 2.03

>>Another similar report: My 7-year-old workstation (probably worth
>>around $100 by now, but I keep it for the nice monitor and some pricey
>>commercial software), currently has an uptime of only just over 6 months...
>>but that was because I manually shut it down as a precaution during a
>>severe thunderstorm this past spring. Apart from a disk that died
>>shortly before that thunderstorm, I haven't had any actual _crashes_
>>in a couple of years now.
>
>Yeah, and my Alpha has been running continuously since the last power hit
>here.
>
>Still, I won't argue with Caveman's statement.
>
>Enterprise class reliability demands the same kind of uptimes as well as
>withstanding a transaction load that would make my poor little Alpha 500
>roll over and kick its pins in the air. Any system can last for long periods
>of time servicing just one user. (That M$ systems typically don't is yet
>another indictment.) Doing it while servicing thousands, if not millions, is
>a problem of a whole nuther class.

There are reasons why Microsoft has chosen not to build this kind of reliability
into NT. I strongly suspect that the driving one is that they want to
maintain maximum environment commonality with their game machine version.
(Which is what Win95/98/ME is.) They want the same exact programs to run
on both systems. Buyers of Microsoft's "home use" "operating systems" are
buying software which
_intentionally_ sacrifices reliability so that games work well. Home users
should really configure their systems with two OS's. One for games (if
desired) and the other to support applications they actually rely on.

Sixpack

jmfb...@aol.com

unread,
Oct 24, 2000, 3:00:00 AM10/24/00
to
In article <sixpack-ya0240800...@news.axs4u.net>,

It wasn't a choice.


>I strongly suspect that the driving one is that they want to
>maintain maximum environment commonality with their game machine version.
>(Which is what Win95/98/ME is.) They want the same exact programs to run
>on both systems. Buyers of Microsoft's "home use" "operating systems" are
>buying software which
>_intentionally_ sacrifices reliability so that games work well.

Go over to that rpg newsgroup. Those gamers are the most refreshing
users. _They demand reliability_ and are able to yell a lot when
they don't get it. Your speculation is just plain nonsense.


> Home users
>should really configure their systems with two OS's. One for games (if
>desired) and the other to support applications they actually rely on.

Oh, bullshit. One OS should be able to service all members of
the family using one system at all times.

jmfb...@aol.com

unread,
Oct 24, 2000, 3:00:00 AM10/24/00
to
In article <rz7u2a3...@corton.inria.fr>,
Robert Harley <har...@corton.inria.fr> wrote:
>
>pl...@NO.SPAM.PLEASE (Caveman) writes:
>> That's six months without a reboot.
>>
>> I'd say that's about as reliable as it gets these days, unless you
>> demand Enterprise class reliability, and then you're in the multi
>> $Million USD arena.
>
>Get real. My three-year old workstation is worth under 1000 bucks.
>It's running an ancient version of Alpha Linux (namely Redhat 4.1):
>
>cor: uptime
> 5:18pm up 394 days, 23:03, 2 users, load average: 2.16, 2.09, 2.03
>
Why don't we report years? I know why TOPS-10 never did...hmmm..
I wonder if Conklin included that?

sw...@nol.net

unread,
Oct 24, 2000, 3:00:00 AM10/24/00
to
In alt.folklore.computers jmfb...@aol.com wrote:
> Go over to that rpg newsgroup. Those gamers are the most refreshing
> users. _They demand reliability_ and are able to yell a lot when
> they don't get it. Your speculation is just plain nonsense.

On the other hand, the people on the Action newsgroup demand
performance. They regularly run CPUs and video cards out of spec.

--
Mike Swaim, Avatar of Chaos: Disclaimer:I sometimes lie.
Home: swaim at nol * net Quote: "Boingie"^4 Y,W&D

barnacle

unread,
Oct 24, 2000, 3:00:00 AM10/24/00
to
In article <471.326T25...@sky.bus.com>, "Charlie Gibbs" <cgi...@sky.bus.com> wrote:
>In article <8sk3o8$esa$1...@bob.news.rcn.net> jmfb...@aol.com
>(jmfbahciv) writes:
>
>>Handling catastrophes gracefully is the job of a good OS,
>>even when it is the BAT block that's going bad. It used
>>to be called reliability, even when Murphy is painting the
>>town red.
>
>I call it robustness. Maybe that's my problem - nobody else
>seems to have heard of that word these days.
>

Graceful failure is also important. Some time ago I had to assess a telecoms
unit designed to shove audio at 'better' quality than an analogue line by
using modems and celp(?) coding. The designer had carefully ensured that as
the line quality dropped, the codecs could still keep working with fewer and
fewer bits, until they reached their base limit. Once this point was reached,
the units simply killed the line. They neither tried to re-establish the link
as a modem call, nor set up the faster analogue call - even though (a) all the
necessary hardware and operating modes were already in the unit and (b) the
target audience was radio broadcasters. Broadcasters do *not* like dead air...


--
barnacle

http://www.nailed-barnacle.co.uk

Joe Sixpack

unread,
Oct 24, 2000, 3:00:00 AM10/24/00
to
In article <8t3nr7$9gn$6...@bob.news.rcn.net>, jmfb...@aol.com wrote:

>In article <sixpack-ya0240800...@news.axs4u.net>,
> six...@beerhall.RealLife.edu (Joe Sixpack) wrote:
>>In article
>><1079066EC9D2343F.9CAE333A...@lp.airnews.net>,
>>jmay...@conmicro.cx wrote:
>>
>>>On 23 Oct 2000 18:02:23 +0200, Jonathan Thornburg
>>><jth...@mach.thp.univie.ac.at> wrote:
>>>>pl...@NO.SPAM.PLEASE (Caveman) writes:

>>>># I'd say that's about as reliable as it gets these days, unless you
>>>># demand Enterprise class reliability, and then you're in the multi
>>>># $Million USD arena.


>>>>In article <rz7u2a3...@corton.inria.fr>,
>>>>Robert Harley <har...@corton.inria.fr> wrote:
>>>>>cor: uptime
>>>>> 5:18pm up 394 days, 23:03, 2 users, load average: 2.16, 2.09, 2.03

>Go over to that rpg newsgroup. Those gamers are the most refreshing
>users. _They demand reliability_ and are able to yell a lot when
>they don't get it. Your speculation is just plain nonsense.
>
>

>> Home users
>>should really configure their systems with two OS's. One for games (if
>>desired) and the other to support applications they actually rely on.
>
>Oh, bullshit. One OS should be able to service all members of
>the family using one system at all times.

I agree that it should be so, but not in Microsoft's worldview. A system
that would be suitable for "all members of the family" would have to be
reliable and stable. That implies that it limits access to the hardware to
tried-and-true reliable methods: it would really manage the hardware. It
would have to provide protected memory to each application, etc. The closest
thing Microsoft has to this is Windows NT, which is not a _terrible_ operating
system, at least in those aspects, although it is also not the best. But
you know what? A lot of games don't work with NT, precisely because it
doesn't allow them unfettered access to the audio and video hardware and
memory. That is the only reason why Windows 95, 98 and ME exist.

Sixpack

Ketil Z Malde

unread,
Oct 25, 2000, 1:54:16 AM10/25/00
to
jth...@mach.thp.univie.ac.at (Jonathan Thornburg) writes:

> No, that's about as reliable as ordinary off-the-shelf el-cheapo PCs
> running ordinary off-the-shelf $50 Red Hat Linux get.

(Shrug) It's all anecdotal, of course, but I haven't *needed* to
reboot any of my Linux (el cheapo, mind you) boxes for years. OTOH, a
reboot was the only way (that I found) to fix network hangs in my
brand new non-cheapo NT workstation - until I replaced the fancy
100Mbit card with an old 10Mbit I had lying about.

>> tend to go 6-12 months between reboots. 40% (kernel upgrades &
>> suchlike), 40% [...] if a sysadmin with the root password isn't
>> handy, and maybe 20% are genuine crashes.

> Running any of the major BSD variants (NetBSD, OpenBSD, FreeBSD,
> BSDI) you'd probably do at least a factor of 2 better in
> reliability.

Well, you'd almost get a factor of two by not having the kernel
upgraded, and cut it down to 20% if you have a root person more
available! I think it's a strange case for BSD vs. Linux.

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants

Jan Vorbrueggen

unread,
Oct 25, 2000, 3:00:00 AM10/25/00
to
pl...@NO.SPAM.PLEASE (Caveman) writes:

> As far as I am aware, the record for uptime may well be held by a Nova-1
> that used to sit in DG's MS front office.
> Of course it never did anything useful, but as I recall it ran practically
> forever.

The Irish railroad system (IIRC) apparently has a VAX that has been running
continuously for thirteen (or so) years. DEC^H^H^HCompaq had to update the
SHOW SYSTEM code and a few other odds and ends because of such uptimes.

Jan

David Evans

unread,
Oct 25, 2000, 3:00:00 AM10/25/00
to
Joe Sixpack wrote:
> >
> >Oh, bullshit. One OS should be able to service all members of
> >the family using one system at all times.
>
> I agree that it should be so, but not in Microsoft's worldview. A
> system that would be suitable for "all members of the family" would > > have to be reliable and stable. That implies that it limits access to > the hardware to tried-and-true reliable methods: it would really > manage the hardware. It would have to provide protected memory to each > application, etc. The closest thing Microsoft has to this is Windows > NT, which is not a _terrible_ operating system, at least in those > aspects, although it is also not the best. But you know what? A lot > of games don't work with NT, precisely because it doesn't allow them > unfettered access to the audio and video hardware and memory. That is > the only reason why Windows 95, 98 and ME exist.
>
> Sixpack

I'm currently running a triple boot OS system (NT, Linux-Mandrake 7.0 &
Win 98 - soon to be Me) and I agree that most games don't run under
NT.
I was, however, under the impression that this is mostly because Me is
seen as a "Gamer's Platform" (whatever that might mean) as opposed to
NT/Win 2k. Now, with NT, and certainly with Win 2k, both are certainly
_*capable*_ of running happily most games, even if written for Win 98/Me
were it not for that fact that most *games* are programmed such that the
installation or main engine software takes one look at the OS structure
and says, "OHMYGOD, thou art an OS of the evil Windows NT ilk, and I
shall not, will not, NAY cannot run under thy strictures!" pureply due
to the way that programs designed to work under Win 98/Me won't run
under Win NT/2k and vice versa, despite the fact that 2k Me probably
could were it not for incompatible coding on the part of the program.

My twopenneth...

David.


David Evans

unread,
Oct 25, 2000, 3:00:00 AM10/25/00
to
Caveman wrote:
> >
> The only point I was trying to make is that a properly maintained
> NT4.0 box can be about as reliable as anything short of ACP/TPF.
>

The important words here are "properly maintained". ANY Operating
System if "properly maintained" will last for a long time - even Win
95/98/Me - although some are more time intensive than others. :-)

> Not that I even like NT, it's just that it's absurd to deny the fact.
> And unlike any present *NIX it can be maintained by a trained monkey.

I'm not too fond of it either; I find doing simple things very difficult
on it, although I have no formal training or experience with it and that
might explain a few things. :-) A Trained Monkey? I doubt that though,
although I've heard that Win 2k is *far* more user and SysAdmin friendly
than NT is. While *NIX systems have a steeper learning curve, they
tend to take care of themselves better once you've ironed out the
initial creases, I've found. Swings and roundabouts I suppose...

> Simply trying to patch most present versions of UNIX is an exercise
> on the level of masturbating with a cheese grater.
>

Now THAT has just thrown me into a *very* scary visual place that I
really, *really*, REALLY didn't want to go to...

*sheesh!*


See you...

David.


Nikita V. Belenki

unread,
Oct 25, 2000, 3:00:00 AM10/25/00
to
"Jan Vorbrueggen" <j...@mailhost.neuroinformatik.ruhr-uni-bochum.de> wrote in
message news:y4pukpz...@mailhost.neuroinformatik.ruhr-uni-bochum.de...

> pl...@NO.SPAM.PLEASE (Caveman) writes:
>
> > As far as I am aware, the record for uptime may well be held by a Nova-1
> > that used to sit in DG's MS front office.
> > Of course it never did anything useful, but as I recall it ran
> > practically forever.
> The Irish railroad system (IIRC) apparently has a VAX that has been
> running continuously for thirteen (or so) years.

Eighteen (according to the "OpenVMS [...] for Dummies").

Kit.
kit # kits.net

jmfb...@aol.com

unread,
Oct 25, 2000, 3:00:00 AM10/25/00
to
In article <8t4l1i$26a$1...@uranium.btinternet.com>,

nailed_barn...@hotmail.com (barnacle) wrote:
>In article <471.326T25...@sky.bus.com>, "Charlie Gibbs"
<cgi...@sky.bus.com> wrote:
>>In article <8sk3o8$esa$1...@bob.news.rcn.net> jmfb...@aol.com
>>(jmfbahciv) writes:
>>
>>>Handling catastrophes gracefully is the job of a good OS,
>>>even when it is the BAT block that's going bad. It used
>>>to be called reliability, even when Murphy is painting the
>>>town red.
>>
>>I call it robustness. Maybe that's my problem - nobody else
>>seems to have heard of that word these days.
>>
>
>Graceful failure is also important.

Oh, yes! One of the reasons Billy's on my hit list is that, every
time this damn program displays that BSOD, it takes a minute
out to tromp all over the FAT...I presume because the idiots
think they still can write to disk with no error even though
the OS has taken the error route.
<snip dead air aversion>

AndyC

unread,
Oct 25, 2000, 3:00:00 AM10/25/00
to

On Wed, 25 Oct 2000, David Evans wrote:

> I was, however, under the impression that this is mostly because Me is
> seen as a "Gamer's Platform" (whatever that might mean) as opposed to
> NT/Win 2k. Now, with NT, and certainly with Win 2k, both are certainly
> _*capable*_ of running happily most games, even if written for Win 98/Me
> were it not for that fact that most *games* are programmed such that the
> installation or main engine software takes one look at the OS structure
> and says, "OHMYGOD, thou art an OS of the evil Windows NT ilk, and I
> shall not, will not, NAY cannot run under thy strictures!" pureply due
> to the way that programs designed to work under Win 98/Me won't run
> under Win NT/2k and vice versa, despite the fact that 2k Me probably
> could were it not for incompatible coding on the part of the program.

Not quite true. NT can't run versions of DirectX above v3, so lots of
games simply can't run on it. 2000 *should* run most of them but not all,
either because the installers complain or, in some cases, because things
just don't quite work the same as 9x. Of course, given time these sorts of
issues will become less relevant (which is what M$ want, to finally kill
off 9x).

AndyC


jmfb...@aol.com

unread,
Oct 25, 2000, 3:00:00 AM10/25/00
to
In article <39F6ADEC...@student.paisley.ac.uk>,

David Evans <de00...@student.paisley.ac.uk> wrote:
>Joe Sixpack wrote:
>> >
>> >Oh, bullshit. One OS should be able to service all members of
>> >the family using one system at all times.
>>

Somebody's got a bug....I'm not going to reformat.

>> I agree that it should be so, but not in Microsoft's worldview. A
>> system that would be suitable for "all members of the family" would > >
have to be reliable and stable. That implies that it limits access to >
the hardware to tried-and-true reliable methods: it would really > manage
the hardware. It would have to provide protected memory to each >
application, etc. The closest thing Microsoft has to this is Windows > NT,
which is not a _terrible_ operating system, at least in those > aspects,
although it is also not the best. But you know what? A lot > of games
don't work with NT, precisely because it doesn't allow them > unfettered
access to the audio and video hardware and memory. That is > the only
reason why Windows 95, 98 and ME exist.
>>
>> Sixpack
>
>I'm currently running a triple boot OS system (NT, Linux-Mandrake 7.0 &
>Win 98 - soon to be Me) and I agree that most games don't run under
>NT.

>I was, however, under the impression that this is mostly because Me is
>seen as a "Gamer's Platform" (whatever that might mean)

It's a hint about the way Microsoft will split up if it ever does.

> ... as opposed to


>NT/Win 2k. Now, with NT, and certainly with Win 2k, both are certainly
>_*capable*_ of running happily most games, even if written for Win 98/Me
>were it not for that fact that most *games* are programmed such that the
>installation or main engine software takes one look at the OS structure
>and says, "OHMYGOD, thou art an OS of the evil Windows NT ilk, and I
>shall not, will not, NAY cannot run under thy strictures!" pureply due
>to the way that programs designed to work under Win 98/Me won't run
>under Win NT/2k and vice versa, despite the fact that 2k Me probably
>could were it not for incompatible coding on the part of the program.

<grin> Those are famous last words "...probably could if it were not
for the incompatible coding.."
>
>My twopenneth...

I do admit I've never seen an OS say OHMYGOD.

jmfb...@aol.com

unread,
Oct 25, 2000, 3:00:00 AM10/25/00
to
In article <KETIL-vk1d...@eris.bgo.nera.no>,
Ketil Z Malde <ke...@ii.uib.no> wrote:
>jth...@mach.thp.univie.ac.at (Jonathan Thornburg) writes:

<snip>

>> Running any of the major BSD variants (NetBSD, OpenBSD, FreeBSD,
>> BSDI) you'd probably do at least a factor of 2 better in
>> reliability.
>
>Well, you'd almost get a factor of two by not having the kernel
>upgraded, and cut it down to 20% if you have a root person more
>available! I think it's a strange case for BSD vs. Linux.

Why do you think it's strange? That is the history of
Unix. JMF had to do an evaluation of BSD and System V before
they chose the one to use to SMP Unix. BSD, IIRC, could
do networking very, very well and disk throughput so-so.
System V could do disk throughput very, very well and
networking so-so. I had always presumed that one of
the reasons Unix diverged was because a group picked up
a set of sources and then concentrated their work on a
particular area to the point that the changes couldn't
be merged back into the set of sources that traveled
down another branch of expertise.

Source management is a full time job and requires
engineering expertise.

Mike Meredith at home

unread,
Oct 25, 2000, 3:00:00 AM10/25/00
to
In article <8t6mdi$e21$4...@bob.news.rcn.net>,

jmfb...@aol.com writes:
> In article <KETIL-vk1d...@eris.bgo.nera.no>,
> Ketil Z Malde <ke...@ii.uib.no> wrote:
>>jth...@mach.thp.univie.ac.at (Jonathan Thornburg) writes:
>
> <snip>
>
>>> Running any of the major BSD variants (NetBSD, OpenBSD, FreeBSD,
>>> BSDI) you'd probably do at least a factor of 2 better in
>>> reliability.
>>
>>Well, you'd almost get a factor of two by not having the kernel
>>upgraded, and cut it down to 20% if you have a root person more
>>available! I think it's a strange case for BSD vs. Linux.
>
> Why do you think it's strange? That is the history of
> Unix. JMF had to do an evaluation of BSD and System V before
> they chose the one to use to SMP Unix. BSD, IIRC, could
> do networking very, very well and disk throughput so-so.
> System V could do disk throughput very, very well and
> networking so-so.

Note that Linux isn't based on either System V code or BSD code
(although it could have picked up some BSD code along the way).
Most Unix o/ses have acquired their networking code to a greater
or lesser extent from the BSD networking code (sometimes by way
of Lachmann & Associates), whereas the Linux code has been
written from scratch.

> I had always presumed that one of
> the reasons Unix diverged was because a group picked up
> a set of sources and the concentrated their work on a
> particular area to the point that the changes couldn't
> be merged back into the set of sources that traveled
> down another branch of expertise.

Source management could be one of the reasons why SystemIII/V and
BSD diverged, but there's a number of other possible reasons too.
For instance the management at AT&T (or whatever they were
calling themselves at the time) may have been reluctant to use
BSD code (although some code made it); they were also very
interested in pushing the business uses of Unix.

And of course source code management across a minimum of 2 groups
on separate coasts would have been ... interesting.

I wonder how much BSD code made it into Research Unix.

> Source management is a full time job and requires
> engineering expertise.

Judging by the number of source code management software packages
used by the crew on the floor above me (0), you can't say that
often enough.

Jim Thomas

unread,
Oct 25, 2000, 3:00:00 AM10/25/00
to
>>>>> "Caveman" == Caveman <pl...@NO.SPAM.PLEASE> writes:

Caveman> Well, it ain't ACP/TPF, not even OS/390, but my only point in
Caveman> posting it is that it's ridiculous for people to claim that you
Caveman> can't keep a well administered NT4.0 box up just as long as you
Caveman> can any other protected memory system, as I recall one other
Caveman> poster to this thread stated better than I did. So long as the
Caveman> hardware works and the power stays on, it should run, and if you
Caveman> have an errant process, you should be able to kill and restart it
Caveman> without rebooting the OS.

Maybe it should, but ... We just had a virus exercise, and I was running
Norton's scan on SP4, using Hummingbird Maestro to look at disks mounted
from Solaris 2.7 and HP-UX 10.20. For reasons of various bugs I could hang
Norton (task not responding). When I killed the process it would not
restart without a reboot first. I could also get it to crash NT itself

:-(

ob a.f.c: I have an HP 835 that I reboot once a year when I clean the
filters :-)

Joe Sixpack

unread,
Oct 25, 2000, 3:00:00 AM10/25/00
to
In article <39F6B12C...@student.paisley.ac.uk>, David Evans
<de00...@student.paisley.ac.uk> wrote:

>Caveman wrote:
>> >
>> The only point I was trying to make is that a properly maintained
>> NT4.0 box can be about as reliable as anything short of ACP/TPF.
>>
>
>The important words here are "properly maintained". ANY Operating
>System if "properly maintained" will last for a long time - even Win
>95/98/Me - although some are more time intensive than others. :-)

I dispute this. Windows 95/98/ME, and Mac OS's through OS9 do not use
protected memory and therefore cannot protect the system from application
errors. They are inherently incapable of providing a reliable platform
for general use computing, much less the development of new code.

Sixpack

Joe Sixpack

unread,
Oct 25, 2000, 3:00:00 AM10/25/00
to

It does seem like displaying the screen and then halting would be a
safer response, eh?

Sixpack

Joe Sixpack

unread,
Oct 25, 2000, 3:00:00 AM10/25/00
to
In article <Pine.OSF.4.05.100102...@cpca7.uea.ac.uk>,
AndyC <a96...@uea.ac.uk> wrote:

>On Wed, 25 Oct 2000, David Evans wrote:
>

>> I was, however, under the impression that this is mostly because Me is
>> seen as a "Gamer's Platform" (whatever that might mean) as opposed to
>> NT/Win 2k. Now, with NT, and certainly with Win 2k, both are certainly
>> _*capable*_ of running happily most games, even if written for Win 98/Me
>> were it not for that fact that most *games* are programmed such that the
>> installation or main engine software takes one look at the OS structure
>> and says, "OHMYGOD, thou art an OS of the evil Windows NT ilk, and I
>> shall not, will not, NAY cannot run under thy strictures!" pureply due
>> to the way that programs designed to work under Win 98/Me won't run
>> under Win NT/2k and vice versa, despite the fact that 2k Me probably
>> could were it not for incompatible coding on the part of the program.
>

>Not quite true. NT can't run versions of DirectX above v3, so lots of
>games simply can't run on it. 2000 *should* run most of them but not all,
>either because the installers complain or, in some cases, because things
>just don't quite work the same as 9x. Of course, given time these sorts of
>issues will become less relevant (which is what M$ want, to finally kill
>off 9x).

If that's what they wanted, they wouldn't have introduced ME. Or Win98 for
that matter. Heck, they had NT first, which is a better system. Why
continue to upgrade an operating system they don't want to continue???

Sixpack

Dennis Ritchie

unread,
Oct 25, 2000, 10:28:50 PM10/25/00
to

Mike Meredith at home wrote (I snip much):
> ...


> Source management could be one of the reasons why SystemIII/V and
> BSD diverged, but there's a number of other possible reasons too.
> For instance the management at AT&T (or whatever they were
> calling themselves at the time) may have been reluctant to use
> BSD code (although some code made it); they were also very
> interested in pushing the business uses of Unix.
>

No, source management had nothing to do with this divergence,
it was politics and technical goals. AT&T's USG and then its computer
business were suspicious of incorporating things from college
students in a business product pushed mainly in the Bell System
prior to 1984, then fuller-out as a commercial thing after divestiture.
UCB's CSRG wanted to do their own things, some of which in the
end were more important, like TCP/IP, and earlier adoption of
virtual memory on the VAX. They also really wanted to stay with
the Unix 32V license for their clients (universities, ARPA contractors,
then the workstation companies)--the cost of System III and V was going
up. In the early 80's they would (at least officially) refuse even
to read System III manuals: they were on their own path.

> And of course source code management across a minimum of 2 groups
> on seperate coasts would have been ... interesting.

True, but that wasn't the issue.

> I wonder how much BSD code made it into Research Unix.

Research Unix 8th Edition started from (I think) BSD 4.1c, but
with enormous amounts scooped out and replaced by our own stuff.
This continued with 9th and 10th. The ordinary user command-set
was, I guess, a bit more BSD-flavored than SysVish, but it
was pretty eclectic.

There's an enormously complicated story about what transpired between
then and now, but suffice it to say that lack of source code
control fails to explain the existence of the current *BSDs, Linux,
Solaris, IRIX, AIX, HP/UX, the SCO/Caldera offerings and on and on.
Unix in the 80s wasn't open software in the modern sense, but it
was an approximation, and its history shows some of the difficulties
attending thereto.

Dennis



Alexander Viro

unread,
Oct 26, 2000, 1:06:05 AM10/26/00
to
In article <39F796E2...@bell-labs.com>,
Dennis Ritchie <d...@bell-labs.com> wrote:

>Research Unix 8th Edition started from (I think) BSD 4.1c, but
>with enormous amounts scooped out and replaced by our own stuff.
>This continued with 9th and 10th. The ordinary user command-set
>was, I guess, a bit more BSD-flavored than SysVish, but it
>was pretty eclectic.
>
>There's an enormously complicated story about what transpired between
>then and now, but suffice it to say that lack of source code
>control fails to explain the existence of the current *BSDs, Linux,
>Solaris, IRIX, AIX, HP/UX, the SCO/Caldera offerings and on and on.
>Unix in the 80s wasn't open software in the modern sense, but it
>was an approximation, and its history shows some of the difficulties
>attending thereto.
>
> Dennis

Is there any chance to see the v8--v10 source? Reading v5 and pristine v7
was wonderful, how about the sequels? ;-) Seriously, I would certainly love
to see that branch and I'm pretty sure that I'm not alone at that.

BTW, what happened with the nasty stuff? Specifically, rename()/mmap()/
truncate()/symlink() and their semantics. They didn't make it into
Plan 9 (rename did, but in much, erm, sanitised version), but then
link() also didn't... Did any of those go into v8?

--
"You're one of those condescending Unix computer users!"
"Here's a nickel, kid. Get yourself a better computer" - Dilbert.

Ketil Z Malde

unread,
Oct 26, 2000, 2:01:30 AM10/26/00
to
pl...@NO.SPAM.PLEASE (Caveman) writes:

> The only reason I ever take down either my ancient Solaris 2.4 box that
> I'm typing this on, or that old NT box, is to patch it up to date once
> in a while to try to stay ahead of the script kiddies.

IME, with NT you must be careful what hardware you choose, and what
applications you run. The stablest NT I've ever run, was under VMWare
(which simply provides a very standardized network and VGA card), my
workstation BSODs once in a while. Perhaps I need a trained monkey
instead of an IT department? :-)

Of course, if uptimes are meaningful to you, you might look at systems
you can upgrade without a reboot. After all, most script-kiddie
protection comes from applications and libraries, not from the kernel.

And with some more modularity of the Linux kernel (stable interfaces,
really, the rest is there already), you should be able to upgrade
components like drivers without rebooting, too.

-kzm (who just upgraded from Debian 2.1 to 2.2 - without rebooting)

Enrico Badella

unread,
Oct 26, 2000, 3:00:00 AM10/26/00
to

Jim Thomas wrote:
>
> Maybe it should, but ... We just had a virus exercise, and I was running
> Norton's scan on SP4, using Hummingbird Maestro to look at disks mounted
> from Solaris 2.7 and HP-UX 10.20. For reasons of various bugs I could hang
> Norton (task not responding). When I killed the process it would not
> restart without a reboot first. I could also get it to crash NT itself

But if NT is a real OS, why is there a need for a virus scanner? 8-)

e.

========================================================================
Enrico Badella email: enrico....@softstar.it
Soft*Star srl e...@vax.cnuce.cnr.it
InterNetworking Specialists tel: +39-011-746092
Via Camburzano 9 fax: +39-011-746487
10143 Torino, Italy

Wanted, for hobbyist use, any type of PDP and microVAX hardware,software,
manuals,schematics,etc. and DEC-10 docs or manuals
==========================================================================

dls2

unread,
Oct 26, 2000, 3:00:00 AM10/26/00
to
"Joe Sixpack" <six...@beerhall.RealLife.edu> wrote:
> >Not quite true. NT can't run versions of DirectX above v3, so lots of
> >games simply can't run on it. 2000 *should* run most of them but not all,
> >either because the installers complain or, in some cases, because things
> >just don't quite work the same as 9x. Of course, given time these sorts
> >of issues will become less relevant (which is what M$ want, to finally kill
> >off 9x).
>
> If that's what they wanted, they wouldn't have introduced ME. Or Win98
> for that matter. Heck, they had NT first, which is a better system. Why
> continue to upgrade an operating system they don't want to continue???

Because before NT, there was DOS, which was, and, in its Win95/98
incarnations, continued to be, a cash cow.


-- Derrick Shearer

Enrico Badella

unread,
Oct 26, 2000, 3:00:00 AM10/26/00
to

Alexander Viro wrote:
>
> Is there any chance to see the v8--v10 source? Reading v5 and pristine v7
> was wonderful, how about the sequels? ;-) Seriously, I would certainly love

It would be nice to have them with a commentary like the Lions book. A long
time ago I worked with S5R2 source and it wasn't that fun to read; however
I really appreciated reading my bootleg copy of the Lions book while sick with
chicken pox!...

AndyC

unread,
Oct 26, 2000, 3:00:00 AM10/26/00
to

On Wed, 25 Oct 2000, Joe Sixpack wrote:

> If that's what they wanted, they wouldn't have introduced ME. Or Win98 for
> that matter. Heck, they had NT first, which is a better system. Why
> continue to upgrade an operating system they don't want to continue???

Because NT wouldn't run most DOS (and later DirectX) software, most
notably games which lots of people were interested in. Since the
release of 95, the aim has been to converge the two operating systems
into a single entity.

2000 was supposed to finally see the end of the 9x line but after delaying
it for ages M$ finally decided to release ME as a stopgap until the
"consumer" version of 2000 was ready.

It is highly unlikely that there will be any further versions of Windows
based on the 9x core.

AndyC


Nikita V. Belenki

unread,
Oct 26, 2000, 3:00:00 AM10/26/00
to
"AndyC" <a96...@uea.ac.uk> wrote in message
news:Pine.OSF.4.05.100102...@cpca7.uea.ac.uk...

> It is highly unlikely that there will be any further versions of Windows
> based on the 9x core.

It was "highly unlikely" after the release of *each* version of Win9X.

Kit.
kit # kits.net

jmfb...@aol.com

unread,
Oct 26, 2000, 4:31:42 AM10/26/00
to

I interpret it as arrogance. This is an attitude typical of
those COBOL programmers who wrote a 100-line program and declared
themselves experts in all of computing.

AndyC

unread,
Oct 26, 2000, 7:48:42 AM10/26/00
to

Well, yes, because M$ didn't think it'd be so difficult to bring them
together. The difference (as far as compatibility goes) between ME and
2000 is so small that I would be surprised to see another 9x.

AndyC

sw...@nol.net

unread,
Oct 26, 2000, 10:09:03 AM10/26/00
to
In alt.folklore.computers David Evans <de00...@student.paisley.ac.uk> wrote:
> I was, however, under the impression that this is mostly because Me is
> seen as a "Gamer's Platform" (whatever that might mean) as opposed to
> NT/Win 2k. Now, with NT, and certainly with Win 2k, both are certainly
> _*capable*_ of running happily most games,

You're assuming that the NT version of Win32 is the same as the Win98
version. (or at least that it's a superset.) A decent percentage of recent
games require later versions of ActiveX than NT supports. Win2k does a
better job at supporting ActiveX, and is actually semiuseful as a gaming
platform.

--
Mike Swaim, Avatar of Chaos: Disclaimer:I sometimes lie.
Home: swaim at nol * net Quote: "Boingie"^4 Y,W&D

sw...@nol.net

unread,
Oct 26, 2000, 10:20:11 AM10/26/00
to
In alt.folklore.computers Joe Sixpack <six...@beerhall.reallife.edu> wrote:
> If that's what they wanted, they wouldn't have introduced ME. Or Win98 for
> that matter. Heck, they had NT first, which is a better system. Why
> continue to upgrade an operating system they don't want to continue???

1) Revenue. They aren't giving Win98/ME away for free.
2) NT ran poorly on many home machines. (Not enough RAM or CPU
horsepower.)
3) NT hated DOS games.
4) NT couldn't handle plug n' play. (PCMCIA/USB)
5) Upgrading ActiveX was nigh impossible.

Much of this is fixed on Windows 2000.

Eric Smith

unread,
Oct 26, 2000, 6:23:54 PM10/26/00
to
six...@beerhall.RealLife.edu (Joe Sixpack) writes:
> I dispute this. Windows 95/98/ME, and Mac OS's through OS9 do not use
> protected memory and therefore cannot protect the system from application
> errors.

Windows 95, 98, and ME most certainly *do* use protected memory. They're
buggy despite that. For instance, they don't validate many of the
arguments passed to system calls. The "debug" builds they make available
to developers do. C.A.R. Hoare in his 1980 Turing Award Lecture said
something about leaving argument and bounds checking out of production
software being like practicing with a life jacket on shore, then not taking
the jacket on the boat because it's too heavy. (Anyone have the exact
quote?)
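
A minimal sketch of the kind of checking meant here (the call and its
limits are made up, not any real Windows API): the argument validation
stays in the production build rather than living only in the debug build.

/* Sketch only: validate arguments at the system-call boundary. */
#include <stdio.h>
#include <stddef.h>
#include <errno.h>

#define MAX_TRANSFER 4096

/* A made-up "system call": copy caller data into a device buffer. */
static int sys_write_device(const void *buf, size_t len)
{
    if (buf == NULL)                      /* reject NULL pointers outright */
        return -EINVAL;
    if (len == 0 || len > MAX_TRANSFER)   /* bounds-check the length */
        return -EINVAL;
    /* ... real work would go here ... */
    return (int)len;
}

int main(void)
{
    char data[16] = "hello";
    printf("good call:  %d\n", sys_write_device(data, sizeof data));
    printf("NULL buf:   %d\n", sys_write_device(NULL, sizeof data));
    printf("huge len:   %d\n", sys_write_device(data, 1u << 20));
    return 0;
}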

The other problem is that even if you *do* have protected memory, it
doesn't help you if you're willing to install device drivers, VxDs, etc.
from any random source. Every time someone asks me to help them figure
out Windows problems, I'm astonished at the amount of extra crap they've
installed. And yet they wonder why the system isn't stable.

I've very nearly eliminated the use of MS Windows from my life. I
use it at work to read my email, since that's done via MS Exchange and
often has Word and Excel attachments. But I'll probably switch to
Staroffice for that.

Joe Sixpack

unread,
Oct 27, 2000, 12:14:34 AM10/27/00
to
In article <39F7E137...@softstar.it>, Enrico Badella
<enrico....@softstar.it> wrote:

>Jim Thomas wrote:
>>
>> Maybe it should, but ... We just had a virus exercise, and I was running
>> Norton's scan on SP4, using Hummingbird Maestro to look at disks mounted
>> from Solaris 2.7 and HP-UX 10.20. For reasons of various bugs I could hang
>> Norton (task not responding). When I killed the process it would not
>> restart without a reboot first. I could also get it to crash NT itself
>
>But if NT is e real OS, why is there a need for a virus scanner? 8-)

Because there is no such thing as an operating system that cannot be
damaged by a user who does not take security precautions.

Sixpack


Joe Sixpack

unread,
Oct 27, 2000, 12:18:11 AM10/27/00
to
In article <STSJ5.1821$LX4....@sjc-read.news.verio.net>, "Nikita V.
Belenki" <k...@nospam.net> wrote:

>"AndyC" <a96...@uea.ac.uk> wrote in message
>news:Pine.OSF.4.05.100102...@cpca7.uea.ac.uk...
>
>> It is highly unlikely that there will be any further versions of Windows
>> based on the 9x core.
>
>It was "highly unlikely" after the release of *each* version of Win9X.

Yep. Every couple years Microsoft feels the need to publish a new version
so they can rake in a few more billion. Besides, they really *can't*
make NT into a platform for high-performance games because they can't keep
ahead of the game technology. They just provide a consumer system that
allows direct access to the hardware.

Sixpack


Enrico Badella

unread,
Oct 27, 2000, 4:11:09 AM10/27/00
to

Eric Smith wrote:
>
> Windows 95, 98, and ME most certainly *do* use protected memory. They're
> buggy despite that. For instance, they don't validate many of the
> arguments passed to system calls. The "debug" builds they make available
> to developers do. C.A.R. Hoare in his 1980 Turing Award Lecture said
> something about leaving argument and bounds checking out of production
> software being like practicing with a life jacket on shore, then not taking
> the jacket on the boat because it's too heavy. (Anyone have the exact

Great quote! Got to remember it

> The other problem is that even if you *do* have protected memory, it
> doesn't help you if you're willing to install device drivers, VxDs, etc.
> from any random source. Every time someone asks me to help them figure
> out Windows problems, I'm astonished at the amount of extra crap they've
> installed. And yet they wonder why the system isn't stable.

It is not only the crap, it is the basic bugginess of m$ products. I never
install junk except needed stuff. Lately I added a Sound Blaster to my
NT 4 SP5 box using drivers from the m$ CD. Now I randomly get BSODs, and Acrobat
won't create good PostScript anymore; unbelievable; I have to use Acrobat from
my Sparc box.

You can probably understand the quality of the OS and company by looking
at how the installed stuff is organized. In NT you have a single
directory, c:\winNT, filled with all sorts of files, a complete mess. Haven't
these guys ever looked at real operating systems?

cheers

Nikita V. Belenki

unread,
Oct 27, 2000, 7:26:18 AM10/27/00
to

The main difference (AFAIK) between ME and 2000 is in the:
1. license prices;
2. hardware requirements;
3. drivers compatibility
(http://www.microsoft.com/hwdev/driver/wdmtruth.htm).

Yes, it is smaller than between Win95 and NT 4.0, but it still makes sense
for the home user. See, for example:
http://www.3dfxgamers.com/drivers/latest_drivers2.stm

Kit.
kit # kits.net

sw...@nol.net

unread,
Oct 27, 2000, 7:42:27 AM10/27/00
to
Enrico Badella <enrico....@softstar.it> wrote:

> But if NT is a real OS, why is there a need for a virus scanner? 8-)

Word/Outlook virii.

jmfb...@aol.com

unread,
Oct 27, 2000, 5:57:42 AM10/27/00
to
In article <39F9389D...@softstar.it>,

Yes. But it blinded them.

jmfb...@aol.com

unread,
Oct 27, 2000, 5:59:44 AM10/27/00
to
In article <DSdK5.95949$bI6.3...@news1.giganews.com>,

sw...@nol.net wrote:
>Enrico Badella <enrico....@softstar.it> wrote:
>
>> But if NT is a real OS, why is there a need for a virus scanner? 8-)
>
>Word/Outlook virii.
>
I always got the sense that the backdoors were put there
on purpose for data gathering.