
Coding style - a non-issue


Peter Waltenberg

Nov 28, 2001, 6:40:10 PM
The problem was solved years ago.

"man indent"

Someone who cares, come up with an indentrc for the kernel code, and get it
into Documentation/CodingStyle
If the maintainers run all new code through indent with that indentrc
before checkin, the problem goes away.
The only one who'll incur any pain then is a code submitter who didn't
follow the rules. (Exactly the person we want to be in pain ;)).
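For what it's worth, here's a rough sketch of the kind of invocation such
an indentrc would capture. These are standard GNU indent options that come
close to the style described in Documentation/CodingStyle, but the exact
flag set is a guess rather than anything official, and mydriver.c is just
a made-up example file:

    indent -npro -kr -i8 -ts8 -l80 -sob -ncs drivers/net/mydriver.c

The same flags could simply live in a .indent.pro shipped alongside
Documentation/CodingStyle, so that a plain "indent file.c" run by a
maintainer picks them up automatically.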


Then we can all get on with doing useful things.

Cheers
Peter


Matthias Andree

Nov 28, 2001, 9:00:17 PM
On Wed, 28 Nov 2001, Alexander Viro wrote:

> Al, -><- close to setting up a Linux Kernel Hall of Shame - one with names of
> wankers (both individual and corporate ones) responsible, their code and
> commentary on said code...

Oh, can I vote for someone spoiling things outside the kernel? I have a
candidate for that one.

Seriously, don't waste your *p_time on that unless people resist any
hints about CodingStyleIsForLameAssHackersIWantMyEDIT.COM for an extended
amount of *p_time.

--
Matthias Andree

"They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety." Benjamin Franklin

Henning Schmiedehausen

Nov 30, 2001, 12:20:08 PM
On Fri, 2001-11-30 at 17:47, Jeff Garzik wrote:

Hi,

> The security community has shown us time and again that public shaming
> is often the only way to motivate vendors into fixing security
> problems. Yes, even BSD security guys do this :)
>
> A "Top 10 ugliest Linux kernel drivers" list would probably provide
> similar motivation.

A security issue is a universally accepted problem that most of the time
has a reason and a solution.

Coding style, however, is a very personal thing. It starts with "shall
we use TABs or not?" (Jakarta: No. Linux: Yes ...), goes on to "is a
preprocessor macro a good thing or not?", and ends up at variable names
(Al Viro: names with more than five letters suck. :-) Java:
non-self-descriptive names suck. Microsoft: non-Hungarian names suck)
and so on.

And you really want to judge code just because someone likes to wrap
code in preprocessor macros or use UPPERCASE variable names?

Come on. That's a _fundamentally_ different issue than dipping vendors in
their own shit if they messed up and their box/program has a security
issue. Code that you consider ugly as hell may be seen as "easily
understandable and maintainable" by the author. If it works and has no
bugs, so what? So what if it is hard for you and me to understand (cf. the
"mindboggling unwind routines in the NTFS", as I think Jeff Merkey put it)?
It still seems to work quite well.

Are you willing to judge the "ugliness" of kernel drivers? What is ugly? Are
Donald Becker's drivers ugly just because they use (at least on 2.2)
their own PCI helper library? Is the aic7xxx driver ugly because it
needs libdb? Or is ugly defined as "Larry and Al don't like them"? :-)

Flaming about coding style is about as pointless as flaming someone
because he supports another sports team. There is no universally accepted
coding style. Not even in C.

Regards
Henning


--
Dipl.-Inf. (Univ.) Henning P. Schmiedehausen -- Geschaeftsfuehrer
INTERMETA - Gesellschaft fuer Mehrwertdienste mbH h...@intermeta.de

Am Schwabachgrund 22 Fon.: 09131 / 50654-0 in...@intermeta.de
D-91054 Buckenhof Fax.: 09131 / 50654-20

Larry McVoy

Nov 30, 2001, 12:30:11 PM
On Fri, Nov 30, 2001 at 06:15:28PM +0100, Henning Schmiedehausen wrote:
> On Fri, 2001-11-30 at 17:47, Jeff Garzik wrote:
> > The security community has shown us time and again that public shaming
> > is often the only way to motivate vendors into fixing security
> > problems. Yes, even BSD security guys do this :)
> >
> > A "Top 10 ugliest Linux kernel drivers" list would probably provide
> > similar motivation.
>
> A security issue is a universally accepted problem that most of the time
> has a reason and a solution.
>
> And you really want to judge code just because someone likes to wrap
> code in preprocessor macros or use UPPERCASE variable names?

Henning, in any long lived source base, coding style is crucial. People
who think that coding style is personal are just wrong. Let's compare,
shall we?

Professional: the coding style at this job looks like XYZ, ok, I will now
make my code look like XYZ.

Amateur: my coding style is better than yours.

I think that if you ask around, you'll find that the pros use a coding
style that isn't theirs, even when writing new code. They have evolved
to use the prevailing style in their environment. I know that's true for
me, my original style was 4 space tabs, curly braces on their own line,
etc. I now code the way Bill Joy codes, fondly known as Bill Joy normal
form.

Anyway, if you think any coding style is better than another, you completely
miss the point. The existing style, whatever it is, is the style you use.
I personally despise the GNU coding style but when I make changes there,
that's what I use because it is their source base, not mine.
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

Alan Cox

Nov 30, 2001, 12:50:09 PM
> irritate the often so-called "maintainer". Two experiences:
> ftape and mcd I'm through....

I timed the mcd maintainer out and tidied it anyway. I figured since it
wasn't being maintained nobody would scream too loudly - nobody has.

> BTW: ftape (for the Pascal emulation) and DAC960

ftape is an awkward one. Really the newer ftape4 wants merging into the
kernel, but that should have happened a long time ago.

> serial.c is another one for the whole multiport support which
> may be used by maybe 0.1% of the Linux users thrown on them all
> and some "magic" number silliness as well...

serial.c is a good example of the "ugly" that actually matters more, as is
floppy.c. Clean, well-formatted code that is still opaque.

Alexander Viro

Nov 30, 2001, 1:00:18 PM

On 30 Nov 2001, Henning Schmiedehausen wrote:

> issue. Code that you consider ugly as hell may be seen as "easily
> understandable and maintainable" by the author. If it works and has no
> bugs, so what? Just because it is hard for you and me to understand (cf.

... it goes without peer review for years. And that means bugs.

Fact of life: we all suck at reviewing our own code. You, me, Ken Thompson,
anybody - we tend to overlook bugs in the code we've written. Depending on
skill we can compensate - there are techniques for that, but it doesn't
change the fact that review by clued people who didn't write the thing
tends to show bugs we've missed for years.

If you really don't know that from your own experience - you don't _have_
experience. There is a damn good reason for uniform style within a
project: peer review helps. I've lost count of the bugs in drivers
that I've found just by grepping the tree. Even at that level review catches
tons of bugs. And I have no reason to doubt that the authors of the
respective drivers would fix them as soon as they saw said bugs.

"It's my code and I don't care if nobody else can read it" is an immediate
firing offense in any sane place. It may be OK in academentia, but in the
real life it's simply unacceptable.

It's all nice and dandy to shed tears for the poor, abused, well-meaning
company that made everyone happy with correct but unreadable code and now
gets humiliated by mean ingrates. Nice image, but in reality the picture is
quite different. Code _is_ buggy. That much is a given, regardless of
the origin of that code. The only question is how soon these bugs get
fixed. And that directly depends on the amount of effort required to
read through that code.

Sigh... Ironic that _you_ recommend that somebody grow up - I would expect
the level of naivety you've demonstrated from a CS grad who'd never worked
on anything beyond a toy project, not from an adult.

Martin Dalecki

Nov 30, 2001, 1:10:13 PM
Russell King wrote:
>
> On Fri, Nov 30, 2001 at 06:42:17PM +0100, Martin Dalecki wrote:
> > serial.c should be hooked at the misc char device interface sooner or
> > later.
>
> Please explain. Especially concentrate on justifying why serial interfaces
> aren't a tty device.

No problem ;-).

There is the hardware - in particular the serial controller itself - which
belongs to misc, because a mouse, for example, doesn't have to interpret any
tty stuff. This animal belongs in the same cage as the PS/2 variant of it.
And then there is one abstraction level above it: the tty interface - this
belongs to a line discipline.

We already have this split anyway: /dev/ttyS0 and /dev/cua0 are somehow
emulated on one level.

Understood?

Paul G. Allen

Nov 30, 2001, 1:20:18 PM
Peter Waltenberg wrote:
>
> The problem was solved years ago.
>
> "man indent"
>
> Someone who cares, come up with an indentrc for the kernel code, and get it
> into Documentation/CodingStyle
> If the maintainers run all new code through indent with that indentrc
> before checkin, the problem goes away.
> The only one who'll incur any pain then is a code submitter who didn't
> follow the rules. (Exactly the person we want to be in pain ;)).
>
> Then we can all get on with doing useful things.
>

IMEO, there is but one source as reference for coding style: A book by
the name of "Code Complete". (Sorry, I can't remember the author and I
no longer have a copy. Maybe my Brother will chime in here and fill in
the blanks since he still has his copy.)

Outside of that, every place I have worked as a programmer, with a team
of programmers, had a style that was adhered to almost religiously. In
many cases the style closely followed "Code Complete". In the case of
the kernel, as Alan and others have mentioned, there IS a Linux kernel
coding style.

In 99% of the Linux code I have seen, the style does indeed "suck". Why?
Consider a new coder coming in for any given task. S/he knows nothing
about the kernel and needs to get up to speed quickly. S/he starts
browsing the source - the ONLY definitive explanation of what it does
and how it works - and finds:

- Single-letter variable names that aren't simple loop counters, leaving
the reader to ask "What the h*** are these for?"
- No function/file comment headers explaining what the purpose of the
function/file is.
- Very few comments at all, which is not necessarily bad except...
- The code is not self-documenting, and without comments it takes an
hour to figure out what function Foo() does.
- Opening curly braces at the end of the first line of a large code
block, making it extremely difficult to find where the code block begins
or ends.
- Short variable/function names that someone thinks are descriptive but
really aren't.
- Inconsistent coding style from one file to the next.
- Other problems.

After all, the kernel must be maintained by a number of people and those
people will come and go. The only real way to keep bugs at a minimum,
efficiency at a maximum, and the learning curve for new coders
acceptable is consistent coding style and code that is easily
maintained. The things I note above are not a means to that end. Sure,
maybe Bob, the designer and coder of bobsdriver.o knows the code inside
and out without need of a single comment or descriptive
function/variable name, but what happens when Bob can no longer maintain
it? It's 10,000 lines of code, the kernel is useless without it, it
broke with kernel 2.6.0, and Joe, the new maintainer of bobsdriver.o, is
having a hell of a time figuring out what the damn thing does.
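To make the comment-header and naming points concrete, here is a small
hypothetical sketch. Everything in it (foo_device, foo_try_reset,
FOO_MAX_RESETS, the function itself) is invented purely for illustration;
only the /** ... */ layout follows the existing kernel-doc convention:

#include <linux/errno.h>

#define FOO_MAX_RESETS  3                       /* invented for this example */

struct foo_device;                              /* imaginary hardware handle */
int foo_try_reset(struct foo_device *dev);      /* imaginary helper */

/**
 * foo_reset_chip - bring the (imaginary) foo controller to a known state
 * @dev: device to reset
 *
 * Tries the reset a bounded number of times.  Returns 0 on success or a
 * negative errno on failure.
 */
static int foo_reset_chip(struct foo_device *dev)
{
        int retries = FOO_MAX_RESETS;   /* says what it counts, unlike "r" */

        while (retries--) {
                if (foo_try_reset(dev) == 0)
                        return 0;
        }
        return -EIO;
}

A comment header like that costs half a dozen lines and can save the next
maintainer the hours of reverse engineering described above.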

An extreme case? Maybe, but how many times does someone come into
development and have to spend more hours than necessary trying to figure
out how things work (or are supposed to work) instead of actually
writing useful code?

PGA
--
Paul G. Allen
UNIX Admin II ('til Dec. 3)/FlUnKy At LaRgE (forever!)
Akamai Technologies, Inc.
www.akamai.com

Mohammad A. Haque

Nov 30, 2001, 1:40:14 PM
On Friday, November 30, 2001, at 01:19 , Dana Lacoste wrote:

> This issue has gone beyond productivity to personal name calling
> and spurious defence. Can we all just move on a bit maybe?

Heh, are you kidding? This is LKML. It's gonna be beaten into the ground
and then some.

For what it's worth, most professional programmers I've interacted with
always adhere to the programming style of where they are working,
regardless of the way they personally program.
--

=====================================================================
Mohammad A. Haque http://www.haque.net/
mha...@haque.net

"Alcohol and calculus don't mix. Developer/Project Lead
Don't drink and derive." --Unknown http://www.themes.org/
batm...@themes.org
=====================================================================

Jeff Garzik

Nov 30, 2001, 1:40:14 PM
Diverse coding styles in the Linux kernel create long term maintenance
problems. End of story.

Jeff


--
Jeff Garzik | Only so many songs can be sung
Building 1024 | with two lips, two lungs, and one tongue.
MandrakeSoft | - nomeansno

Daniel Phillips

Nov 30, 2001, 1:50:09 PM
On November 30, 2001 07:13 pm, Larry McVoy wrote:
> On Fri, Nov 30, 2001 at 06:49:11PM +0100, Daniel Phillips wrote:
> > On the other hand, the idea of a coding style hall of shame - publicly
> > humiliating kernel contributors - is immature and just plain silly. It's
> > good to have a giggle thinking about it, but that's where it should stop.
>
> If you've got a more effective way of getting people to do the right thing,
> let's hear it. Remember, the goal is to protect the source base, not your,
> my, or another's ego.

Yes, lead by example, it's at least as effective. Maybe humiliation works at
Sun, when you're getting a paycheck, but in the world of volunteer
development it just makes people walk.

--
Daniel

Nestor Florez

Nov 30, 2001, 1:50:12 PM
Book : Code Complete
Author : Steve McConnell
Publisher: Microsoft Press

Nestor :-)

Paul G. Allen

Nov 30, 2001, 1:50:13 PM
"John H. Robinson, IV" wrote:

>
> On Fri, Nov 30, 2001 at 10:15:41AM -0800, Paul G. Allen wrote:
> >
> > IMEO, there is but one source as reference for coding style: A book by
> > the name of "Code Complete". (Sorry, I can't remember the author and I
> > no longer have a copy. Maybe my Brother will chime in here and fill in
> > the blanks since he still has his copy.)
>
> Code Complete: A Practical Handbook of Software Construction.
> Redmond, Wa.: Microsoft Press, 880 pages, 1993.
> Retail price: $35.
> ISBN: 1-55615-484-4.
>

Thanks John. You beat my bro. to it. Of course, he's probably still in
bed since it's not even noon yet. :)

(Note to self: Order a new copy of the book. I should have done it last
night when I ordered 3 other programming books. :/)

Maciej W. Rozycki

Nov 30, 2001, 1:50:15 PM
On Fri, 30 Nov 2001, Martin Dalecki wrote:

> > Please explain. Especially concentrate on justifying why serial interfaces
> > aren't a tty device.
>
> No problem ;-).
>
> There is the hardware - in particular the serial controller itself - which
> belongs to misc, because a mouse, for example, doesn't have to interpret
> any tty stuff.

The same applies to the console keyboard, which is hooked to a standard
UART on certain systems as well.

--
+ Maciej W. Rozycki, Technical University of Gdansk, Poland +
+--------------------------------------------------------------+
+ e-mail: ma...@ds2.pg.gda.pl, PGP key available +

Martin Dalecki

Nov 30, 2001, 2:00:07 PM
Russell King wrote:
>
> On Fri, Nov 30, 2001 at 06:49:01PM +0100, Martin Dalecki wrote:
> > Well, somebody really cares, apparently! Thanks.
>
> Currently it's a restructuring of serial.c to allow different uart type
> ports to be driven without duplicating serial.c many times over. (the
> generic_serial didn't hack it either).
>
> > Any pointers on where to have a look at it?
>
> I should probably put this on a web page somewhere 8)
>
> :pserver:c...@pubcvs.arm.linux.org.uk:/mnt/src/cvsroot, module serial
> (no password)
>
> > BTW: Did you consider the misc device idea? (Hooking serial to the
> > misc device stuff).
>
> I just caught that comment - I'll await your reply.

I'm looking at the code found above right now.
First of all: GREAT WORK! And you are right a bit: I just don't
see too much code duplication between the particular drivers there.
However, I still don't see the need to carry the whole tty stuff along just
to drive a mouse... but I don't see a solution right now.
It wouldn't be good to introduce a scattered heap of "micro"
driver modules like the ALSA people did, either...

However, in serial/linux/drivers/serial/serial_clps711x.c
the following puzzles me a bit:

#ifdef CONFIG_DEVFS_FS
normal_name: SERIAL_CLPS711X_NAME,
callout_name: CALLOUT_CLPS711X_NAME,
#else
normal_name: SERIAL_CLPS711X_NAME,
callout_name: CALLOUT_CLPS711X_NAME,
#endif

What is the ifdef supposed to be good for?


One other suggestion: serial_code.c could be just serial.c or code.c
or main.c, since _xxxx.c should distinguish between particular devices.
It was a bit clumsy for me to find the entry point...
And then we have in uart_register_driver:

normal->open = uart_open;
normal->close = uart_close;
normal->write = uart_write;
normal->put_char = uart_put_char;
normal->flush_chars = uart_flush_chars;
normal->write_room = uart_write_room;
normal->chars_in_buffer = uart_chars_in_buffer;
normal->flush_buffer = uart_flush_buffer;
normal->ioctl = uart_ioctl;
normal->throttle = uart_throttle;
normal->unthrottle = uart_unthrottle;
normal->send_xchar = uart_send_xchar;
normal->set_termios = uart_set_termios;
normal->stop = uart_stop;
normal->start = uart_start;
normal->hangup = uart_hangup;
normal->break_ctl = uart_break_ctl;
normal->wait_until_sent = uart_wait_until_sent;

And so on....

Why not do:

{
static struct tty_driver _normal = {
open: uart_open,
close: uart_close,
...
};

...

*normal = _normal;
}
...
for the static stuff first, and only initialize the
dynamically calculated addresses dynamically later, like
the double references...
This would already save *quite a bit* of .text ;-).

Yeah I know I'm a bit perverse about optimizations....
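Spelled out a little more completely - the field names come from the
assignments above, and drv->normal_name is only assumed here, by analogy
with the drv->callout_name used further down - the idea would look
roughly like this:

static struct tty_driver _normal = {
        open:   uart_open,              /* the constant hooks live in        */
        close:  uart_close,             /* initialized data instead of being */
        write:  uart_write,             /* stored one by one in .text        */
        /* ... and so on for the rest ... */
};

/* in uart_register_driver(): one structure copy, then only the
   genuinely per-driver fields get filled in at run time */
*normal = _normal;
normal->name = drv->normal_name;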


You already do it for the callout device nearly this
way:

/*
* The callout device is just like the normal device except for
* the major number and the subtype code.
*/
*callout = *normal;
callout->name = drv->callout_name;
callout->major = drv->callout_major;
callout->subtype = SERIAL_TYPE_CALLOUT;
callout->read_proc = NULL;
callout->proc_entry = NULL;

Regards.

Henning Schmiedehausen

Nov 30, 2001, 2:00:11 PM
On Fri, 2001-11-30 at 19:13, Larry McVoy wrote:

> But I haven't found anything else which works as well. I don't use that
> technique at BitMover, instead I rewrite code that I find offensive. That's

Sounds like the thing that Mr. Gates did when Microsoft was small. Maybe
there _is_ a point in this.

Regards
Henning


--
Dipl.-Inf. (Univ.) Henning P. Schmiedehausen -- Geschaeftsfuehrer
INTERMETA - Gesellschaft fuer Mehrwertdienste mbH h...@intermeta.de

Am Schwabachgrund 22 Fon.: 09131 / 50654-0 in...@intermeta.de
D-91054 Buckenhof Fax.: 09131 / 50654-20


Russell King

Nov 30, 2001, 2:00:13 PM
On Fri, Nov 30, 2001 at 07:40:29PM +0100, Maciej W. Rozycki wrote:
> The same applies to the console keyboard, which is hooked to a standard
> UART on certain systems as well.

This particular point is up for discussion between myself and James Simmons
(and other interested parties). We're getting there...

--
Russell King (r...@arm.linux.org.uk) The developer of ARM Linux
http://www.arm.linux.org.uk/personal/aboutme.html

antirez

Nov 30, 2001, 2:00:14 PM
On Fri, Nov 30, 2001 at 10:20:43AM -0800, Paul G. Allen wrote:
> antirez wrote:
> A variable/function name should ALWAYS be descriptive of the
> variable/function purpose. If it takes a long name, so be it. At least
> the next guy looking at it will know what it is for.

I agree, but descriptive != long

for (mydearcounter = 0; mydearcounter < n; mydearcounter++)

and it was just an example. Read it as "bad coding style".
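A made-up counterexample - every name in it is invented, it is only here
to contrast the two styles:

struct port;                            /* opaque, imaginary hardware handle */
void frob_port(struct port *p);         /* imaginary helper */

void frob_all_ports(struct port **ports, int nr_ports)
{
        int i;                          /* "i" is fine: it is only a counter */

        for (i = 0; i < nr_ports; i++)  /* the descriptive names here are    */
                frob_port(ports[i]);    /* nr_ports and frob_port, not "i"   */
}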

--
Salvatore Sanfilippo <ant...@invece.org>
http://www.kyuzz.org/antirez
finger ant...@tella.alicom.com for PGP key
28 52 F5 4A 49 65 34 29 - 1D 1B F6 DA 24 C7 12 BF

Jeff Garzik

Nov 30, 2001, 2:00:17 PM
"Paul G. Allen" wrote:
> IMEO, there is but one source as reference for coding style: A book by
> the name of "Code Complete". (Sorry, I can't remember the author and I
> no longer have a copy. Maybe my Brother will chime in here and fill in
> the blanks since he still has his copy.)

Hungarian notation???

That was developed by programmers with apparently no skill to
see/remember how a variable is defined. IMHO in the Linux community
it's widely considered one of the worst coding styles possible.
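For anyone who hasn't run into it, a two-line hypothetical contrast
(neither declaration comes from any real code):

unsigned long ulTimeoutMsec;    /* Hungarian: the type is encoded in the name */
unsigned long timeout_msec;     /* usual kernel style: plain, descriptive lower_case */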


> Outside of that, every place I have worked as a programmer, with a team
> of programmers, had a style that was adhered to almost religiously.

yes

> In 99% of the Linux code I have seen, the style does indeed "suck". Why?
> Consider a new coder coming in for any given task. S/he knows nothing
> about the kernel and needs to get up to speed quickly. S/he starts
> browsing the source - the ONLY definitive explanation of what it does
> and how it works - and finds:

99% is well above the level of suck defined by most :)


> - Single-letter variable names that aren't simple loop counters, leaving
> the reader to ask "What the h*** are these for?"
> - No function/file comment headers explaining what the purpose of the
> function/file is.
> - Very few comments at all, which is not necessarily bad except...
> - The code is not self-documenting, and without comments it takes an
> hour to figure out what function Foo() does.

We could definitely use a ton more comments... patches accepted.


> - Opening curly braces at the end of the first line of a large code
> block, making it extremely difficult to find where the code block begins
> or ends.

use a decent editor


> - Short variable/function names that someone thinks are descriptive but
> really aren't.

Not all variable names need to make their purpose obvious to complete
newbies. Sometimes it takes time to understand the code's purpose, at which
point the variable names become incredibly descriptive.


> After all, the kernel must be maintained by a number of people and those
> people will come and go. The only real way to keep bugs at a minimum,
> efficiency at a maximum, and the learning curve for new coders
> acceptable is consistent coding style and code that is easily
> maintained. The things I note above are not a means to that end. Sure,
> maybe Bob, the designer and coder of bobsdriver.o knows the code inside
> and out without need of a single comment or descriptive
> function/variable name, but what happens when Bob can no longer maintain
> it? It's 10,000 lines of code, the kernel is useless without it, it
> broke with kernel 2.6.0, and Joe, the new maintainer of bobsdriver.o, is
> having a hell of a time figuring out what the damn thing does.

yes

Jeff


--
Jeff Garzik | Only so many songs can be sung
Building 1024 | with two lips, two lungs, and one tongue.
MandrakeSoft | - nomeansno


Larry McVoy

Nov 30, 2001, 2:10:13 PM
On Fri, Nov 30, 2001 at 07:43:01PM +0100, Daniel Phillips wrote:
> On November 30, 2001 07:13 pm, Larry McVoy wrote:
> > On Fri, Nov 30, 2001 at 06:49:11PM +0100, Daniel Phillips wrote:
> > > On the other hand, the idea of a coding style hall of shame - publicly
> > > humiliating kernel contributors - is immature and just plain silly. It's
> > > good to have a giggle thinking about it, but that's where it should stop.
> >
> > If you've got a more effective way of getting people to do the right thing,
> > let's hear it. Remember, the goal is to protect the source base, not your,
> > my, or another's ego.
>
> Yes, lead by example, it's at least as effective.

I'd like to see some data which backs up that statement. My experience is
that that is an unsupportable statement. You'd need to know how effective
the Sun way is, have seen multiple development organizations using that
way and other ways, and have watched the progress.

I'm in a somewhat unique position in that I have a lot of ex-Sun engineers
using BitKeeper and I have a pretty good idea how fast they make changes.
It's a lot faster and a lot more consistent than the Linux effort, in fact,
there is no comparison.

I'm not claiming that the coding style is the source of their speed, but
it is part of the culture which is the source of their speed.

As far as I can tell, you are just asserting that leading by example is
more effective. Am I incorrect? Do you have data? I have piles which
shows the opposite.

> Maybe humiliation works at
> Sun, when you're getting a paycheck, but in the world of volunteer
> development it just makes people walk.

Huh. Not sure I agree with that either. It's definitely a dicey area
but go through the archives (or your memory if it is better than mine)
and look at how the various leaders here respond to bad choices. It's
basically public humiliation. Linus is especially inclined to speak
his mind when he sees something bad. And people stick around.

I think the thing you are missing is that what I am describing is a lot
like boot camp. Someone with more knowledge and experience than you
yells at your every mistake, you hate it for a while, and you emerge
from boot camp a stronger person with more skills and good habits as
well as a sense of pride. If there was a way to "lead by example" and
accomplish the same goals in the same time, don't you think someone
would have figured that out by now?


--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

Raúl Núñez de Arenas Coronado

Nov 30, 2001, 2:50:11 PM
Hi Jeff and Paul :)

>"Paul G. Allen" wrote:
>> IMEO, there is but one source as reference for coding style: A book by
>> the name of "Code Complete". (Sorry, I can't remember the author and I
>> no longer have a copy. Maybe my Brother will chime in here and fill in
>> the blanks since he still has his copy.)
>Hungarian notation???
>That was developed by programmers with apparently no skill to
>see/remember how a variable is defined. IMHO in the Linux community
>it's widely considered one of the worst coding styles possible.

Not at all... Hungarian notation is not so bad, except it is only
understood by people from Hungary. Hence the name }:))) I just use it
when I write code for Hungary or secret code that no one should
read...

>> - Short variable/function names that someone thinks are descriptive but
>> really aren't.
>Not all variable names need to make their purpose obvious to complete
>newbies. Sometimes it takes time to understand the code's purpose, at which
>point the variable names become incredibly descriptive.

Here you are right. The code really can be seen as a book: you
can start reading in the middle and still understand some of the story,
but it's far better when you start at the beginning ;))) Moreover,
most of the variable and function names in the kernel code are quite
descriptive, IMHO.

Of course, more comments and more descriptive names don't hurt,
but sometimes they bloat the code...

Raúl

Daniel Phillips

Nov 30, 2001, 5:00:15 PM
On November 30, 2001 08:05 pm, Larry McVoy wrote:
> Huh. Not sure I agree with that either. It's definitely a dicey area
> but go through the archives (or your memory if it is better than mine)
> and look at how the various leaders here respond to bad choices. It's
> basically public humiliation. Linus is especially inclined to speak
> his mind when he sees something bad. And people stick around.

There's an additional pattern: you'll notice that the guys who end up wearing
the dung are the ones with full-time Linux programming jobs, who basically
have no option but to stick around. Do that to every newbie and after a
while we'll have a smoking hole in the ground where Linux used to be.

A simple rule to remember is: when code is bad, criticize the code, not the
coder.

> I think the thing you are missing is that what I am describing is a lot
> like boot camp. Someone with more knowledge and experience than you
> yells at your every mistake, you hate it for a while, and you emerge
> from boot camp a stronger person with more skills and good habits as
> well as a sense of pride.

Thanks, but I'll spend my summer in some other kind of camp ;-) I'm sure it
works for some people, but mutual respect is more what I'm used to and prefer.

> If there was a way to "lead by example" and
> accomplish the same goals in the same time, don't you think someone
> would have figured that out by now?

Somebody did, and as hard as it is for some to fit it into their own model of
the universe, there is somebody leading by example, not running a command
economy but a self-organizing meritocracy. Do we achieve the same goals in
the same time? Sometimes it doesn't seem like it, but because this thing
just keeps crawling relentlessly forward on a thousand fronts, in the end we
accomplish even more than Sun does.

--
Daniel

Larry McVoy

Nov 30, 2001, 5:10:10 PM
This is my last post on this topic, I don't think I can say more than I have.

On Fri, Nov 30, 2001 at 10:54:39PM +0100, Daniel Phillips wrote:
> On November 30, 2001 08:05 pm, Larry McVoy wrote:
> > Huh. Not sure I agree with that either. It's definitely a dicey area
> > but go through the archives (or your memory if it is better than mine)
> > and look at how the various leaders here respond to bad choices. It's
> > basically public humiliation. Linus is especially inclined to speak
> > his mind when he sees something bad. And people stick around.
>
> There's an additional pattern: you'll notice that the guys who end up wearing
> the dung are the ones with full-time Linux programming jobs, who basically
> have no option but to stick around. Do that to every newbie and after a
> while we'll have a smoking hole in the ground where Linux used to be.
>
> A simple rule to remember is: when code is bad, criticize the code, not the
> coder.

Your priorities are upside down. The code is more important than the
coder; it will outlive the coder's interest in that code. Besides,
this isn't some touchy-feely love fest, it's code. It's supposed to
work, work well, and be maintainable. You don't get that by being
"nice", you get that by insisting on quality. If being nice worked,
we wouldn't be having this conversation.

> > I think the thing you are missing is that what I am describing is a lot
> > like boot camp. Someone with more knowledge and experience than you
> > yells at your every mistake, you hate it for a while, and you emerge
> > from boot camp a stronger person with more skills and good habits as
> > well as a sense of pride.
>
> Thanks, but I'll spend my summer in some other kind of camp ;-) I'm sure it
> works for some people, but mutual respect is more what I'm used to and prefer.

The problem here is that you are assuming that yelling at someone means
that you don't respect that someone. Nothing could be further from the
truth. If you didn't respect them enough to think you could get good
results from them, I doubt you'd be yelling at them in the first place.
Don't confuse intense demands for excellence with a lack of respect,
that's not the case.

> > If there was a way to "lead by example" and
> > accomplish the same goals in the same time, don't you think someone
> > would have figured that out by now?
>
> Somebody did, and as hard as it is for some to fit it into their own model of
> the universe, there is somebody leading by example, not running a command
> economy but a self-organizing meritocracy. Do we achieve the same goals in
> the same time? Sometimes it doesn't seem like it, but because this thing
> just keeps crawling relentlessly forward on a thousand fronts, in the end we
> accomplish even more than Sun does.

Bah. Daniel, you are forgetting that I know what Sun has done first hand
and I know what Linux has done first hand. If you think that Linux is
at the same level as Sun's OS or ever will be, you're kidding yourself.
Linux is really cool, I love it, and I use it every day. But it's not
comparable to Solaris, sorry, not even close. I'm not exactly known for
my love of Solaris, you know, in fact I really dislike it. But I respect
it, it can take a licking and keep on ticking. Linux isn't there yet
and unless the development model changes somewhat, I'll stand behind my
belief that it is unlikely to ever get there.

--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

Andrew Morton

Nov 30, 2001, 5:30:11 PM
Larry McVoy wrote:
>
> Linux isn't there yet
> and unless the development model changes somewhat, I'll stand behind my
> belief that it is unlikely to ever get there.

I am (genuinely) interested in what changes you think are needed.


H. Peter Anvin

Nov 30, 2001, 5:40:16 PM
Followup to: <2001113014...@work.bitmover.com>
By author: Larry McVoy <l...@bitmover.com>
In newsgroup: linux.dev.kernel

> >
> > A simple rule to remember is: when code is bad, criticize the code, not the
> > coder.
>
> Your priorities are upside down. The code is more important than the
> coder; it will outlive the coder's interest in that code. Besides,
> this isn't some touchy-feely love fest, it's code. It's supposed to
> work, work well, and be maintainable. You don't get that by being
> "nice", you get that by insisting on quality. If being nice worked,
> we wouldn't be having this conversation.
>

So the sensible thing to do is, again, to criticize the code, not the
coder.

There are multiple reasons for that:

a) The code is what counts.
b) People take personal attacks, well, personally. It causes
unnecessary bad blood.
c) There are people who will produce beautiful code one minute, and
complete shitpiles the next one.

If a certain piece of code is a shitpile, go ahead and say so. Please
do, however, explain why that is, and please give the maintainer a
chance to listen before being flamed publicly.

-hpa
--
<h...@transmeta.com> at work, <h...@zytor.com> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt <am...@zytor.com>

rddu...@osdl.org

Nov 30, 2001, 6:00:15 PM
On Fri, 30 Nov 2001, Andrew Morton wrote:

| Larry McVoy wrote:
| >
| > Linux isn't there yet
| > and unless the development model changes somewhat, I'll stand behind my
| > belief that it is unlikely to ever get there.
|
| I am (genuinely) interested in what changes you think are needed.

Same here, regarding both development model and OS functionality,
reliability, etc.
--
~Randy

Alexander Viro

Nov 30, 2001, 6:20:11 PM

On Fri, 30 Nov 2001, Andrew Morton wrote:

> Jeff Garzik wrote:
> >
> > We could definitely use a ton more comments... patches accepted.
> >
>

> Too late. Useful comments go in during, or even before the code.
>
> While we're on the coding style topic: ext2_new_block.

Yes. I hope that new variant (see balloc.c,v) gets the thing into
better form, but then I'm obviously biased...

Larry McVoy

Nov 30, 2001, 7:00:19 PM
On Fri, Nov 30, 2001 at 02:17:33PM -0800, Andrew Morton wrote:
> Larry McVoy wrote:
> >
> > Linux isn't there yet
> > and unless the development model changes somewhat, I'll stand behind my
> > belief that it is unlikely to ever get there.
>
> I am (genuinely) interested in what changes you think are needed.

Well I have an opinion, not sure if it will be well received or not,
but here goes.

There is a choice which needs to be made up front, and that is this:

do you want to try and turn the Linux kernel hackers into Sun style
hackers or do you want to try something else?

This assumes we have agreement that there is a difference between the
two camps. I'd rather not get into a "this way is better than that way"
discussion, let's just postulate that the Sun way has some pros/cons
and so does the Linux way.

If you want to try and make Linux people work like Sun people, I think
that's going to be tough. First of all, Sun has a pretty small kernel
group, they work closely with each other, and they are full time,
highly paid, professionals working with a culture that is intolerant of
anything but the best. It's a cool place to be, to learn, but I think
it is virtually impossible to replicate in a distributed team, with way
more people, who are not paid the same or working in the same way.

Again, let's not argue the point, let's postulate for the time being
that the Linux guys aren't going to work like the Sun guys any time soon.

So what's the problem? The Sun guys fix more bugs, handle more corner
cases, and scale up better (Linux is still better on the uniprocessors
but the gap has narrowed considerably; Sun is getting better faster than
Linux is getting better, performance wise. That's a bit unfair because
Linux had, and has, better absolute numbers so there was less room to
improve, but the point is that Sun is catching up fast.)

As the source base increases in size, handles more devices, runs on more
platforms, etc., it gets harder and harder to make the OS be reliable.
Anyone can make a small amount of code work well, it's exponentially
more difficult to make a large amount of code work well. There are lots
of studies which show this to be true, the mythical man month is a good
starting place.

OK, so it sounds like I'm saying that the Linux guys are lame, Sun is
great, and there isn't any chance that Linux is going to be as good
as Solaris. That's not quite what I'm saying. *If* you want to play
by the same rules as Sun, i.e., develop and build things the same way,
then that is what I'm saying. The Linux team will never be as good
as the Sun team unless the Sun team gets a lot worse. I think that's
a fact of life, Sun has 100s of millions of dollars a year going into
software development. ESR can spout off all he likes, but there is no
way a team of people working for fun is going to compete with that.

On the other hand, there is perhaps a way Linux could be better. But it
requires changing the rules quite a bit.

Instead of trying to make the Linux hackers compete with the Sun hackers,
what would happen if you architected your way around the problem?
For example, suppose I said we need to have a smaller, faster, more
reliable uniprocessor kernel. Suppose I could wave a magic wand and
make SMP go away (I can't, but bear with me for a second). The problem
space just got at least an order of magnitude less complex than what Sun
deals with. I think they are up to 32-64 way SMP and you can imagine,
I hope, the additional complexity that added. OK, so *if* uniprocessor
was the only thing we had to worry about, or say 2-4 way SMP with just
a handful of locks, then can we agree that it is a lot more likely that
we could produce a kernel which was in every way as good or better than
Sun's kernel, on the same platform? If the answer is yes, keep reading,
if the answer is no, then we're done because I don't know what to do if
we can't get that far.

For the sake of discussion, let's assume that you buy what I am saying
so far. And let's say that we agree that if you were to toss the SMP
stuff then we have a good chance at making a reliable kernel, albeit
an uninteresting one when compared to big boxes. If you want me to go
into what I think it would take to do that, I will.

The problem is that we can't ignore the SMP issues, it drives hardware
sales, gets press, it's cool, etc. We have to have both but the problem
is that if we have both we really need Sun's level of professionalism
to make it work, and it isn't realistic to expect that from a bunch of
underpaid (or not at all paid) Linux hackers.

Here's how you get both. Fork the development efforts into the SMP part
and the uniprocessor part. The uniprocessor focus is on reliability,
stability, performance. The SMP part is a whole new development effort,
which is architected in such a way as to avoid the complexity of a
traditional SMP implementation.

The uniprocessor team owns the core architecture of the system. The
abstractions provided, the baseline code, etc., that's all uni. The
focus there is a small, fast, stable kernel.

The SMP team doesn't get to touch the core code, or at least there is
a very strong feedback loop against it. In private conversations, we've
started talking about the "punch in the nose" feedback loop, which means
that while it may be necessary to touch generic code for the benefit of SMP,
someone has to feel strongly enough about it that they will sacrifice
their nose.

It would seem like I haven't solved anything here, just painted a nice
but impossible picture. Maybe. I've actually thought long and hard
about the approach needed to scale up without touching all the code
and it is radically different from the traditional way (i.e., how
Sun, SGI, Sequent, etc., did it). If you are interested in that, I'll
talk about it but I'll wait to see if anyone cares.

The summary over all of this is:

If you want to solve all the problems that Sun does, run on the same
range of UP to big SMP, Linux is never going to be as reliable as
Solaris. My opinion, of course, but one that is starting to gain
some traction as the OS becomes more complex.

That leaves you with a choice: either give up on some things,
magically turn the Linux hackers into Sun hackers, or
architect your way around the problem.

My vote is the last one, it fits better with the Linux effort, the answer
is way more cool than anything Sun or anyone else has done, and it lets
you have a simple mainline kernel source base.


--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

Rik van Riel

Nov 30, 2001, 7:20:12 PM
On Fri, 30 Nov 2001, Alexander Viro wrote:

> Fact of life: we all suck at reviewing our own code. You, me, Ken
> Thompson, anybody - we tend to overlook bugs in the code we've written.
> Depending on skill we can compensate - there are techniques for
> that, but it doesn't change the fact that review by clued people who
> didn't write the thing tends to show bugs we've missed for years.

Absolutely agreed. Note that this goes hand in hand with
another issue, no matter how scary it may sound to other
people ... <drum roll>

DOCUMENTATION

Because, without documentation we can only see what code
does and not what it's supposed to do.

This in turn means other people cannot identify bugs in
the code, simply because they're not sure what the code
is supposed to do.

regards,

Rik
--
Shortwave goes a long way: irc.starchat.net #swl

http://www.surriel.com/ http://distro.conectiva.com/

Daniel Phillips

Nov 30, 2001, 7:50:09 PM
Hi Rik,

On December 1, 2001 01:35 am, Rik van Riel wrote:


> On Fri, 30 Nov 2001, Andrew Morton wrote:
> > Larry McVoy wrote:
> > > Linux isn't there yet
> > > and unless the development model changes somewhat, I'll stand behind my
> > > belief that it is unlikely to ever get there.
> >
> > I am (genuinely) interested in what changes you think are needed.
>

> I'm very interested too, though I'll have to agree with Larry
> that Linux really isn't going anywhere in particular and seems
> to be making progress through sheer luck.

You just reminded me of Minnesota Fats' most famous quote:

"The more I practice, the luckier I get"

--
Daniel

Linus Torvalds

Nov 30, 2001, 8:00:16 PM

On Fri, 30 Nov 2001, Rik van Riel wrote:
>
> I'm very interested too, though I'll have to agree with Larry
> that Linux really isn't going anywhere in particular and seems
> to be making progress through sheer luck.

Hey, that's not a bug, that's a FEATURE!

You know what the most complex piece of engineering known to man in the
whole solar system is?

Guess what - it's not Linux, it's not Solaris, and it's not your car.

It's you. And me.

And think about how you and me actually came about - not through any
complex design.

Right. "sheer luck".

Well, sheer luck, AND:
- free availability and _crosspollination_ through sharing of "source
code", although biologists call it DNA.
- a rather unforgiving user environment, that happily replaces bad
versions of us with better working versions and thus culls the herd
(biologists often call this "survival of the fittest")
- massive undirected parallel development ("trial and error")

I'm deadly serious: we humans have _never_ been able to replicate
something more complicated than what we ourselves are, yet natural
selection did it without even thinking.

Don't underestimate the power of survival of the fittest.

And don't EVER make the mistake that you can design something better than
what you get from ruthless massively parallel trial-and-error with a
feedback cycle. That's giving your intelligence _much_ too much credit.

Quite frankly, Sun is doomed. And it has nothing to do with their
engineering practices or their coding style.

Linus

Davide Libenzi

Nov 30, 2001, 8:10:08 PM
On Fri, 30 Nov 2001, Larry McVoy wrote:

[ A lot of stuff Pro-Sun ]

Wait a minute.
Wasn't it you who was screaming against Sun, leaving their team because
their SMP decisions about scaling sucked, because their OS was bloated
like hell with sync spinlocks, saying that they tried to make it scale but
failed miserably?
What has changed now to make Solaris, a fairly vanishing OS, the
reference OS/devmodel for every kernel developer?
Wasn't it you who was saying that Linux will never scale to more than
2 CPUs?
Hasn't Linux SMP improved since the very first implementation?
Hasn't Linux SMP improved since the very first implementation without losing
reliability?
Why don't you try comparing 2.0.36 with 2.4.x, let's say on an 8-way SMP box?
Why should it stop improving?
Because you know that adding fine-grained spinlocks will make the OS
complex to maintain and bloated ... like Solaris was before you
suddenly changed your mind.


<YOUR QUOTE>
> Then people want more performance. So they thread some more and now
> the locks aren't 1:1 to the objects. What a lock covers starts to
> become fuzzy. Things break down quickly after this because what
> happens is that it becomes unclear if you are covered or not and
> it's too much work to figure it out, so each time a thing is added
> to the kernel, it comes with a lock. Before long, your 10 or 20
> locks are 3000 or more like what Solaris has. This is really bad,
> it hurts performance in far reaching ways and it is impossible to
> undo.
</YOUR QUOTE>

I kindly agree with this, just curious to understand which kind of amazing
architectural solution Solaris took to be a reference for SMP
development/scaling.


- Davide

Larry McVoy

Nov 30, 2001, 8:20:13 PM
On Fri, Nov 30, 2001 at 05:13:38PM -0800, Davide Libenzi wrote:
> On Fri, 30 Nov 2001, Larry McVoy wrote:
> Wait a minute.
> Wasn't it you who was screaming against Sun, leaving their team because
> their SMP decisions about scaling sucked, because their OS was bloated
> like hell with sync spinlocks, saying that they tried to make it scale but
> failed miserably?

Yup, that's me, guilty on all charges.

> What has changed now to make Solaris, a fairly vanishing OS, the
> reference OS/devmodel for every kernel developer?

It's not. I never said that we should solve the same problems the same
way that Sun did, go back and read the posting.

> Wasn't it you who was saying that Linux will never scale to more than
> 2 CPUs?

No, that wasn't me. I said it shouldn't scale beyond 4 cpus. I'd be pretty
lame if I said it couldn't scale with more than 2. Should != could.

> Because you know that adding fine-grained spinlocks will make the OS
> complex to maintain and bloated ... like Solaris was before you
> suddenly changed your mind.

Sorry it came out like that, I haven't changed my mind one bit.

> <YOUR QUOTE>
> > Then people want more performance. So they thread some more and now
> > the locks aren't 1:1 to the objects. What a lock covers starts to
> > become fuzzy. Things break down quickly after this because what
> > happens is that it becomes unclear if you are covered or not and
> > it's too much work to figure it out, so each time a thing is added
> > to the kernel, it comes with a lock. Before long, your 10 or 20
> > locks are 3000 or more like what Solaris has. This is really bad,
> > it hurts performance in far reaching ways and it is impossible to
> > undo.
> </YOUR QUOTE>
>
> I kindly agree with this, just curious to understand which kind of amazing
> architectural solution Solaris took to be a reference for SMP
> development/scaling.

OK, so you got the wrong message. I do _not_ like the approach Sun took,
it's a minor miracle that they are able to make Solaris work as well as
it works given the design decisions they made.

What I do like is Sun's engineering culture. They work hard, they don't
back away from the corner cases, they have high standards. All of which
and more are, in my opinion, a requirement to try and solve the problems
the way they solved them.

So the problem I've been stewing on is how you go about scaling the OS
in a way that doesn't require all those hotshot Sun engineers to make
it work and maintain it.


--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

Stephan von Krawczynski

Nov 30, 2001, 8:30:14 PM
On Fri, 30 Nov 2001 15:57:40 -0800
Larry McVoy <l...@bitmover.com> wrote:

> [...]


> Here's how you get both. Fork the development efforts into the SMP part
> and the uniprocessor part. The uniprocessor focus is on reliability,
> stability, performance. The SMP part is a whole new development effort,
> which is architected in such a way as to avoid the complexity of a
> traditional SMP implementation.
>
> The uniprocessor team owns the core architecture of the system. The
> abstractions provided, the baseline code, etc., that's all uni. The
> focus there is a small, fast, stable kernel.
>
> The SMP team doesn't get to touch the core code, or at least there is
> a very strong feedback loop against it. In private conversations, we've
> started talking about the "punch in the nose" feedback loop, which means
> that while it may be necessary to touch generic code for the benefit of SMP,
> someone has to feel strongly enough about it that they will sacrifice
> their nose.

Hi Larry,

let me first tell you this: I am only stating my very personal opinion here,
and I am probably not someone with your experience or insight into high-tech
development groups. But I have watched the whole business for quite some
years now, so here is my thinking:

Your proposal is the wrong way, because:
1) You split up "the crew". Whatever you may call "the crew" here, they all
have one thing in common: they are working on the _same_ project _and_ (just
as important) _one_ man has the final decision. If you look at _any_ other
OS developed by quite a number of completely different people, you have to
admit one thing: everything was busted when they began to split up into
different "branches". That simply does not work out. I am only referring to
simple human interaction and communication here, nothing to do with the
technical issues. One of the basic reasons for the success of Linux so far
is the collaborative work of a damn lot of people on the _same_ project.

2) I cannot see the _need_ for such a "team split-up". If you say one team
(the core team) has the "last word" about the baseline code, what do you
think will happen if "necessary" code changes for the second team are
refused? Simple: they will fork a completely new Linux tree. And this is
_the_ end. If you were to write a driver, which tree would you choose after
that? I mean, you are dealing here with the main reason why people like
Linux. And not: SCO Unix 3.x, Unixware, Solaris, Minix, AT&T Unix, Xenix,
HPUX, AIX, BSD, NetBSD, BSDi, Solaris-for-Intel, make-my-day ... (all
registered trademarks of their respective owners, which all have low
interaction capabilities)
I don't want that, do we want that?

3) Your SMP team (you are talking about truly high-scaling SMP here) has a
very big problem: it has _very_ few users (compared to UP) and even fewer
developers with the _hardware_ you need for something like that. I mean, you
are not talking about buying an Asus board and 2 PIIIs here; you are talking
about serious, expensive hardware. How many people have that at home, or at
work to play with for free?
Well, see what I mean?

4) Warning, this is the hard stuff!
Ok, so you are fond of SUN. Well, me too. But I am not completely blind, not
yet :-) So I must tell you: if Solaris were the real big hit, then why is its
Intel version virtually being eaten up on the market (the _buying_ market out
there) by Linux? The last time I met a guy seriously talking about Solaris
(x86) being "better" than Linux was about three years ago. And btw, this was
_the_ last guy talking about it at all. So let's simply assume this: the
Solaris UP market is already gone, if you are talking about the _mass_ market.
This parrot is _deceased_! It is plain dead.
Now you are dealing with another problem: SUN (being kind of the last resort
of a widespread and really working commercial Unix) will have a lot of work to
do in the future to keep up with the sheer mass of software and applications
coming in for Linux, only because it is even _more_ widespread today than
Solaris has ever been. And it is _real_ cheap, and you get it _everywhere_. And
everybody owns a box on which you can use it.
This is going to get very hard for SUN if they do not enter a stage of
completely rethinking what is going on out there.
To make that clear: _nobody_ here _fights_ against SUN or Solaris. Quite a few
of us like it very much. But this is a commercial product. It needs to be sold
to survive - and that is going to be a future problem. SUN will not survive
only building the high-power, low-volume computer. CRAY did not either. So
maybe the really good choice would be this: let the good parts of Solaris (and
maybe its SMP features) migrate into Linux. This does not mean they should lay
off the staff, quite the contrary: these are brilliant people, let them do what
they can do best, but keep an eye on the market. We are in a state where the
market _demands_ Linux. It has already become a well-known trademark, I tend to
believe better known than Solaris. Somehow one has the feeling they indeed know
what's coming (fs), but have not come to the final (and hard to take)
conclusion.
And to clarify: I am not taking any drugs. This is serious. I have the strong
feeling that there is already a big player out there that has learnt a hard
lesson: IBM - and the lesson is named OS/2.
When OS/2 came out, I told people: if they are not going to give it away
for free, they will not make it. And they didn't. Indeed, I did not expect them
to learn at all (simply because big companies are mostly not quick movers), but
they do really astonishing things nowadays. I have the strong feeling the
management is at least as brilliant as the Solaris developers and worth every
buck, too.

But this is only my small voice, and quite possibly only a few are listening,
if any ...

Regards,
Stephan

PS: Is this really a topic for a kernel mailing list?

Davide Libenzi

Nov 30, 2001, 8:30:15 PM
On Fri, 30 Nov 2001, Mike Castle wrote:

> On Fri, Nov 30, 2001 at 04:50:34PM -0800, Linus Torvalds wrote:
> > Well, sheer luck, AND:
> > - free availability and _crosspollination_ through sharing of "source
> > code", although biologists call it DNA.
> > - a rather unforgiving user environment, that happily replaces bad
> > versions of us with better working versions and thus culls the herd
> > (biologists often call this "survival of the fittest")
> > - massive undirected parallel development ("trial and error")
>

> Linux is one big genetic algorithms project?

It is more subtle, look deeper inside :)

- Davide

Davide Libenzi

Nov 30, 2001, 9:10:09 PM
On Fri, 30 Nov 2001, Larry McVoy wrote:

> On Fri, Nov 30, 2001 at 05:13:38PM -0800, Davide Libenzi wrote:
> > On Fri, 30 Nov 2001, Larry McVoy wrote:
> > Wait a minute.
> > Wasn't it you who was screaming against Sun, leaving their team because
> > their SMP decisions about scaling sucked, because their OS was bloated
> > like hell with sync spinlocks, saying that they tried to make it scale but
> > failed miserably?
>
> Yup, that's me, guilty on all charges.
>
> > What is changed now to make Solaris, a fairly vanishing OS, to be the
> > reference OS/devmodel for every kernel developer ?
>
> It's not. I never said that we should solve the same problems the same
> way that Sun did, go back and read the posting.

This is your quote Larry :

<>
If you want to try and make Linux people work like Sun people, I think
that's going to be tough. First of all, Sun has a pretty small kernel
group, they work closely with each other, and they are full time,
highly paid, professionals working with a culture that is intolerant of
anything but the best. It's a cool place to be, to learn, but I think
it is virtually impossible to replicate in a distributed team, with way
more people, who are not paid the same or working in the same way.
<>

So, if these guys are smart, work hard and are professionals, why did they
make bad design decisions?
Why didn't they implement different solutions like, let's say, "multiple
independent OSs running on clusters of 4 CPUs"?
What do we really have to like about Sun?
Me personally, if I have to choose, I'll take the logo.


- Davide

Linus Torvalds

unread,
Nov 30, 2001, 10:10:08 PM11/30/01
to

On Fri, 30 Nov 2001, Tim Hockin wrote:
>
> > I'm deadly serious: we humans have _never_ been able to replicate
> > something more complicated than what we ourselves are, yet natural
> > selection did it without even thinking.
>
> a very interesting argument, but not very pertinent - we don't have 10's of
> thousands of years or even really 10's of years. We have to use intellect
> to root out the obviously bad ideas, and even more importantly the
> bad-but-not-obviously-bad ideas.

Directed evolution - ie evolution that has more specific goals, and faster
penalties for perceived failure, works on the scale of tens or hundreds of
years, not tens of thousands. Look at dog breeding, but look even more at
livestock breeding, where just a few decades have made a big difference.

The belief that evolution is necessarily slow is totally unfounded.

HOWEVER, the belief that _too_ much direction is bad is certainly not
unfounded: it tends to show up major design problems that do not show up
with less control. Again, see overly aggressive breeding of some dogs
causing bad characteristics, and especially the poultry industry.

And you have to realize that the above is with entities that are much more
complex than your random software project, and where historically you have
not been able to actually influence anything but selection itself.

Being able to influence not just selection, but actually influencing the
_mutations_ that happen directly obviously cuts down the time by another
large piece.

In short, your comment about "not pertinent" only shows that you are
either not very well informed about biological changes, or, more likely,
it's just a gut reaction without actually _thinking_ about it.

Biological evolution is alive and well, and does not take millions of
years to make changes. In fact, most paleontologists consider some of
the changes due to natural disasters to have happened surprisingly fast,
even in the _absence_ of "intelligent direction".

Of course, at the same time evolution _does_ heavily tend to favour
"stable" life-forms (sharks and many amphibians have been around for
millions of years). That's not because evolution is slow, but simply
because good lifeforms work well in their environments, and if the
environment doesn't change radically they have very few pressures to
change.

There is no inherent "goodness" in change. In fact, there are a lot of
reasons _against_ change, something we often forget in technology. The
fact that evolution is slow when there is no big reason to evolve is a
_goodness_, not a strike against it.

> > Quite frankly, Sun is doomed. And it has nothing to do with their
> > engineering practices or their coding style.
>

> I'd love to hear your thoughts on why.

You heard them above. Sun is basically inbreeding. That tends to be good
to bring out specific characteristics of a breed, and tends to be good for
_specialization_. But it's horrible for actual survival, and generates a
very one-sided system that does not adapt well to change.

Microsoft, for all the arguments against them, is better off simply
because of the size of its population - they have a much wider consumer
base, which in turn has caused them largely to avoid specialization. As a
result, Microsoft has a much wider appeal - and suddenly most of the
niches that Sun used to have are all gone, and it's fighting for its life
in many of its remaining ones.

Why do you think Linux ends up being the most widely deployed Unix? It's
avoided niches, it's avoided inbreeding, and not being too directed means
that it doesn't get the problems you see with unbalanced systems.

Face it, being one-sided is a BAD THING. Unix was dying because it was
becoming much too one-sided.

Try to prove me wrong.

Linus

Linus Torvalds

unread,
Nov 30, 2001, 10:50:09 PM11/30/01
to

On Fri, 30 Nov 2001, Larry McVoy wrote:
>
> I can't believe the crap you are spewing on this one and I don't think you
> do either. If you do, you need a break. I'm all for letting people explore,
> let software evolve, that's all good. But somebody needs to keep an eye on
> it.

Like somebody had to keep an eye on our evolution so that you had a chance
to be around?

Who's naive?

> If that's not true, Linus, then bow out. You aren't needed and *you*
> just proved it.

Oh, absolutely.

I wish more people realized it. Some people realize it only when they get
really pissed off at me and say "Go screw yourself, I can do this on my
own". And you know what? They are right too, even if they come to that
conclusion for what I consider the wrong reasons.

The reason I'm doing Linux is not because I think I'm "needed". It's
because I enjoy it, and because I happen to believe that I'm better than
most at it. Not necessarily better than everybody else around there, but
good enough, and with the social ties to make me unbeatable right now.

But "indispensable"? Grow up, Larry. You give me too much credit.

And why should I bow out just because I'm not indispensable? Are you
indispensable for the continued well-being of humanity? I believe not,
although you are of course free to disagree. Should you thus "bow out" of
your life just because you're strictly speaking not really needed?

Do I direct some stuff? Yes. But, quite frankly, so do many others. Alan
Cox, Al Viro, David Miller, even you. And a lot of companies, which are
part of the evolution whether they realize it or not. And all the users,
who end up being part of the "fitness testing".

And yes, I actually do believe in what I'm saying.

Victor Yodaiken

unread,
Dec 1, 2001, 12:00:15 AM12/1/01
to
On Fri, Nov 30, 2001 at 07:15:55PM -0800, Linus Torvalds wrote:
> And I will claim that nobody else "designed" Linux any more than I did,
> and I doubt I'll have many people disagreeing. It grew. It grew with a lot
> of mutations - and because the mutations were less than random, they were
> faster and more directed than alpha-particles in DNA.

Ok. There was no design, just "less than random mutations".
Deep.

There was an overall architecture, from Dennis and Ken. There
were a couple of good, sound design principles, and there were a
couple of people with some sense of how it should work together.
None of that is incompatible with lots of trial and error and learn
by doing.

Here's a characteristic good Linux design method (or call it the "less than
random mutation method" if that makes you feel happy): read the literature,
think hard, try something, implement it, find it doesn't do what was hoped,
and tear the whole thing down.
That's design. Undesigned systems use the method of: implement some crap and
then try to engineer the consequences away.

>
> > The question is whether Linux can still be designed at
> > current scale.
>
> Trust me, it never was.

Trust you? Ha.

> And I will go further and claim that _no_ major software project that has
> been successful in a general marketplace (as opposed to niches) has ever
> gone through those nice lifecycles they tell you about in CompSci classes.

That's classic:
A) "trust me"
B) now here's a monster bit of misdirection for you to choke on.

Does anyone believe in those stupid software lifecycles?
No.
So does it follow that this has anything to do with design?
No.


> Have you _ever_ heard of a project that actually started off with trying
> to figure out what it should do, a rigorous design phase, and a
> implementation phase?
>
> Dream on.

I've seen better arguments in slashdot.

There was no puppet master - ok.
There was no step by step recipe that showed how it should all work - ok
There was no design involved - nope.

Mike Fedyk

unread,
Dec 1, 2001, 12:10:08 AM12/1/01
to
On Sat, Dec 01, 2001 at 02:21:57AM +0100, Stephan von Krawczynski wrote:
> _the_ last guy talking about it at all. So lets simply assume this: the Solaris
> UP market is already gone, if you are talking about the _mass_ market. This
> parrot is _deceased_! It is plain dead.

Not completely. Many people who *need* to develop for solaris on sun
hardware will run solaris x86 at home (or maybe on a corporate laptop) to be
able to work at home and test the software there too. I know one such
developer myself, there have to be more.

> So maybe
> the real good choice would be this: let the good parts of Solaris (and maybe
> its SMP features) migrate into linux.

Before 2.3 and 2.4 there probably would've been much more contention against
something like this. Even now with large SMP scalability goals, I think it
would be hard to get something like this to be accepted into Linux.

mf

Alexander Viro

unread,
Dec 1, 2001, 12:20:11 AM12/1/01
to

On Fri, 30 Nov 2001, Mike Fedyk wrote:

> This is Linux-Kernel. Each developer is on their own on how they pay
> their bills. The question is... Why not accept a *driver* that *works* but
> the source doesn't look so good?

Because this "works" may very well include exploitable buffer overruns in
kernel mode. I had seen that - ioctl() assuming that nobody would pass
it deliberately incorrect arguments and doing something like
copy_from_user(&foo, arg, sizeof(foo));
copy_from_user(bar, foo.addr, foo.len);

The problem being, seeing what really happens required half an hour of
wading through the shitload of #defines. _After_ seeing copy_from_user()
in a macro during a grep over the tree - that's what had triggered the
further search.
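
To make the failure mode concrete, here is a minimal illustrative sketch
(not the driver Al is describing; struct foo_req, the bar buffer and the
foo_copyin entry point are invented for the example) of the quoted pattern,
plus the bounds check the original was missing:

#include <linux/kernel.h>
#include <linux/errno.h>
#include <asm/uaccess.h>

struct foo_req {
        void            *addr;  /* pointer supplied by user space */
        unsigned int    len;    /* length supplied by user space  */
};

static char bar[4096];          /* hypothetical kernel-side buffer */

static int foo_copyin(void *arg)
{
        struct foo_req foo;

        if (copy_from_user(&foo, arg, sizeof(foo)))
                return -EFAULT;

        /* The unsafe original stops here and blindly trusts
         * foo.len and foo.addr taken from user space. */
        if (foo.len > sizeof(bar))
                return -EINVAL;

        if (copy_from_user(bar, foo.addr, foo.len))
                return -EFAULT; /* bad user pointer: fail, don't oops */

        return 0;
}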

> What really needs to happen...
>
> Accept the driver, but also accept future submissions that *only* clean up
> the comments. It has been said that patches with comments and without code
> have been notoriously dropped.

Commented pile of shit is still nothing but a pile of shit. If you comment
Netscape to hell and back it will still remain a festering dungpile. Same
for NT, GNOME, yodda, yodda...

Mike Fedyk

unread,
Dec 1, 2001, 1:00:09 AM12/1/01
to
On Fri, Nov 30, 2001 at 03:57:40PM -0800, Larry McVoy wrote:
> Here's how you get both. Fork the development efforts into the SMP part
> and the uniprocessor part.

Basically, with Linux and enough #ifdefs, you end up with both in one. IIUC

What would be nice is UP only drivers for initial release. Write a driver
module that says "I'm made for UP and haven't been tested with SMP/HighMEM"
so if you try to compile it with those features it would break with a
helpful error message.

What would be interesting would be SMP with support for UP. The UP-only
module would be inserted into an SMP kernel, but would only work on one
processor, and would have source compatibility with both UP and SMP kernels.
No non-UP locking required.

Is something like this possible?
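
The first idea can already be sketched with nothing but the preprocessor
(purely hypothetical; the driver name and the message are made up, only
CONFIG_SMP and CONFIG_HIGHMEM are real config symbols):

#include <linux/config.h>

/* Hypothetical guard for a UP-only driver: refuse to build when the
 * kernel configuration enables features this driver was never tested
 * with, and say so in a helpful error message. */
#if defined(CONFIG_SMP) || defined(CONFIG_HIGHMEM)
#error "foodrv has only been tested on UP, non-HighMEM kernels"
#endif

The second idea, inserting a UP-only module into an SMP kernel and pinning
it to one processor, is a much bigger job, since the kernel itself would
then have to provide the serialization that the driver is not doing.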

Stephen Satchell

unread,
Dec 1, 2001, 1:00:13 AM12/1/01
to
[cc list trimmed]

At 06:02 PM 11/30/01 -0800, Tim Hockin wrote:


> > Linus sez:
> > I'm deadly serious: we humans have _never_ been able to replicate
> > something more complicated than what we ourselves are, yet natural
> > selection did it without even thinking.
>

>a very interesting argument, but not very pertinent - we don't have 10's of
>thousands of years or even really 10's of years. We have to use intellect
>to root out the obviously bad ideas, and even more importantly the
>bad-but-not-obviously-bad ideas.

Disagree with your position strongly. It's very pertinent.

Most of the bad-but-not-obviously-bad ideas get rooted out by people trying
them and finding them to be wanting. Take, for example, the VM flap in the
2.4.* tree: an astonishing side effect of the operation of the VM system
caused people to come up with one that wasn't so astonishing. We're not
sure why the original VM caused such problems. We fixed it anyway. (No, I
played no part in that particular adventure, I was just viewing from the
sidelines.)

The "Linux Way" as I understand it is to release early and release
often. That means that we go through a "generation" of released code every
few weeks, and a "generation" of beta candidates just about daily...and if
you include the patches that appear here during every 24 hours, the
generation cycle is even faster than that. That means that any mutations
that are detrimental to the organism are exposed within days -- sometimes
even hours -- of their introduction into the code base.

When we have a development tree open (as 2.5 is now freshly open) there are
even more generations of code, which further makes natural selection viable
as a weeding process for good and bad code. The difference is that the
number of people affected by the weeding process is smaller, and the
probability of killing production systems with mutations becomes
smaller. The population of the organism is thus healthier because
mutations affect a smaller fraction of the population, and the chances of
expensive illness are reduced.

Beneficial mutations are "back-ported" into the 2.4 and even the 2.2 code
trees, mutations that have proven their worth by extensive experimentation
and experience. Unlike the biological equivalent, this selective spreading
of mutations further improves the health of the population of organisms.

Now that I've stretched the analogy as far as I care to, I will stop
now. Please consider the life-cycle of the kernel when thinking about what
Linus said.

Just my pair-o-pennies(tm).

Stephen Satchell

Alan Cox

unread,
Dec 1, 2001, 5:10:09 AM12/1/01
to
> > Wasn't it you that were saying that Linux will never scale with more than
> > 2 CPUs ?
>
> No, that wasn't me. I said it shouldn't scale beyond 4 cpus. I'd be pretty
> lame if I said it couldn't scale with more than 2. Should != could.

Question: What happens when people stick 8 threads of execution on a die with
a single L2 cache ?

Alan Cox

unread,
Dec 1, 2001, 5:10:08 AM12/1/01
to
> sufficient for development of a great 1-to-4-way kernel, and
> that the biggest force against that is the introduction of
> fine-grained locking. Are you sure about this? Do you see
> ways in which the uniprocessor team can improve?

ccCluster seems a sane idea to me. I don't buy Larry's software engineering
thesis, but ccCluster makes sense simply because when you want an efficient
system in computing you get it by not pretending one thing is another.
SMP works best when the processors are not doing anything that interacts
with another CPU.

> key people get atracted into mm/*.c, fs/*.c, net/most_everything
> and kernel/*.c while other great wilderness of the tree (with
> honourable exceptions) get moldier. How to address that?

Actually there are lots of people who work on the driver code nowadays,
notably the janitors. The biggest problem there IMHO is that when it comes
to driver code Linus has no taste, so he keeps accepting driver patches
which IMHO are firmly at the hamburger end of "taste".

Another thing for 2.5 is going to be to weed out the unused and unmaintained
drivers, and either someone fixes them or they go down the cosmic toilet and
we pull the flush handle before 2.6 comes out.

Thankfully the scsi layer breakage is going to help no end in the area of
clockwork 8-bit scsi controllers, which is major culprit number 1. Number 2
is probably the audio, which is hopefully going to go away with ALSA-based
code.

Alan

Gérard Roudier

unread,
Dec 1, 2001, 7:00:07 AM12/1/01
to

Hi Keith,

When I had to prepare a Makefile for sym-2 as a sub-directory of
drivers/scsi (sym53c8xx_2), it didn't seem to me that a non-ugly way to do
so was possible. I mean that using a sub-directory for scsi drivers wasn't
expected by the normal kernel build procedure. Looking into some network
parts that wanted to do so, I only discovered hacky stuff. This left me in
the situation of having to do it in an ugly way.

As you cannot ignore, the scsi driver directory has been a mess for years due
to too many source files in a single directory. Will such ugliness be
cleaned up in linux-2.5?

By the way, in my opinion, software that is as ugly as you describe but
no more looks like excellentware to me. :-)

Gérard.


On Sat, 1 Dec 2001, Keith Owens wrote:

> On 30 Nov 2001 18:15:28 +0100,
> Henning Schmiedehausen <h...@intermeta.de> wrote:
> >Are you willing to judge "ugliness" of kernel drivers? What is ugly?
> >... Is the aic7xxx driver ugly because it needs libdb ? ...
>
> Yes, and no, mainly yes. Requiring libdb, lex and yacc to generate
> the firmware is not ugly, user space programs can use any tools that
> the developer needs. I have no opinion either way about the driver
> code, from what I can tell aic7xxx is a decent SCSI driver, once it is
> built.
>
> What is ugly in aic7xxx is :-
>
> * Kludging BSD makefile style into aic7xxx/aicasm/Makefile. It is not
> compatible with the linux kernel makefile style.
>
> * Using a manual flag (CONFIG_AIC7XXX_BUILD_FIRMWARE) instead of
> automatically detecting when the firmware needs to be rebuilt. Users
> who set that flag by mistake but do not have libdb, lex and yacc
> cannot compile a kernel.
>
> * Not checking that the db.h file it picked will actually compile and
> link.
>
> * Butchering the modules_install rule to add a special case for aic7xxx
> instead of using the same method that every other module uses.
>
> * Including endian.h in the aic7xxx driver, but endian.h is a user
> space include. Code that is linked into the kernel or a module
> MUST NOT include user space headers.
>
> * Not correctly defining the dependencies between generated headers and
> the code that includes those headers. Generated headers require
> explicit dependencies, the only reason it works is because aic7xxx ...
>
> * Ships generated files and overwrites them under the same name.
> Shipping generated files is bad enough but is sometimes necessary when
> the end user might not have the tools to build the files (libdb, lex,
> yacc). Overwriting the shipped files under the same name is asking
> for problems with source repositories and generating spurious diffs.
>
> All of the above problems are caused by a developer who insists on
> doing his own makefile style instead of following the kernel standards
> for makefiles. Developers with their own standards are BAD!
>
> BTW, I have made repeated offers to rewrite the aic7xx makefiles for
> 2.4 but the aic7xxx maintainer refuses to do so. I _will_ rewrite them
> in 2.5, as part of the kernel build 2.5 redesign.
>
> Keith Owens, kernel build maintainer.
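
On the endian.h item in Keith's list above, a minimal sketch of the
in-kernel alternative (illustrative only; pack_for_hw and unpack_from_hw
are invented names): kernel and module code gets its byte-order helpers
from <asm/byteorder.h> rather than the userspace <endian.h>:

#include <linux/types.h>
#include <asm/byteorder.h>      /* kernel header, not userspace <endian.h> */

/* Hardware/firmware structures are usually fixed little-endian,
 * whatever the CPU's own byte order happens to be. */
static __u32 pack_for_hw(__u32 host_val)
{
        return cpu_to_le32(host_val);
}

static __u32 unpack_from_hw(__u32 wire_val)
{
        return le32_to_cpu(wire_val);
}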

Gérard Roudier

unread,
Dec 1, 2001, 7:30:12 AM12/1/01
to

On Sat, 1 Dec 2001, Alan Cox wrote:

> > > Wasn't it you that were saying that Linux will never scale with more than
> > > 2 CPUs ?
> >
> > No, that wasn't me. I said it shouldn't scale beyond 4 cpus. I'd be pretty
> > lame if I said it couldn't scale with more than 2. Should != could.
>
> Question: What happens when people stick 8 threads of execution on a die with
> a single L2 cache ?

As long as we do not have clean asynchronous mechanisms available from
user land, some applications will have to use more threads of execution
than needed, even with programmers who aren't thread-maniacs.

Response to your question: if the problem is to optimize IOs against 8
slow devices using synchronous IO APIs, you will get far better
performance. :-)

Gérard.

Rik van Riel

unread,
Dec 1, 2001, 11:30:12 AM12/1/01
to
On Sat, 1 Dec 2001, Jamie Lokier wrote:
> Mike Castle wrote:
> > Linux is one big genetic algorithms project?
>
> No but I'm sure the VM layer is :-)

I guess we now know the reason Linus purposefully
makes sure no comment ever matches the code ;/

Rik
--
Shortwave goes a long way: irc.starchat.net #swl

http://www.surriel.com/ http://distro.conectiva.com/


Ingo Oeser

unread,
Dec 1, 2001, 1:10:11 PM12/1/01
to
On Sat, Dec 01, 2001 at 09:18:54AM -0200, Rik van Riel wrote:
> Biological selection does nothing except removing the weak
> ones, it cannot automatically create systems which work well.
>
> In short, I believe the biological selection is just that,
> selection. The creation of stuff will need some direction.

Creation is as simple as:

1. If you encounter a problem, try to solve it.
2. If you cannot solve it, mark/document/publish it and try to
work around it for now.
3. If you cannot work around it, leave it to other people and
offer help.

Which is pretty much what this list here is for ;-)

Regards

Ingo Oeser
--
Science is what we can tell a computer. Art is everything else. --- D.E.Knuth

Khyron

unread,
Dec 2, 2001, 1:40:08 AM12/2/01
to
In response to:

> "it works/does not work for me" is not testing. Testing
> is _actively_ trying to break things, _very_ preferably
> by another person that wrote the code and to do it
> in documentable and reproducible way. I don't see many
> people doing it.

from "Stanislav Meduna <st...@meduna.org>", Alan Cox said:

"If you want a high quality, tested supported kernel which
has been through extensive QA then use kernel for a
reputable vendor, or do the QA work yourself or with other
people. We have kernel janitors, so why not kernel QA
projects ?

"However you'll need a lot of time, a lot of hardware and
a lot of attention to procedure"

But in his earlier e-mail, Stanislav Meduna said:

"Evolution does not have the option to vote with its feet.
The people do. While Linux is not much more stable than it
was and goes through a painful stabilization cycle on every
major release, Windows does go up with the general stability with
every release. W2k were better than NT, XP are better than W2k.
Windows (I mean the NT-branch) did never eat my filesystems.
Bad combination of USB and devfs was able to do this in half
an hour, and this was *VENDOR KERNEL* that did hopefully get
more testing than that what is released to the general public.
I surely cannot recommend using 2.4 to our customers."

which seems to negate the point Alan was attempting to make.

Just thought I'd set the record straight.

NOTE: Emphasis mine.


"Everyone's got a story to tell, and everyone's got some pain.
And so do you. Do you think you are invisible?
And everyone's got a story to sell, and everyone is strange.
And so are you. Did you think you were invincible?"
- "Invisible", Majik Alex

Rik van Riel

unread,
Dec 2, 2001, 8:00:16 AM12/2/01
to
On Sun, 2 Dec 2001, Stanislav Meduna wrote:

> The need of the VM change is probably a classical example -
> why was it not clear at the 2.4.0-pre1, that the current
> implementation is broken to the point of no repair?

It wasn't broken to the point of no repair until Linus
started integrating use-once and dropping bugfixes.

Rik
--
Shortwave goes a long way: irc.starchat.net #swl

http://www.surriel.com/ http://distro.conectiva.com/


Martin Dalecki

unread,
Dec 2, 2001, 11:40:13 AM12/2/01
to
Alan Cox wrote:
>
> > > Wasn't it you that were saying that Linux will never scale with more than
> > > 2 CPUs ?
> >
> > No, that wasn't me. I said it shouldn't scale beyond 4 cpus. I'd be pretty
> > lame if I said it couldn't scale with more than 2. Should != could.
>
> Question: What happens when people stick 8 threads of execution on a die with
> a single L2 cache ?

That has already been researched. Going beyond 2 threads on a single CPU
engine doesn't give you very much... The first step gives about 25%,
the second only about 5%. There are papers in the IBM research magazine on
this topic in the context of the PowerPC.

Ingo Molnar

unread,
Dec 2, 2001, 1:20:13 PM12/2/01
to

On Sun, 2 Dec 2001, Rik van Riel wrote:

> Note that this screams for some minimal kind of modularity on the
> source level, trying to limit the "magic" to as small a portion of the
> code base as possible.

Linux is pretty modular. It's not dogmatically so, nor does it attempt to
guarantee absolute or externally visible modularity, but most parts of it
are pretty modular.

> Also, natural selection tends to favour the best return/effort ratio,
> not the best end result. [...]

there is no 'effort' involved in evolution. Nature does not select along
the path we went. It's exactly this property that explains why it took 5
billion years to get here, while Linux took just 10 years to be built from
the ground up. The fact is that bacteria took pretty random paths for 2
billion years to get to the next level. That's a lot of 'effort'. So *once*
we have something that is better, it does not matter how long it took to
get there.

( This kind of 'development effort' is not the same as 'robustness', ie.
the amount of effort needed to keep it at the complexity level it is,
against small perturbations in the environment - but that is a different
kind of effort. )

> [...] Letting a kernel take shape due to natural selection pressure
> could well result in a system which is relatively simple, works well
> for 95% of the population, has the old cruft level at the upper limit
> of what's deemed acceptable and completely breaks for the last 5% of
> the population.

no. An insect that is 95.1% effective digesting banana leaves in the jungle
will completely eradicate a competing insect that is 95.0% effective
digesting banana leaves, within a few hundred generations. (provided both
insects have exactly the same parameters otherwise.) And it does not
matter whether it took 100 million years to get to 95.1%, or just one
lucky set of alpha particles hitting a specific DNA part of the original
insect.

Ingo

Oliver Xymoron

unread,
Dec 2, 2001, 1:30:20 PM12/2/01
to
On Sun, 2 Dec 2001, Jeff Garzik wrote:

> Oliver Xymoron wrote:
> >
> > And it's practically obsolete itself, outside of the ARM directory. What
> > I'm proposing is something in the Code Maturity menu that's analogous to
> > EXPERIMENTAL along with a big (UNMAINTAINED) marker next to unmaintained
> > drivers. Obsolete and unmaintained and deprecated all mean slightly
> > different things, by the way. So the config option would probably say
> > 'Show obsolete, unmaintained, or deprecated items?' and mark each item
> > appropriately. Anything that no one made a fuss about by 2.7 would be
> > candidates for removal.
>
> The idea behind CONFIG_OBSOLETE is supposed to be that it does not
> actually appear as a Y/N option. You enclose a Config.in option with
> that, and it disappears

Which works for stuff that is really known broken. It doesn't work for
stuff that you'd like to get rid of but you suspect people may still be
using (sbpcd) or stuff that you want to phase out (initrd).

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

Stephan von Krawczynski

unread,
Dec 2, 2001, 2:50:11 PM12/2/01
to
On Sun, 2 Dec 2001 20:17:33 +0100
Daniel Phillips <phil...@bonn-fries.net> wrote:

> > You mean "controlled" up to the point where your small environment got
> > randomly hit by a smaller sized stone coming right from the nowhere corner
> > of the universe, or not?
>
> See "principal" above. There's a random element in the game of bridge, too,
> but it's not the principal element.

Please name another planet where your controlling principle is proven to even
exist.

For a being that is located on a "very small stage in a vast cosmic arena"
(Carl Sagan) you have a very strong opinion about the big picture. How come you
think you are able to _see_ it at all?
Wouldn't it be more accurate to simply admit that we _can't_ see it (at least
currently) and therefore it must be referred to as _random_? Obviously nobody
is hindered from trying to find more pixels of the big picture. But shouldn't
one keep in mind that the picture is possibly _very_ big, compared to oneself
and the levels of complexity we are adjusted to?

A dropped stone is possibly only falling _down_ relative to _your_ personal
point of view.

Regards,
Stephan

n7...@swbell.net

unread,
Dec 2, 2001, 4:00:11 PM12/2/01
to
I have been following this thread with a mixture of amusement and exasperation - amusement that intelligent people like Linus, who ought to know better, are spouting this evolution stuff, and exasperation that some people think that because someone's an expert in one thing, they are an expert in all things.

The idea of genetic evolution itself is complete nonsense - biological systems don't evolve genetically, they evolve environmentally. Biological systems change as a result of random mutation, and what doesn't work doesn't survive. What people try to pass off as evolution is simply the less fit not surviving to pass on their bad genes. Sort of like the hundred monkeys idea.

But that is all completely irrelevant to coding, since it is extremely inefficient for systems to "evolve" based on trial and error. The way modern systems evolve is based on (hopefully) *intelligent* selection - I write a patch, submit it to Linus. He doesn't just accept it, throw it in the kernel, and that's it - he looks at it, what it does, and decides if it fits in the Grand Scheme of things - kernel efficiency, speed, flexibility, extensibility, and maintainability - and *then* decides if it makes it in. The key difference is that in nature, mutation is random because it can afford to be - in coding, it isn't, because we don't have thousands or millions of years to find out whether or not something works.

That being said, I am well aware that "genetic programming" has made some progress in that direction, mainly because it doesn't take millennia to figure out what works and what doesn't. But that's a long way from "evolving" an entire operating system. I don't believe for a moment that homo sapiens "evolved" from pond scum (although I might believe that some fellow homo sapiens *are* pond scum!) - it only makes sense that we are a created species, and that Homo erectus and all the rest were early genetic experiments. Who created homo sapiens is beyond the scope of this discussion ;)

Original Message:
-----------------
From: Larry McVoy l...@bitmover.com
Date: Sun, 02 Dec 2001 12:25:26 -0800
To: vonb...@sleipnir.valparaiso.cl, yoda...@fsmlabs.com, linux-...@vger.kernel.org
Subject: Re: Coding style - a non-issue


On Sat, Dec 01, 2001 at 08:18:06PM -0300, Horst von Brand wrote:
> Victor Yodaiken <yoda...@fsmlabs.com> said:
> > Linux is what it is because of design, not accident. And you know
> > that better than anyone.
>
> I'd say it is better because the mutations themselves (individual patches)
> go through a _very_ harsh evaluation before being applied in the "official"
> sources.

Which is exactly Victor's point. That evaluation is the design. If the
mutation argument held water then Linus would apply *ALL* patches and then
remove the bad ones. But he doesn't. Which just goes to show that on this
mutation nonsense, he's just spouting off.


--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm


Brandon McCombs

unread,
Dec 2, 2001, 4:40:10 PM12/2/01
to
On Sun, 2 Dec 2001 15:53:46 -0500
"n7...@swbell.net" <n7...@swbell.net> wrote:

> I have been following this thread with a mixture of amusement and exasperation - amusement that intelligent people like Linus, who ought to know better, are spouting this evolution stuff, and exasperation that some people think that because someone's an expert in one thing, they are an expert in all things.

No offense toward anyone but I find that many non-religious people can be found in the CompSci area of expertise. I'm not sure why this is but besides myself and another friend all the other people I know in that general field are atheists. It would only make sense that we would hear atheist type remarks within these discussions just as we would hear Christian remarks in another field of expertise that seems to attract Christians.

>
> The idea of genetic evolution itself is complete nonsense - biological systems don't evolve genetically, they evolve environmentally. Biological systems change as a result of random mutation, and what doesn't work doesn't survive. What people try to pass off as evolution is simply the less fit not surviving to pass on their bad genes. Sort of like the hundred monkeys idea.

True. Many mutations in human DNA cause the resulting human to be unable to reproduce once they reach the age where a normal human could do so.


>
> But that is all completely irrelevent to coding, since it is extremely inefficient for systems to "evolve" based on trial and error. The way modern systems evolve is based on (hopefully) *intelligent* selection - I write a patch, submit it to Linus. He doesn't accept it, throw it in the kernel, and that's it - he looks at it, what it does, and decides if it fits in the Grand Scheme of things - kernel efficiency, speed, flexibility, extensability, and maintainability - and *then* decides if it makes it in. They key difference is that in nature, mutation is random because it can afford to be - in coding, it isn't because we don't have thousands or millions of years to find out whether or not something works or not.

We have a way of being able to direct the evolution of our code as we can control the bad parts and the good parts and what gets added and what doesn't. We have no control over our DNA (human genetics may have proven me wrong already but if not, it shouldn't take more than a few months more) so mutations in the human race are more random.

>
> That being said, I am well aware that "genetic programming" has made some progress in that direction, mainly because it doesn't take millenia to figure out what works and what doesn't. But that's a long way from "evolving" an entire operating system. I don't believe for a moment that homo sapiens "evolved" from pond scum although I might believe that some fellow homo sapiens *are* pond scum!) -

*finally* someone who doesn't believe in evolution of the human race. As a side note, i've heard some people say that a bolt of lightning triggered some proteins to start growing into single celled organisms and then into what we now call today human beings. I take offense that I came from a single celled organism. I believe the more complex an object or system is the less randomness can be added in order to arrive at the current/final version. I think we all agree the human body is the most complex object in the universe so how can we say that our existence was an accident?

An operating system is a complex system as well. We all know code doesn't evolve on its own to generate an operating system right? :) It has to be created and as time goes on code forks are sometimes introduced. In humans that could be somewhat akin to whites, blacks, asians, etc. But they were all created from the code that God started with. He just released his source code(dna) a little later in the development tree than some people may have wanted so there was no point in letting us evolve into something more as we were already different enough. :)

>it only makes sense that we are a created species, and that Homo erectus and all the rest were early genetic experiments. Who created homo sapiens is beyond the scope of this discussion ;)

It is beyond the scope. If we attempted that topic we would be branded as close-minded even though the others (read: non-religious) can do it and they defend themselves by saying its free speech.

my time is out for this post.
brandon

Larry McVoy

unread,
Dec 3, 2001, 4:10:14 AM12/3/01
to
> Please Larry, have a look at the environment: nobody here owns a box
> with 128 CPUs. Most of the people here take care of things they either
> - own themselves
> - have their hands own at work
> - get paid for
>
> You will not find _any_ match with 128 CPUs here.

Nor will you find any match with 4 or 8 CPU systems, except in very rare
cases. Yet changes go into the system for 8 way and 16 way performance.
That's a mistake.

I'd be ecstatic if the hackers limited themselves to what was commonly
available, that is essentially what I'm arguing for.

--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

Daniel Phillips

unread,
Dec 3, 2001, 7:00:15 PM12/3/01
to
On December 2, 2001 09:53 pm, n7...@swbell.net wrote:
> I have been following this thread with a mixture of amusement and
> exasperation - amusement that intelligent people like Linus, who ought to
> know better, are spouting this evolution stuff, and exasperation that some
> people think that because someone's an expert in one thing, they are an
> expert in all things.

That's because you're not quite clear on the concept.

> ...in nature, mutation is random

It isn't random, if it were most mutated individuals would be stillborn.

> because it can afford to be...

No it can't.

--
Daniel

Stanislav Meduna

unread,
Dec 3, 2001, 7:00:17 PM12/3/01
to
Horst von Brand wrote:

> Have you got any idea how QA is done in closed environments?

Yes I do. I have written commercial software for 10 years and have
experience with QA systems in two companies, one of them
major. I think I have seen the full range of QA in various projects -
from complete negation to a silly, bureaucratic, inefficient one.

> Complex software *has* bugs, bugs which aren't apparent
> except under unusual circumstances are rarely found in the
> first round of bug chasing.

Sure. But we now have 2.4.16, not 2.4.0, and guess what? -
there is a big thread about fs corruption going on right now
on l-k :-( This should _not_ happen in the stab{le,ilizing}
series, and if it happened, the cause should be identified
and measures taken.

I for one think that the kernel has overgrown its current
development model and that some _incremental_ steps
in the direction of both more formal control and delegation
of responsibilities are needed. I think that the most active
kernel developers should discuss the future of the development
model, as they are the only ones that can really come up
with a solution.

It is of course only my opinion - if I am alone having it, forget it.

> > As a user of the vendor's kernel I have no idea what to do
> > with a bug.
>
> Report it to the vendor, through the documented channels?

Did this. It has been two months; I did some cornering of the problem
and augmented the report several times. The issue is still NEW,
without any response asking me to try a patch, supply more details
or such. Yes, this speaks more of the vendor than of Linux.
But what impression do you think the average user gets from
such an experience?

Regards
--
Stano

David S. Miller

unread,
Dec 3, 2001, 7:00:20 PM12/3/01
to
From: Keith Owens <ka...@ocs.com.au>
Date: Sat, 01 Dec 2001 12:17:03 +1100


What is ugly in aic7xxx is :-

You missed:

* #undef's "current"

Stephan von Krawczynski

unread,
Dec 3, 2001, 7:00:22 PM12/3/01
to
> On Sat, Dec 01, 2001 at 08:05:59PM -0300, Horst von Brand wrote:
> > Just as Linus said, the development is shaped by its environment.
>
> Really? So then people should be designing for 128 CPU machines, right?
> So why is it that 100% of the SMP patches are incremental? Linux is
> following exactly the same path taken by every other OS, 1->2, then 2->4,
> then 4->8, etc. By your logic, someone should be sitting down and saying
> here is how you get to 128. Other than myself, no one is doing that and
> I'm not really a Linux kernel hack, so I don't count.
>
> So why is it that the development is just doing what has been done before?

Please Larry, have a look at the environment: nobody here owns a box
with 128 CPUs. Most of the people here take care of things they either
- own themselves
- have their hands on at work
- get paid for

You will not find _any_ match with 128 CPUs here.

_Obviously_ you are completely right if this were a company _building_
these boxes. Then your question is the right one, as they would get
paid for the job.
But this is a different environment. As long as you cannot buy these
boxes at some local store for a buck and a bit, you will have no
chance to find willing people for your approach. Therefore it is
absolutely clear, that it will (again) walk the line from 1,2,4,8 ...
CPUs, because the boxes will be available along this line.

I give you this advice: if you _really_ want to move something in this
area, find someone who should care about this specific topic, and has
the money _and_ the will to pay for development of critical GPL code
like this.
Take the _first_ step: create the environment. _Then_ people will come
and follow your direction.

Regards,
Stephan

Daniel Phillips

unread,
Dec 3, 2001, 7:00:23 PM12/3/01
to
On December 2, 2001 09:25 pm, Larry McVoy wrote:
> On Sat, Dec 01, 2001 at 08:18:06PM -0300, Horst von Brand wrote:
> > Victor Yodaiken <yoda...@fsmlabs.com> said:
> > > Linux is what it is because of design, not accident. And you know
> > > that better than anyone.
> >
> > I'd say it is better because the mutations themselves (individual patches)
> > go through a _very_ harsh evaluation before being applied in the "official"
> > sources.
>
> Which is exactly Victor's point. That evaluation is the design.

Nope, that isn't design, that's reacting.

> If the mutation argument held water then Linus would apply *ALL* patches
> and then remove the bad ones. But he doesn't. Which just goes to show
> that on this mutation nonsense, he's just spouting off.

Hogwash ;) Please see my post above where I point out 'evolution isn't
random'. Your genes have a plan, if only a vague one. It goes something
like this: "we'll allow random variations, but only along certain lines,
within limits, and in certain combinations, and we'll try to stick to
variations that haven't killed us in the past."

Sounds a lot like how Linus does things, huh?

I'm sure Linus does have quite considerable talent for design, but I haven't
seen him exercise it much. Mostly he acts as a kind of goodness daemon,
sitting in his little pinhole and letting what he considers 'good' stuff pass
into the box. There's no doubt about it, it's different from the way you
like to develop, you and me both. Equally clearly, it works pretty well.

--
Daniel

Davide Libenzi

unread,
Dec 3, 2001, 7:00:24 PM12/3/01
to
On Sun, 2 Dec 2001, Davide Libenzi wrote:

> That's exactly the Linus point: no long term preventive design.

And now for the ones that don't speak Italish :

s/preventive/prior/


- Davide

Andrew Morton

unread,
Dec 3, 2001, 7:00:25 PM12/3/01
to
Dave Jones wrote:

>
> On Sun, 2 Dec 2001, Andrew Morton wrote:
>
> > > Really? So then people should be designing for 128 CPU machines, right?
> > Linux only supports 99 CPUs. At 100, "ksoftirqd_CPU100" overflows
> > task_struct.comm[].
> > Just thought I'd sneak in that helpful observation.
>
> Wasn't someone looking at fixing that problem so it didn't need a per-cpu
> thread ?

Not to my knowledge.

> 128 kernel threads sitting around waiting for a problem that
> rarely happens seems a little.. strange. (for want of a better word).

I've kinda lost the plot on ksoftirqd. Never really understood
why a thread was needed for this, nor why it runs at nice +20.
But things seem to be working now.
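
For reference, a standalone sketch of the arithmetic behind the comm[]
remark quoted above (assuming the 16-byte comm[] field that task_struct
had in kernels of this era; the program itself is just an illustration):

#include <stdio.h>

#define COMM_LEN 16     /* assumed size of task_struct.comm[] */

int main(void)
{
        char name[32];
        int cpu;

        for (cpu = 98; cpu <= 100; cpu++) {
                /* +1 for the terminating NUL byte */
                int needed = snprintf(name, sizeof(name),
                                      "ksoftirqd_CPU%d", cpu) + 1;
                printf("%-17s needs %2d bytes: %s\n", name, needed,
                       needed <= COMM_LEN ? "fits" : "overflows comm[16]");
        }
        return 0;
}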


H. Peter Anvin

unread,
Dec 3, 2001, 7:00:26 PM12/3/01
to
Followup to: <000b01c17b68$2ff846e0$30d8fea9@ecce>
By author: "[MOc]cda*mirabilos" <mira...@netcologne.de>
In newsgroup: linux.dev.kernel
>
> By the way, what happened to xiafs?
> Even back in 2.0.33 it didn't work (complaints after newfs).
>

It got ripped out because the vfs changed and no one ported it.

-hpa
--
<h...@transmeta.com> at work, <h...@zytor.com> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt <am...@zytor.com>

Keith Owens

unread,
Dec 3, 2001, 7:00:26 PM12/3/01
to
On Sun, 02 Dec 2001 15:21:57 -0800 (PST),
"David S. Miller" <da...@redhat.com> wrote:
> From: Keith Owens <ka...@ocs.com.au>
> Date: Sat, 01 Dec 2001 12:17:03 +1100
>
> What is ugly in aic7xxx is :-
>
>You missed:
>
>* #undef's "current"

Where? fgrep -ir current 2.4.17-pre2/drivers/scsi/aic7xxx did not find it.

Horst von Brand

unread,
Dec 3, 2001, 7:00:28 PM12/3/01
to
Stanislav Meduna <st...@meduna.org> said:
> "Alan Cox" at dec 01, 2001 09:18:15 said:

[...]

> > If you want a high quality, tested supported kernel which has been through
> > extensive QA then use kernel for a reputable vendor, or do the QA work
> > yourself or with other people.

> Correct. But this has one problem - it is splitting resources.
> Pushing much of the QA work later in the process means
> that the bugs are found later, that more people are
> doing this than absolutely necessary, and that much more
> communication (and this can be the most important bottleneck)
> is needed than necessary.

Have you got any idea how QA is done in closed environments?

> The need of the VM change is probably a classical example -
> why was it not clear at the 2.4.0-pre1, that the current
> implementation is broken to the point of no repair?

Perhaps because of the same phenomenon that made MS state "WinNT 4.0 has no
flaws" when asked about a nasty problem shortly after release, and it is
now at sp6a + numerous "hotfixes". Like Win2k which now has sp2. Like
Solaris, which still is being fixed. Etc., ad nauseam. Complex software
*has* bugs; bugs which aren't apparent except under unusual circumstances
are rarely found in the first round of bug chasing.

[...]

> As a user of the vendor's kernel I have no idea what to do
> with a bug.

Report it to the vendor, through the documented channels?

--
Horst von Brand vonb...@sleipnir.valparaiso.cl
Casilla 9G, Vin~a del Mar, Chile +56 32 672616

Larry McVoy

unread,
Dec 3, 2001, 7:00:32 PM12/3/01
to
On Sun, Dec 02, 2001 at 08:34:09PM -0500, David L. Parsley wrote:
> Larry McVoy wrote:
> > Which is exactly Victor's point. That evaluation is the design. If the
> > mutation argument held water then Linus would apply *ALL* patches and then
> > remove the bad ones. But he doesn't. Which just goes to show that on this
> > mutation nonsense, he's just spouting off.
>
> Eh, come on Larry. You're too smart for this crap (as are others, your
> straw just broke the camel's back). Linus was just using an analogy to
> illustrate some very valid points. All analogies break down when
> applied to the nth degree. Insulting Linus because you've found a spot
> where the analogy breaks is just ludicrous.

This whole mutation crap is ludicrous and if you read through the archives
you can find numerous examples where Linus himself says so. I have no idea
why he is saying what he is, but that's neither here nor there. Nonsense
is nonsense, regardless of who says it or why they say it.

Doesn't it strike you the least bit strange that when I challenge Linus to
bow out because he asserts that he isn't needed, this is just some grand
experiment in genetics which is working fine, he says everything would be
fine if he left but he isn't going to because he's having fun? Isn't that
just a tad convenient? It's a crock of crap too. Linus has excellent taste,
better than any OS guy I've run into in 20 years, and if he bowed out a ton
of crap would make it into the kernel that doesn't now. Genetic mutation
my ass. If you want an experiment in evolution, then let *everything* into
the kernel. That's how evolution works, it tries everything, it doesn't
prescreen. Go read Darwin, go think, there isn't any screening going on,
evolution *is* the screening.


--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

Trever L. Adams

unread,
Dec 3, 2001, 7:00:30 PM12/3/01
to

> *finally* someone who doesn't believe in evolution of the human race. As a side note, i've heard some people say that a bolt of lightning triggered some proteins to start growing into single celled organisms and then into what we now call today human beings. I take offense that I came from a single celled organism. I believe the more complex an object or system is the less randomness can be added in order to arrive at the current/final version. I think we all agree the human body is the most complex object in the universe so how can we say that our existence was an accident?
>

I personally will stay out of the religious side of this argument,
having been flamed for standing up for any religious stand point on this
list.

However, I just finished my two bio classes for my CS degree. It is
interesting that you mention this lightning theory. My bio book (sorry
no references and no quotes, maybe later) stated that many people
(60's-80's) have tried very hard to duplicate and find conditions
whereby simple molecules could even form basic RNA or other such
biological/organic compounds. They had some very minimal success. In
the end it was concluded that the methods they were trying probably
would never have created RNA and other such things that may have
assembled a cell. Some of these tests were based on this lightning
theory.

Maybe such spontaneous life could have happened another way... I don't
really know.

As for software evolution, I would have to weigh in with my opinion
being somewhere between Linus and many others. Software does evolve.
Just about any human project does. This is one reason why there are
"versions", "editions", etc. You can only design so much. Then you go
back and evolve it. Is Linus right that there was nearly no design? I
think he would know best about the earliest roots of Linux. However, I
think he is wrong that now there is no design (though there may be no
master plan, which would mean it is controlled evolution more than
engineered/designed).

Anyway, I will sink back into silence for now.

Trever Adams

Horst von Brand

unread,
Dec 3, 2001, 7:00:33 PM12/3/01
to
Larry McVoy <l...@bitmover.com> said:
> vonb...@sleipnir.valparaiso.cl on Sat, Dec 01, 2001 at 08:18:06PM -0300

[...]

> > I'd say it is better because the mutations themselves (individual patches)
> > go through a _very_ harsh evaluation before being applied in the "official"
> > sources.

> Which is exactly Victor's point. That evaluation is the design. If the
> mutation argument held water then Linus would apply *ALL* patches and then
> remove the bad ones. But he doesn't. Which just goes to show that on this
> mutation nonsense, he's just spouting off.

Who is to say that bad mutations can't be weeded out _before_ a full
organism is built? It seems not to happen openly in nature's evolution
(then again, there are non-viable embryos, various DNA repair mechanisms
that seem to go wrong all the time in certain parts of the genome, parts
that mutate very fast while others don't change, ...), but this is just a
metaphor, not a slavish following. We certainly (at least think we) can do
better than just random typing.

In your reading, the environment (which evaluates individuals) is the
design. Right (in the sense that you end up with individuals fit to that
environment), but also very wrong (as many quite different layouts will
work).


--
Horst von Brand vonb...@sleipnir.valparaiso.cl
Casilla 9G, Vin~a del Mar, Chile +56 32 672616

David L. Parsley

unread,
Dec 3, 2001, 7:00:27 PM12/3/01
to
Larry McVoy wrote:


> Which is exactly Victor's point. That evaluation is the design. If the
> mutation argument held water then Linus would apply *ALL* patches and then
> remove the bad ones. But he doesn't. Which just goes to show that on this
> mutation nonsense, he's just spouting off.

Eh, come on Larry. You're too smart for this crap (as are others, your
straw just broke the camel's back). Linus was just using an analogy to
illustrate some very valid points. All analogies break down when
applied to the nth degree. Insulting Linus because you've found a spot
where the analogy breaks is just ludicrous.

regards,
David

Eric W. Biederman

unread,
Dec 3, 2001, 7:00:32 PM12/3/01
to
Alan Cox <al...@lxorguk.ukuu.org.uk> writes:

> > The next incremental step is to get some good distributed and parallel
> > file systems. So you can share one filesystem across the cluster.
> > And there is some work going on in those areas. Lustre, gfs,
> > intermezzo.
>
> gfs went proprietary - you want opengfs

Right.

> A lot of good work on the rest of that multi-node clustering is going on
> already - take a look at the compaq open source site.

Basically my point.

> cccluster is more for numa boxes, but it needs the management and SSI views
> that the compaq stuff offers simply because most programmers won't program
> for a cccluster or manage one.

I've seen a fair number of MPI programs, and if you have a program
that takes weeks to run on a single system, there is a lot of
incentive to work it out. Plus I have read about a lot of web sites
that are running on a farm of servers. Admittedly the normal
architecture has one fat database server behind the web servers, but
that brings me back to needing good distributed storage techniques.

And I really don't care if most programmers won't program for a
cccluster. Most programmers don't have one or a problem that needs
one to solve. So you really only need those people interested in the
problem to work on it.

Single system image type projects are useful, but need to be
watched. You really need to standardize on how a cluster is put
together (software-wise), and making things easier always helps. But
you also need to be very careful because you can easily write code
that does not scale. And people doing clusters have wild notions of
scaling: OK, 64 nodes worked, let's try a thousand...

As far as I can tell, the only real difference between a numa box and
a normal cluster of machines connected with fast ethernet is
that a numa interconnect is blazingly fast. So if you
can come up with a single system image solution over fast ethernet, a
ccNuma machine just magically works.

Eric

Alexander Viro

unread,
Dec 3, 2001, 7:00:32 PM12/3/01
to

On Sun, 2 Dec 2001, Brandon McCombs wrote:

[snip badly-formatted creationism advocacy]

Please, learn to
* use line breaks
* be intellectually honest
* be at least remotely on-topic

*plonk*

Jonathan Abbey

unread,
Dec 3, 2001, 7:00:31 PM12/3/01
to
brandon wrote:
|
| *finally* someone who doesn't believe in evolution of the human race.
| As a side note, i've heard some people say that a bolt of lightning
| triggered some proteins to start growing into single celled organisms
| and then into what we now call today human beings. I take offense
| that I came from a single celled organism. I believe the more complex
| an object or system is the less randomness can be added in order to
| arrive at the current/final version. I think we all agree the human
| body is the most complex object in the universe so how can we say that
| our existence was an accident?

Again, a complete misunderstanding of evolution. Evolution is itself
a design process... it is simply a design process that admits a
literally unthinkable amount of complexity. No individual or team of
individuals, no matter how intelligent, could sit down and create from
scratch the Linux kernel as it exists today. There are tons and tons
of design elements in the code that emerged from trial and error, and
from interactions between the hardware to be supported, the user level
code to run on it, and the temporal exigencies of the kernel code
itself. The fact that humans applied thought to all (well, at least
to some) of the changes made doesn't mean that the overarching dynamic
isn't an evolutionary one.

Taking offense at evolution having produced us from simpler organisms
is like taking offense at the rain, or the sun setting at night. We
can now look at life and actually read the code, and see how much is
held in common and how much varies between different organisms, just
as surely as we can with all of the Linux kernels over the last ten
years. Both systems have lots of characteristics in common, and for
perfectly good reasons.

Linus is right.

-------------------------------------------------------------------------------
Jonathan Abbey jona...@arlut.utexas.edu
Applied Research Laboratories The University of Texas at Austin
Ganymede, a GPL'ed metadirectory for UNIX http://www.arlut.utexas.edu/gash2

David S. Miller

unread,
Dec 3, 2001, 7:00:33 PM12/3/01
to
From: Alan Cox <al...@lxorguk.ukuu.org.uk>
Date: Sun, 2 Dec 2001 16:57:46 +0000 (GMT)

The main Red Hat test suite is a version of Cerberus (originally from VA
and much extended); it's all free software and it's available somewhere,
although I don't have the URL to hand, Arjan?

http://people.redhat.com/bmatthews/cerberus/

Chris Ricker

unread,
Dec 3, 2001, 7:00:35 PM12/3/01
to
On Sun, 2 Dec 2001, Alan Cox wrote:

> > Is the test suite (or at least part of it) public, or is it
> > considered to be a trade-secret of Red Hat? I see there
> > is a "Red Hat Ready Test Suite" - is this a part of it?


>
> The main Red Hat test suite is a version of Cerberus (originally from VA
> and much extended); it's all free software and it's available somewhere,
> although I don't have the URL to hand, Arjan?

I think it's at <http://people.redhat.com/bmatthews/cerberus/>

later,
chris

--
Chris Ricker kab...@gatech.edu

For if we may compare infinities, it would
seem to require a greater infinity of power
to cause the causes of effects, than to
cause the effects themselves.
-- Erasmus Darwin

Horst von Brand

unread,
Dec 3, 2001, 7:00:35 PM12/3/01
to
"M. Edward Borasky" <zn...@aracnet.com> said:

[...]

> My point here is that just because a composer is *capable* of doing
> integration work and building or repairing tools (and I am) does *not* mean
> he (or she :-) has either the time or the willingness to do so (and I
> don't).

So band together with some others with your same problem, and pay somebody
to fix it. What you saved on proprietary OS lease should make up for it.
Amply.

Oh wait, you are just a troll, right?


--
Horst von Brand vonb...@sleipnir.valparaiso.cl
Casilla 9G, Viña del Mar, Chile +56 32 672616

Victor Yodaiken

unread,
Dec 3, 2001, 7:30:07 PM12/3/01
to
On Mon, Dec 03, 2001 at 01:55:08AM +0100, Daniel Phillips wrote:
> I'm sure Linus does have quite considerable talent for design, but I haven't
> seen him exercise it much. Mostly he acts as a kind of goodness daemon,
> sitting in his little pinhole and letting what he considers 'good' stuff pass
> into the box. There's no doubt about it, it's different from the way you
> like to develop, you and me both. Equally clearly, it works pretty well.

This is a good explanation of why Linux may fail as a project, but it is
pure fantasy as to how it has so far succeeded as a project.

The tiny part of the system I wrote directly and the larger part that
I got to see up close involved a great deal of design, old-fashioned
careful engineering, and even aesthetic principles of what was good
design.

Don't drink the Kool-Aid. Go back and look in the kernel archives and
you will see extensive design discussions among all the core developers.
Linus has a point about the development of Linux not being in
accord with some master plan (at least not one anyone admits to), but
that's about as far as it goes.

Ingo Molnar

unread,
Dec 3, 2001, 7:50:12 PM12/3/01
to

On Sun, 2 Dec 2001, Rik van Riel wrote:

> I think you've pretty much proven how well random
> development works.

i think it's fair to say that we should not increase entropy artificially,
eg. we should not apply randomly generated patches to the kernel tree.

the point is, we should accept the fact that while this world appears to
be governed by rules to a certain degree, the world is also chaotic to a
large degree, and that a Grand Plan That Explains Everything does not
exist. And even if it existed, we are very far away from achieving it, and
even if some friendly alien dropped it on our head, we'd very likely be
unable to get our 5 billion brain cells into a state that is commonly
referred to as 'fully grokking it'.

and having accepted these limitations, we should observe the inevitable
effects of them: that any human prediction of future technology
development beyond 5 years is very, very hypothetical, and thus we must
have some fundamental way of dealing with this unpredictability. (such as
trying to follow Nature's Smart Way Of Not Understanding Much But Still
Getting Some Work Done - called evolution.)

Ingo

Horst von Brand

unread,
Dec 3, 2001, 8:10:11 PM12/3/01
to
Larry McVoy <l...@bitmover.com> said:
> sk...@ithnet.com on Sun, Dec 02, 2001 at 11:52:32PM +0100 said:

[...]

> > You will not find _any_ match with 128 CPUs here.
>
> Nor will you find any match with 4 or 8 CPU systems, except in very rare
> cases. Yet changes go into the system for 8 way and 16 way performance.
> That's a mistake.

And you are proposing a fork for handling exactly this? I don't get it...
--
Dr. Horst H. von Brand User #22616 counter.li.org
Departamento de Informatica Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria +56 32 654239
Casilla 110-V, Valparaiso, Chile Fax: +56 32 797513

Daniel Phillips

unread,
Dec 3, 2001, 8:50:14 PM12/3/01
to
On December 3, 2001 01:04 pm, Victor Yodaiken wrote:
> On Mon, Dec 03, 2001 at 01:55:08AM +0100, Daniel Phillips wrote:
> > I'm sure Linus does have quite considerable talent for design, but I haven't
> > seen him exercise it much. Mostly he acts as a kind of goodness daemon,
> > sitting in his little pinhole and letting what he considers 'good' stuff pass
> > into the box. There's no doubt about it, it's different from the way you
> > like to develop, you and me both. Equally clearly, it works pretty well.
>
> This is a good explanation of why Linux may fail as a project, but it is
> pure fantasy as to how it has so far succeeded as a project.
>
> The tiny part of system I wrote directly and the larger part that
^^^^^^^^^

> I got to see up close involved a great deal of design, old-fashioned
> careful engineering, and even aesthetic principles of what was good
> design.

You're just supporting the point of view that Linus has been espousing, and
I personally support: Linux is engineered at a micro level[1] but evolves
on its own at a macro level.

Sure, Linux evolves with help from Linus, but he acts as a filter, not a
designer. When Linus does on rare occasions forget himself and actually
design something, it's micro-engineering like you or I would do. So if Linux
is designed, who does the designing? Can you name him? I can tell you for
sure it's not Linus.

> Don't drink the Kool-Aid. Go back and look in the kernel archives and
> you will see extensive design discussions among all the core developers.
> Linus has a point about the development of Linux not being in
> accord with some master plan (at least not one anyone admits to) , but
> that's about as far as it goes.

Don't worry about me drinking the Kool-Aid: first, I already drank it, and
second, I'm personally already fully devoted to the notion of design process,
including all the usual steps: blue sky, discussion, requirements, data
design, detail design, prototype, etc., etc. You'll find the 'paper trails'
in the archives if you've got the patience to go spelunking, and you'll have
a hard time finding one of those designs that became a dead end. This design
thing does work for me. It doesn't change the fact that what I'm doing is
micro-engineering.

I'll get really worried if Linus wakes up one day and decides that from now
on he's going to properly engineer every aspect of the Linux kernel. The
same way I'd feel if Linux got taken over by a committee.

--
Daniel

[1] In places. All those little warts and occasional pools of sewage are
clearly not 'engineered'.

Martin J. Bligh

unread,
Dec 3, 2001, 9:10:07 PM12/3/01
to
>> Really? So then people should be designing for 128 CPU machines, right?
>
> Linux only supports 99 CPUs. At 100, "ksoftirqd_CPU100" overflows
> task_struct.comm[].
>
> Just thought I'd sneak in that helpful observation.

For machines that are 99-bit architectures or more, maybe. For 32-bit machines,
your limit is 32; for 64-bit, 64.

M.
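
For anyone counting bytes, a quick userspace sketch of the observation
quoted above. COMM_LEN here mirrors the 16-byte task_struct.comm[] of the
kernels being discussed -- that size is an assumption in this sketch, so
check your own tree:

/* comm_len.c -- illustrative only */
#include <stdio.h>

#define COMM_LEN 16	/* assumed size of task_struct.comm[] */

int main(void)
{
	char name[32];

	for (int cpu = 98; cpu <= 100; cpu++) {
		int len = snprintf(name, sizeof(name), "ksoftirqd_CPU%d", cpu);
		printf("%-17s needs %2d bytes with NUL -> %s\n",
		       name, len + 1,
		       len + 1 <= COMM_LEN ? "fits" : "overflows comm[]");
	}
	return 0;
}

CPU numbers 0-99 squeeze in at exactly 16 bytes including the NUL;
"ksoftirqd_CPU100" needs 17, hence the limit mentioned above.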

Martin J. Bligh

unread,
Dec 3, 2001, 9:10:09 PM12/3/01
to
> As far as I can tell the only real difference between a numa box, and
> a normal cluster of machines running connected with fast ethernet is
> that a numa interconnect is a blazingly fast interconnect.

Plus some fairly hairy cache coherency hardware.

> So if you
> can come up with a single system image solution over fast ethernet a
> ccNuma machine just magically works.

it's not cc if you just use fast ethernet.

Martin.

Martin J. Bligh

unread,
Dec 3, 2001, 9:20:10 PM12/3/01
to
>> Please Larry, have a look at the environment: nobody here owns a box
>> with 128 CPUs. Most of the people here take care of things they either
>> - own themselves
>> - have their hands own at work
>> - get paid for
>>
>> You will not find _any_ match with 128 CPUs here.
>
> Nor will you find any match with 4 or 8 CPU systems, except in very rare
> cases. Yet changes go into the system for 8 way and 16 way performance.
> That's a mistake.
>
> I'd be ecstatic if the hackers limited themselves to what was commonly
> available, that is essentially what I'm arguing for.

We need a *little* bit of foresight. If 4 ways are common now, and 8 ways
and 16 ways are available, then in a year or two 8 ways (at least) will be
commonplace. On the other hand, 128-CPU machines are a way off, and
I'd agree we shouldn't spend too much time on them right now.

M. Edward Borasky

unread,
Dec 3, 2001, 9:30:18 PM12/3/01
to

> -----Original Message-----
> From: Horst von Brand [mailto:vonb...@sleipnir.valparaiso.cl]
> Sent: Sunday, December 02, 2001 7:23 PM
> To: M. Edward Borasky
> Cc: linux-...@vger.kernel.org
> Subject: Re: Linux/Pro [was Re: Coding style - a non-issue]
>
>
> "M. Edward Borasky" <zn...@aracnet.com> said:
>
> [...]
>
> > My point here is that just because a composer is *capable* of doing
> > integration work and building or repairing tools (and I am)
> does *not* mean
> > he (or she :-) has either the time or the willingness to do so (and I
> > don't).
>
> So band together with some others with your same problem, and pay somebody
> to fix it. What you saved on proprietary OS lease should make up for it.
> Amply.

What I spent on Windows 2000 was $300 US. This converted my $400
top-of-the-line sound card from a useless space-taker on my desk to a
functioning musical device. As for banding together with some others, well,
they are even *more* frustrated than I am, because most of them are *purely*
musicians and *can't* program. Nor do they have the money to spend on
programmers. I'm on a number of musical mailing lists, and their
overwhelming complaint is that they spend most of their time being system
administrators rather than musicians/composers. And these are people using
*commercial* tools -- some *quite* expensive -- on Windows and Macs.

> Oh wait, you are just a troll, right?

Not really ... if you'd like I can be, though. Eventually, when I run out of
other projects, I'll sit down and force ALSA to work with my sound card if
someone hasn't done it already. Of course, now that I have the sound card
running and Windows 2000, why would I need to? So much of Linux is
plug-and-play right now, at least the Red Hat Linux that I'm using. I bought
a sound card unsupported by Red Hat because I knew of two drivers for it --
OSS/Linux and ALSA. I tried ALSA first and gave up on it after a week of
agony on the ALSA mailing list. Then I bought OSS/Linux, which installed
fine but didn't generate any sound. When I sent e-mail to the support desk,
I got a very fast response -- RTFM. The FM in this case consists of a single
page ASCII document which is less than helpful.

What I'm trying to establish here is that if ALSA is to become the
mainstream Linux sound driver set, it's going to need to support -- *fully*
support -- the top-of-the-line sound cards like my M-Audio Delta 66. It
isn't enough to just support the Envy chip inside -- it has to support the
whole card with interfaces to all the sound tools that come with KDE and
Gnome! It has to install flawlessly, boot flawlessly and understand
everything that is in the card. I haven't checked recently to see if the
ALSA situation has changed any -- too busy making music on my Windows
machine :-).
--
Take Your Trading to the Next Level!
M. Edward Borasky, Meta-Trading Coach

zn...@borasky-research.net
http://www.meta-trading-coach.com
http://groups.yahoo.com/group/meta-trading-coach

Ingo Molnar

unread,
Dec 4, 2001, 4:00:12 AM12/4/01
to

On Sun, 2 Dec 2001, Daniel Phillips wrote:

> One fact that is often missed by armchair evolutionists is that
> evolution is not random. It's controlled by a mechanism (most
> obviously: gene shuffling) and the mechanism *itself* evolves. That is
> why evolution speeds up over time. There's a random element, yes, but
> it's not the principal element.

claiming that the randomness is not the principal element of evolution is
grossly incorrect.

there are two components to the evolution of species: random mutations and
a search of the *existing* gene space called gene shuffling. (to be more
exact, gene shuffling is only possible for species that actually do it -
bacteria don't.)

In fact, gene shuffling in modern species is designed to 'search for useful
features rewarded by the environment to combine them in one specimen', i.e.
the combination of useful features such as 'feathers' or 'wings'
introduced as random mutations of dinosaurs. Gene shuffling does not
result in radically new features.

gene shuffling is just the following rule: 'combine two successful DNA
chains more or less randomly to find out whether we can get the better
genes of the two.' Since most species reproduce more than once, random
gene shuffling has a chance of combining the best possible genes. Risking
oversimplification, i'd say that genes are in essence invariant 'modules'
of a species' genetic plan, which can be intermixed between entities
without harming basic functionality. A requirement of the gene shuffling
process is that the resulting entity has to remain compatible enough with
the source entities to be able to reproduce itself and intermix its genes
with the original gene pool.
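
As a toy rendering of that shuffling rule (the genome size, mutation rate
and everything else here are invented purely for illustration, taken
neither from real genetics nor from anywhere else in this thread):

/* shuffle.c -- illustrative only */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define GENES 32	/* one bit per "gene" in a toy genome */

/* Crossover: for every gene, pick it from one parent or the other. */
static unsigned shuffle(unsigned a, unsigned b)
{
	unsigned mask = (unsigned)rand();	/* crude, but fine for a toy */

	return (a & mask) | (b & ~mask);
}

/* Rare random mutation: occasionally flip a single bit. */
static unsigned mutate(unsigned g)
{
	if (rand() % 100 == 0)
		g ^= 1u << (rand() % GENES);
	return g;
}

int main(void)
{
	unsigned parent_a, parent_b, child;

	srand((unsigned)time(NULL));
	parent_a = (unsigned)rand();
	parent_b = (unsigned)rand();
	child = mutate(shuffle(parent_a, parent_b));

	printf("a=%08x b=%08x child=%08x\n", parent_a, parent_b, child);
	return 0;
}

The selection step -- keeping only children that still "work" -- is what
the surrounding paragraphs describe; it is deliberately not modelled here.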

in terms of Linux, various new genes are similar to various patches that
improve the kernel. Some of them produce a kernel that crashes trivially,
those are obviously incorrect. Some of them might or might not be useful
in the future. We dont know how the environment will evolve in the future,
so we better keep all our options open and have a big enough 'gene pool'.
The development of individual patches is 'directed' and 'engineered' in
the sense that they produce a working kernel and they are derived from
some past experience and expectations of future. They might be correct or
incorrect, but judging them is always a function of the 'environment'.
Some patches become 'correct' over time. Eg. the preemptable kernel
patches are gaining significance these days - it was clearly a no-no 3
years ago. This is because the environment has changed, and certain
patches predicted or guessed the direction of technology environment
correctly.

if you look at patches on the micro-level, there is lots of structure, and
most of it is 'engineered'. If you look at it on the macro-level, the
Linux kernel as a whole has far less discernible structure and far more
randomness.

(and gene shuffling itself has another random component as well: it's the
imperfection of it that is one of the sources of random mutations.)

saying that the randomness of evolution is not the principal element is
like claiming that the current Linux code is sufficient and we only have
to shuffle around existing functions to make it better.

> > So *once* we have something that is better, it does not matter how long it
> > took to get there.
>
> Sure, once you are better than the other guy you're going to eat his
> lunch. But time does matter: a critter that fails to get its
> evolutionary tail in gear before somebody eats its lunch isn't going
> to get a second chance.

this is another interesting detail: the speed of being able to adapt to a
changing environment does matter.

But the original claim which i replied to was that the cost of developing
a new 'feature' matters. Which i said is not true - evolution does not
care about time of development if the environment is relatively stable, or
is changing slowly. The speed of evolution/development only matters once
the environment changes fast.

So to draw the analogy with Linux - as long as the environment (chip
technology, etc.) changes rapidly, what matters most is the ability to
evolve. But once the environment 'cools down' a bit, we can freely search
for the most perfect features in a stable environment, and we'll end up
being 99.9% perfect (or better). [ The original claim which i replied to
said that we'll end up being 95% perfect and stop there, because further
development is too expensive - this claim i took issue with. ]

In fact this happened a number of times during Linux's lifetime. Eg. the
prospect of SMP unsettled the codebase a lot and the (relative) quality of
uniprocessor Linux perhaps even decreased. Once the external environment
settled down, other aspects of Linux caught up as well.

believe me, there was no 'grand plan'. Initially (5 years ago) Linus said
that SMP does not matter much at the moment, and that nothing should be
done in SMP space that hurts uniprocessors. Today it's exactly the other
way around. And i think it's perfectly possible that there will be a new
paradigm in 5 years.

Ingo

Alan Cox

unread,
Dec 4, 2001, 4:10:13 AM12/4/01
to
> > can come up with a single system image solution over fast ethernet a
> > ccNuma machine just magically works.
>
> it's not cc if you just use fast ethernet.

That's a matter for handwaving and distributed shared memory algorithms. The
general point is still true - if you assume your NUMA interconnects are
utter crap when performance and latency issues come up - you'll get the
right results.

Alan Cox

unread,
Dec 4, 2001, 4:30:13 AM12/4/01
to
> What I'm trying to establish here is that if ALSA is to become the
> main-stream Linux sound driver set, it's going to need to support -- *fully*
> support -- the top-of-the-line sound cards like my M-Audio Delta 66. It

Not really. The number of people who actually care about such cards is close
to nil. What matters is that the API can cleanly express what the Delta66
can do, and that you can write a driver for it under ALSA without hacking up
the ALSA core.

I'm happy both of those are true.

Ingo Molnar

unread,
Dec 4, 2001, 4:30:14 AM12/4/01
to

On Mon, 3 Dec 2001, Daniel Phillips wrote:

> [...] Please see my post above where I point out 'evolution isn't
> random'. Your genes have a plan, if only a vague one. It goes
> something like this: "we'll allow random variations, but only along
> certain lines, within limits, and in certain combinations, and we'll
> try to stick to variations that haven't killed us in the past."

so what you say in essence is that "evolution isn't random, it's random"
;-) The fact that the Brownian motion is 'vaguely directed' (ie. evolution
has a limited amount of 'memory' of past experience coded into the DNA)
does not make it less random. Randomness does not have to be completely
undirected - perhaps you know a different definition for 'random'. Just
the fact that we got from bacteria to humans and from bacteria to trees
shows that it's not only random, it's also unstable and chaotic. (the same
initial conditions resulted in multiple, wildly different and almost
completely unrelated sets of end results.)

and nobody claimed Linux development was totally (cryptographically)
random. We just claim that Linux development has a fair dose of randomness
and unpredictability besides having a fair dose of structure, and that its
development model is much closer to evolution than to the formal methods
of software development.

at which point i think we finally agree?

Ingo

Alan Cox

unread,
Dec 4, 2001, 4:40:13 AM12/4/01
to
> In essence, People don't run big boxes due to scalability issues, fixing
> those might get someone to install a 16-Way.

Hackers don't run Linux on 16-way boxes because they cost $100,000 each

Alan

Jamie Lokier

unread,
Dec 4, 2001, 12:00:17 PM12/4/01
to
Andrew Morton wrote:
> > 128 kernel threads sitting around waiting for a problem that
> > rarely happens seems a little.. strange. (for want of a better word).
>
> I've kinda lost the plot on ksoftirqd. Never really understood
> why a thread was needed for this, nor why it runs at nice +20.
> But things seem to be working now.

Me no idea either. It wasn't to work around the problem of losing
softirqs on syscall return, was it? Because there was a patch for that
in the low-latency set that fixed that without a thread, and without a
delay...

-- Jamie

Oliver Xymoron

unread,
Dec 4, 2001, 2:30:16 PM12/4/01
to
On Sun, 2 Dec 2001, Larry McVoy wrote:

> If you want an experiment in evolution, then let *everything* into
> the kernel. That's how evolution works, it tries everything, it doesn't
> prescreen. Go read Darwin, go think, there isn't any screening going on,
> evolution *is* the screening.

So-called 'natural selection' is only a subset of things that can quite
legitimately be called evolution. And there certainly is screening in
nature; it's called sexual selection.

Linus's point is mainly about parallelism. Many more changes get tried in
the Linux space than could ever happen in a traditional software
development environment.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

David Weinehall

unread,
Dec 4, 2001, 3:10:16 PM12/4/01
to
On Tue, Dec 04, 2001 at 11:35:11AM -0800, Dan Hollis wrote:
> On Tue, 4 Dec 2001, David Weinehall wrote:
> > Indeed. And I'm sure the ALSA-team would be delighted and fully willing
> > to write a working driver, if mr Borasky donated an M-Audio Delta 66
> > together with full documentation to them...
>
> ALSA already has a working driver...!

The point I was trying to make was just "stop complaining about lack
of drivers, contribute one or help someone else create one." I wasn't
criticizing ALSA, rather the opposite. Now, if I could just find
someone willing to program a driver for that old 8-bit, totally sucky
IBM ACPA/A I have (the only MCA sound adapter I have managed to get
hold of...)


/David
_ _
// David Weinehall <t...@acc.umu.se> /> Northern lights wander \\
// Maintainer of the v2.0 kernel // Dance across the winter sky //
\> http://www.acc.umu.se/~tao/ </ Full colour fire </

Gérard Roudier

unread,
Dec 4, 2001, 3:20:12 PM12/4/01
to

On Mon, 3 Dec 2001, Keith Owens wrote:

> On Sun, 02 Dec 2001 15:21:57 -0800 (PST),
> "David S. Miller" <da...@redhat.com> wrote:
> > From: Keith Owens <ka...@ocs.com.au>
> > Date: Sat, 01 Dec 2001 12:17:03 +1100
> >
> > What is ugly in aic7xxx is :-
> >
> >You missed:
> >
> >* #undef's "current"
>
> Where? fgrep -ir current 2.4.17-pre2/drivers/scsi/aic7xxx did not find it.

What is ugly is "David S. Miller" ?

The 'Z' in the first name and the 'K' in the family name. :-)

Gérard.

Gérard Roudier

unread,
Dec 4, 2001, 3:30:11 PM12/4/01
to

On Tue, 4 Dec 2001, Gérard Roudier wrote:

>
> On Mon, 3 Dec 2001, Keith Owens wrote:
>
> > On Sun, 02 Dec 2001 15:21:57 -0800 (PST),
> > "David S. Miller" <da...@redhat.com> wrote:
> > > From: Keith Owens <ka...@ocs.com.au>
> > > Date: Sat, 01 Dec 2001 12:17:03 +1100
> > >
> > > What is ugly in aic7xxx is :-
> > >
> > >You missed:
> > >
> > >* #undef's "current"
> >
> > Where? fgrep -ir current 2.4.17-pre2/drivers/scsi/aic7xxx did not find it.
>
> What is ugly is "David S. Miller" ?

^^
Amusing mistake, I wanted to write 'in' instead of 'is'. :-)

Pavel Machek

unread,
Dec 5, 2001, 5:00:12 PM12/5/01
to
Hi!

> Another thing for 2.5 is going to be to weed out the unused and unmaintained
> drivers, and either someone fixes them or they go down the cosmic toilet and
> we pull the flush handle before 2.6 comes out.

Hey, I still have an 8-bit ISA SCSI card somewhere... Last time I fixed
that was just before 2.4, because that was when I got it... Don't flush
drivers too fast, please... If you kill drivers during 2.5, people
will not notice, and even interesting drivers will get killed. Killing
them during 2.6.2 might be better.
Pavel
--
"I do not steal MS software. It is not worth it."
-- Pavel Kankovsky

Ragnar Hojland Espinosa

unread,
Dec 5, 2001, 5:50:22 PM12/5/01
to
On Mon, Dec 03, 2001 at 10:39:08PM -0300, Horst von Brand wrote:
> > If you want an experiment in evolution, then let *everything* into
> > the kernel. That's how evolution works, it tries everything, it doesn't
> > prescreen. Go read Darwin, go think, there isn't any screening going on,
> > evolution *is* the screening.
>
> Why does the screening have to be at the level of full organisms? It
> _looks_ that way because you don't see the busted sperm or broken eggs, or
> the stillborn embryos which make up the "preliminary checks show it won't
> work" in nature. The process is (hopefully) much more efficient here than
> in nature, and visible, that is all.

And I'd add something more along those lines..

Evolution and selection are about species, not individuals as is commonly
thought, so what might be bad for an individual (getting "screened" at
an early age) might be good for (the reproduction of) the species, since
it ensures better-quality reproductive material. Darwinian evolution
doesn't fit too well in the kernel.

On the other hand, we can think of developers' minds as copy-on-write DNA.
DNA knows when something won't work, so it doesn't try it. Screening :)

--
____/| Ragnar Højland Freedom - Linux - OpenGL | Brainbench MVP
\ o.O| PGP94C4B2F0D27DE025BE2302C104B78C56 B72F0822 | for Unix Programming
=(_)= "Thou shalt not follow the NULL pointer for | (www.brainbench.com)
U chaos and madness await thee at its end."

Alan Cox

unread,
Dec 5, 2001, 7:20:12 PM12/5/01
to
> that was just before 2.4 because that was when I got it... Don't flush
> drivers too fast, please... If you kill drivers during 2.5, people
> will not notice, and even interesting drivers will get killed. Killing
> them during 2.6.2 might be better.

They need to die before 2.6. I'm all for leaving the code present and the
ability to select

Expert
Drivers that need fixing
Clockwork scsi controller, windup mark 2

in 2.6 so that people do fix them

Alan

Pavel Machek

unread,
Dec 6, 2001, 3:40:16 PM12/6/01
to
Hi!

> believe me, there was no 'grand plan'. Initially (5 years ago) Linus said
> that SMP does not matter much at the moment, and that nothing should be
> done in SMP space that hurts uniprocessors. Today it's exactly the other
> way around.

I hope uniprocessors still matter... SMP is nice, but 90% of machines are
still uniprocessors... [Or someone get me SMP ;)]


Pavel
--
"I do not steal MS software. It is not worth it."
-- Pavel Kankovsky
