
Linux's approaching Achilles' heel


nbake...@charter.net

Nov 16, 2007, 9:15:41 PM
Like a runaway freight train, the Open Source Community's "standard
practice" (_faux peer review_ plus shoddy coding standards and casual
dismissal of bug reports pointing out critical flaws, e.g.
http://pulseaudio.org/ticket/158 ) is exactly the mind-set that will
bring Linux tumbling down the hill into the valley of the forgotten,
non-important OSs that "could have been".

It is easy to understand that, given the pressure to maintain a
'presence' in the monthly headlines and the desire to outperform the
competition in the number of 'features', some short-cuts will be
taken and code audits skipped so that the next 'distro release' can
announce a new fancy gizmo under its wing. *Some* degree of this
behavior is to be expected in an environment where any "Joe Six-pack"
can start a project and have his code used by and incorporated into
other software downstream. However, I am quite shocked that the
practice is tolerated to the point that it leads to extremely
unstable critical support systems, as detailed in the following forum
threads.

http://ubuntuforums.org/showthread.php?t=612606
http://ubuntuforums.org/showthread.php?t=614962

Nathan.

Keith Kanios

Nov 16, 2007, 9:57:07 PM
On Nov 16, 8:15 pm, nbaker2...@charter.net wrote:
> Like a runaway freight train, the Open Source Community's "standard
> practice" (_faux peer review_ plus shoddy coding standards and casual
> dismissal of bug reports pointing out critical flaws, e.g.
> http://pulseaudio.org/ticket/158

> ) is exactly the mind-set that will bring Linux tumbling down the hill
> into the valley of the forgotten, non-important OSs that "could have
> been".
>
> It is easy to understand that, given the pressure to maintain a
> 'presence' in the monthly headlines and the desire to outperform the
> competition in the number of 'features', some amount of short-cuts
> will be taken and code audits being skipped so that the next 'distro
> release' can announce a new fancy gizmo under its wing. *Some* degree
> of this behavior is to be expected in an environment where any "Joe
> Six-pack" can start a project and have his code used by and
> encorporated into other software down the stream. However, I am quite
> shocked that the practice is tolerated to the point that it leads to
> extremely unstable critical support systems as detailed in the
> following forum threads.
>
> http://ubuntuforums.org/showthread.php?t=612606
> http://ubuntuforums.org/showthread.php?t=614962
>
> Nathan.

I wouldn't call audio a *critical* system. If you read the response to
the half-witted comment, you will see why such non-critical systems
would be sacrificed in favor of more critical systems. If you are in
an out-of-memory situation, you will be out of memory across the
board. In those situations, you do the very same thing the human body
does... sacrifice appendages first and keep warm blood pumping to the
vital organs above all else.

A better solution to such a problem would be to front an effort/
campaign to reduce the amount of bloat and unnecessary memory usage.

Dan Espen

Nov 16, 2007, 10:24:57 PM
nbake...@charter.net writes:

Ah, my friend Nathan, I'm afraid it is you that is the idiot.
I assume these malloc wrappers print a message and then abort.
Do you have any idea what else they can do?

Do you really think a program can carry on and do anything reasonable
when it runs out of memory?

Don't you think it might require something for the program to continue
on? Like maybe memory?

Nevertheless, most of the software I write is middleware, and it does
try to return error indications to the caller on out-of-memory. I
sometimes see dumps produced by programs using my middleware as they
try to report back to the user that something went wrong.

If you think you are so smart, find out what the real power of open
source is. Find a better way and submit a patch.

But lose the arrogant attitude.

Evenbit

Nov 16, 2007, 10:34:19 PM
On Nov 16, 9:57 pm, Keith Kanios <ke...@kanios.net> wrote:
> On Nov 16, 8:15 pm, nbaker2...@charter.net wrote:
>
>
>
> > Like a runaway freight train, the Open Source Community's "standard
> > practice" (_faux peer review_ plus shoddy coding standards and casual
> > dismissal of bug reports pointing out critical flaws, e.g.
> > http://pulseaudio.org/ticket/158
> > ) is exactly the mind-set that will bring Linux tumbling down the hill
> > into the valley of the forgotten, non-important OSs that "could have
> > been".
>
> > It is easy to understand that, given the pressure to maintain a
> > 'presence' in the monthly headlines and the desire to outperform the
> > competition in the number of 'features', some amount of short-cuts
> > will be taken and code audits being skipped so that the next 'distro
> > release' can announce a new fancy gizmo under its wing. *Some* degree
> > of this behavior is to be expected in an environment where any "Joe
> > Six-pack" can start a project and have his code used by and
> > encorporated into other software down the stream. However, I am quite
> > shocked that the practice is tolerated to the point that it leads to
> > extremely unstable critical support systems as detailed in the
> > following forum threads.
>
> > http://ubuntuforums.org/showthread.php?t=612606
> > http://ubuntuforums.or...

>
> > Nathan.
>
> I wouldn't call audio a *critical* system. If you read the response to
> the half-witted comment, you will see why such non-critical systems
> would be sacrificed in favor of more critical systems. If you are in an
> out-of-memory situation, you will be out of memory across the board. In those
> situations, you do the very same thing the human body does...
> sacrifice appendages first and keep warm blood pumping to the vital
> organs above all else.

Oh come on, Keith, you know better than to use the same pithy straw
man that the PulseAudio retard used. We are talking about application
layers that deal primarily with multi-media data... this means the
'desired memory allotment' may run into the tens to hundreds of
Gigs... so "across the board" is an extremely weak claim, since it is
very unlikely for any other application's requirement (and this goes
for the other apps currently running) to be anywhere near this size.

>
> A better solution to such a problem would be to front an effort/
> campaign to reduce the amount of bloat and unnecessary memory usage.

This can only be successful if it were "drilled into their heads" at
the start of the Freshman programming course and consistently
continued throughout the CompSci regimen.

Nathan.

ray

Nov 16, 2007, 10:34:44 PM

The main problem with your argument being, of course, that Vista,
which was delayed several times and had features thrown out so that it
could finally come to market, seems to have even more problems.

Keith Kanios

Nov 16, 2007, 11:00:31 PM

I don't see how "straw man" applies here. I am simply commenting from
the perspective of a system-level programmer.

If one process is hogging all of the physical and swap memory, other
processes are being deprived of that memory. Ask Windows users how
appreciative they would be to lose one application's worth of data
instead of losing all of their data due to the entire system becoming
unresponsive.

If the problem is actually with running out of process (virtual)
memory, then I can think of more graceful ways to handle such out-of-
memory situations.

>
> > A better solution to such a problem would be to front an effort/
> > campaign to reduce the amount of bloat and unnecessary memory usage.
>
> This can only be successful if it were "drilled into their heads" at
> the start of the Freshman programming course and consistently
> continued throughout the CompSci regimen.
>
> Nathan.

... instead of Java, C# and garbage collection wiping incompetent
asses? It would be appreciated, but highly unrealistic when software
is market driven. Quality is no longer a factor, it is just reduced
down to time and price.

Evenbit

Nov 16, 2007, 11:06:47 PM
On Nov 16, 10:24 pm, Dan Espen <dan...@MORE.mk.SPAMtelcordia.com>
wrote:

>
> Ah, my friend Nathan, I'm afraid it is you that is the idiot.
> I assume these malloc wrappers print a message and then abort.
> Do you have any idea what else they can do?

Well, my friend Dan, I really do wish your assumption were correct.
It would be extremely nice (and helpful) if an application would
report an "error condition" before terminating. It would also, by
extension, be extremely nice (and helpful) if a support library would
report said error to the calling application so that the application
developer might have the opportunity to respond in a graceful manner
to environmental conditions. Non-returning function calls certainly
are a bane during debugging sessions.
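What Nathan is asking of a support library can be sketched as an API that reports status codes to the calling application instead of terminating the process. The `snd_*` names below are hypothetical, not PulseAudio's real interface:

```c
#include <stdlib.h>

/* Hypothetical status codes a library might hand back to the
   application instead of calling abort() internally. */
enum snd_status { SND_OK = 0, SND_ENOMEM, SND_EINVAL };

struct snd_buffer {
    unsigned char *data;
    size_t len;
};

/* Allocate a buffer; on failure the *application* decides what to do:
   retry with a smaller size, free caches, warn the user, or exit
   cleanly on its own terms. */
enum snd_status snd_buffer_init(struct snd_buffer *b, size_t len)
{
    if (b == NULL || len == 0)
        return SND_EINVAL;
    b->data = malloc(len);
    if (b->data == NULL)
        return SND_ENOMEM;      /* report the condition, don't abort() */
    b->len = len;
    return SND_OK;
}

void snd_buffer_free(struct snd_buffer *b)
{
    if (b == NULL)
        return;
    free(b->data);
    b->data = NULL;
    b->len = 0;
}
```

A function that can fail but cannot return leaves the application developer nothing to respond to; a status code costs the library almost nothing.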

I am also thinking of the Windows users who are new to Linux. When
programs like Firefox consistently and suddenly "disappear" on them
(the way it does for me) without reporting the "why", they are going
to migrate back to their Microsoft products. At the very least, they
get the dreaded "Blue Screen of Death", which is a tonne more useful
information than something which terminates your application at will.
Now do you see the danger of PulseAudio and other shoddy libraries???

Nathan.

Dan Espen

Nov 16, 2007, 11:17:45 PM
Evenbit <nbake...@charter.net> writes:

> On Nov 16, 10:24 pm, Dan Espen <dan...@MORE.mk.SPAMtelcordia.com>
> wrote:
>>
>> Ah, my friend Nathan, I'm afraid it is you that is the idiot.
>> I assume these malloc wrappers print a message and then abort.
>> Do you have any idea what else they can do?
>
> Well, my friend Dan, I really do wish your assumption were correct.
> It would be extremely nice (and helpful) if an application would
> report an "error condition" before terminating. It would also, by
> extension, be extremely nice (and helpful) if a support library would
> report said error to the calling application so that the application
> developer might have the opportunity to respond in a graceful manner
> to environmental conditions. Non-returning function calls certainly
> are a bane during debugging sessions.

You seem to have missed the point.
When an application is out of memory, almost anything you try to do to
report an error is going to fail.

It takes memory to invoke a function.
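Dan's point is real, and the usual workaround (a common pattern, offered here as a sketch, not something any poster's code is claimed to use) is to set memory aside while it is still plentiful, and to report the failure with calls that allocate nothing, such as write(2):

```c
#include <stdlib.h>
#include <unistd.h>

/* Reserve grabbed at startup, while memory is still plentiful.
   (Non-static here only so a test can inspect it.) */
void *emergency_reserve = NULL;
static const char oom_msg[] = "fatal: out of memory\n";

void oom_init(void)
{
    emergency_reserve = malloc(64 * 1024);
}

/* Called when malloc() has just returned NULL.  Releasing the reserve
   gives the error path some memory to work with; write(2) itself
   allocates nothing, so the message gets out even under pressure. */
void oom_report_and_exit(void)
{
    free(emergency_reserve);
    emergency_reserve = NULL;
    (void)write(STDERR_FILENO, oom_msg, sizeof oom_msg - 1);
    _exit(1);
}
```

Anything fancier than this, such as formatting a message with printf or popping up a dialog, may itself need memory and fail exactly when it is needed, which is Dan's point.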

> I am also thinking of the Windows users who are new to Linux. When
> programs like Firefox consistently and suddenly "disappear" on them
> (the way it does for me) without reporting the "why", they are going
> to migrate back to their Microsoft products.

Firefox disappearing likely has nothing to do with this issue.

Install the Firefox bug reporting tool and a Firefox failure will
invoke a dialog that sends a bug report back to the developers.

> At the very least, they
> get the dreaded "Blue Screen of Death" which is a tonne more useful
> information than something which terminates your application at will.
> Now do you see the danger of PulseAudio and other shoddy libraries???

I don't see any danger.
It's an audio application.
It will stop and I'll look for the problem.

Frank Kotler

Nov 17, 2007, 6:12:06 AM
Dan Espen wrote:

...


> When an application is out of memory, almost anything you try to do to
> report an error is going to fail.

...


> Install the Firefox bug reporting tool and a Firefox failure will
> invoke a dialog that sends a bug report back to the developers.

Clever, these Firefox developers...

Best,
Frank

Evenbit

Nov 17, 2007, 8:35:24 AM
On Nov 16, 9:57 pm, Keith Kanios <ke...@kanios.net> wrote:
>
> I wouldn't call audio a *critical* system.

Audio is certainly a critical system for those users who are not
blessed with the normal human attribute of being 'sighted'. Blind
people do not depend on either screen graphics or text from a video
monitor -- they are able to use a PC solely via audio feedback. Why
should library developers be granted exclusive permission to
determine which systems are *critical* and which are not? Shouldn't
these decisions be left to the application programmer?

Nathan.

Evenbit

Nov 17, 2007, 9:02:02 AM
On Nov 16, 11:00 pm, Keith Kanios <ke...@kanios.net> wrote:
<<snipped>>

>
> > Oh come on, Keith, you know better than to use the same pithy straw man
> > that the PulseAudio retard used. We are talking about application
> > layers that deal primarily with multi-media data... this means the
> > 'desired memory allotment' may run into the tens to the hundreds of
> > Gigs... so "across the board" is an extremely weak claim since it is
> > very unlikely for an other application requirement (and this goes for
> > the other apps currently running) to be anywhere near this size.
>
> I don't see how "straw man" applies here. I am simply commenting from
> the perspective of a system-level programmer.
>

It is obvious that if you are indeed a "system-level programmer" who
is worth his salt, then you would have _some_ understanding of modern
memory management issues (it is clear from your responses that you do
not). When we issue a call to an OS asking for a chunk of memory, the
OS responds by looking for an area of _contiguous_ free memory space
of the size that we request. So, you see, it is perfectly possible
that an attempt to allocate 50 Gigs will fail, while subsequent calls
to the same OS function asking for 10 instances of 10 Gigs each will
succeed.

> If one process is hogging all of the physical and swap memory, other
> processes are being deprived of that memory. Ask Windows users how
> appreciative it would be to lose one application's worth the data
> instead of losing all of your data due to the entire system becoming
> unresponsive.

Wouldn't the better choice be to not lose ANY data??? Why do Linux
developers consistently shoot for standards that are _below_ those of
Windows developers? Why should end-users tolerate a less-stable
experience -- especially when Linux fans consistently "bill" Linux as
the better(TM) product??

>
> If the problem is actually with running out of process (virtual)
> memory, then I can think of more graceful ways to handle such out-of-
> memory situations.

This is indeed the issue at hand -- being "more graceful" than killing
the calling application and preventing any error reports from being
issued.

> > > A better solution to such a problem would be in fronting an effort/
> > > campaign to reduce the amount of bloat and unnecessary memory usage.
>
> > This can only be successful if it were "drilled into their heads" at
> > the start of the Freshman programming course and consistently
> > continued throughout the CompSci regimen.
>

> ... instead of Java, C# and garbage collection wiping incompetent
> asses? It would be appreciated, but highly unrealistic when software
> is market driven. Quality is no longer a factor, it is just reduced
> down to time and price.

This is the very mind-set and attitude which will get Linux labelled a
"has been" in the OS history books.

Nathan.

Rod Pemberton

Nov 17, 2007, 3:51:48 AM

"Dan Espen" <dan...@MORE.mk.SPAMtelcordia.com> wrote in message
news:ic8x4xa...@mk.telcordia.com...
> nbake...@charter.net writes:
>

Sigh, had to go to Google to read the other six posts that didn't propagate
well...

> > Like a run-away freighttrain, the Open Source Community's "standard
> > practice" (_faux peer review_ plus shoddy coding standards and casual
> > dismissal of bug reports pointing out critical flaws
http://pulseaudio.org/ticket/158
> > ) is exactly the mind-set that will bring Linux tumbling down the hill
> > into the valley of the forgotten, non-important OSs that "could have
> > been".
> >

Although I strongly believe there are reasons to support the claim that
Linux is or will be "tumbling down the hill into the valley of the
forgotten, non-important OSs that 'could have been'," I don't believe the
issue is the mindset of Linux coders, their standards, their failure to fix
bugs, or even other issues such as reversion of prior bug fixes or
filesystem problems...

The real primary issue is money. Can Linux survive long term against a
company with billions in financial and physical capital, licensed and
proprietary software patents, driven programmers who are _paid_ to
program for a living, and an endless supply of software drivers
written for their OS's API by hardware manufacturers? Secondary issues
include software development time for new PC hardware or circuitry and
the far above average intellect of "their" large paid programmer base
versus the average IQ, skill, and time constraints of many unpaid "Joe
Six-pack"s. I see Linux running into a wall due to the rapid
continuous changes and advances in PC circuitry unless a huge infusion
of cash is found. A for-profit Linux OS corporation needs to be
formed. Getting Apple to dump OS X for paid copies of Linux would be a
good start. If Linux can't compete with OS X for profit, I really
don't see a long term PC future. Perhaps one might as well dump Linux
now and embrace OS X...

Personally, I also think some long term design changes are needed. I'd
recommend adopting a syscall-only based version of Linux as its
primary form, like UML. If only a syscall interface had to be written
to bootstrap Linux, cross-compiling to other platforms would be faster
and easier. Unfortunately, even with a UML version available, Linux's
syscall interface has bloated from 40 implemented functions in v0.01
to 290 in v2.6.17. The number of syscalls needs to be drastically
reduced, or the syscall interface needs to be built entirely on a
small set of functions. I'd also recommend using some other highly
popular interface that allows development of almost-OS applications,
say the SDL library, instead of the current syscall interface. If SDL,
this would allow numerous OS-like applications such as DOSBox,
ScummVM, etc. to run as the "higher level" OS. Writing the low level
OS portions is a pain. Nobody really wants to do that. It's already
been done fairly well for Linux. Much of the low level parts of Linux
have been extracted for the LinuxBIOS FILO project anyway. Allowing
different top-ends to the OS would encourage much more upper level OS
development and adaptation. This adaptability might be a good long
term advantage against a corporate competitor that has become
stagnant.


Rod Pemberton

Bruce Coryell

Nov 17, 2007, 4:34:45 PM

Actually there are "for profit OS Linux corporations" around - such as
Red Hat, Novell (Suse), Caldera, and others of their ilk...

OS/2 is still around, though not owned or supported by IBM anymore:
http://www.ecomstation.com/ OS/2 was one sharp operating system about
15 years ago, just never caught on. But if this company is smart, they
could really position this as a viable alternative to Microsoft.

Another OS that could be a good alternative, if they positioned it a
little better, would be Sun's Solaris operating system. I tried an
evaluation copy and my system really hummed with it, even at 800 MHz.
Just that the networking support with Linux and MS was a little rough.

Fredderic

Nov 17, 2007, 7:11:12 PM
On Sat, 17 Nov 2007 06:02:02 -0800 (PST),
Evenbit <nbake...@charter.net> wrote:

> On Nov 16, 11:00 pm, Keith Kanios <ke...@kanios.net> wrote:
> <<snipped>>

>> I don't see how "straw man" applies here. I am simply commenting
>> from the perspective of a system-level programmer.
> It is obvious that if you are indeed a "system-level programmer" who
> is worth his salt, then you would have _some_ understanding about
> modern memory management issues (it is clear from your responses that
> you do not). When we issue a call to an OS asking for a chunk of
> memory, the OS responds by looking for an area of _contiguous_ free
> memory space of the size that we request. So, you see, it is
> perfectly possible that an attempt to allocate 50Gigs will fail, while
> subsequent calls to the same OS function asking for 10 instances of
> 10Gigs each will succeed.

That's odd... I was under the impression we had this thing called
paging on modern operating systems. This has two effects: first,
applications are actually allocated memory in complete pages; and
second, those pages can reside anywhere in physical RAM, and they'll
still appear contiguous to the application.

The only time this might be an issue is with DMA, where a component
external to the processor (and hence without the benefit of the
kernel's page tables) needs to access data across two or more pages.

Mind you, I'm not a systems level programmer either...


>> If one process is hogging all of the physical and swap memory, other
>> processes are being deprived of that memory. Ask Windows users how
>> appreciative they would be to lose one application's worth of data
>> instead of losing all of their data due to the entire system becoming
>> unresponsive.
> Wouldn't the better choice be to not lose ANY data??? Why do Linux
> developers consistently shoot for standards that are _below_ those of
> Windows developers? Why should end-users tolerate a less-stable
> experience -- especially when Linux fans consistently "bill" Linux as
> the better(TM) product??

You, mate, are an ass. Every time I have run out of memory on a
Windoze system, the entire system crashed. My wife, who still uses
Windoze, will attest to that. All current unsaved data, in all
applications, gets flushed down the drain when not even Ctrl-Alt-Del
will respond, and you have to reach for the power button (because
modern machines don't come with a reset button anymore).

Every time I run out of memory in a Linux system, one application gets
hosed, _usually_ the right one. Though occasionally it's like my GUI
panel or something, which subsequently gets re-started, causing
something else to die instead, and occasionally it'll roll through two
or three unlucky minor apps before it hits the right one. It can also
be a bitch when it's the X-server itself that it decides to kill, but
such is life. I just sit back and watch for a few minutes, after which
I have a system that's at least stable enough to save down anything
that has survived, and either restart the X server myself, or give the
whole system a thorough cleanout with a nice soft restart.

It's still a damn sight better than the Windoze way of just locking up
the entire frigging machine, and hosing everything indiscriminately.


>> If the problem is actually with running out of process (virtual)
>> memory, then I can think of more graceful ways to handle such
>> out-of- memory situations.
> This is indeed the issue at hand -- being "more graceful" than killing
> the calling application and preventing any error reports from being
> issued.

The question is: how exactly do you do that without allocating
additional memory?

Come to think of it, how do you figure out when enough memory is really
enough? My system will quite happily (albeit a little slowly) run with
3-4 times the base memory allocated, as long as no single application
accounts for twice the base memory. In Windoze, it starts to die well
before that.


>> ... instead of Java, C# and garbage collection wiping incompetent
>> asses? It would be appreciated, but highly unrealistic when software
>> is market driven. Quality is no longer a factor, it is just reduced
>> down to time and price.
> This is the very mind-set and attitude which will get Linux labelled a
> "has been" in the OS history books.

But that is the mind-set that exists industry-wide. One only has to
look at Microsoft's business applications, most of which palm off HTTP
and XML as gods gift to software developers. They've rammed their
stock-standard HTTP/XML libraries into places they simply don't fit,
and focused on making the application look pretty so end users will
like it, and not notice the utter shite under the hood. I've seen it
time and time again. Most of the good quality innovative developments
I've seen of late, have come from Linux, not Microsoft.


So I really think you've got your head on backwards, mate. Linux's
Achilles' heel, if anything, is the fact that it's doing the job
right, rather than cutting corners and building lock-in boxes, in an
attempt to rule the world.


Fredderic

Fredderic

Nov 17, 2007, 7:16:20 PM

They're not. Both systems get pretty much the same regard, as far as I
can see. But one would offer the suggestion that, without sight,
there'd likely be more memory for the audio system. Plus, audio
generally has a lower memory footprint, and so short of audio editors
and other high-end music creation software, a simple screen reader is
far less likely to draw the application killer's gaze, and far more
likely to be automatically restarted even if it did.

You know, I may have missed part of the thread, but it seems to me that
tugging on the accessibility string really is another step down the
ladder for you.


Fredderic

Keith Kanios

Nov 17, 2007, 9:24:30 PM
On Nov 17, 8:02 am, Evenbit <nbaker2...@charter.net> wrote:
> On Nov 16, 11:00 pm, Keith Kanios <ke...@kanios.net> wrote:
> <<snipped>>
>
>
>
> > > Oh come on, Keith, you know better than to use the same pithy straw man
> > > that the PulseAudio retard used. We are talking about application
> > > layers that deal primarily with multi-media data... this means the
> > > 'desired memory allotment' may run into the tens to the hundreds of
> > > Gigs... so "across the board" is an extremely weak claim since it is
> > > very unlikely for an other application requirement (and this goes for
> > > the other apps currently running) to be anywhere near this size.
>
> > I don't see how "straw man" applies here. I am simply commenting from
> > the perspective of a system-level programmer.
>
> It is obvious that if you are indeed a "system-level programmer" who
> is worth his salt, then you would have _some_ understanding about
> modern memory management issues (it is clear from your responses that
> you do not). When we issue a call to an OS asking for a chunk of
> memory, the OS responds by looking for an area of _contiguous_ free
> memory space of the size that we request. So, you see, it is
> perfectly possible that an attempt to allocate 50Gigs will fail, while
> subsequent calls to the same OS function asking for 10 instances of
> 10Gigs each will succeed.

Yeah, I know. Like, sheesh... how would I know about paging and memory
management if I have only written my own memory managers (rolls eyes)

Even at 4KB page resolution, physical out-of-memory situations *can*
occur and you *need* your system to do some quick and efficient
triage... and amputations if needed.

Stability comes before usability, not the other way around. If you are
physically out of memory, you simply cannot assume that you have
enough memory to perform even the simplest of operations. You want a
prime example of such bad design??? Use up all of your hard drive
space on your Windows box and then run a memory-intensive application/
game... catch you on the flip-side of that reset button buddy... and
pray that your chkdsk runs clean. This may be OK to get away with on
your desktop, but this is absolutely intolerable for a server/
production environment.

It would be wise to catch yourself up on some of these concepts
instead of insisting that you know them because you *think* they
should be that way. It could quite possibly keep you from looking like
a complete newbie.

> > If one process is hogging all of the physical and swap memory, other
> > processes are being deprived of that memory. Ask Windows users how
> > appreciative they would be to lose one application's worth of data
> > instead of losing all of their data due to the entire system becoming
> > unresponsive.
>
> Wouldn't the better choice be to not lose ANY data??? Why do Linux
> developers consistently shoot for standards that are _below_ those of
> Windows developers? Why should end-users tolerate a less-stable
> experience -- especially when Linux fans consistently "bill" Linux as
> the better(TM) product??
>

I am not going to get into an NT vs. Linux war, as I really don't like
either of their designs, and I'll pick BSD over the two any day.
However, I have consistently noticed (i.e. from vast server/desktop
experience) that memory management on Linux is handled much better
than NT... and this is coming from someone who runs Windows XP
despite that.

>
> > If the problem is actually with running out of process (virtual)
> > memory, then I can think of more graceful ways to handle such out-of-
> > memory situations.
>
> This is indeed the issue at hand -- being "more graceful" than killing
> the calling application and preventing any error reports from being
> issued.

Is it really? Are you absolutely sure that the program is using up its
entire virtual memory space and not just choking on low RAM and HD
space situations??? Links that state this, exactly, would be
appreciated.

> > > > A better solution to such a problem would be in fronting an effort/
> > > > campaign to reduce the amount of bloat and unnecessary memory usage.
>
> > > This can only be successful if it were "drilled into their heads" at
> > > the start of the Freshman programming course and consistently
> > > continued throughout the CompSci regimen.
>
> > ... instead of Java, C# and garbage collection wiping incompetent
> > asses? It would be appreciated, but highly unrealistic when software
> > is market driven. Quality is no longer a factor, it is just reduced
> > down to time and price.
>
> This is the very mind-set and attitude which will get Linux labelled a
> "has been" in the OS history books.
>
> Nathan.

I think Linux suffers from the very thing that makes it popular. It
tries to be the one OS that can run everywhere and on everything. In
this respect, it suffers in terms of quality. Almost everything is
dependent on gcc to make all of the optimizations. There are too many
redundant libraries, and even then most of them do relatively simple
things. However, you will rarely see a properly configured Linux-based
server need to be restarted, short of upgrades, deep configuration
changes and those rare kernel panics. I wish I could say the same for
even the best NT server setups I have come across.

Toaster Linux FTW!!!

Evenbit

Nov 17, 2007, 11:34:07 PM
On Nov 17, 9:24 pm, Keith Kanios <ke...@kanios.net> wrote:
>
> It would be wise to catch yourself up on some of these concepts
> instead of insisting that you know them because you *think* they
> should be that way. It could quite possibly keep you from looking like
> a complete newbie.
>

The only reason that I "insist that [I] know them" is because I *have*
been reading this type of material. I haven't (knowingly) made any
claim about OS functionality that I didn't gain from reading a few
books on the subject.

Nathan.

Keith Kanios

Nov 18, 2007, 12:46:18 AM

Ah... theory. Leaves a nice warm feeling, doesn't it?

Three potential solutions to fix your unsound comments.

1) Re-read those books.
2) Get more modern/informative books.
3) Try a little practical implementation so you can see why it is so
foolish to back such inconsistent theories or potential
misunderstandings.

I am not trying to be too much of an a**hole here, but I have nearly 8
years of actual OS development experience and system-level programming
under my belt. It is not a lot, but I would be willing to pit it
against someone who seems to have just graduated from HLA. So, believe
me when I tell you: YOU ARE WRONG.

Now, adapt, overcome and enjoy the enlightenment that will follow ;)

Robert Redelmeier

Nov 18, 2007, 10:56:34 AM
In alt.lang.asm ray <r...@zianet.com> wrote in part:

This is a valid high-level argument: success is much more than
avoiding failure. Even glaring failures can be immaterial.

However, I am reading in ALA where details are very relevant so
I feel compelled to offer some of the many:


1) A Linux distro is _not_ the kernel. Distros come and go;
the kernel is eternal :)

2) Much greater code review has been done for OpenBSD. It has
not led to runaway success outside of its domain.

3) Using a desktop/user distro for a "critical support system"
is unlikely to be successful except for "non-traditional"
definitions of "critical"

4) audio might be one of those definitions

5) not understanding VM_overcommit and the OOM killer certainly
is "non-traditional" wrt "critical".

6) If Nathan or J-G dislike certain library code, they are
completely free to change it by forking the project. This is one
of the great strengths of the GPL and Linux. Whining is very
poor form and a waste of effort. Projects propagate based on
user judgement, not on critics and whiners.
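
Point 5 can be made concrete. The sketch below is a toy model of the
three vm.overcommit_memory modes, written from their documented
behavior and heavily simplified (the real mode-0 heuristic and the
kernel's commit accounting are far more involved); it is NOT kernel
code, just an illustration of why "malloc succeeded" does not mean
"memory exists" on Linux.

```python
def overcommit_allows(mode, request, committed, ram, swap, ratio=0.5):
    """Toy model of Linux vm.overcommit_memory -- NOT kernel code.

    mode 0: heuristic -- crudely approximated here as "refuse only
            requests larger than all RAM plus swap"
    mode 1: always overcommit; failure surfaces later, when touched
            pages cannot be backed and the OOM killer steps in
    mode 2: strict accounting against CommitLimit = swap + ram * ratio
    """
    if mode == 1:
        return True
    if mode == 2:
        return committed + request <= swap + ram * ratio
    # grossly simplified stand-in for the mode-0 heuristic
    return request <= ram + swap
```

Under mode 1 a huge allocation "succeeds" immediately; the process
only dies later, when the pages are actually touched -- which is
exactly the point at which people first meet the OOM killer.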


-- Robert


Joe

Nov 18, 2007, 12:06:19 PM

This has been the situation for the last twenty years. Linux and GNU
were born into and grew up in exactly this environment. If they die now,
it won't be for this reason.

Microsoft certainly has good people working for it. But they are
tightly constrained by the requirement to keep re-selling what is
broadly the same software, and even more so by the importance of
maintaining the near-monopoly. What innovation does occur is aimed
almost entirely at maintaining and deepening the incompatibility
between Windows and the rest of the IT world, and to some extent even
with earlier Microsoft software. GNU/Linux has no need or use for
planned obsolescence.

One particularly crippling constraint is that much-loved marketing word
'integration'. This means linking together relatively unrelated programs
so tightly that connection with non-Microsoft software is difficult or
impossible. This is the exact opposite of what is probably the single
strongest programming imperative, to isolate sub-programs as much as
possible and to use only well-defined interfaces between them.

A simple example: the Windows Small Business Server contains a POP3
downloader which drops mail straight into Exchange mailboxes, because it
can, and because the suits can then use the 'i' word. The competitive
POP3 download products all deliver to localhost:25 by SMTP, keeping the
interface clean and simple. The result is that the competitors can
utilise a number of Exchange features which the built-in POP3 connector
bypasses. While Microsoft is not a company to be underestimated, it
should not be overestimated either.
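
The clean seam Joe describes fits in a few lines. This is a
hypothetical downloader's delivery step -- not Microsoft's or any
vendor's actual code -- showing why delivering to localhost:25 keeps
the interface simple: the downloader never needs to know anything
about Exchange internals.

```python
import smtplib
from email.message import EmailMessage

def build_message(frm, to, subject, body):
    """Assemble an RFC 5322 message from its parts."""
    msg = EmailMessage()
    msg["From"] = frm
    msg["To"] = to
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def deliver_locally(msg, host="localhost", port=25):
    """Hand a fetched message to whatever MTA listens on localhost:25.

    The downloader neither knows nor cares whether that MTA is
    Exchange, Postfix or sendmail -- that is the whole point of the
    well-defined interface."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.send_message(msg)
```

Swapping the MTA behind port 25 requires no change to the downloader
at all, which is exactly the isolation the built-in connector gives up.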

Jerry McBride

Nov 18, 2007, 3:01:16 PM
nbake...@charter.net wrote:


FUD... FUD... Go away... Come again... Some other day.

If it smells like a troll, looks like a troll and writes like a troll...

IT MUST BE A TROLL.


--

Jerry McBride (jmcb...@mail-on.us)

Kai-Martin Knaak

Nov 18, 2007, 5:47:22 PM
On Fri, 16 Nov 2007 18:15:41 -0800, nbaker2328 wrote:

> is exactly the mind-set that will bring Linux tumbling down the hill
> into the valley of the forgotten, non-important OSs that "could have
> been".

Err, no.

1) Linux is. So it already lost the chance to be "could have been" :-)

2) Development of Linux (the kernel) is a perfect example of meticulous
examination and acid tests before changes enter the main tree.

3) If some major distro were to ship a significant number of unstable
applications, people would switch to a different distro rather than to
Windows.

4) There is a unique selling point to GPLed software: no license fees,
no registration. As applications become more and more interchangeable,
this becomes a pull factor towards Linux for anyone with limited
resources.

---<(kaimartin)>---
--
Kai-Martin Knaak
http://lilalaser.de/blog

ray

Nov 18, 2007, 6:15:00 PM

Would a real time monitoring system in a major DOD test and evaluation
environment qualify? I think so. And the ones I was familiar with before I
retired three years ago relied on Unix and Linux.

Robert Redelmeier

Nov 18, 2007, 6:50:49 PM
In alt.lang.asm ray <r...@zianet.com> wrote in part:
> On Sun, 18 Nov 2007 15:56:34 +0000, Robert Redelmeier wrote:
>> 3) Using a desktop/user distro for a "critical support system"
>> is unlikely to be successful except for "non-traditional"
>> definitions of "critical"
>
> Would a real time monitoring system in a major DOD test and evaluation
> environment qualify? I think so. And the ones I was familiar with
> before I retired three years ago relied on Unix and Linux.

Certainly it would qualify as "critical". And I don't doubt
Unix and other Linux-like systems could pass.

All I'm trying to say is that I doubt an out-of-the-box,
install-everything _user_ distro like Ubuntu would pass. Just
think of all the kernel modules. Redhat, Debian, Slackware or
even a correctly stripped Ubuntu would be needed, preferably
with a kernel customized for the hardware. An X server, XDM or
other eye candy will not necessarily be reliable on all hardware.

-- Robert

Evenbit

Nov 19, 2007, 4:34:57 PM
On Nov 18, 12:46 am, Keith Kanios <ke...@kanios.net> wrote:
>
> Ah... theory. Leaves a nice warm feeling, doesn't it?

... and leads to my trademark trait of "going off half-cocked!" ;)

>
> Three potential solutions to fix your unsound comments.
>
> 1) Re-read those books.
> 2) Get more modern/informative books.
> 3) Try a little practical implementation so you can see why it is so
> foolish to back such inconsistent theories or potential
> misunderstandings.

Luckily, just re-reading them was enough to convince me of my error.
My confusion was due to putting too much emphasis on the fact that
blocks always contain pages that are assigned to contiguous regions of
a process' address space. It is true that it is possible to fragment
(make a mess of) the process' address space, but this should only
happen due to extremely bad code or via intentional effort on the part
of the programmer. Even then, I suspect that any "fragmentation wall"
is purely theoretical because your process would be stopped long
before it could be hit.
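
The "fragmentation wall" can be made concrete with a toy first-fit
allocator over a flat range of addresses -- a deliberately naive model,
nothing like the real kernel VM or libc malloc: free half the blocks
and the space is half empty, yet no single hole is big enough for a
larger request.

```python
class ToyAddressSpace:
    """First-fit allocator over a flat [0, size) range -- a toy model
    of address-space fragmentation, not real VM behavior."""

    def __init__(self, size):
        self.size = size
        self.allocs = []  # (start, length) tuples, sorted by start

    def alloc(self, length):
        """Return the start of the first hole that fits, else None."""
        prev_end = 0
        for i, (start, ln) in enumerate(self.allocs):
            if start - prev_end >= length:  # hole before this block
                self.allocs.insert(i, (prev_end, length))
                return prev_end
            prev_end = start + ln
        if self.size - prev_end >= length:  # tail hole
            self.allocs.append((prev_end, length))
            return prev_end
        return None  # enough total space may exist, but no hole fits

    def free(self, addr):
        """Release the block starting at addr."""
        self.allocs = [(s, l) for (s, l) in self.allocs if s != addr]
```

Freeing every other 10-unit block in a 100-unit space leaves 50 units
free, yet alloc(20) still fails: the free space exists, just not
contiguously. That is the wall -- and as noted above, a real process
would normally be stopped long before reaching it.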

Also, if you write an application that is extremely memory-hungry, you
need to scrap everything and go back to the flow-charts for an
entirely different design.

Nathan.

Keith Kanios

Nov 20, 2007, 9:05:10 AM
On Nov 19, 3:34 pm, Evenbit <nbaker2...@charter.net> wrote:
> On Nov 18, 12:46 am, Keith Kanios <ke...@kanios.net> wrote:
>
>
>
> > Ah... theory. Leaves a nice warm feeling, doesn't it?
>
> ... and leads to my trade-mark trait of "going off half-cocked!" ;)

There's nothing to prove around here, only knowledge to gain and
eventually give back.

>
>
> > Three potential solutions to fix your unsound comments.
>
> > 1) Re-read those books.
> > 2) Get more modern/informative books.
> > 3) Try a little practical implementation so you can see why it is so
> > foolish to back such inconsistent theories or potential
> > misunderstandings.
>
> Luckily, just re-reading them was enough to convince me of my error.
> My confusion was due to putting too much emphasis on the fact that
> blocks always contain pages that are assigned to contiguous regions of
> a process' address space. It is true that it is possible to fragment
> (make a mess of) the process' address space, but this should only
> happen due to extremely bad code or via intentional effort on the part
> of the programmer. Even then, I suspect that any "fragmentation wall"
> is purely theoretical because your process would be stopped long
> before it could be hit.
>
> Also, if you write an application that is extremely memory-hungry, you
> need to scrap everything and go back to the flow-charts for an
> entirely different design.
>
> Nathan.

There you go ;)
