
[Advocacy] Why GNU/Linux is The Better Operating System(TM) (caution: long article!)


Aragorn

Nov 12, 2005, 9:23:50 AM
Fellow residents of /comp.os.linux.advocacy,/


As I have said in one of my other posts, I'm on a peace mission. ;-) I
have also already given a hint that a post such as this one was on the
way.

For the record, I wish to add that this post is mostly intended for
GNU/Linux newbies and Windows advocates who've had little or no
experience with GNU/Linux - i.e. as an invitation.  I myself have been
using GNU/Linux exclusively for the last six years and have had only
little exposure to Windows, so I have a different view on it.

Despite this post being mainly addressed to the abovementioned, I hope
it will also be a nice read for the resident GNU/Linux advocates.  I'm
aware that some of it /may/ come across as a bit incoherent, as I jump
from one subject to the next in some paragraphs while sticking to a
single subject across several consecutive paragraphs elsewhere.

Yet, it took me a while to scribble this all together, and I didn't want
to go through the trouble of working with titles, subtitles and topic
separators. It would only have made this post much longer, and I'm not
exactly trying to get you guys bored either. Also, please forgive me
any typos that may have eluded my attention. ;-)

So here goes... ;-)

*********

The name UNIX has long been used as the exclusive, trademarked name for
a specific proprietary operating system from AT&T, which had forked off
into a few "dialects", such as AT&T's System V architecture, or the BSD
architecture.

Today however, UNIX is still a trademark (owned by the Open Group) but
it is no longer one specific operating system. Instead, it is used as
the designation for a _class_ of operating systems.  Another quite
popular standard describing UNIX-like operating systems is POSIX,
albeit that POSIX focuses on different aspects of the system than the
Single UNIX Specification does.

GNU/Linux is a UNIX-like operating system. It doesn't have a shred of
code from the original proprietary UNIX operating system - regardless
of what some dimwits at SCO or Microsoft may claim ;-) - but it
attempts to live up to the Single UNIX Specification as described by
the Open Group and to the POSIX standards. It has however not yet been
validated as compliant with the Single UNIX Specification.

Why a UNIX architecture, or why POSIX, you may ask... Well, when
Richard Stallman conceived the idea of Free Software, he was employed
at MIT, where he was actually working on another type of operating
system.  Still, he felt that a UNIX architecture would be the best
basis for a Free Software operating system because of its portability.
Stallman founded the Free Software Foundation, and the operating system
he envisioned came to life as GNU.

UNIX architectures are large toolboxes. The application software is so
tightly integrated into the system that one can't tell the difference
between application and operating system anymore, or at least not for
most of the software - GUI-typical applications such as OpenOffice are
of course a lot more identifiable as application suites.

Unlike a certain operating system from somewhere in Redmond, Washington,
UNIX systems do not consider individual filesystems as entities from
the user perspective. In Windows, you store a file on "a drive", and
all drives have alphabetic drive letters to access them with. They
bear equal importance as entities to the operator, and the operator
must ascertain their relative importance himself.

This approach does not make any distinction between the nature of the
filesystem or the hardware it resides on. There is only the convention
that "drive A:" - and if applicable, a "drive B:" - is a floppy drive,
and that the first Windows-accessible primary partition on a hard disk
is "drive C:". For anything higher up in the alphabet, it's unclear as
to what is referred to without looking at the icon next to the "drive".
"Drive D:" could be a second partition on your single hard disk or the
first primary partition on a second hard disk in your system, but it
could also be your DVD/CD drive, a USB memory stick or a network share
- not that I wish to encourage abuse of Google, but there even is a
freeware plug-in for the Windows filemanager that allows you to access
your /gmail.com/ account as a Windows drive letter.

In Windows, each filesystem has its own root directory, and each
filesystem is an entity. It is left to the user to decide which is
what - albeit that the Windows GUI will give you hints by showing
designated icons next to the drive letter.

This approach dates back to the time of the first personal computers
running the CP/M operating system. Those machines didn't have hard
disks - IBM likes to refer to them as "fixed disks", since the idea was
that they'd be non-removable (under normal conditions) - and alphabetic
disk designations were used so that files could be copied from one
floppy disk to another, whether the machine had a single floppy drive
or two.

Another consideration to this approach was that back then, there was a
clear distinction as to what was considered "internal" and "external",
or to put it more technically, what was considered I/O (Input/Output).
The idea was that the CPU and the physical RAM were "internal", and
that disk storage, printers, monitor screens etc. were considered
"external".

This philosophy dates back to even before the era of floppy disks, when
storage involved tapes, and even before that, when input came from
punch cards.  Information technology was still very new in those days,
especially to the end-user.  As it was a new concept totally void of
abstractions, the end-user was confronted with such technical aspects
as what was considered "input" and "output", or "internal" and
"external".  The whole machine was seen far more as a collection of
components than as a complete entity.

In the meantime however, personal computers have acquired "fixed disks"
just like those used in the larger architectures, such as the already
existing UNIX platforms.  Microsoft's operating systems have also grown
far more complex than the original CP/M from Digital Research and its
Microsoft successor MS-DOS, and so this alphabetic volume
identification is in fact no longer efficient.

UNIX systems have a totally different approach.  First of all, they came
into existence on an architecture that was totally different from that of
"the home computer" which later evolved into "the PC" and "the
workstation".

UNIX is a multi-user, client/server-style architecture - in effect a
whole network within one physical computer - in which multiple users
were connected to the same system via serial-port terminals or
"consoles".

The earliest terminals were in fact plain old telex machines, in UNIX
called "teletypes". Later on, "visual" terminals were used, which
comprised of an often monochrome CRT monitor, a keyboard and some very
basic logic for sending the keyboard's input to the computer and
displaying the computer's output.

Early terminals didn't even make a distinction between uppercase and
lowercase. They only had the basic electronics to display everything
in uppercase, but as UNIX systems are case-sensitive, the operating
system would detect such a terminal and translate everything coming
from that terminal into lowercase.

You can still experience this effect in GNU/Linux today.  Simply try to
log into a character-mode console with your *Caps* *Lock* turned on.
Everything you see from the login onwards will be displayed in
uppercase.  Indeed, it is not a bug, it's a feature. ;-)
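
To give you an idea of what that looks like - a made-up session, and
the exact rendering varies per system and getty:

    login: ARAGORN           <- typed with Caps Lock on
    Password:
    $ ECHO HELLO WORLD       <- input is mapped to lowercase for the shell...
    HELLO WORLD              <- ...and all output is mapped back to uppercase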

In UNIX, _everything_ is a file.  A hard disk is a file.  The output of
a process routed to the input of another process - also known as "a
pipe" - is a file.  A partition on a hard disk is a file.  A terminal
console is a file, etc.
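
A quick illustration (device names and dates invented) - note the
leading "b" and "c" marking block and character special files:

    $ ls -l /dev/hda /dev/hda1 /dev/tty1
    brw-rw----  1 root disk 3, 0 Nov 12 09:23 /dev/hda    <- a whole hard disk
    brw-rw----  1 root disk 3, 1 Nov 12 09:23 /dev/hda1   <- a partition on it
    crw--w----  1 root tty  4, 1 Nov 12 09:23 /dev/tty1   <- a terminal console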

However, UNIX makes an abstraction of that all to the user. In a UNIX
environment, storage media are not necessarily considered to be
separate entities or "external to the machine".

In UNIX, you don't store a file on this or that volume; you store the
file "on the computer", and for doing so, you can store the file in a -
for you - writable place in a unified directory structure. That same
directory structure also features directories which contain virtual
filesystems such as */proc* or */sys* as well as they do physical
filesystems and RAM-based filesystems - e.g. /tmpfs/ or /ramfs,/ -
without your needing to think about the physical media.

UNIX operating systems can use multiple filesystems in one system,
whether they are partitions - often still referred to as "slices" - on
one or more fixed disks, or whether they reside on some other medium
altogether.

Everything is integrated into the same directory tree, starting at the
root directory. No alphabetic volume designations, no multiple root
directories. Just a single forward slash as the root from which
everything else stems.  Even Windows users are somewhat familiar with
this approach because of the "spelling" of URL's, which were modeled
after the UNIX directory tree, and because it's very hard to install
Windows itself over multiple "volumes" - i.e. everything will typically
be in a directory tree on the /C:/ drive.
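
A hypothetical session to illustrate (device names and sizes invented):

    # mount /dev/hda3 /home          <- a second partition joins the tree
    # mount /dev/cdrom /mnt/cdrom    <- ...and so does a CD-ROM
    $ df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hda1             9.4G  3.1G  5.9G  35% /
    /dev/hda3              28G   11G   16G  41% /home
    /dev/cdrom            650M  650M     0 100% /mnt/cdrom

One tree, three filesystems - and whoever browses */home* needn't know
or care where it physically lives.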

This approach of using a single root directory and abstracting the
physical storage locations from the operator's experience also allows a
UNIX operating system to install the operating system itself in
separate filesystems, which not only limits filesystem fragmentation
and the possibility of filesystem corruption, but also allows for a
more apt configuration.

In UNIX - and even more so in GNU/Linux due to the many filesystem types
it supports - it is possible to use different types of filesystems -
e.g. /reiserfs/ or /XFS/ - to house sections of the directory tree, and
to have those filesystems use different blocksizes for efficiency.
Very important is also the option to mount filesystems read-only, so as
to protect their integrity and act as an extra barrier against
malicious coders and their creations.
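
In practice this is set up in */etc/fstab;* a sketch of such a layout
(the partitioning is invented):

    /dev/hda1   /       reiserfs  defaults       0 1
    /dev/hda2   /usr    ext3      defaults,ro    0 2
    /dev/hda5   /home   xfs       defaults       0 2

Here */usr* - which rarely changes outside of software installs - is
mounted read-only.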

Security-wise, UNIX systems traditionally use a permissions system
embedded into the filesystem, based upon three main distinctions
between users, i.e.
- the user, i.e. the owner of the file;
- the group, i.e. one of the groups that the user belongs to and which
is set in the filesystem as the "owning group" of the file; and
- the others, i.e. "the rest of the world".

UNIX users have a unique User ID (UID) and a unique group ID (GID), and
every file in the system - and everything *is* a file! - is owned by a
user and a group, even if that user and group are "nobody".  The
"nobody" account is an existing account - either with "nobody" or
"nogroup" as the group it belongs to - and with a special, very high
UID - traditionally 65534, i.e. "-2" in 16-bit arithmetic.  The system
administrator is called "root" - because in the old days, his home
directory was the root directory itself - and has UID "0".
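
You can inspect this for yourself (user names invented here, and the
exact UID of "nobody" varies per system):

    $ id
    uid=1000(aragorn) gid=100(users) groups=100(users)
    $ grep -E '^(root|nobody):' /etc/passwd
    root:x:0:0:root:/root:/bin/bash
    nobody:x:65534:65534:Nobody:/:/bin/false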

Permissions are then granted based upon three bits per user category,
i.e.:
- read permission (r)
- write permission (w)
- execute permission (x).

<note>
Just because you've marked a file as read-only by unsetting
the "w" permission for a user or group, doesn't mean that it's
impossible for anyone to get rid of or replace the file.  The
key lies in the permissions said person has on the directory
containing the file. ;-)
</note>
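
A small demonstration of the note above (names invented):

    $ ls -ld . secret.txt
    dr-xr-xr-x  2 aragorn users 4096 Nov 12 09:23 .
    -rw-r--r--  1 aragorn users 1024 Nov 12 09:23 secret.txt
    $ rm secret.txt
    rm: cannot remove `secret.txt': Permission denied
    $ chmod u+w .      <- write permission on the *directory* is what counts
    $ rm secret.txt    <- ...and now the deletion succeeds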

Additionally, extra information can be stored in the file's mode, such
as:
- "d" for a directory;
- "t" for the "sticky bit" - now mainly used on world-writable
directories to protect their contents from being renamed or deleted by
users other than the owner; historically it was also set on executables
to keep their program text resident in swap;
- "s" to make an executable always run with the user ID of its owner
(a similar flag in the group position does the same for the owning
group); and
- "c" or "b" to denote a character or block special device - remember
that in UNIX, everything is a file; "c" is used for things like a
printer, a display adapter or a console, while "b" is used for
filesystems and the media they reside on.
There are a few other designations as well.
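
Some of these as they appear in a long listing (sizes and dates
invented, but typical):

    $ ls -ld /tmp /usr/bin/passwd /dev/lp0
    drwxrwxrwt  14 root root  4096 Nov 12 09:23 /tmp            <- sticky bit
    -r-sr-xr-x   1 root root 28508 Aug 17  2005 /usr/bin/passwd <- set-UID root
    crw-rw----   1 root lp     6, 0 Nov 12 09:23 /dev/lp0       <- printer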

The NT-based versions of Microsoft Windows also have UID's and GID's,
and use Access Control Lists (ACL's), which offer a more fine-grained
control, albeit that the extra functionality of an ACL versus the
UNIX-style permissions system is only really needed in very large-scale
and complex set-ups. Many UNIX-like operating systems (including
GNU/Linux) also feature ACL's now in addition to the native
"user/group/others" permissions set.

Additionally, Microsoft Windows allows one to bypass the filesystem
security provided by ACL's if the system is installed on a /FAT/ or
/vfat/ filesystem, since those filesystems do not support ACL's. Any
user would then have the ability to erase or overwrite any file in the
system, provided that the file is not locked because it's in use by the
system itself, of course.

GNU/Linux for that matter cannot be installed in a filesystem that
doesn't support UNIX-style ownerships and permissions, as this is a
core part of the UNIX security model. There are some GNU/Linux
distributions that _do_ allow to be installed in a /FAT/ partition (or
on a floppy), but they use a trick: they run off of a virtual
UNIX-style filesystem stored inside a file on that /FAT/ filesystem,
called /umsdos./

<note>
These distributions are mainly used for demonstration purposes
only, or for boot floppies that need to hold more than just a
kernel. /umsdos/ is generally considered an unstable approach.
</note>

Windows does not allow the operating system itself to be installed over
multiple filesystems, nor to have those filesystems mounted read-only.
Windows also makes heavy use of temporary files and relies on the
filename's "extension" - the (typically three-character) section of the
filename after the last period - for a file to be recognized as an
executable.

Another aspect of Windows is that it stores everything regarding its
configuration in a registry.  The registry does take the user ID into
consideration when a user or a program attempts to modify anything
inside the registry or attempts to write a new key to it, but it forms
a clutter of binary, hexadecimal and string values, all grouped
together into a handful of binary files.

The Windows registry can therefore easily become corrupted; it poses an
extra security hazard in regards to malware or badly written
application installers, and the files containing the registry may
become fragmented over time.

UNIX systems traditionally use plain text files for their configuration.
This leads to many such files being present on an average installation,
but the good news is that they are all neatly grouped underneath the
*/etc* directory, with additional configuration files stored in an
*./etc* branch under */opt* - where those *./etc* branches are located
in the respective application's directory - or under */usr* or
*/usr/local.*

It all seems complex at first glance, but given a bit of consideration,
such a layout will however soon make sense, as the key to it all is
logic. ;-)
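
For instance, the core configuration files are all plain, human-readable
text (output abridged, and the exact list varies per system):

    $ file /etc/fstab /etc/resolv.conf /etc/passwd
    /etc/fstab:       ASCII text
    /etc/resolv.conf: ASCII text
    /etc/passwd:      ASCII text

Any of them can be read or repaired with nothing more than a text
editor - no special registry tools required.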

Microsoft Windows has its graphics and even a large portion of Internet
Explorer running inside the kernel. This approach is considered Bad
Practice(TM) by engineers as it leads to more kernel vulnerability or
instability in terms of bugs in these routines. Yet it's considered a
strategic Must Have(TM) by Microsoft itself regarding monopolist
tactics.

On the other hand, Microsoft Windows has a monolithic kernel - like
GNU/Linux and many other UNIX-like operating systems - but does make
use of an advanced microkernel aspect, called DirectX. DirectX is a
userspace API for graphics and other multimedia, which allows direct
hardware access to applications while the CPU is in low privilege mode
(ring 3). The UNIX equivalent of DirectX is OpenGL.

GNU/Linux - as most UNIX-style operating systems - does not have any
graphics in the kernel - we're not talking about graphics _drivers_
here as those are simply hardware drivers, which in regards to the
framebuffer driver incorporates a penguin logo to show at boot time.

In regards to swap memory, GNU/Linux uses a dedicated partition for
paging out the contents of the physical memory. This swap partition is
formatted in a layout very similar to the physical memory. Kernelspace
memory is never paged out. Kernel processes are given a maximum of 1
GB of memory, and userspace processes are given a maximum of 3 GB per
process, although other settings are possible.
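
The kernel will happily show you its swap set-up (sizes invented):

    $ cat /proc/swaps
    Filename        Type            Size    Used    Priority
    /dev/hda2       partition       524120  3208    -1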

Windows uses a swap file for paging out the contents of the physical
memory. This swap file is a regular file within a regular filesystem
and therefore also gets cached again, which makes paging out memory
contents from the RAM to the hard disk quite pointless and tends to
fill up the cache in such a way that the whole system gets bogged down
and requires a reboot to return to a snappy operation.

Windows reserves 2 GB of virtual memory to kernel processes and 2 GB to
userspace processes. I was told by someone more experienced with
Windows than myself that Windows pages out kernelspace processes as
well - notably the GUI code - in its server versions. I guess that
makes sense, but I can neither confirm nor deny this.

Either way, paging out kernel memory is considered Bad Practice(TM) by
engineers, and while I can understand that a server version of Windows
would not need to have its graphical engine inside the kernel all the
time, one could then also argue that it shouldn't be inside the kernel
in the first place.

There is much debate going on with respect to monolithic kernels and
microkernels.  There is even a third type, known as the exokernel.

A monolithic kernel manages the memory, the process scheduling, the
hardware access and all I/O. A microkernel only handles process
scheduling and memory management, but leaves hardware access to
userspace. An exokernel only schedules processes, but leaves even the
memory management to userspace.

While each of the above designs has its advocates and naysayers, Linus
Torvalds favors a monolithic design. For one, it is faster and less
complicated than a microkernel design as it involves far less context
switching for things like hardware access.

The monolithic design of the Linux kernel also allows for more control
over who admits code as part of the kernel, and all code is strictly
evaluated before it is accepted into the official kernel tree. As
there are no financial considerations involved in designing or
maintaining the kernel, the developers also have sufficient time to
test new kernel code.

As shown in a recent thread on this newsgroup, code that is submitted
as a "last minute add-on" will have to wait until the next kernel
version, and it's not unusual for new code to have to wait out two or
three minor kernel releases before it becomes accepted into the main
tree.

The kernel is upgraded per minor version - with a few stable snapshots
in between, e.g. the current stable kernel is 2.6.14.2, with the ".2"
being the patch level - once a set of new code admissions and bugfixes
has been found to be stable. As a result, kernel upgrades are issued a
lot sooner than is the case with Microsoft Windows.
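
You can check which kernel you are running with /uname/:

    $ uname -r
    2.6.14.2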

As GNU/Linux is a portable operating system, many new code admissions or
bugfixes may be related to one specific hardware platform, but in
itself, this leads to much cleaner code.  Developing a portable
operating system is the best way to expose erroneous logic in code that
appears to work flawlessly on one architecture, simply because that
architecture happens to tolerate such "logical errors" or never
exercises certain architecture-specific assumptions.

UNIX as an architecture has been the preferred choice of information
technology specialists and computer scientists who needn't (and won't)
concern themselves with economic principles.  By its very origin, it
is highly scalable and flexible.

By limiting the kernel code to that which is absolutely necessary for
the functionality and security of the system and abandoning such
concepts as "scheduled release dates" or deadlines, the UNIX
architecture - and more specifically GNU/Linux - has already proven to
be highly stable and applicable for mission-critical environments.

Unlike Microsoft Windows, GNU/Linux was never developed for economic
reasons.  It was simply developed and is still maintained for technical
excellence. Additionally, it is a statement of Freedom, both of the
developers and towards the users.

Microsoft Windows on the other hand was conceived by a corporate entity
as a commodity which was to be the base for other commodity software
from this corporation and from its peer proprietary vendors. Its
release schedule is time-bound, and its features are market-driven,
much more than technology-driven. Digital Rights Management (DRM) is
such an example.

Architecture-wise, Microsoft Windows builds upon the old legacy of CP/M
and MS-DOS, of the earlier, DOS-based versions of Windows, and of OS/2,
although the kernel has many - but certainly not all - of the features
of VMS.

Unlike UNIX systems, which scale down from multi-user hardware such as
mainframes and minicomputers, Microsoft Windows attempts to build up
from the single-user, single-tasking platforms formed by CP/M and
MS-DOS to what it is now, and to what it attempts to be for the future.

Put in different words, you could say that GNU/Linux brings the power of
mainframes and minicomputers down to the microcomputers, while Microsoft
builds its system up from the microcomputer level, and thus starts with
a smaller featureset and a narrower view.

Therefore, Microsoft Windows is not a very scalable or portable
operating system, and both its legacy and its incarnation as a
tradeable commodity are the main reasons for its instability.

Additionally, Microsoft has long ignored the pleas of its users for more
stability and has instead focused on big revenue from big customers,
i.e. the corporate world.  Their only concern for the non-corporate
customer is to have that customer buy their software, as anyone who's
ever contacted their Customer Service will be able to tell you - and I
have; I actually had much better experiences contacting IBM back when I
was using OS/2.

The appearance of GNU/Linux in the IT scene has put Microsoft on its
toes and made it take notice of what a good operating system should be
like.

A good operating system should not require any reboots unless a new
kernel is being started - and even that is irrelevant in the world of
logical CPU partitions and virtual machines, where multiple instances
of the same operating system or multiple operating systems respectively
run on the same hardware concurrently. This is the native environment
of UNIX-like operating systems, and the Linux kernel supports logical
partitioning, clustering and virtual servers.

Most GNU/Linux distributions also come with /wine/ and/or /dosemu/ to
allow support for Windows or DOS applications.  The Linux kernel
supports networking protocols from Microsoft, Apple and Novell's
Netware, alongside TCP/IP and UUCP.  GNU/Linux thus reaches out to other
platforms far more extensively than Microsoft Windows does.
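
For instance, attaching a share from a Windows machine straight into
the directory tree (host and share names invented; 2.6 kernels provide
this via /smbfs/ or the newer /cifs/):

    # mount -t smbfs -o username=guest //winbox/music /mnt/smbshare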

As a sidenote to the above, the FOSS community makes much of the
application software available on GNU/Linux available for other
platforms as well, including (but not limited to) Microsoft Windows.

Because of GNU/Linux's nature as a UNIX-like platform, it's not only a
superior operating system, but it also forms a great basis for those
studying information technology. Additionally, the availability of the
source code for studying and its re-usability in one's own code are
important motivations for code developers.

Of course, Microsoft is never going to go away, short of an eventual
legal mishap that would destroy them as a corporation. Yet I don't
think that's ever going to happen.  They are walking a thin line and
crossing it every once in a while, but they have a battalion of lawyers
to protect them - even if the legal victory would only be based upon
legal technicalities - and to warn them far enough in advance regarding
Things To Avoid(TM).

GNU/Linux was never intended to be a competitor for anything. It was
intended as a Free & Open Source Software _alternative_ to commercial
UNIX operating systems. Still, its technical excellence - mainly due
to the Linux kernel, which was far sooner available and stable than was
the case for GNU's native Mach/Hurd microkernel design - has led to a
widespread and ever-growing acceptance of this operating system.

Although conceived as a portable and scalable operating system running
on anything from wristwatches to IBM mainframes and supercomputers,
much of its usage is seen in the realm of microcomputers, and
inevitably, this also means workstations and desktop PC's. And this is
what Microsoft considers to be its home ground.

This is why they are so determined on getting rid of GNU/Linux, in every
possible way they can imagine: by ignoring it among their supported
platforms, by lying about it in advertisements or in their "Get the
facts" campaign, by political lobbying in the US Congress, by elevating
pressure upon the vendors of brandname microcomputers, by threatening
foreign governments with false claims regarding allegedly stolen
copyrighted code in the Linux kernel, by non-disclosure agreements with
hardware peripheral vendors, and much, much more...

However, just as Microsoft isn't ever going to go away, neither is
GNU/Linux.  It's a project driven by technologically and ideologically
inspired engineers, and as there are no economic considerations, there
are also no economic hazards to deal with.

Windows is ubiquitous. Microsoft had and still has the monopoly over
the home and office desktop. People have also grown to accept the
quirks of the Windows operating systems as being "just as a computer is
supposed to behave". Without any other operating systems to compare
with, little did they know any better.

To those people, GNU/Linux will seem daunting, and Microsoft advocates
like to wave the words "steep learning curve" in their faces.  Yet,
to an open and logically thinking mind, GNU/Linux isn't all that
difficult to learn.

Excellent graphical environments such as KDE and Gnome make it very easy
for the average office clerk to perform average office jobs such as
handling e-mail, spreadsheets and graphs, typing up legal letters or
retrieving information from a database.

For the more experienced or interested private computer user, GNU/Linux
will reveal a wealth of tools and information, which can all be
deployed by using common sense, logic, and the installed /man/ pages.
Its nature as a UNIX architecture makes it an ideal platform for
scientific computation, visualization and graphical development, as
well as for server purposes. The availability of its source code
allows for it to be used on both older and new hardware.

Only yesterday, I read a post from a respected albeit a bit socially
inept Usenet poster who has been around for quite some time already -
he's a professor of computer science - in which he countered a newbie's
complaint regarding the /man/ pages.  His reply was something like:
"They're not there to tell you what to do, but they are there to tell
you *how* to do it if you already know *what* you want to do."

And he's right... The /man/ pages are a syntactic help, even if they do
contain some extra information at times, but the real documentation
regarding every aspect of the system is available on the Internet, in
the /HowTo/ documents on the local system, and in the documentation
that came with an application, tool or whatever other package.
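
This is also why /apropos/ exists: it searches the man page
descriptions for when you know *what* you want but not *which* page
covers it:

    $ apropos rename
    mv (1)               - move (rename) files
    rename (2)           - change the name or location of a file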

GNU/Linux is the most democratic operating system available today. It's
technically also one of the very best. And you know what the great
news is? *You* get to _choose_ whether you want to use it or not, and
when it's installed on your computer, it's *yours.*

;-)

--
With kind regards,

*Aragorn*
(Registered GNU/Linux user #223157)

Erik Funkenbusch

Nov 12, 2005, 6:17:20 PM
On Sat, 12 Nov 2005 14:23:50 GMT, Aragorn wrote:

> Unlike a certain operating system from somewhere in Redmond, Washington,
> UNIX systems do not consider individual filesystems as entities from
> the user perspective. In Windows, you store a file on "a drive", and
> all drives have alphabetic drive letters to access them with. They
> bear equal importance as entities to the operator, and the operator
> must ascertain their relative importance himself.

Actually, it's not necessary to store information on drive letters.  This
has become largely a convenience, or shortcut.

> This approach does not make any distinction between the nature of the
> filesystem or the hardware it resides on. There is only the convention
> that "drive A:" - and if applicable, a "drive B:" - is a floppy drive,
> and that the first Windows-accessible primary partition on a hard disk
> is "drive C:".

Totally incorrect. The first bootable drive can be any letter you want it
to be (or no letter at all) under Windows. My first drive letter was I:
for many years. Not on purpose, but just because my C: drive died and I
made my I: drive the bootable drive and removed the C: drive.

> For anything higher up in the alphabet, it's unclear as
> to what is referred to without looking at the icon next to the "drive".
> "Drive D:" could be a second partition on your single hard disk or the
> first primary partition on a second hard disk in your system, but it
> could also be your DVD/CD drive, a USB memory stick or a network share
> - not that I wish to encourage abuse of Google, but there even is a
> freeware plug-in for the Windows filemanager that allows you to access
> your /gmail.com/ account as a Windows drive letter.

Again, this is a convenience.  A "label" of sorts; although many older
programs are dependent upon the drive letter nomenclature, the OS itself
doesn't require it.

> In Windows, each filesystem has its own root directory, and each
> filesystem is an entity. It is left to the user to decide which is
> what - albeit that the Windows GUI will give you hints by showing
> designated icons next to the drive letter.

Again, completely incorrect.  For example, you can mount an entire
partition inside a Windows folder, just like you can in Linux.

> In UNIX, you don't store a file on this or that volume; you store the
> file "on the computer", and for doing so, you can store the file in a -
> for you - writable place in a unified directory structure. That same
> directory structure also features directories which contain virtual
> filesystems such as */proc* or */sys* as well as they do physical
> filesystems and RAM-based filesystems - e.g. /tmpfs/ or /ramfs,/ -
> without that you need to think about the physical media.

Yet there are times when you DO need to think about the physical media,
and then you get into the weird situation of placing a volume within the
filesystem, such as /mnt/cdrom or /mnt/smbshare.  This kind of dichotomy
is a hack, because there are times when the physical volume DOES matter.
Another example is volume free space.  The single-volume approach makes
it much less clear why you might be able to write 200GB to one directory
but only 50GB to another.

Even so, Windows also has such a filesystem as well, it's called DFS (no
relation to the DFS in here, I don't think) or Distributed File System.

> Additionally, Microsoft Windows allows one to bypass the filesystem
> security provided by ACL's if the system is installed on a /FAT/ or
> /vfat/ filesystem, since those filesystems do not support ACL's. Any
> user would then have the ability to erase or overwrite any file in the
> system, provided that the file is not locked because it's in use by the
> system itself, of course.

Which of course is completely different than mounting FAT under Linux or
Unix, right?  Wrong.

> GNU/Linux for that matter cannot be installed in a filesystem that
> doesn't support UNIX-style ownerships and permissions, as this is a
> core part of the UNIX security model. There are some GNU/Linux
> distributions that _do_ allow to be installed in a /FAT/ partition (or
> on a floppy), but they use a trick: they run off of a virtual
> UNIX-style filesystem stored inside a file on that /FAT/ filesystem,
> called /umsdos./

Oh, I get it. It can't be done, except when it's done. Nice double-think.

> Microsoft Windows has its graphics and even a large portion of Internet
> Explorer running inside the kernel.

Internet explorer doesn't run inside the kernel at ALL. Where do you get
this shit? Do you just make it up?

> This approach is considered Bad
> Practice(TM) by engineers as it leads to more kernel vulnerability or
> instability in terms of bugs in these routines. Yet it's considered a
> strategic Must Have(TM) by Microsoft itself regarding monopolist
> tactics.

Yet you ignore the fact that parts of Unix GUI's are also in the kernel,
such as the framebuffer. Further, placing the GUI outside the kernel gives
you no more security since the GUI process is given full root privs and can
access hardware directly, thus causing the same kinds of lockups and
crashes that can occur in the kernel.

> On the other hand, Microsoft Windows has a monolithic kernel - like
> GNU/Linux and many other UNIX-like operating systems - but does make
> use of an advanced microkernel aspect, called DirectX. DirectX is a
> userspace API for graphics and other multimedia, which allows direct
> hardware access to applications while the CPU is in low privilege mode
> (ring 3). The UNIX equivalent of DirectX is OpenGL.

??? You are just completely making this shit up. DirectX has nothing to
do with a microkernel. They're two different things entirely. DirectX
uses a HEL, or Hardware Emulation Layer, which you might have confused with
HAL (Hardware Abstraction Layer) which is a technique often used by
Microkernels.

Further, while NT isn't a true Microkernel, it's Microkernel based, using a
supervisor kernel (NTOSKRNL) and a userland kernel (Kernel32) for the
Windows Subsystem.

> GNU/Linux - as most UNIX-style operating systems - does not have any
> graphics in the kernel - we're not talking about graphics _drivers_
> here as those are simply hardware drivers, which in regards to the
> framebuffer driver incorporates a penguin logo to show at boot time.

Wrong, the framebuffer is used for a lot more than just the kernel boot
GUI.

> In regards to swap memory, GNU/Linux uses a dedicated partition for
> paging out the contents of the physical memory. This swap partition is
> formatted in a layout very similar to the physical memory. Kernelspace
> memory is never paged out. Kernel processes are given a maximum of 1
> GB of memory, and userspace processes are given a maximum of 3 GB per
> process, although other settings are possible.

Wrong, Linux *CAN* use a dedicated partition, but it doesn't have to. And
it can in fact use a page file, just like Windows. And kernel memory can
be paged out, not all of it of course, but some parts.

> Windows uses a swap file for paging out the contents of the physical
> memory. This swap file is a regular file within a regular filesystem
> and therefore also gets cached again, which makes paging out memory
> contents from the RAM to the hard disk quite pointless and tends to
> fill up the cache in such a way that the whole system gets bogged down
> and requires a reboot to return to a snappy operation.

Again, you're making this shit up. Pagefiles are never cached. The
filesystem has control over what files can be cached and what can't, and
files can be opened in non-cached mode, which the pagefile is.

> Windows reserves 2 GB of virtual memory to kernel processes and 2 GB to
> userspace processes. I was told by someone more experienced with
> Windows than myself that Windows pages out kernelspace processes as
> well - notably the GUI code - in its server versions. I guess that
> makes sense, but I can neither confirm nor deny this.

By default it uses 2GB each, but you can change it to 3GB user / 1GB
kernel in some versions.  Also, like Linux, there is pageable and
non-pageable kernel memory.

> While each of the above designs has its advocates and naysayers, Linus
> Torvalds favors a monolithic design. For one, it is faster and less
> complicated than a microkernel design as it involves far less context
> switching for things like hardware access.

You know, it's funny how people like yourself praise the monolithic
kernel for speed and efficiency, yet you criticize it when a far more
microkernel-like design such as Windows uses certain parts in the
kernel to gain efficiency.

> The monolithic design of the Linux kernel also allows for more control
> over who admits code as part of the kernel, and all code is strictly
> evaluated before it is accepted into the official kernel tree. As
> there are no financial considerations involved in designing or
> maintaining the kernel, the developers also have sufficient time to
> test new kernel code.

More bullshit. The design of the kernel has *ZERO* to do with who can
check in and out stuff in the kernel. They're not even related in the
slightest.

> Unlike Microsoft Windows, GNU/Linux was never developed for economic
> reasons. It was simply developed and is still maintained for technical
> excellence. Additionally, it is a statement of Freedom, both of the
> developers and towards the users.

Not true at all. Now that there are a *LOT* of financial interests in
Linux, such companies have been putting pressure on the kernel developers
to meet deadlines. Some say the 2.4 kernel was released before it was
ready because companies like IBM were pushing for a release because they
had hardware projects they couldn't sell until the kernel was done.

> Unlike UNIX systems, which scale down from multi-user hardware such as
> mainframes and minicomputers, Microsoft Windows attempts to build up
> from the single-user, single-tasking platforms formed by CP/M and
> MS-DOS to what it is now, and to what it attempts to be for the future.

Actually, Unix began life as a single user version of Multics. The name
"Unix" is a play on "Unics". So in that respect, it too grew "upwards"
from a single-user OS. Kind of invalidates your argument, doesn't it?

Funny how your post didn't really say WHY you think GNU/Linux is The
Better Operating System.  It was just a bunch of facts and
misinformation interspersed together, without any real conclusions
drawn or arguments made.

High Plains Thumper

Nov 13, 2005, 12:35:23 AM
Erik Funkenbusch <er...@despam-funkenbusch.com> wrote in
news:1ubx8xrq2taij$.d...@funkenbusch.com:

> On Sat, 12 Nov 2005 14:23:50 GMT, Aragorn wrote:
>
>> Unlike a certain operating system from somewhere in Redmond,
>> Washington, UNIX systems do not consider individual filesystems as
>> entities from the user perspective. In Windows, you store a file on
>> "a drive", and all drives have alphabetic drive letters to access
>> them with. They bear equal importance as entities to the operator,
>> and the operator must ascertain their relative importance himself.
>

> Actually, it's not necessary <SNIP>

After reading your replies, the first 4 words of your reply are perhaps
the most apt.  Your reply departs from commonly understood IT industry
convention regarding the Windows operating system, and contains little
supporting information to substantiate the deviations you describe.

--
HPT

Aragorn

Nov 13, 2005, 1:33:05 AM
On Sunday 13 November 2005 00:17, Erik Funkenbusch stood up and spoke
the following words to the masses in /comp.os.linux.advocacy...:/

> On Sat, 12 Nov 2005 14:23:50 GMT, Aragorn wrote:
>
>> Unlike a certain operating system from somewhere in Redmond,
>> Washington, UNIX systems do not consider individual filesystems as
>> entities from the user perspective. In Windows, you store a file on
>> "a drive", and all drives have alphabetic drive letters to access
>> them with. They bear equal importance as entities to the operator,
>> and the operator must ascertain their relative importance himself.
>
> Actually, it's not necessary to store information on drive letters.
> This has become largely a convenience, or shortcut.

Hmm... I know that mountpoints have been introduced from Windows 2000
on, but as far as I know, nobody uses them. The default approach still
seems to be to primarily designate filesystems by drive letters.

>> This approach does not make any distinction between the nature of the
>> filesystem or the hardware it resides on. There is only the
>> convention that "drive A:" - and if applicable, a "drive B:" - is a
>> floppy drive, and that the first Windows-accessible primary partition
>> on a hard disk is "drive C:".
>
> Totally incorrect. The first bootable drive can be any letter you
> want it to be (or no letter at all) under Windows. My first drive
> letter was I: for many years. Not on purpose, but just because my C:
> drive died and I made my I: drive the bootable drive and removed the
> C: drive.

Under those conditions, yes.  However, this is not the default behavior
and requires a situation like the one you just described - at least,
going by my experiences with NT.  I have actually tried changing the
drive letter of the partition holding NT itself from C: to something
else, and Windows showed me a warning that this would cause problems.

Looking inside the registry, I did indeed see where it would cause
problems, as applications - as well as some of the components of
Windows itself - relied on actual drive letters being used, while the
majority of Windows-specific entries used a variable instead.

>> For anything higher up in the alphabet, it's unclear as
>> to what is referred to without looking at the icon next to the
>> "drive". "Drive D:" could be a second partition on your single hard
>> disk or the first primary partition on a second hard disk in your
>> system, but it could also be your DVD/CD drive, a USB memory stick or
>> a network share - not that I wish to encourage abuse of Google, but
>> there even is a freeware plug-in for the Windows filemanager that
>> allows you to access your /gmail.com/ account as a Windows drive
>> letter.
>
> Again, this is a convenience.  A "label" of sorts; although many older
> programs are dependent upon the drive letter nomenclature, the OS
> itself doesn't require it.

If so, then I stand corrected.

>> In Windows, each filesystem has its own root directory, and each
>> filesystem is an entity. It is left to the user to decide which is
>> what - albeit that the Windows GUI will give you hints by showing
>> designated icons next to the drive letter.
>
> Again, completely incorrect.  For example, you can mount an entire
> partition inside a Windows folder, just like you can in Linux.

But the question is whether this is the default behavior or just an
option? What if you - say - install Windows in the first partition of
the hard disk and you format a few extra partitions as well? Will they
show up as drive letters or as mount points?

And what about the volume that contains the Windows installation itself?
Does that have a drive letter or will that simply have a root directory
designation?

>> In UNIX, you don't store a file on this or that volume; you store the
>> file "on the computer", and for doing so, you can store the file in a
>> - for you - writable place in a unified directory structure. That
>> same directory structure also features directories which contain
>> virtual filesystems such as */proc* or */sys* as well as they do
>> physical filesystems and RAM-based filesystems - e.g. /tmpfs/ or
>> /ramfs,/ - without your needing to think about the physical media.
>
> Yet there are times when you DO need to think about the physical
> media, and then you get into the weird situation of placing a volume
> within the filesystem, such as /mnt/cdrom or /mnt/smbshare.

In this case, the names of the chosen mountpoints should be obvious as
to what kind of medium they link to. Yet, with the complexity of
today's operating systems, the hard drive is the main storage and
should be considered part of the computer and not "this or that
particular storage device".

I agree with the view that the name of a mountpoint to removable or
network-based filesystems should be clear. In the latter case however
- i.e. network file systems - it also depends on whether the filesystem
is a filesystem to which you "just have access" or whether it's
actually one of the system's own directories. For instance, you can
have */usr* or */home* over NFS.

> This kind of dichotomy is a hack, because there are times when the
> physical volume DOES matter.  Another example is volume free space.
> The single-volume approach makes it much less clear why you might be
> able to write 200GB to one directory but only 50GB to another.

Why would it be any less clear than with a drive letter?  The output of
a /df/ command can show you exactly what space is available and what
space is used on which filesystem.

If /df/ reports something like 150 MB available on */mnt/zip,* then
that's perfectly clear enough for me.

> Even so, Windows also has such a filesystem as well, it's called DFS
> (no relation to the DFS in here, I don't think) or Distributed File
> System.

I have heard this being mentioned before, but I always thought it was
something along the lines of a network filesystem that makes all the
shared resources of each participating computer available to all
computers on the network, without the user knowing which resource
resides on which computer.

I must admit that I haven't bothered to look it up yet.

>> Additionally, Microsoft Windows allows one to bypass the filesystem
>> security provided by ACL's if the system is installed on a /FAT/ or
>> /vfat/ filesystem, since those filesystems do not support ACL's. Any
>> user would then have the ability to erase or overwrite any file in
>> the system, provided that the file is not locked because it's in use
>> by the system itself, of course.
>
> Which of course is completely different than mounting FAT under Linux
> or Unix, right?  Wrong.

Yes it is, or at least in the sense that I intended it.  I was not
speaking of "mounting a /FAT/ or /vfat/ into the UNIX directory tree"
but of using those filesystems as the ones on which the operating
system itself is installed.

Also bear in mind that /FAT/ and /vfat/ are mounted with a set UID under
GNU/Linux, and that unprivileged users can normally never mount a hard
disk partition. Even for floppy disks or CD/DVD media, the default[*]
is that only root can mount them.

[*] The default when nothing is specified, which is not the same thing
as the default choice of GNU/Linux distributions. The distro
installers normally set them up with access to all users on desktop
systems, and to even have them automounted via /hald./
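
A typical distro-written /etc/fstab line to that effect looks something
like this (device name invented):

    /dev/cdrom  /mnt/cdrom  iso9660  noauto,ro,user  0 0

The "user" option is what lets an unprivileged user issue the mount.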

I personally don't approve of this set-up, but typical desktop
distributions seem to aim towards ease of use for Windows-accustomed
newbies.

>> GNU/Linux for that matter cannot be installed in a filesystem that
>> doesn't support UNIX-style ownerships and permissions, as this is a
>> core part of the UNIX security model. There are some GNU/Linux
>> distributions that _do_ allow to be installed in a /FAT/ partition
>> (or on a floppy), but they use a trick: they run off of a virtual
>> UNIX-style filesystem stored inside a file on that /FAT/ filesystem,
>> called /umsdos./
>
> Oh, I get it. It can't be done, except when it's done. Nice
> double-think.

Erik, either we are having a communications problem or you are feigning
one. It can be done, provided that the system does not use the /FAT/
directly. It will still have all functionality of a UNIX filesystem,
because it will natively run off of an *emulated* UNIX filesystem,
which _physically_ is a file in a /FAT/ partition.

The above requires some special "magic", and although considered highly
unstable - and it is - UNIX will still regard its filesystem to be a
UNIX-style filesystem with ownerships and permissions, and will feature
the security of such a UNIX filesystem for as long as the kernel is up
and running. This is far from the same thing as running off of a /FAT/
natively.

As a sidenote, I remember that NT 4.0 initially started installing its
stuff in a /FAT/ partition although I had selected /NTFS/ at install
time. It would then at a certain phase in the installation procedure
ask for a reboot - or initiate one itself - and then start converting
the filesystem to /NTFS/ before booting up further and completing the
installation.

Please note that the above is not an accusation or sneer of any kind.
It's just that I wonder why the partition could not be formatted as
NTFS prior to installing all that stuff.

>> Microsoft Windows has its graphics and even a large portion of
>> Internet Explorer running inside the kernel.
>
> Internet explorer doesn't run inside the kernel at ALL. Where do you
> get this shit? Do you just make it up?

Hmm... I think I've always been polite so far. Could you be bothered
with showing me the same courtesy, please? :-/

As a matter of fact, I got that information from my monthly computer
magazine, and they are much more focused on Windows than on GNU/Linux.
I presume they know what they are talking about.

>> This approach is considered Bad Practice(TM) by engineers as it leads
>> to more kernel vulnerability or instability in terms of bugs in these
>> routines. Yet it's considered a strategic Must Have(TM) by Microsoft
>> itself regarding monopolist tactics.
>
> Yet you ignore the fact that parts of Unix GUI's are also in the
> kernel, such as the framebuffer.

As you could read for yourself, I have mentioned this in my original
post. Then how exactly do you see me ignoring it?

> Further, placing the GUI outside the kernel gives you no more security
> since the GUI process is given full root privs and can access hardware
> directly, thus causing the same kinds of lockups and crashes that can
> occur in the kernel.

This is technically incorrect. The GUI process does not access hardware
directly. It accesses the driver, and the driver runs in kernel mode.
In addition, the GUI process does not run with full root privileges.
It runs under my non-privileged user account here.

To my knowledge X11 only runs as root when a GUI login is used, and as
this knowledge is based upon how things are on my laptop which runs an
old Mandrake release and thus an old XFree86 version, I'm not so sure
that this would still be the case today. It certainly isn't so when
booting up to runlevel 3 and manually starting X11 after the login.

When an X11 lock-up occurs - if it is a *real* lock-up and not a
*perceived* one - it is practically always due to a faulty driver, and
in most cases, this driver will be proprietary and closed source.
Which in itself already makes a statement about the benefits of Open
Source Software, of course.

Many times however, people report that "their desktop has locked them
out" or that "their system was completely frozen" when this was not the
case.

It could then either just be a bug that caused the GUI to run with more
CPU priority than it should - this is often the case with OpenGL
screensavers on slower graphics cards - or else a bug that did indeed
freeze the GUI but left the underlying operating system active, thus
leaving room to remedy the situation over /ssh/ or via the System
Request key combinations.
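
A typical remedy session might look like this (host name and PID
invented; the SysRq route additionally requires kernel support):

    $ ssh aragorn@frozen-box
    $ ps aux | grep XFree86      <- find the locked-up X server...
    $ kill -9 4242               <- ...and put it out of its misery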

The bottom line, Erik, is that I think any engineer will agree with me
that leaving such complex things as window managers out of the kernel
can only benefit stability. After all, the kernel is what the whole
operating system depends upon.



>> On the other hand, Microsoft Windows has a monolithic kernel - like
>> GNU/Linux and many other UNIX-like operating systems - but does make
>> use of an advanced microkernel aspect, called DirectX. DirectX is a
>> userspace API for graphics and other multimedia, which allows direct
>> hardware access to applications while the CPU is in low privilege
>> mode (ring 3). The UNIX equivalent of DirectX is OpenGL.
>
> ??? You are just completely making this shit up. DirectX has nothing
> to do with a microkernel. They're two different things entirely.
> DirectX uses a HEL, or Hardware Emulation Layer, which you might have
> confused with HAL (Hardware Abstraction Layer) which is a technique
> often used by Microkernels.

I was under the impression that DirectX is a microkernel-like aspect in
that it offers (emulated) hardware access to userspace.

I may have phrased my vision incorrectly, but I still stand by that
viewpoint. Offering hardware access - via emulation or abstraction -
to userland is typical for microkernels, albeit that it's not an
uncommon option to UNIX-style systems either.

OpenGL offers the hardware emulation layer to userspace similar to
DirectX, while the */dev* filesystem offers a hardware abstraction
layer to userspace, similar to the HAL libraries in NT-based Windows
versions.

> Further, while NT isn't a true Microkernel, it's Microkernel based,
> using a supervisor kernel (NTOSKRNL) and a userland kernel (Kernel32)
> for the Windows Subsystem.

Yes, I know that it is partly microkernel-based. If I recall correctly,
then the original idea was even to make NT a complete microkernel
design.

The correct designation for the type of kernel used in NT - and in OS X
for that matter - is a "hybrid kernel" design; it has features of both
a monolithic kernel and of a microkernel.

>> GNU/Linux - as most UNIX-style operating systems - does not have any
>> graphics in the kernel - we're not talking about graphics _drivers_
>> here as those are simply hardware drivers, which in regards to the
>> framebuffer driver incorporates a penguin logo to show at boot time.
>
> Wrong, the framebuffer is used for a lot more than just the kernel
> boot GUI.

I know what a framebuffer is and what it is used for, Erik. It is
however not a windowing environment. It's a mere driver. And drivers
do indeed run inside the kernel.

However, as I said, the only real "graphics" inside the kernel - if
compiled in - are the penguin logos shown at boot time, one per
detected logical CPU in the system.  The Mandrake/Mandriva-specific
kernels show stars instead of penguins for that matter.

>> In regards to swap memory, GNU/Linux uses a dedicated partition for
>> paging out the contents of the physical memory. This swap partition
>> is formatted in a layout very similar to the physical memory.
>> Kernelspace memory is never paged out. Kernel processes are given a
>> maximum of 1 GB of memory, and userspace processes are given a
>> maximum of 3 GB per process, although other settings are possible.
>
> Wrong, Linux *CAN* use a dedicated partition, but it doesn't have to.
> And it can in fact use a page file, just like Windows.

True, I should have phrased that more correctly.

> And kernel memory can be paged out, not all of it of course, but some
> parts.

No, it cannot. See the following PowerPoint presentation:

http://www.cse.psu.edu/~anand/spring01/linux/memory.ppt



>> Windows uses a swap file for paging out the contents of the physical
>> memory. This swap file is a regular file within a regular filesystem
>> and therefore also gets cached again, which makes paging out memory
>> contents from the RAM to the hard disk quite pointless and tends to
>> fill up the cache in such a way that the whole system gets bogged
>> down and requires a reboot to return to a snappy operation.
>
> Again, you're making this shit up. Pagefiles are never cached. The
> filesystem has control over what files can be cached and what can't,
> and files can be opened in non-cached mode, which the pagefile is.

When used with NTFS, you mean?  I don't see how the filesystem could
control what is cached and what is not if /FAT/ or /vfat/ is being
used, as those filesystems are far from that advanced.

In addition, if such a mechanism does indeed exist in the present-day
versions of Windows, then it sure didn't exist in NT 4.0.  And I have
*used* NT 4.0 and have turned it inside out as much as I could, even if
it was only for three years.

>> Windows reserves 2 GB of virtual memory to kernel processes and 2 GB
>> to userspace processes. I was told by someone more experienced with
>> Windows than myself that Windows pages out kernelspace processes as
>> well - notably the GUI code - in its server versions. I guess that
>> makes sense, but I can neither confirm nor deny this.
>
> By default it uses 2GB each, but you can change it to 3GB user / 1GB
> kernel in some versions.  Also, like Linux, there is pageable and
> non-pageable kernel memory.

Linux does not have pageable kernel memory, Erik. See the link I pasted
a few paragraphs higher up.

However, if Windows does indeed have pageable kernel memory - and what I
have read on the Web seems to corroborate that - then it would indeed
make sense that - say - Win2003 Server would be swapping out its GUI
routines so as to make room for caching.

>> While each of the above designs has its advocates and naysayers,
>> Linus Torvalds favors a monolithic design. For one, it is faster and
>> less complicated than a microkernel design as it involves far less
>> context switching for things like hardware access.
>
> You know, it's funny how people like yourself praise the monolithic
> kernel for speed and efficiency, yet you criticize when a far more
> microkernel design like Windows uses certain parts in the kernel to
> gain efficiency.

The idea behind a microkernel is not to move things into the kernel,
Erik; it is to move things *out* of the kernel. I'm not commenting on
the choice for a microkernel. I'm sure microkernels can make up very
good pets... ehm... operating systems.

It's just that I have given the concepts some thought and I personally
believe that a monolithic kernel is probably the best approach,
especially regarding the Linux kernel - see below. Linus Torvalds
seems to share my view on this.

>> The monolithic design of the Linux kernel also allows for more
>> control over who admits code as part of the kernel, and all code is
>> strictly evaluated before it is accepted into the official kernel
>> tree. As there are no financial considerations involved in designing
>> or maintaining the kernel, the developers also have sufficient time
>> to test new kernel code.
>
> More bullshit.

Not at all, my good Watson. And while we're at it, could you please
mind your language?

> The design of the kernel has *ZERO* to do with who can check in and
> out stuff in the kernel. They're not even related in the slightest.

Hmm... I think we may be having a communications problem again. What I
am saying in the above quoted paragraph is that the kernel team is a
very tight and strict group of people.

Because of the monolithic design of the kernel, this very same team
oversees every aspect of the official kernel, which includes hardware
drivers. In a microkernel architecture, hardware drivers would not be
maintained by those very same people, and various other coders from all
over the world could come up with their own userspace drivers, just as
they do now with their own userspace applications.

Such a microkernel approach would rely on the goodwill of distribution
makers to determine what hardware their distribution does and does not
support. With Linux's current design, this choice is left up to the
kernel team, and Linus still personally oversees the PPC and x86
branches of the tree.

As such, every distribution has the same hardware support built into the
kernel, albeit that some distributions will use newer vanilla kernels
than others as the basis for their distro-patched kernel, which may
result in one distribution having support for a few newly added
peripherals while another distro does not.

>> Unlike Microsoft Windows, GNU/Linux was never developed for
>> economical reasons. It was simply developed and is still maintained
>> for technical excellence. Additionally, it is a statement of
>> Freedom, both of the developers and towards the users.
>
> Not true at all. Now that there are a *LOT* of financial interests in
> Linux, such companies have been putting pressure on the kernel
> developers to meet deadlines.

Can you prove this statement? This would be the first I hear about it.
To my knowledge, it is still Linus himself and Linus only who decides
when a new kernel is being released, albeit that he leaves it to Greg
Kroah-Hartman to release intermediary versions, such as the 2.6.14.1
and 2.6.14.2 kernels. I believe this started around the time of the
2.6.11 tree.

> Some say the 2.4 kernel was released before it was ready because
> companies like IBM were pushing for a release because they had
> hardware projects they couldn't sell until the kernel was done.

Some _may_ have said it, but that doesn't make it true, Erik. Again,
can you prove any of this?

>> Unlike UNIX systems, which scale down from multi-user hardware such
>> as mainframes and minicomputers, Microsoft Windows attempts to build
>> up from the single-user, single-tasking platforms formed by CP/M and
>> MS-DOS to what it is now, and to what it attempts to be for the
>> future.
>
> Actually, Unix began life as a single user version of Multics. The
> name "Unix" is a play on "Unics". So in that respect, it too grew
> "upwards" from a single-user OS. Kind of invalidates your argument,
> doesn't it?

That last sentence made your whole paragraph condescending again, Erik,
which was what I hoped you would have abandoned, especially since your
statement is not completely honest.

UNIX did indeed start out as a single user version of Multics and was
jokingly referred to as "Unics". This much is true, _*but*_ this was
in the very beginning of UNIX's existence, before it was actually a
serious operating system. In fact, Ken Thompson - if I recall
correctly - had conceived this Multics-plagiarism solely for the
purpose of playing games on that unused PDP-7. It was even written
entirely in assembly at first.

So in fact, it started out as a hobby project - Linux, for that matter,
started out as a multifunctional terminal emulator on Minix before
Linus Torvalds decided to make it into a kernel - but Ken Thompson and
Dennis Ritchie then rewrote the system in C, with of course some
sections still in assembly as is common practice for performance
reasons, and made it into a multi-user operating system.

UNIX was then used internally at AT&T for a while longer to process
patent documents before AT&T decided to market the system.

Contrary to the above, Windows was developed by building up from an
already existing and commercially vended production system which was
originally single-user and single-tasking.

As far as I know - and there is a nice historic thread related to the
hardware aspects of this in C.O.L.M. at the moment - multitasking only
became available to x86 on the Intel 80286, which was in the
early-to-mid eighties, if my memory serves me right.

By that time, UNIX was already fully functional and commercially
available. Its purely assembly-coded single-user incarnation as you
describe it was just a leisure experiment from a couple of bored
developers, and for the short time that it existed in this form, it can
hardly be called an operating system.

The bottom line is that you're the one who's wrong again, but you are
using something irrelevant as an argument, and you throw in a
condescending remark.

You don't have to like me, Erik. But the least you can do is be civil.
I told you before that I'm not some kid who doesn't know what he's
talking about and that I may actually even be older than you.

As a sidenote, I don't really know what you do for a living or what you
chose for your education, and I admit that I'm not a software developer
- although I have written my share of code in the days of DOS and OS/2
- via REXX - and even done a bit of COBOL programming on a minicomputer
running UNIX.

I've also had some mild experience with networked PCs from when I was
doing interim work at Town Hall. If my memory serves me right, they were
DOS 6.0/Windows 3.11 machines using Microsoft's LAN Manager client
software and connected to an NCR machine running UNIX and Samba.

They had a weird set-up, even. Only part of the Windows libraries were
installed locally, and the rest was kept on the UNIX machine. In other
words, you couldn't start Windows while not connected to the network.

I asked the IT supervisor why they had opted for this strange approach,
and he told me that it was suggested to him as a possible legal
solution as Town Hall didn't want to invest in Windows licenses for
each PC. I doubt that it would be legal to set up Windows that way -
using a single license and a deliberately crippled set-up - but they
sure did it and they got away with it.

> Funny how your post didn't really say WHY you think GNU/Linux is The
> Better Operating System. It was just a bunch of facts and
> misinformation interspersed together without any real conclusions drawn
> or arguments made.

Wow, that's a mouthful, isn't it? Let's see... I think it was pretty
obvious from the facts - and from the few minor pieces of
misinformation, which I have acknowledged in the respective sections of
the text - why I believe GNU/Linux to be the better operating system.

I did indeed not draw a final conclusion in my post, other than saying
that the user is free to choose and that when he does choose
GNU/Linux, he gets to own the software copy on his CDs/DVDs/hard
disk(s).

I think that's a nice conclusion to the post anyway, but in the end, the
text and the facts therein must speak for themselves. I believe in
GNU/Linux, and I tell people I believe in it and why. What they do
with this information is their prerogative.

Lastly, I would really appreciate the future omission of ad hominem
remarks - you will only make me want to make similar remarks myself,
and I don't believe this to be beneficial to either of us, nor to the
readers for that matter - as well as of claims that I am wrong when you
are only misinterpreting my words.

I do admit that I have a problem correctly expressing myself at times
due to my condition and due to my not being a native English speaker,
as good as my English may be. Just try to show a little goodwill
instead of jumping on top of a chair every time I say something, okay?

We're all human and we all make mistakes. Being willing to admit it is
the first step towards a better future...

Mark Kent

Nov 13, 2005, 3:17:40 AM
begin oe_protect.scr
High Plains Thumper <h...@singlecylinderbikes.com> espoused:

Erik's been spinning this area for years. Looks like Aragorn just got
caught up by the Funkennet.

--
end
| Mark Kent -- mark at ellandroad dot demon dot co dot uk |
What!? Me worry?
-- Alfred E. Newman

Erik Funkenbusch

Nov 13, 2005, 7:21:10 AM

Could you be any more vague?

Erik Funkenbusch

Nov 13, 2005, 8:20:00 AM
On Sun, 13 Nov 2005 06:33:05 GMT, Aragorn wrote:

>> Actually, it's not necessary to store information on drive letters.
>> This has become largely a convenience, or shortcut.
>
> Hmm... I know that mountpoints have been introduced from Windows 2000
> on, but as far as I know, nobody uses them. The default approach still
> seems to be to primarily designate filesystems by drive letters.

Do you know why? Because it's easier to manage than having all your files
in a single filesystem. People *LIKE* separate drives.
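
For reference, the mount points mentioned above are created through the
Win32 volume API - a minimal C sketch, where the volume GUID string is
a hypothetical placeholder that would normally come from
FindFirstVolume():

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Both paths must end in a backslash; the GUID below is a
           made-up placeholder, not a real volume. */
        if (!SetVolumeMountPointA("C:\\mnt\\data\\",
                "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\")) {
            printf("SetVolumeMountPoint failed: %lu\n", GetLastError());
            return 1;
        }
        return 0;
    }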

>>> This approach does not make any distinction between the nature of the
>>> filesystem or the hardware it resides on. There is only the
>>> convention that "drive A:" - and if applicable, a "drive B:" - is a
>>> floppy drive, and that the first Windows-accessible primary partition
>>> on a hard disk is "drive C:".
>>
>> Totally incorrect. The first bootable drive can be any letter you
>> want it to be (or no letter at all) under Windows. My first drive
>> letter was I: for many years. Not on purpose, but just because my C:
>> drive died and I made my I: drive the bootable drive and removed the
>> C: drive.
>
> Under those conditions, yes. However, this is not the default behavior
> and requires a situation like the one you just described. At least,
> from my experiences with NT. I have actually tried changing the drive
> letter to the partition with NT itself from C: to something else, and
> Windows showed me a warning that this would cause problems.

No, it didn't. No such warning exists. What it probably said was that you
can't change the system drive partition, and you solve this by changing
your system drive.

> Looking inside the registry, I did indeed see where it would cause
> problems, as applications - as well as some of the components of
> Windows itself - relied on actual drive letters being used, while the
> majority of Windows-specific entries used a variable instead.

All those are replaceable. You can do a system-wide search and replace
quite easily. However, it's easiest to install over again to a secondary
partition or drive.

>>> In Windows, each filesystem has its own root directory, and each
>>> filesystem is an entity. It is left to the user to decide which is
>>> what - albeit that the Windows GUI will give you hints by showing
>>> designated icons next to the drive letter.
>>
>> Again, completely incorrect. For example, you can mount an entire
>> partition inside a windows folder, just like you can in Linux.
>
> But the question is whether this is the default behavior or just an
> option? What if you - say - install Windows in the first partition of
> the hard disk and you format a few extra partitions as well? Will they
> show up as drive letters or as mount points?

They'll show up as whatever you set it up to show up as.

> And what about the volume that contains the Windows installation itself?
> Does that have a drive letter or will that simply have a root directory
> designation?

There typically is a drive letter, again, mostly because a lot of legacy
software expects it.

>>> In UNIX, you don't store a file on this or that volume; you store the
>>> file "on the computer", and for doing so, you can store the file in a
>>> - for you - writable place in a unified directory structure. That
>>> same directory structure also features directories which contain
>>> virtual filesystems such as */proc* or */sys* as well as they do
>>> physical filesystems and RAM-based filesystems - e.g. /tmpfs/ or
>>> /ramfs,/ - without that you need to think about the physical media.
>>
>> Yet there are times when you DO need to think about the physical
>> media, and then you get into the weird situation of placing a volume
>> within the filesystem, such as /mnt/cdrom or /mnt/smbshare.
>
> In this case, the names of the chosen mountpoints should be obvious as
> to what kind of medium they link to. Yet, with the complexity of
> today's operating systems, the hard drive is the main storage and
> should be considered part of the computer and not "this or that
> particular storage device".

The problem is, you lose that distinction. Unless you're aware that a
particular folder is mounted on a remote server, you may not know, and thus
be confused when that server isn't available and your files aren't there.

> I agree with the view that the name of a mountpoint to removable or
> network-based filesystems should be clear. In the latter case however
> - i.e. network file systems - it also depends on whether the filesystem
> is a filesystem to which you "just have access" or whether it's
> actually one of the system's own directories. For instance, you can
> have */usr* or */home* over NFS.

And again, you lose track of distinctions you SHOULD keep track of. For
example, you don't want to be saving 10GB of data to a dialup connected NFS
link.

>> This kind of dichotomy is a hack because there are times when the
>> physical volume DO matter. Another example is when it comes to volume
>> free space. The single volume approach makes it much less clear why
>> you might be able to write 200GB to one directory but only 50 in
>> another.
>
> Why would it make it less clearer than a drive letter? The output of a
> /df/ command can show you exactly what space is available and what
> space is used on which filesystem.

Which completely invalidates the entire "one filesystem to rule them all"
argument. Now you're having to differentiate between drives and folders,
and know which is which.

With a drive letter, I *KNOW* it's a different partition than another drive
letter, and it will have different free space than another one. In a
filesystem, things are much fuzzier, and you may not remember that
/home/dir1 is on a different drive than /home/dir2.

> If /df/ reports something like 150 MB available on */mnt/zip,* then
> that's perfectly clear enough for me.

But that requires you to mentally keep track of which folders are on
which partition, or to constantly run df to figure it out. It
completely invalidates the single-filesystem concept.
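
For what it's worth, the per-filesystem free-space query both sides are
arguing about is a single call in C - a minimal sketch using POSIX
statvfs(3), with /mnt/zip as the example mountpoint from above:

    #include <stdio.h>
    #include <sys/statvfs.h>

    int main(int argc, char **argv)
    {
        struct statvfs vfs;
        const char *path = argc > 1 ? argv[1] : "/mnt/zip";

        if (statvfs(path, &vfs) != 0) {
            perror("statvfs");
            return 1;
        }
        /* f_bavail counts blocks available to unprivileged users;
           this is essentially what df prints per filesystem. */
        printf("%s: %llu MB free\n", path,
               (unsigned long long)vfs.f_bavail * vfs.f_frsize
                   / (1024 * 1024));
        return 0;
    }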

>> Even so, Windows also has such a filesystem as well, it's called DFS
>> (no relation to the DFS in here, I don't think) or Distributed File
>> System.
>
I have heard this being mentioned before, but I always thought it was
something along the lines of a network filesystem that made all the
shared resources of each computer with such a filesystem available to
all computers on the network without the user knowing which resource is
on what computer.

It can be used for that as well, or it can be used on a single computer.

>>> GNU/Linux for that matter cannot be installed in a filesystem that
>>> doesn't support UNIX-style ownerships and permissions, as this is a
>>> core part of the UNIX security model. There are some GNU/Linux
>>> distributions that _do_ allow to be installed in a /FAT/ partition
>>> (or on a floppy), but they use a trick: they run off of a virtual
>>> UNIX-style filesystem stored inside a file on that /FAT/ filesystem,
>>> called /umsdos./
>>
>> Oh, I get it. It can't be done, except when it's done. Nice
>> double-think.
>
> Erik, either we are having a communications problem or you are feigning
> one. It can be done, provided that the system does not use the /FAT/
> directly. It will still have all functionality of a UNIX filesystem,
> because it will natively run off of an *emulated* UNIX filesystem,
> which _physically_ is a file in a /FAT/ partition.

If it's running on a FAT filesystem, it's running on a FAT filesystem,
regardless of any "tricks" you might want to play. Windows itself plays
such "tricks" on FAT to gain long file names and other features; that
doesn't make it any less FAT.

Can you install Linux on a FAT filesystem? Yes.

> The above requires some special "magic", and although considered highly
> unstable - and it is - UNIX will still regard its filesystem to be a
> UNIX-style filesystem with ownerships and permissions, and will feature
> the security of such a UNIX filesystem for as long as the kernel is up
> and running. This is far from the same thing as running off of a /FAT/
> natively.

Native has nothing to do with it. Is it running off FAT? Yes. Period.

> As a sidenote, I remember that NT 4.0 initially started installing its
> stuff in a /FAT/ partition although I had selected /NTFS/ at install
> time. It would then at a certain phase in the installation procedure
> ask for a reboot - or initiate one itself - and then start converting
> the filesystem to /NTFS/ before booting up further and completing the
> installation.
>
> Please note that the above is not an accusation or sneer of any kind.
> It's just that I wonder why the partition could not be formatted as
> NTFS prior to installing all that stuff.

The reason was that the first phase of the NT install program was a DOS
program, and it couldn't read or write an NTFS volume. A minimal NT OS had
to be copied to the hard disk in FAT, booted, then converted to NTFS to
continue with the rest of it.

>>> Microsoft Windows has its graphics and even a large portion of
>>> Internet Explorer running inside the kernel.
>>
>> Internet explorer doesn't run inside the kernel at ALL. Where do you
>> get this shit? Do you just make it up?
>
> Hmm... I think I've always been polite so far. Could you be bothered
> with showing me the same courtesy, please? :-/
>
> As the matter of fact, I got that information from my monthly computer
> magazine, and they are much more focused on Windows than on GNU/Linux.
> I presume they know what they are talking about.

Oh, so some unnamed magazine that only you read said something vague about
IE being in the kernel. Nice.

>>> This approach is considered Bad Practice(TM) by engineers as it leads
>>> to more kernel vulnerability or instability in terms of bugs in these
>>> routines. Yet it's considered a strategic Must Have(TM) by Microsoft
>>> itself regarding monopolist tactics.
>>
>> Yet you ignore the fact that parts of Unix GUI's are also in the
>> kernel, such as the framebuffer.
>
> As you could read for yourself, I have mentioned this in my original
> post. Then how exactly do you see me ignoring it?

By saying what you said above.

>> Further, placing the GUI outside the kernel gives you no more security
>> since the GUI process is given full root privs and can access hardware
>> directly, thus causing the same kinds of lockups and crashes that can
>> occur in the kernel.
>
> This is technically incorrect. The GUI process does not access hardware
> directly. It accesses the driver, and the driver runs in kernel mode.
> In addition, the GUI process does not run with full root privileges.
> It runs under my non-privileged user account here.

Wrong. X drivers do not run in kernel mode, though they may access kernel
functionality (such as framebuffer). X accesses the hardware directly as a
user process.

>> Wrong, the framebuffer is used for a lot more than just the kernel
>> boot GUI.
>
> I know what a framebuffer is and what it is used for, Erik. It is
> however not a windowing environment. It's a mere driver. And drivers
> do indeed run inside the kernel.

The Windowing environment in Windows doesn't run in the kernel either. The
Windowing environment, known as USER32, runs as part of the userspace CSRSS
process. GDI, or the graphic device interface, runs in the kernel
(partially anyways).

>> And kernel memory can be paged out, not all of it of course, but some
>> parts.
>
> No, it cannot. See the following PowerPoint presentation:
>
> http://www.cse.psu.edu/~anand/spring01/linux/memory.ppt

Well, I stand corrected on that. That's just plain stupid. EVERY modern
OS has pageable kernel memory; not having it is a major denial of
service waiting to happen. Just load a kernel module that allocates 1GB
of memory, and your system is toast.

>> Again, you're making this shit up. Pagefiles are never cached. The
>> filesystem has control over what files can be cached and what can't,
>> and files can be opened in non-cached mode, which the pagefile is.
>
> When used with NTFS, you mean? I don't see how the filesystem could
> control what is cached and what not if /FAT/ or /vfat/ are being used,
> as they are by far not that advanced.

It has nothing to do with the filesystem. Windows' cache system is
unified, and it caches CDFS, FAT, NTFS, SMB, NFS, AppleTalk, etc. - any
installable filesystem can and will be cached. It's a part of the OS
itself, independent of the filesystem.

> In addition, if such mechanism does indeed exist in the present day
> versions of Windows, then it sure didn't exist in NT 4.0. And I have
> *used* NT 4.0 and have turned it inside out for as much as I could,
> even if it was only for three years.

Yes, it did exist in NT4, and has since NT 3.1.

Read this:

http://www.windowsitpro.com/Articles/Print.cfm?ArticleID=3864

>>> While each of the above designs has its advocates and naysayers,
>>> Linus Torvalds favors a monolithic design. For one, it is faster and
>>> less complicated than a microkernel design as it involves far less
>>> context switching for things like hardware access.
>>
>> You know, it's funny how people like yourself praise the monolithic
>> kernel for speed and efficiency, yet you criticize when a far more
>> microkernel design like Windows uses certain parts in the kernel to
>> gain efficiency.
>
> The idea behind a microkernel is not to move things into the kernel,
> Erik; it is to move things *out* of the kernel. I'm not commenting on
> the choice for a microkernel. I'm sure microkernels can make up very
> good pets... ehm... operating systems.

You seem to have totally missed my point. A monolithic kernel moves
everything INTO the kernel for speed and efficiency. Yet when a more
modular microkernel based design moves a few things into the kernel to gain
similar speed and efficiency, people like you complain, all the while
praising the monolithic kernel for doing FAR MORE in the kernel.

> It's just that I have given the concepts some thought and I personally
> believe that a monolithic kernel is probably the best approach,
> especially regarding the Linux kernel - see below. Linus Torvalds
> seems to share my view on this.

Then how can you POSSIBLY complain that moving ANYTHING into the kernel is
bad?

>> The design of the kernel has *ZERO* to do with who can check in and
>> out stuff in the kernel. They're not even related in the slightest.
>
> Hmm... I think we may be having a communications problem again. What I
> am saying in the above quoted paragraph is that the kernel team is a
> very tight and strict group of people.

That's true, but it has NOTHING to do with the design of the kernel itself.
You tried to claim that a Monolithic kernel makes it easier to check in and
out changes. That's simply false.

> Such a microkernel approach would rely on the goodwill of distribution
> makers to determine what hardware their distribution does and does not
> support. With Linux's current design, this choice is left up to the
> kernel team, and Linus still personally oversees the PPC and x86
> branches of the tree.

Wrong. Many distribution vendors support their own kernel patches anyway,
with their own set of drivers in many cases.

> As such, every distribution has the same hardware support built into the
> kernel, albeit that some distributions will use newer vanilla kernels
> as the basis for their distro-patched kernel than others, which may
> result in one distribution having support for a minor few quite newly
> added peripherals while another distro may not.

Except that's simply not the case with Linux.

>> Not true at all. Now that there are a *LOT* of financial interests in
>> Linux, such companies have been putting pressure on the kernel
>> developers to meet deadlines.
>
> Can you prove this statement? This would be the first I hear about it.
> To my knowledge, it is still Linus himself and Linus only who decides
> when a new kernel is being released, albeit that he leaves it to Greg
> Kroah-Hartman to release intermediary versions, such as the 2.6.14.1
> and 2.6.14.2 kernels. I believe this started around the time of the
> 2.6.11 tree.

While it's true that Linus decides, when 2.4 was in development there
were a lot of folks waiting for it to ship - people like IBM, HP, and
others.

I think there's no clearer evidence that it shipped too early than the
2.4.9-10-11 debacle with the complete virtual memory manager rewrite in
the stable tree.

>> Actually, Unix began life as a single user version of Multics. The
>> name "Unix" is a play on "Unics". So in that respect, it too grew
>> "upwards" from a single-user OS. Kind of invalidates your argument,
>> doesn't it?
>
> That last sentence made your whole paragraph condescending again, Erik,
> which was what I hoped you would have abandoned, especially since your
> statement is not completely honest.

It is completely honest.

> UNIX did indeed start out as a single user version of Multics and was
> jokingly referred to as "Unics". This much is true, _*but*_ this was
> in the very beginning of UNIX's existence, before it was actually a
> serious operating system.

I fail to see how that is relevant. The fact of the matter is, Unix
evolved from a single-user system. Period. Windows NT-based systems
were a complete redesign from the ground up as a multitasking,
multi-processor OS. It only grafted on a compatibility layer to make
the old single-tasking programs still run.

> As far as I know - and there is a nice historic thread related to the
> hardware aspects of this in C.O.L.M. at the moment - multitasking only
> became available to x86 on the Intel 80286, which was in the
> early-to-mid eighties, if my memory serves me right.

Gee, I guess that's why OS's like GEOS multitasked on the 8086 and 6502.

> By that time, UNIX was already fully functional and commercially
> available. Its purely assembly-coded single-user incarnation as you
> describe it was just a leisure experiment from a couple of bored
> developers, and for the short time that it existed in this form, it
> can hardly be called an operating system.

They are still what the OS evolved from. And your original condescending
comment about how Windows evolved from a single user OS shows the kind of
double standard you display regularly, and THAT is why your messages
typically tick me off.

> The bottom line is that you're the one who's wrong again, but you are
> using something irrelevant as an argument, and you throw in a
> condescending remark.

Wait, I'm wrong but I'm right? You admit I was correct. Yet I'm wrong
simply because you don't want to consider that as "evolution". Sorry to
burst your bubble, but plugging your fingers in your ears and saying "nah
nah nah I'm going to believe what I want" doesn't cut it.

High Plains Thumper

Nov 13, 2005, 10:36:39 AM
Mark Kent <mark...@demon.co.uk> wrote in
news:4e9j43-...@ellandroad.demon.co.uk:

> begin oe_protect.scr
> High Plains Thumper espoused:


>> Erik Funkenbusch wrote:
>>> Aragorn wrote:
>>>
>>>> Unlike a certain operating system from somewhere in Redmond,
>>>> Washington, UNIX systems do not consider individual filesystems as
>>>> entities from the user perspective. In Windows, you store a file
>>>> on "a drive", and all drives have alphabetic drive letters to
>>>> access them with. They bear equal importance as entities to the
>>>> operator, and the operator must ascertain their relevant importance
>>>> himself.
>>>
>>> Actually, it's not necessary <SNIP>
>>
>> After reading your replies, the first 4 words of your reply are
>> perhaps the most apt. It departs from commonly understood IT
>> industry convention regarding the Windows operating system and
>> contains little supporting information to substantiate the
>> deviations you refer to.
>
> Erik's been spinning this area for years. Looks like Aragorn just got
> caught up by the Funkennet.

I gather that. What I find annoying are defences that have a thread of
technical correctness to them but are of little practical value. It is
basically arguing for the sake of arguing.

Nothing is new; difficult people abound. 20 years ago, I remember
arguing with a telecommunications specialist who wanted to install a
2.4 million USD broadband network, when we wanted a modern,
fiber-optic-based packet-switching network capable of telephone
integration. It would help supplant the 1942 Strowger telephone switch
system we had. Our cost was $660K US for the initial campus network.
We had no need for a CATV-based system; there wasn't even a security
requirement for it. It wound up being a stalemate, and the facility
closed 10 years later as a cost-saving measure. You wonder why.

--
HPT

Mark Kent

Nov 13, 2005, 12:36:50 PM
begin oe_protect.scr
High Plains Thumper <h...@singlecylinderbikes.com> espoused:

It's an interesting issue with people who have solutions looking for
problems. Invariably, the solutions are technically elegant, but are
lacking in any demand/market requirement or similar, and typically
cost far more than competing, simpler, less elegant but financially
justifiable solutions. I would prefer to see more engineering and less
science in these decisions. I'm not against science at all, but it does
tend to be cost-agnostic, and its proponents tend to be similarly
cost-agnostic, thus debates are reduced to irrelevant technicalities
rather than key issues & costs.

High Plains Thumper

Nov 13, 2005, 3:06:11 PM
Mark Kent <mark...@demon.co.uk> wrote in
news:i6ak43-...@ellandroad.demon.co.uk:

> begin oe_protect.scr
> High Plains Thumper espoused:

>> Mark Kent wrote begin oe_protect.scr

True. At the time though, it wasn't a matter of science; it was a
matter of the IT staff wanting to maintain an outdated status quo, not
thinking outside the box, and not acting in everyone's best interest.
For the other offices that had installed 10 years earlier, yes, it was
the backbone technology available - continue maintaining it and plan
ahead. For us, with no campus network except for some x.400 terminal
controllers, a fresh start with that technology did not make sense. We
would have had a modern fiber optic backbone utility that could have
served us for many years to come.

Costs are important considerations; they make or break offices. In the
case of this employer, their overhead costs were higher than at other
corporate sites, which is why work had slowly been taken away over the
years until they could no longer justify their existence.

--
HPT

Tim Smith

Nov 14, 2005, 2:29:18 AM
In article <WVmdf.46238$Cc4.2...@phobos.telenet-ops.be>,

Aragorn <str...@telenet.invalid> wrote:
> You can still experience this effect in GNU/Linux today. Simply try to
> log into a character mode console with your *Caps* *Lock* turned on.
> Everything you see from the login onward will be in uppercase.
> Indeed, it is not a bug, it's a feature. ;-)

What distribution? It doesn't do that in SuSE or CentOS.

...


> UNIX systems traditionally use plain text files for their configuration.
> This leads to many such files being present on an average installation,
> but the good news is that they are all neatly grouped underneath the
> */etc* directory, with additional configuration files stored in an
> *./etc* branch under */opt* - where those *./etc* branches are located
> in the respective application's directory - or under */usr* or
> */usr/local.*
>
> It all seems complex at first glance, but given a bit of consideration,
> such a layout will however soon make sense, as the key to it all is
> logic. ;-)

The problem with this is not complexity, but rather inconsistency. The
configuration file formats are often not friendly to programmatic
manipulation, making it hard to write tools to automate configuration
management.

Because of this, the automated tools tend to be fragile, and break if a
human edits the file directly.

Good configuration file design is hard, and few programmers put enough
time into it.
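
To illustrate the point, here is a minimal C sketch of the kind of
ad-hoc parsing every such tool ends up reinventing, assuming a
hypothetical "key = value" format with # comments:

    #include <stdio.h>

    /* Ad-hoc "key = value" parser; every tool reimplementing
       something like this slightly differently is exactly the
       fragility being described. */
    int main(void)
    {
        char line[256], key[128], value[128];
        FILE *f = fopen("/etc/example.conf", "r"); /* hypothetical */

        if (!f) { perror("fopen"); return 1; }
        while (fgets(line, sizeof line, f)) {
            if (line[0] == '#' || line[0] == '\n')
                continue;
            if (sscanf(line, " %127[^= \t] = %127[^\n]", key, value) == 2)
                printf("%s -> %s\n", key, value);
        }
        fclose(f);
        return 0;
    }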

...


> Microsoft Windows has its graphics and even a large portion of Internet
> Explorer running inside the kernel. This approach is considered Bad
> Practice(TM) by engineers as it leads to more kernel vulnerability or
> instability in terms of bugs in these routines. Yet it's considered a
> strategic Must Have(TM) by Microsoft itself regarding monopolist
> tactics.

Experiment shows that this is not so. First, there are the device
drivers themselves. These are in the kernel in Linux, too, so neither
side gets an advantage here.

Second, there is graphics code. While in theory having parts of this
run in kernel mode increases the risk of instability, the graphics code
in the kernel is probably the least buggy code in Windows. Very few
people have ever seen crashes from it, so in practice it has not made a
difference in stability having it there.

As far as the driver code goes, it looks like Microsoft is getting ready
to get an advantage there:

<http://www.wired.com/news/technology/bugs/0,2924,69375,00.html?tw=rss.TEK>

That's rather interesting technology. I had not realized that formal
modeling had reached the point that they could do device drivers.

...


> In regards to swap memory, GNU/Linux uses a dedicated partition for
> paging out the contents of the physical memory. This swap partition is
> formatted in a layout very similar to the physical memory. Kernelspace

Linux can use swap files. You don't have to use a dedicated partition.
See the mkswap man page for details.
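
A minimal C sketch of enabling such a swap file - assuming /swapfile
has already been created and initialized with mkswap, and run as root;
swapon(2) then treats it exactly like a dedicated partition:

    #include <stdio.h>
    #include <sys/swap.h>

    int main(void)
    {
        /* /swapfile must exist and have been prepared with mkswap
           beforehand; requires root privileges. */
        if (swapon("/swapfile", 0) != 0) {
            perror("swapon");
            return 1;
        }
        return 0;
    }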

There's a ridiculous myth that swap partitions perform better than swap
files. (I don't know why anyone would believe that--it would only be
true if the people writing the filesystem code were morons). Anyway,
this post from the kernel mailing list should put that myth to rest:

Date: Tue, 28 Jun 2005 22:03:34 -0700
From: Andrew Morton <>
Subject: Re: Swap partition vs swap file

Mike Richards <mrmik...@gmail.com> wrote:
>
> Given this situation, is there any significant performance or
> stability advantage to using a swap partition instead of a swap
> file?

In 2.6 they have the same reliability and they will have the same
performance unless the swapfile is badly fragmented.

So why does every major distribution create a swap partition instead of
using swap files? Habit, probably.

...


> Windows uses a swap file for paging out the contents of the physical
> memory. This swap file is a regular file within a regular filesystem
> and therefore also gets cached again, which makes paging out memory
> contents from the RAM to the hard disk quite pointless and tends to
> fill up the cache in such a way that the whole system gets bogged down
> and requires a reboot to return to a snappy operation.

What makes you think this?

...


> Either way, paging out kernel memory is considered Bad Practice(TM) by
> engineers, and while I can understand that a server version of Windows

Paging out kernel memory is either fine, or it is a complete disaster to
be quickly followed by a crash. Which of these it is depends on the
design of the kernel. If the kernel is designed for it, it can be good
practice. If the kernel isn't, then it is a bug, not bad practice.


--
--Tim Smith

Tim Smith

Nov 14, 2005, 2:49:20 AM
In article <140xyfqj...@funkenbusch.com>,

Erik Funkenbusch <er...@despam-funkenbusch.com> wrote:
> Well, I stand corrected on that. That's just plain stupid. EVERY modern
> OS has pageable kernel memory, to not do so is a major denial of service
> waiting to happen. Just load a kernel module that allocates 1GB of memory,
> and your system is toast.

Uhm...that is a rather silly argument.

(1) Every modern OS that has pageable kernel memory also has a way to
allocate non-pageable memory. So, if a kernel module wanted to do a DoS
attack by taking up a gig of memory, it could just ask for non-pageable
memory - see the sketch below.

(2) Or it could execute a halt instruction and stop the system. That's
a lot simpler. Or do one of a zillion other things to mess up the
system.
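
For illustration, a minimal sketch of such a hostile (or merely buggy)
Linux module - hypothetical, of course; vmalloc() hands out kernel
memory, which on Linux is never paged out:

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/vmalloc.h>

    static void *hog;

    static int __init hog_init(void)
    {
        /* Kernel allocations are not pageable on Linux, so this
           memory stays pinned until the module is unloaded. */
        hog = vmalloc(1UL << 30);   /* try to grab 1 GB */
        return hog ? 0 : -ENOMEM;
    }

    static void __exit hog_exit(void)
    {
        vfree(hog);
    }

    module_init(hog_init);
    module_exit(hog_exit);
    MODULE_LICENSE("GPL");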

...


> > As far as I know - and there is a nice historic thread related to the
> > hardware aspects of this in C.O.L.M. at the moment - multitasking only
> > became available to x86 on the Intel 80286, which was in the
> > early-to-mid eighties, if my memory serves me right.
>
> Gee, i guess that's why OS's like GEOS multitasked on the 8086 and 6502.

Or PC/ix.

--
--Tim Smith

Erik Funkenbusch

Nov 14, 2005, 3:50:06 AM
On Mon, 14 Nov 2005 07:49:20 GMT, Tim Smith wrote:

> In article <140xyfqj...@funkenbusch.com>,
> Erik Funkenbusch <er...@despam-funkenbusch.com> wrote:
>> Well, I stand corrected on that. That's just plain stupid. EVERY modern
>> OS has pageable kernel memory, to not do so is a major denial of service
>> waiting to happen. Just load a kernel module that allocates 1GB of memory,
>> and your system is toast.
>
> Uhm...that is a rather silly argument.
>
> (1) Every modern OS that has pageable kernel memory also has a way to
> allocate non-pageable memory. So, if a kernel module wanted to do a DOS
> attack by taking up a gig of memory, it could just ask for non-pageable
> memory.
>
> (2) Or it could execute a halt instruction and stop the system. That's
> a lot simpler. Or do one of a zillion other things to mess up the
> system.

Valid points. I wasn't really talking about an intentional DoS; I was
talking more about a buggy driver. But your point still stands: a buggy
driver can take the system down in many ways.

Ray Ingles

Nov 14, 2005, 9:36:07 AM
On 2005-11-12, Erik Funkenbusch <er...@despam-funkenbusch.com> wrote:
> On Sat, 12 Nov 2005 14:23:50 GMT, Aragorn wrote:

> Actually, it's not necessary to store information on drive letters. This
> has become largely a convenience, or shortcut.

Boy was it inconvenient when my computer flaked out recently. I was
able, with a bunch of work, to substitute one drive partition for
another, but it was not easy, and between the multiple reboots needed
there was a *lot* of software that couldn't find the drive it was
installed on...

Now, the most recent NTFS does contain "reparse points", which are
vaguely like symbolic links, but I don't know of anyone actually using
them...

>> Additionally, Microsoft Windows allows one to bypass the filesystem
>> security provided by ACL's if the system is installed on a /FAT/ or
>> /vfat/ filesystem, since those filesystems do not support ACL's.
>

> Which of course is completely different than mounting FAT under Linux or
> Unix, right? Wrong.

It is, in fact, different. With FAT, there's basically no security
under Windows. With Linux, you can mount a FAT drive with a particular
user and group ownership, and particular permissions, which apply to all
the files in the partition. Thus you can have *some* security with FAT
on Linux, instead of essentially none with Windows.
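
That per-mount ownership is given as ordinary mount options - a minimal
C sketch using Linux's mount(2), with a hypothetical device and
mountpoint, run as root:

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* Hypothetical device and mountpoint; the option string
           gives every file on the FAT volume a fixed owner and
           0700 permissions, since FAT itself stores neither. */
        if (mount("/dev/sdb1", "/mnt/fatdisk", "vfat", MS_NOSUID,
                  "uid=1000,gid=1000,umask=077") != 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }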

> Yet you ignore the fact that parts of Unix GUI's are also in the kernel,
> such as the framebuffer.

Only as a driver to expose the graphics to other apps. That's
unavoidable. The apps that use the framebuffer aren't in the kernel.

> Further, placing the GUI outside the kernel gives
> you no more security since the GUI process is given full root privs and can
> access hardware directly, thus causing the same kinds of lockups and
> crashes that can occur in the kernel.

No, actually, just because a program is running as root doesn't mean it
"can access hardware directly". It's in a separate address space from
the kernel, and can't directly corrupt kernel memory with a bad pointer
or something. Stability and reliability are enhanced.

> ??? You are just completely making this shit up. DirectX has nothing to
> do with a microkernel. They're two different things entirely. DirectX
> uses a HEL, or Hardware Emulation Layer, which you migh have confused with
> HAL (Hardware Abstraction Layer) which is a technique often used by
> Microkernels.

Here you're right, DirectX is not related to 'microkernels'.

> Wrong, the framebuffer is used for a lot more than just the kernel boot
> GUI.

But by applications, not by the kernel itself.

> Actually, Unix began life as a single user version of Multics. The name
> "Unix" is a play on "Unics". So in that respect, it too grew "upwards"
> from a single-user OS. Kind of invalidates your argument, doesn't it?

It was never single-user, even from the start. The pun was amusing, but
it was never truly accurate - even the first version of Unix supported
two users.

--
Sincerely,

Ray Ingles (313) 227-2317

"I'm just not sure which way I want to fund terror. I mean, sure,
I can buy drugs, or pirate music/movies/games. But, I can also
drive an SUV, or use oil in other ways. I can also support terror
by being critical of the government, or being supportive of the
use of encryption and privacy. Where does one start?" - anonymous

Linønut

Nov 14, 2005, 9:49:44 AM
After takin' a swig o' grog, Tim Smith belched out this bit o' wisdom:

> The problem with this is not complexity, but rather inconsistency. The
> configuration file formats are often not friendly to programatic
> manipulation, making it hard to write tools to automate configuration
> management.

This makes no sense at all. Is not a configuration file written to be
manipulated (read and written) by a program?

> Because of this, the automated tools tend to be fragile, and break if a
> human edits the file directly.
>
> Good configuration file design is hard, and few programmers put enough
> time into it.

The rest is spot on, though, Tim.

> As far as the driver code goes, it looks like Microsoft is getting ready
> to get an advantage there:
>
> <http://www.wired.com/news/technology/bugs/0,2924,69375,00.html?tw=rss.TEK>
>
> That's rather interesting technology. I had not realized that formal
> modeling had reached the point that they could do device drivers.

Doesn't matter. Static analysis is not sufficient to find all sources
of bugs.

Even the article above says as much.

--
Treat yourself to the devices, applications, and services running on the
GNU/Linux® operating system!

Aragorn

Nov 14, 2005, 1:17:08 PM
On Monday 14 November 2005 09:50, Erik Funkenbusch stood up and spoke

the following words to the masses in /comp.os.linux.advocacy...:/

> On Mon, 14 Nov 2005 07:49:20 GMT, Tim Smith wrote:

Slightly off-topic, Erik... My ISP's newsserver seems to have dropped
your reply to my post, as well as the post of mine you were replying
to. Even my original post in this thread doesn't show up.

Guess I'll have to Google again... :-/

Aragorn

Nov 14, 2005, 1:43:48 PM
On Monday 14 November 2005 08:29, Tim Smith stood up and spoke the

following words to the masses in /comp.os.linux.advocacy...:/

> In article <WVmdf.46238$Cc4.2...@phobos.telenet-ops.be>,


> Aragorn <str...@telenet.invalid> wrote:
>> You can still experience this effect in GNU/Linux today. Simply try
>> to log into a character mode console with your *Caps* *Lock* turned
>> on. Everything you see from the login onward will be in uppercase.
>> Indeed, it is not a bug, it's a feature. ;-)
>
> What distribution? It doesn't do that in SuSE or CentOS.

Hmm... I've just checked and it doesn't do that anymore here either. I
suppose it only worked up until the 2.4.x kernels...

Well, it's deprecated anyway. I don't know anyone who would still be
using all-uppercase terminals.

> ...
>> UNIX systems traditionally use plain text files for their
>> configuration. This leads to many such files being present on an
>> average installation, but the good news is that they are all neatly
>> grouped underneath the */etc* directory, with additional
>> configuration files stored in an *./etc* branch under */opt* - where
>> those *./etc* branches are located in the respective application's
>> directory - or under */usr* or */usr/local.*
>>
>> It all seems complex at first glance, but given a bit of
>> consideration, such a layout will however soon make sense, as the key
>> to it all is logic. ;-)
>
> The problem with this is not complexity, but rather inconsistency.
> The configuration file formats are often not friendly to programatic
> manipulation, making it hard to write tools to automate configuration
> management.

Not really, since tools under UNIX-like systems are usually conceived to
do one thing and do it well. Mandriva - formerly MandrakeSoft - for
instance have a very modular "global configurator", called the Mandriva
Control Center. New tools are added or removed as the distribution
evolves.

> Because of this, the automated tools tend to be fragile, and break if
> a human edits the file directly.

Not necessarily true. I've actually seen this approach only in one
specific application, where the header to the file contained a sum of
the bytes used per section. There, manual editing would lead to
strange results, of course.

Other than the above, I think most configuration files can be
human-edited quite easily.

> Good configuration file design is hard, and few programmers put enough
> time into it.

True...

> ...
>> Microsoft Windows has its graphics and even a large portion of
>> Internet Explorer running inside the kernel. This approach is
>> considered Bad Practice(TM) by engineers as it leads to more kernel
>> vulnerability or instability in terms of bugs in these routines. Yet
>> it's considered a strategic Must Have(TM) by Microsoft itself
>> regarding monopolist tactics.
>
> Experiment shows that this is not so. First, there are the device
> drivers themselves. These are in the kernel in Linux, too, so neither
> side gets an advantage here.
>
> Second, there is graphics code. While in theory having parts of this
> run in kernel mode increase the risk of instability, the graphics code
> in the kernel is probably the least buggy code in Windows. Very few
> people have ever seen crashes from it, so in practice it has not made
> a difference in stability having it there.

Well, I have no actual data on the number of bugs caused by kernelspace
graphics code versus userspace graphics code.

> As far as the driver code goes, it looks like Microsoft is getting
> ready to get an advantage there:
>
>
> <http://www.wired.com/news/technology/bugs/0,2924,69375,00.html?tw=rss.TEK>
>
> That's rather interesting technology. I had not realized that formal
> modeling had reached the point that they could do device drivers.

I'm no programmer, but from what I've read in the /Changelog/ for the
first release candidate of 2.6.15, Linux seems to be headed towards a
new driver model as well. Might even be similar to the above.

> ...
>> In regards to swap memory, GNU/Linux uses a dedicated partition for
>> paging out the contents of the physical memory. This swap partition
>> is formatted in a layout very similar to the physical memory.
>> Kernelspace
>
> Linux can use swap files. You don't have to use a dedicated
> partition. See the mkswap man page for details.

Yes, I know this. I forgot to mention that in my original post.

> There's a ridiculous myth that swap partitions perform better than
> swap files. (I don't know why anyone would believe that--it would
> only be true if the people writing the filesystem code were morons).
> Anyway, this post from the kernel mailing list should put that myth to
> rest:
>
> Date: Tue, 28 Jun 2005 22:03:34 -0700
> From: Andrew Morton <>
> Subject: Re: Swap partition vs swap file
>
> Mike Richards <mrmik...@gmail.com> wrote:
> >
> > Given this situation, is there any significant performance or
> > stability advantage to using a swap partition instead of a swap
> > file?
>
> In 2.6 they have the same reliability and they will have the same
> performance unless the swapfile is badly fragmented.
>
> So why does every major distribution create a swap partition instead
> of using swap files? Habit, probably.

Hmm... I think it's not so much a performance issue as the advantage
that a swap partition is not subject to fragmentation, which is what
can happen when you use a swap file instead.

Tradition could also play part in it. UNIX has traditionally always
used dedicated swap partitions - or at least to my knowledge it has.

> ...
>> Windows uses a swap file for paging out the contents of the physical
>> memory. This swap file is a regular file within a regular filesystem
>> and therefore also gets cached again, which makes paging out memory
>> contents from the RAM to the hard disk quite pointless and tends to
>> fill up the cache in such a way that the whole system gets bogged
>> down and requires a reboot to return to a snappy operation.
>
> What makes you think this?

My experiences with Windows NT, and experiences from others who were
using the DOS-based Windows versions.

> ...
>> Either way, paging out kernel memory is considered Bad Practice(TM)
>> by engineers, and while I can understand that a server version of
>> Windows
>
> Paging out kernel memory is either fine, or it is a complete disaster
> to be quickly followed by a crash. Which of these it is depends on
> the design of the kernel. If the kernel is designed for it, it can be
> good practice. If the kernel isn't, then it is a bug, not bad
> practice.

Hmm... I'm willing to accept that, although I personally feel that
kernel code should not be swapped out, if for no other reason than
performance. Device drivers that aren't being used could simply be
unloaded - which is what the current Linux kernel does, provided that
we're talking about modules, of course.


Aragorn

Nov 16, 2005, 10:31:29 AM
On Monday 14 November 2005 08:49, Tim Smith stood up and spoke the

following words to the masses in /comp.os.linux.advocacy...:/

> In article <140xyfqj...@funkenbusch.com>,

Replying here as my ISP's newsserver keeps dropping a lot of posts - it
does that regularly... :-/

The multi-tasking on the 8086 was conducted by the operating system
itself. The 8086 CPU did not have hardware support for multi-tasking
and offered no memory protection.

Even MS-DOS was by that standard "multi-tasking" as it allowed you to
print a document in the background, from within DOS or from within an
application such as WordPerfect.

True multi-tasking - i.e. with the necessary provisions in the
hardware itself - only became available on the i80286, albeit that
switching back to real mode required a CPU reset. In OS/2 or Windows
3.0 Standard Mode, this meant that no other process could get hold of
the CPU while it was executing a DOS session.

The i80386 offered a more elegant solution via the virtual 86 mode,
which allowed for multiple (emulated) real mode sessions to be run from
within protected mode simultaneously. This gave OS/2 and Windows 3.x
(in 386-Enhanced mode) the possibility to run DOS sessions inside a
window, and to run them simultaneously with OS/2 or Windows
applications.

Aragorn

Nov 16, 2005, 5:43:44 PM
As I didn't want to reply via Google Groups - where I had to look for
Erik's reply - I've dug up the reply and pasted it into my /KNode/
editor, adding indentations where needed...


> On Sun, 13 Nov 2005 06:33:05 GMT, Aragorn wrote:
>
>>> Actually, it's not necessary to store information on drive letters.
>>> This has become largely a convenience, or shortcut.

>> Hmm... I know that mountpoints have been introduced from Windows
>> 2000 on, but as far as I know, nobody uses them. The default
>> approach still seems to be to primarily designate filesystems by
>> drive letters.

> Do you know why? Because it's easier to manage than having all your
> files in a single filesystem.

It's not a single filesystem, it's a single directory structure.

> People *LIKE* separate drives.

No, people _are_ _used_ _to_ separate drive letters. As are you,
apparently.

>>>> This approach does not make any distinction between the nature of
>>>> the filesystem or the hardware it resides on. There is only the
>>>> convention that "drive A:" - and if applicable, a "drive B:" - is a
>>>> floppy drive, and that the first Windows-accessible primary
>>>> partition on a hard disk is "drive C:".
>>>
>>> Totally incorrect. The first bootable drive can be any letter you
>>> want it to be (or no letter at all) under Windows. My first drive
>>> letter was I: for many years. Not on purpose, but just because my
>>> C: drive died and I made my I: drive the bootable drive and removed
>>> the C: drive.
>>
>> Under those conditions, yes. However, this is not the default
>> behavior and requires a situation like the one you just described.
>> At least, from my experiences with NT. I have actually tried
>> changing the drive letter to the partition with NT itself from C: to
>> something else, and Windows showed me a warning that this would cause
>> problems.
>
> No, it didn't. No such warning exists. What it probably said was
> that you can't change the system drive partition, and you solve this
> by changing your system drive.

In other words, you reinstall the operating system in a different
partition...

>> Looking inside the registry, I did indeed see where it would cause
>> problems, as applications - as well as some of the components of
>> Windows itself - relied on actual drive letters being used, while the
>> majority of Windows-specific entries used a variable instead.
>
> All those are replaceable. You can do a system wide search and
> replacfe quite easily.

I tried that in NT. Apparently the systemwide search was not so
systemwide...

> However, it's easiest to install over again to a secondary partition
> or drive.

Reinstalling the operating system seems to be quite common advice when
there is a problem to be solved in Windows...

>>>> In Windows, each filesystem has its own root directory, and each
>>>> filesystem is an entity. It is left to the user to decide which is
>>>> what - albeit that the Windows GUI will give you hints by showing
>>>> designated icons next to the drive letter.
>>>
>>> Again, completely incorrect. For example, you can mount an entire
>>> partition inside a windows folder, just like you can in Linux.
>>
>> But the question is whether this is the default behavior or just an
>> option? What if you - say - install Windows in the first partition
>> of the hard disk and you format a few extra partitions as well? Will
>> they show up as drive letters or as mount points?
>
> They'll show up as whatever you set it up to show up as.

This seems illogical, unless you have to tell the system about the
presence of the partition, which I don't recall being standard behavior
for Windows. From what I remember, Windows automatically assigns a
drive letter to any Windows-readable partition it finds.

>> And what about the volume that contains the Windows installation
>> itself? Does that have a drive letter or will that simply have a
>> root directory designation?
>
> There typically is a drive letter, again, mostly because a lot of
> legacy software expects it.

Yes, I know it's the software that requires it. This corroborates my
own experience.

>>>> In UNIX, you don't store a file on this or that volume; you store
>>>> the file "on the computer", and for doing so, you can store the
>>>> file in a - for you - writable place in a unified directory
>>>> structure. That same directory structure also features directories
>>>> which contain virtual filesystems such as */proc* or */sys* as
>>>> well as physical filesystems and RAM-based filesystems - e.g.
>>>> /tmpfs/ or /ramfs/ - without your needing to think about the
>>>> physical media.
>>>
>>> Yet there are times when you DO need to think about the physical
>>> media, and then you get into the weird situation of placing a volume
>>> within the filesystem, such as /mnt/cdrom or /mnt/smbshare.
>>
>> In this case, the names of the chosen mountpoints should make it
>> obvious what kind of medium they link to. Yet, with the complexity of
>> today's operating systems, the hard drive is the main storage and
>> should be considered part of the computer and not "this or that
>> particular storage device".
>
> The problem is, you lose that distinction. Unless you're aware that a
> particular folder is mounted on a remote server, you may not know, and
> thus be confused when that server isn't available and your files
> aren't there.

If the above scenario applies, then the mountpoint would make this
clear, e.g. */mnt/networkshare.*

The only scenarios where you would not have such a clearly recognizable
mountpoint would be if you were reading the root filesystem, */usr* or
even */home* via NFS, in which case you'd know soon enough.

Only */home* would need to be writeable in that respect, and without the
rest, you wouldn't be able to do much. Either way, this scenario
applies only when the server is up 24/7, and in such scenarios it would
be known to all if the server were down for - say - maintenance.

>> I agree with the view that the name of a mountpoint to removable or
>> network-based filesystems should be clear. In the latter case
>> however - i.e. network file systems - it also depends on whether the
>> filesystem is a filesystem to which you "just have access" or whether
>> it's actually one of the system's own directories. For instance, you
>> can have */usr* or */home* over NFS.
>
> And again, you lose track of distinctions you SHOULD keep track of.
> For example, you don't want to be saving 10GB of data to a dialup
> connected NFS link.

Anyone who uses a */home* over NFS is typically intelligent enough to
not want to store 10 GB on a filesystem served over dial-up. By the
same token, any system administrator who sets up a */home* on an NFS
share over dial-up should be fired immediately for incompetence.

>>> This kind of dichotomy is a hack because there are times when the
>>> physical volumes DO matter. Another example is when it comes to
>>> volume free space. The single volume approach makes it much less
>>> clear why you might be able to write 200GB to one directory but only
>>> 50 in another.
>>
>> Why would it make it less clear than a drive letter? The output of
>> a /df/ command can show you exactly what space is available and what
>> space is used on which filesystem.
>
> Which completely invalidates the entire "one filesystem to rule them
> all" argument.

On the contrary. It's not a single filesystem but a single directory
hierarchy. Windows is the one with the single filesystem - at least by
default.

The command to request the free space from a filesystem is there for
those who wish to use it, but normally your */home* is the only
directory you have write access to.
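
To illustrate - a hypothetical /df -h/ run on a box with a separate
*/home* and a network share (all names and sizes made up):

    $ df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hda1             9.2G  4.1G  4.7G  47% /
    /dev/hda3              28G   12G   15G  45% /home
    server:/export/share   55G   40G   12G  78% /var/share

One glance tells you which directory lives on which filesystem and how
much room each one has left.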

If you're writing to a share in */var* as a regular user, then you know
what you're doing because then you know you're writing to a network
share.

The logic you are trying to deploy here is only the logic of someone who
is accustomed to drive letters. People without prior exposure to
Windows and who are required to work at a networked computer can just
as easily be told that */home* is local and */var/share* - or */srv*
on systems that follow the new FHS 2.3 - is a different filesystem, a
network share or whatever.

>> Now you're having to differentiate between drives and folders,
>> and know which is which.
>
> With a drive letter, I *KNOW* it's a different partition than another
> drive letter, and it will have different free space than another one.
> In a filesystem, things are much fuzzier, and you may not remember
> that /home/dir1 is on a different drive than /home/dir2.

And you wouldn't need to. */home/erik* is your home directory, and you
have no business in the */home/aragorn* directory. And */home* plus
any shares are the only places you are allowed to write anyway, unless
you are root, in which case you know how the system's partitioning is
laid out.

>> If /df/ reports something like 150 MB available on */mnt/zip,* then
>> that's perfectly clear enough for me.
>
> But that requires you to mentally keep track of which folders are
> which partition, or constantly df to figure it out. It completely
> invalidates the single filesystem concept.

No, it does not, considering that you cannot write to just any
partition. People who need to write to */usr* are clever enough to
know that */usr* might be a different filesystem, because they'd have
the power to be root.

>>> Even so, Windows also has such a filesystem as well, it's called DFS
>>> (no relation to the DFS in here, I don't think) or Distributed File
>>> System.
>>
>> I have heard this being mentioned before, but I always thought it was
>> something along the lines of a network filesystem that made all the
>> shared resources of each computer with such a filesystem available to
>> all computers on the network without the user knowing which resource
>> is on what computer.
>
> It can be used for that as well, or it can be used on a single
> computer.

In that case, it's obvious that a unified directory structure makes
sense even to Microsoft.

>>>> GNU/Linux for that matter cannot be installed in a filesystem that
>>>> doesn't support UNIX-style ownerships and permissions, as this is a
>>>> core part of the UNIX security model. There are some GNU/Linux
>>>> distributions that _do_ allow themselves to be installed in a
>>>> /FAT/ partition (or on a floppy), but they use a trick: they run
>>>> off of a virtual
>>>> UNIX-style filesystem stored inside a file on that /FAT/
>>>> filesystem, called /umsdos./
>>>
>>> Oh, I get it. It can't be done, except when it's done. Nice
>>> double-think.
>>
>> Erik, either we are having a communications problem or you are
>> feigning one. It can be done, provided that the system does not use
>> the /FAT/ directly. It will still have all functionality of a UNIX
>> filesystem, because it will natively run off of an *emulated* UNIX
>> filesystem, which _physically_ is a file in a /FAT/ partition.
>
> If it's running on a fat filesystem, it's running on a fat filesystem,
> regardless of any "tricks" you might want to play. Windows itself
> plays such "tricks" on FAT to gain long file names and other factors,
> that doesn't make it any less FAT.

Incorrect comparison. GNU/Linux installed on a /FAT/ filesystem is in
fact installed on a virtual UNIX filesystem residing in a file, and the
/FAT/ partition itself is not used as the system's own filesystem but
through some sort of loopback device.
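
As a sketch of that kind of file-backed setup - the general loopback
principle rather than umsdos itself, with made-up device and file
names:

    # mount the FAT partition, then loop-mount an ext2 image stored on it
    mount -t vfat /dev/hda1 /mnt/dos
    mount -t ext2 -o loop /mnt/dos/linux.img /mnt/root

Everything under /mnt/root then behaves as a full UNIX filesystem with
ownerships and permissions, even though it physically lives inside a
single file on the /FAT/ partition.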

Windows playing tricks on /FAT/ to store long filenames is a totally
different thing, as those filenames are stored *within* the /FAT/
filesystem's own directory entries.

Nice try, but no, you're wrong. It's a different thing.

> Can you install Linux on a FAT filesystem? Yes.

Nope. You install it on /umsdos,/ which resides in a /FAT/ partition,
which is not accessed directly.

>> The above requires some special "magic", and although considered
>> highly unstable - and it is - UNIX will still regard its filesystem
>> to be a UNIX-style filesystem with ownerships and permissions, and
>> will feature the security of such a UNIX filesystem for as long as
>> the kernel is up and running. This is far from the same thing as
>> running off of a /FAT/ natively.

> Native has nothing to do with it. Is it running off FAT? Yes.
> Period.

It has everything to do with it. It is *not* running off /FAT,/ it is
running off /umsdos,/ which _resides_ in a /FAT,/ which in turn is
accessed as an external filesystem.

>> As a sidenote, I remember that NT 4.0 initially started installing
>> its stuff in a /FAT/ partition although I had selected /NTFS/ at
>> install time. It would then at a certain phase in the installation
>> procedure ask for a reboot - or initiate one itself - and then start
>> converting the filesystem to /NTFS/ before booting up further and
>> completing the installation.
>>
>> Please note that the above is not an accusation or sneer of any kind.
>> It's just that I wonder why the partition could not be formatted as
>> NTFS prior to installing all that stuff.
>
> The reason was that the first phase of the NT install program was a
> DOS program, and it couldn't read or write an NTFS volume. A minimal
> NT OS had to be copied to the hard disk in FAT, booted, then converted
> to NTFS to continue with the rest of it.

That is more or less what I suspected already. But then why would NT
require DOS to be installed?

>>>> Microsoft Windows has its graphics and even a large portion of
>>>> Internet Explorer running inside the kernel.
>>>
>>> Internet explorer doesn't run inside the kernel at ALL. Where do
>>> you get this shit? Do you just make it up?
>>
>> Hmm... I think I've always been polite so far. Could you be
>> bothered with showing me the same courtesy, please? :-/
>
>> As a matter of fact, I got that information from my monthly
>> computer magazine, and they are much more focused on Windows than on
>> GNU/Linux. I presume they know what they are talking about.
>
> Oh, so some unnamed magazine that only you read said something vague
> about IE being in the kernel. Nice.

It's not unnamed. It was called Computer Magazine (www.compmag.com) but
it has merged with the Dutch Personal Computer Magazine (PCM) now.

It's an edition of VNU Business Publications, but as it's in Dutch, I
doubt that it'd be of much use to you.

You are seeking to discredit me without giving me the benefit of the
doubt. Considering that the group is about GNU/Linux advocacy and you
are on the opposite side, I think it should rather be the other way
around.

Yet I'm open to an honest debate, which is why I'm replying to you.

>>>> This approach is considered Bad Practice(TM) by engineers as it
>>>> leads to more kernel vulnerability or instability in terms of bugs
>>>> in these routines. Yet it's considered a strategic Must Have(TM)
>>>> by Microsoft itself regarding monopolist tactics.
>>>
>>> Yet you ignore the fact that parts of Unix GUI's are also in the
>>> kernel, such as the framebuffer.
>>
>> As you could read for yourself, I have mentioned this in my original
>> post. Then how exactly do you see me ignoring it?
>
> By saying what you said above.

Am I to believe that Erik Funkenbusch doesn't understand the difference
between a driver and a window manager?

>>> Further, placing the GUI outside the kernel gives you no more
>>> security since the GUI process is given full root privs and can
>>> access hardware directly, thus causing the same kinds of lockups and
>>> crashes that can occur in the kernel.
>>
>> This is technically incorrect. The GUI process does not access
>> hardware directly. It accesses the driver, and the driver runs in
>> kernel mode.
>>
>> In addition, the GUI process does not run with full root privileges.
>> It runs under my non-privileged user account here.

> Wrong. X drivers do not run in kernel mode, though they may access
> kernel functionality (such as framebuffer). X accesses the hardware
> directly as a user process.

That's twisting the facts around just to come out of it as the winner.
Now now, Erik, I really thought you could do better than that?

Of course the X11 driver runs in kernel mode, because it's a driver.
However the window manager does *not* run inside the kernel, and other
than the driver, *everything* *else* about X11 runs in usermode. And
yes, with the privileges of _my_ user account.

>>> Wrong, the framebuffer is used for a lot more than just the kernel
>>> boot GUI.
>>
>> I know what a framebuffer is and what it is used for, Erik. It is
>> however not a windowing environment. It's a mere driver. And
>> drivers do indeed run inside the kernel.
>
> The Windowing environment in Windows doesn't run in the kernel either.
> The Windowing environment, known as USER32, runs as part of the
> userspace CSRSS process. GDI, or the graphic device interface, runs
> in the kernel (partially anyways).

So is GDI a driver then or what?

>>> And kernel memory can be paged out, not all of it of course, but
>>> some parts.
>>
>> No, it cannot. See the following PowerPoint presentation:
>>
>> http://www.cse.psu.edu/~anand/spring01/linux/memory.ppt

> Well, I stand corrected on that. That's just plain stupid. EVERY
> modern OS has pageable kernel memory, to not do so is a major denial
> of service waiting to happen. Just load a kernel module that
> allocates 1GB of memory, and your system is toast.

I believe this was adequately explained in another reply - by Tim
Smith, I think - but I can't check at this stage, as I'm manually
putting this post together from scratch without having the original
thread in front of me.

Either way, Windows is equally susceptible to such a denial of service.

>>> Again, you're making this shit up. Pagefiles are never cached.
>>> The filesystem has control over what files can be cached and what
>>> can't, and files can be opened in non-cached mode, which the
>>> pagefile is.
>>
>> When used with NTFS, you mean? I don't see how the filesystem could
>> control what is cached and what not if /FAT/ or /vfat/ are being
>> used, as they are by far not that advanced.
>
> It has nothing to do with the filesystem. Windows cache system is
> unified, and it caches CDFS, FAT, NTFS, SMB, NFS, AppleTalk, etc.. any
> installable filesystem can and will be cached. It's a part of the OS
> itself, and irrelevant to the filesystem.

I stand corrected, as far as Windows 2000 and Windows XP are concerned
then. I'm even willing to accept it for NT 4.0. It was however far
from true in the DOS-based Windows versions.

>> In addition, if such mechanism does indeed exist in the present day
>> versions of Windows, then it sure didn't exist in NT 4.0. And I have
>> *used* NT 4.0 and have turned it inside out for as much as I could,
>> even if it was only for three years.
>
> Yes, it did exist in NT4, and has since NT 3.1.

I guess it must have had a different memory leak then.

> Read this:
>
> http://www.windowsitpro.com/Articles/Print.cfm?ArticleID=3864

As in my two other paragraphs above...

>>>> While each of the above designs has its advocates and naysayers,
>>>> Linus Torvalds favors a monolithic design. For one, it is faster
>>>> and less complicated than a microkernel design as it involves far
>>>> less context switching for things like hardware access.
>>>
>>> You know, it's funny how people like yourself praise the monolithic
>>> kernel for speed and efficiency, yet you criticize when a far more
>>> microkernel-like design such as Windows uses certain parts in the
>>> kernel to gain efficiency.
>>
>> The idea behind a microkernel is not to move things into the kernel,
>> Erik; it is to move things *out* of the kernel. I'm not commenting
>> on the choice for a microkernel. I'm sure microkernels can make up
>> very good pets... ehm... operating systems.
>
> You seem to have totally missed my point. A monolithic kernel moves
> everything INTO the kernel for speed and efficiency. Yet when a more
> modular microkernel based design moves a few things into the kernel to
> gain similar speed and efficiency, people like you complain, all the
> while praising the monolithic kernel for doing FAR MORE in the kernel.

No, a monolithic kernel does not move everything into the kernel. A
monolithic kernel simply keeps that which is a traditional kernel task
as part of the kernel, i.e. process scheduling, memory management,
hardware access and I/O scheduling. And that is *all* that needs to be
in the kernel. Nothing else needs to be or is moved into the kernel.

Microkernels have their hardware access and I/O scheduling outside of
the kernel.

>> It's just that I have given the concepts some thought and I
>> personally believe that a monolithic kernel is probably the best
>> approach, especially regarding the Linux kernel - see below. Linus
>> Torvalds seems to share my view on this.
>
> Then how can you POSSIBLY complain that moving ANYTHING into the
> kernel is bad?

It depends on *what* you want to move into the kernel. Only what I
mentioned above should be in the kernel, nothing else. Graphics
drivers are drivers and should be in the kernel. Graphics engines
should *not* be.

>>> The design of the kernel has *ZERO* to do with who can check in and
>>> out stuff in the kernel. They're not even related in the slightest.
>>
>> Hmm... I think we may be having a communications problem again.
>> What I am saying in the above quoted paragraph is that the kernel
>> team is a very tight and strict group of people.
>
> That's true, but it has NOTHING to do with the design of the kernel
> itself. You tried to claim that a Monolithic kernel makes it easier
> to check in and out changes. That's simply false.

No, you are once again misinterpreting what I'm saying, and I'm
beginning to think that you are doing it on purpose.

As the kernel is monolithic, the device drivers come from people in the
kernel development team, not from userspace developers. That was my
point.

>> Such a microkernel approach would rely on the goodwill of
>> distribution makers to determine what hardware their distribution
>> does and does not support. With Linux's current design, this choice
>> is left up to the kernel team, and Linus still personally oversees
>> the PPC and x86 branches of the tree.
>
> Wrong. Many distribution vendors support their own kernel patches
> anyways, with their own set of drivers in many cases.

False. Distribution vendors start developing a new distribution release
based upon the kernel /du/ /jour/ on kernel.org. By the time their
distribution is ready for the public, that kernel is already a few
(minor) versions behind the vanilla tree from kernel.org, and so the
newly submitted driver patches that went into the current vanilla
kernel since the version their "new release" started out with are
backported. Herein lies the only difference in driver support between
distributions.

The non-driver patches that distributions add on to their own kernels
are typically security patches - either from the Internet or developed
in-house - and extra functionalities such as Mandriva's /supermount/
patch.

The only drivers that are not part of the main kernel tree and that
commercial distributions include - except in their freely downloadable
versions - are the proprietary video drivers from nVidia or ATI, and
when the kernel loads those, it will report itself as /tainted./

If there are stability problems or graphics support problems, then it's
usually also the fault of these very proprietary drivers.

>> As such, every distribution has the same hardware support built into
>> the kernel, albeit that some distributions will use newer vanilla
>> kernels as the basis for their distro-patched kernel than others,
>> which may result in one distribution having support for a few
>> recently added peripherals while another distro may not.
>
> Except that simply not the case with Linux.

??? Read the above paragraphs again.

>>> Not true at all. Now that there are a *LOT* of financial interests
>>> in Linux, such companies have been putting pressure on the kernel
>>> developers to meet deadlines.
>>
>> Can you prove this statement? This would be the first I hear about
>> it.
>>
>> To my knowledge, it is still Linus himself and Linus only who decides
>> when a new kernel is being released, albeit that he leaves it to Greg
>> Kroah-Hartman to release intermediary versions, such as the 2.6.14.1
>> and 2.6.14.2 kernels. I believe this started around the time of the
>> 2.6.11 tree.
>
> While it's true that Linus decides, when 2.4 was in development, there
> were a lot of folks waiting for it to ship. People like IBM, HP, and
> others.

That still doesn't equal that companies are putting pressure on the
developers in the form of deadlines. The most they can do is ask Linus
to speed things up. They cannot impose deadlines upon him.

> I think there's no clearer evidence that it shipped too early than the
> 2.4.9-10-11 debacle with the complete virtual memory manager rewrite in
> the stable tree.

So the kernel was released a bit prematurely in a few versions. Nothing
is perfect, and only recently you could read on this group how Linus
told a few people off over trying to insert new code into the kernel
tree right before the release of that kernel.

Microsoft has a habit of releasing just about everything prematurely, so
I don't consider it too bad if it happens every once in a while with
FOSS. In general, the Linux kernel is released only when stable, and
Linus and friends are doing their very best to keep it at that, even if
it means delaying the release of another kernel.

The above is the simple reason why 2.6.14 received five release
candidates instead of the planned four.

>>> Actually, Unix began life as a single user version of Multics. The
>>> name "Unix" is a play on "Unics". So in that respect, it too grew
>>> "upwards" from a single-user OS. Kind of invalidates your argument,
>>> doesn't it?
>>
>> That last sentence made your whole paragraph condescending again,
>> Erik, which was what I hoped you would have abandoned, especially
>> since your statement is not completely honest.
>
> It is completely honest.

I guess that depends on your definition of honesty then. Apparently,
mine is stricter.

>> UNIX did indeed start out as a single user version of Multics and was
>> jokingly referred to as "Unics". This much is true, _*but*_ this was
>> in the very beginning of UNIX's existence, before it was actually a
>> serious operating system.

On second thought, as it was said on this group - in another reply to
you, I think - "Unics" wasn't single-user anyway. I was not sure
originally, although I know where it comes from and how the name came
to be.

> I fail to see how that is relevant. The fact of the matter is, Unix
> evolved from a single user system. Period.

Every operating system evolves from a single line of code - i.e. the
very first line - and so one could even argue that every operating
system was once not an operating system. In other words, your argument
is far-fetched.

Unics was redesigned from the ground up as UNIX when it was rewritten in
C. The original games OS that Thompson and Ritchie had put together
was written in assembler. The UNIX that came to be used in-house at
AT&T and was later sold commercially may still have had some things in
common - as in "a few routines" - with the original "Unics", but not
all that much anymore. It was a redesign.

> Windows NT based systems were a complete redesign from the ground up
> as a multitasking, multi-processor OS.

Sure, but not multi-user. It was designed primarily as a workstation OS
with server abilities. The multi-user aspect was only bolted on later,
and by "multi-user" I do mean simultaneous multi-usage, as in UNIX. I
don't include filesharing or running a webserver in that category.

Even thin clients for Windows need some kind of base GUI OS in the
client itself, typically Windows CE. That's the very reason why
Windows Terminal Server thin clients are different from UNIX thin
clients.

> It only grafted on a compatibility layer to make the old
> single-tasking programs still run.

I've never denied that the current iterations of Windows are different
designs from the DOS-based Windows versions.

>> As far as I know - and there is a nice historic thread related to the
>> hardware aspects of this in C.O.L.M. at the moment - multitasking
>> only became available to x86 on the Intel 80286, which was in the
>> early-to-mid eighties, if my memory serves me right.
>
> Gee, I guess that's why OS's like GEOS multitasked on the 8086 and
> 6502.

As in my other reply, it was the OS that was multitasking. The
multitasking provisions were not present in hardware.

>> By that time, UNIX was already fully functional and commercially
>> available. Its purely assembler-coded single-user incarnation as you
>> describe it was just a leisure experiment from a couple of bored
>> developers and can hardly be called an operating system, for the
>> short time that it existed in this form.
>
> They are still what the OS evolved from. And your original
> condescending comment about how Windows evolved from a single user OS
> shows the kind of double standard you display regularly, and THAT is
> why your messages typically tick me off.

Windows evolved from a single-user OS, yes, as far as its overall
design goes. It does have a multi-tasking and SMP-capable kernel now,
and an attempt at multi-user functionality - in the UNIX sense - has
been bolted on to the new design, but the philosophy stayed the same.

It was intended for workstations primarily and had to be able to
function as a server as well. Microsoft knew that the server market
was already very well-crowded and that there was a chance that NT would
fail there because of the competition from Netware and UNIX.

My remarks were far less condescending than yours, Erik, and so far,
your replies have only shown me that you are the one with the double
standards. You twist both the relevance of facts and my words around
to suit your purpose.

>> The bottom line is that you're the one who's wrong again, but you are
>> using something irrelevant as an argument, and you throw in a
>> condescending remark.

> Wait, I'm wrong but i'm right? You admit I was correct.

No, I didn't, or at least not on the main point of view. Only on a
few things maybe, but for the most part you are wrong, especially
because you twist things around to make what you say seem more
convincing. That's a debate technique that can make you leave the
debate as the winner, but it doesn't make the things you say any more
true.

> Yet i'm wrong simply because you don't want to consider that as
> "evolution". Sorry to burst your bubble, but plugging your fingers in
> your ears and saying "nah nah nah i'm going to believe what i want"
> doesn't cut it.

So Windows has evolved. So has GNU/Linux. And GNU/Linux had the more
elaborate starting point and the grander vision from the start. It
took Windows until the mid to late 90's before it finally became a
genuine 32-bit OS.

You know Erik, just about every argument you make about me as a person
in this reply is actually something that applies to you far more than
it does to me. It is you who wants to believe what he wants.

Unlike you however, I knew I wanted a UNIX-style operating system as I
couldn't agree with some of the things in the designs of OS/2 and
Windows. You on the other hand are so used to Windows that you're
simply unable to accept anything else as possibly superior to it.

But - as we say here in Flanders - maybe you are trying to assess me
by assessing yourself?

AZ Nomad

unread,
Nov 16, 2005, 5:56:04 PM11/16/05
to
On Wed, 16 Nov 2005 22:43:44 GMT, Aragorn <str...@telenet.invalid> wrote:


>This seems illogical, unless you have to tell the system about the
>presence of the partition, which I don't recall to be standard behavior
>for Windows. From what I remember, Windows automatically designates a
>drive letter to any Windows-readable partition it finds.

Worse than that; Windows will assign a drive letter to any partition or
drive it finds whether or not it can read the file system.
Pop an XP install disk into a computer with a flash card reader and you'll
find the hard drive named "L:" instead of "C:". Ditto if there are linux
partitions already on the drive. The only way to avoid this is to run
the linux installer far enough to set up the windows partition with free
space around it then run the windows installer.

The Ghost In The Machine

unread,
Nov 16, 2005, 8:00:05 PM11/16/05
to
In comp.os.linux.advocacy, AZ Nomad
<azn...@PmunOgeBOX.com>
wrote
on Wed, 16 Nov 2005 22:56:04 GMT
<slrndnne8v....@ip70-176-155-130.ph.ph.cox.net>:

Linux has a vaguely similar problem for physical drives; the good news
is that /etc/fstab insulates the user from the worst of it, though
the sysadmin still has to edit it.

In other words, before the new installation:

drive = /dev/hdb
partition = /dev/hdb1
name = /mydata

and after might be:

drive = /dev/hdc
partition = /dev/hdc1
name = /mydata

Most programs won't care about the /dev/hdc1; they look at the /mydata
(or a subitem thereof somewhere).
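
The matching /etc/fstab line after such a change might look like this
(filesystem type assumed):

    # device     mountpoint  type  options   dump pass
    /dev/hdc1    /mydata     ext3  defaults  0    2

Programs keep using /mydata; only this one line has to track the
device name.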

Erik F has mentioned a Windows Volume Manager but I can't say I've seen it.

--
#191, ewi...@earthlink.net
It's still legal to go .sigless.

Thomas Wootten

unread,
Nov 16, 2005, 9:45:16 PM11/16/05
to
Aragorn wrote:

<snip>

umsdos uses 'foobar' files (they probably have a proper term but I don't
know it) in every directory to store the permissions, long names, etc.
When mounted correctly, as umsdos, the foobar files are totally invisible,
and the permissions etc. visible. Force a mount as fat, or look at it in
Windows, and the foobar files will be visible, and the normal files will
not have their permissions.

One cannot install Linux on a filesystem that does not support permissions.
Well maybe one can, I've never tried, but it would be a BAD idea.

<snip>


--
Tom Wootten, Fresher NatSci, Trinity Hall.
oof.trinhall.cam.ac.uk
There was only ever one valid use for the notorious <blink> tag:
Schrodinger's cat is <blink>not</blink> dead.

AZ Nomad

unread,
Nov 17, 2005, 11:02:43 AM11/17/05
to

>and after might be:

Windows can't be moved once it installs. If it installs to drive 'L:',
anything that changes that assignment will destroy the system. It goes back
to that pile of shit called the registry.

On linux, moving the root only requires a change to fstab and perhaps
lilo/grub. Copy all the files with 'cp -a' to the new partition and you're
good to go. Probably a good idea to do it by booting the rescue shell from
a distro's install CD.
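
Roughly, from the rescue shell (device names are examples only):

    mount /dev/hda1 /mnt/old          # the old root
    mount /dev/hdb2 /mnt/new          # the freshly formatted new root
    cp -a /mnt/old/. /mnt/new/        # preserves permissions, links, devices
    vi /mnt/new/etc/fstab             # point / at the new partition
    # then re-run lilo, or fix the root= line in grub's config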

Erik Funkenbusch

unread,
Nov 17, 2005, 12:33:51 PM11/17/05
to
On Thu, 17 Nov 2005 16:02:43 GMT, AZ Nomad wrote:

> Windows can't be moved once it installs. If it installs to drive 'L:',
> anything that changes that assignment will destroy the system. It goes back
> to that pile of shit called the registry.

Of course it can. There are a number of options. You can, for example,
copy everything to a new drive, then change the drive letter of the new
drive to the old one. Or, you can copy just the Windows files to a new
partition and install that partition into your filesystem via a mountpoint.

> On linux, moving the root only requires a change to fstab and perhaps
> lilo/grub. Copy all the files with 'cp -a' to the new partition and you're
> good to go. Probably a good idea to do it by booting the rescue shell from
> a distro's install CD.

You can do exactly the same thing with Windows.

The Ghost In The Machine

unread,
Nov 17, 2005, 3:00:04 PM11/17/05
to
In comp.os.linux.advocacy, Erik Funkenbusch
<er...@despam-funkenbusch.com>
wrote
on Thu, 17 Nov 2005 11:33:51 -0600
<876wis830aye$.d...@funkenbusch.com>:

> On Thu, 17 Nov 2005 16:02:43 GMT, AZ Nomad wrote:
>
>> Windows can't be moved once it installs. If it
>> installs to drive 'L:', anything that changes that
>> assignment will destroy the system. It goes back
>> to that pile of shit called the registry.
>
> Of course it can. There are a number of options.
> you can, for example, copy everything to a new
> drive, then change the drive letter of the new
> drive to the old one. Or, you can copy just the
> windows files to a new partition and install that
> partition into your filesystem via a mountpoint.

OK. Any magic that has to be done to boot.ini?

>
>> On linux, moving the root only requires a change to fstab
>> and perhaps lilo/grub. Copy all the files with 'cp -a'
>> to the new partition and you're good to go. Probably a
>> good idea to do it by booting the rescue shell from
>> a distro's install CD.
>
> You can do exactly the same thing with Windows.

The devil's in the details. How would one, for instance,
take XP Home, move it from partition /dev/hda1 to /dev/hdb2,
and set up everything (including boot manager!), assuming
that /dev/hda1 = 'C:' originally, and that /dev/hda1 is
left blank after all this?

Also, what would Windows see the partition in /dev/hdb2 as?
Would it see it as:

[1] 'C:'?
[2] 'D:'?
[3] anything at all depending on how many DOS and VFAT partitions
there are on drive /dev/hda and /dev/hdb?
[4] anything you want?

I suspect [4] but want to make sure, and in any event, I've yet
to see this Volume Manager on my win2k box. (I'd have to
dual-boot to see WinXP.)

Thomas Wootten

unread,
Nov 17, 2005, 3:09:50 PM11/17/05
to
Erik Funkenbusch wrote:

Even if the new root is actually on a totally different machine? Can do on
Linux, might break a few settings but that's all. Windows...uh...can we say
'product activation'?

AZ Nomad

unread,
Nov 17, 2005, 3:21:31 PM11/17/05
to
On Thu, 17 Nov 2005 11:33:51 -0600, Erik Funkenbusch <er...@despam-funkenbusch.com> wrote:


>On Thu, 17 Nov 2005 16:02:43 GMT, AZ Nomad wrote:

>> Windows can't be moved once it installs. If it installs to drive 'L:',
>> anything that changes that assignment will destroy the system. It goes back
>> to that pile of shit called the registry.

>Of course it can. There are a number of options. you can, for example,
>copy everything to a new drive, then change the drive letter of the new
>drive to the old one. Or, you can copy just the windows files to a new

Good luck doing that on a system that doesn't boot.

>partition and install that partition into your filesystem via a mountpoint.

>> On linux, moving the root only requires a change to fstab and perhaps
>> lilo/grub. Copy all the files with 'cp -a' to the new partition and you're
>> good to go. Probably a good idea to do it by booting the rescue shell from
>> a distro's install CD.

>You can do exactly the same thing with Windows.

But it won't boot if the \windows drive letter has changed.

Erik Funkenbusch

unread,
Nov 17, 2005, 4:01:10 PM11/17/05
to
On Thu, 17 Nov 2005 20:00:04 GMT, The Ghost In The Machine wrote:

> In comp.os.linux.advocacy, Erik Funkenbusch
> <er...@despam-funkenbusch.com>
> wrote
> on Thu, 17 Nov 2005 11:33:51 -0600
> <876wis830aye$.d...@funkenbusch.com>:
>> On Thu, 17 Nov 2005 16:02:43 GMT, AZ Nomad wrote:
>>
>>> Windows can't be moved once it installs. If it
>>> installs to drive 'L:', anything that changes that
>>> assignment will destroy the system. It goes back
>>> to that pile of shit called the registry.
>>
>> Of course it can. There are a number of options.
>> you can, for example, copy everything to a new
>> drive, then change the drive letter of the new
>> drive to the old one. Or, you can copy just the
>> windows files to a new partition and install that
>> partition into your filesystem via a mountpoint.
>
> OK. Any magic that has to be done to boot.ini?

Now that you mention it, yes. That would require that you set the correct
drive and partition to boot from. But that would be it.

>>> On linux, moving the root only requires a change to fstab
>>> and perhaps lilo/grub. Copy all the files with 'cp -a'
>>> to the new partition and you're good to go. Probably a
>>> good idea to do it by booting the rescue shell from
>>> a distro's install CD.
>>
>> You can do exactly the same thing with Windows.
>
> The devil's in the details. How would one, for instance,
> take XP Home, move it from partition /dev/hda1 to /dev/hdb2,
> and set up everything (including boot manager!), assuming
> that /dev/hda1 = 'C:' originally, and that /dev/hda1 is
> left blank after all this?

You need to boot from a different partition or from a live CD (such as
one built using Bart PE or similar), because you can't make active
changes to the partition you are booting from.

Create the second partition or drive, copy the files from the first drive
to the second (if you're using Explorer, you have to make sure it's set to
show all files, including system files), use Disk Manager to set the first
partition to Drive whatever, then set the second partition to Drive C (or
whatever it was originally called), then set the first drive to whatever
the second drive was originally called. Edit boot.ini to change the boot
partition for the OS, and reboot. Optionally, you can then format the
original first partition.
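
For reference, the edited boot.ini might end up looking roughly like
this (the disk and partition numbers are examples only):

    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(1)partition(2)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(1)partition(2)\WINDOWS="Windows XP" /fastdetect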

> Also, what would Windows see the partition in /dev/hdb2 as?
> Would it see it as:
>
> [1] 'C:'?
> [2] 'D:'?

It would be set to whatever the original partition was.

> [3] anything at all depending on how many DOS and VFAT partitions
> there are on drive /dev/hda and /dev/hdb?

It can be set to whatever you want.

> [4] anything you want?

Yes. However, some applications will expect files to be on the partition
they were installed on, which means either doing a search and replace in
the registry, or using the same drive letter.

> I suspect [4] but want to make sure, and in any event, I've yet
> to see this Volume Manager on my win2k box. (I'd have to
> dual-boot to see WinXP.)

It's on Win2k as well. Right click My Computer, choose Manage, then go
into Disk Management. Right click on any partition (not the disk) and
choose Change drive letter and paths. This also allows you to mount
partitions in paths on an NTFS partition.
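
From the command line, the /mountvol/ tool does the same job; a sketch
(the volume GUID is elided here, and the target directory must exist
and be empty):

    C:\> mountvol
    ...lists the volume GUIDs and their current mount points...
    C:\> mountvol C:\data \\?\Volume{...}\

After that, the volume is reachable as C:\data without using up a
drive letter.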

Erik Funkenbusch

unread,
Nov 17, 2005, 4:01:50 PM11/17/05
to

You can set the new partition to the old drive letter.

AZ Nomad

unread,
Nov 17, 2005, 4:30:13 PM11/17/05
to

How do you change the drive letter assignment on a system you can't boot?

The Ghost In The Machine

unread,
Nov 17, 2005, 7:00:03 PM11/17/05
to
In comp.os.linux.advocacy, Erik Funkenbusch
<er...@despam-funkenbusch.com>
wrote
on Thu, 17 Nov 2005 15:01:10 -0600
<1cehgt82...@funkenbusch.com>:

Hmm....*this* guy? Never occurred to me to look beyond Services. :-)

The Disk Manager reminds me vaguely of an Amiga utility.
How a partition can be "healthy" and foreign is not clear
to me. (It's a dualboot.) No offense to resident aliens,
but I'd think "Foreign" or "Unknown" would make a little
more sense here. Looks like one can change a non-system
drive's drive letter without difficulty. However,
there's no equivalent to pivot_root() (though that's tricky
to do right anyway). I don't know if this can resize
active partitions. Of course the Amiga actually handled
(gasp) multiple letters and numbers in its partition names,
such as MYGAME:, INET:, VOL2:, and HOME:. The Amiga even
prompted one if it couldn't find a volume named MYGAME: when
something wanted to open one -- of course it would wait until the
cows came home with its requester up in that case:

Please insert volume
MYGAME
in any drive

[retry] [cancel]

Windows has some work to do here. :-) To be fair, Windows
implemented a very elegant share pathname idea, which might
have been taken from Apollo DomainOS and modified for its
needs: \\hostname\share . (Apollo had //nodename only;
it had no concept of shares. '//' was internally called
the 'canned root' for some reason. Most modern Unix and
Linux nodes use /mnt, /nfs, or /net, and it's a convention,
not a requirement; automounters are routinely used in larger
Unix installations.)

Disk Defragmenter is the usual defragmenter on Windows.
I've seen it before. I'll probably see it again.
Defragmenting active partitions is an interesting idea with
no Linux equivalent (or much of a requirement, though one
can easily defrag Linux partitions by backing up using tar,
cpio, or cp -a [I think], wiping, and restoring).
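
The tar round-trip, as a sketch (assuming */home* sits on its own
partition; device and paths are made up, and the archive must fit
somewhere outside /home):

    tar -C /home -cpf /backup/home.tar .   # back up, preserving permissions
    umount /home && mkfs.ext3 /dev/hdb1    # wipe: recreate the filesystem
    mount /dev/hdb1 /home
    tar -C /home -xpf /backup/home.tar     # restore; files come back contiguous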

Logical Drives is a little weird, since I don't see
options such as "Map" and "Unmap" here.

Removable Storage I'd have to study. The idea of Operator
Requests on a desktop is a bit strange, though I suspect
a protocol to ... well ... an Operator running an NT share.

I'll have to see what my Kayak has in this area -- though
in its case I have the little problem of trying to boot
Win98 off a SCSI drive.

The Ghost In The Machine

unread,
Nov 17, 2005, 7:00:04 PM11/17/05
to
In comp.os.linux.advocacy, AZ Nomad
<azn...@PmunOgeBOX.com>
wrote
on Thu, 17 Nov 2005 21:30:13 GMT
<slrndnptk1....@ip70-176-155-130.ph.ph.cox.net>:

[1] Buy a new XP license and a new drive.
[2] Install XP on the new drive.
[3] Boot the new drive.
[4] Run the Volume Manager on the old drive, and set it to the
letter one wants.
[5] Wonder why everyone is looking at me funny as I'm now out about
$500 ($300 or thereabouts for XP Home, $200 for the drive) or so.

Or, one can get the system to boot first, in order to try to
change the drive letter assignment on a non-system...wait, that
won't work.

There used to be a Win95 Rescue Floppy which among other things
allowed one to run FDISK.EXE, which of course wipes the entire
system. One can then install Linux, of course. :-)

Microsoft Windows(tm). How much did we want you to spend today?

Jim

unread,
Nov 17, 2005, 7:12:49 PM11/17/05
to
The Ghost In The Machine wrote:
<snip>

> I'll have to see what my Kayak has in this area -- though
> in its case I have the little problem of trying to boot
> Win98 off a SCSI drive.
>

I've got one of those! :) Previous owner hated it, but I found it
perfect for Knoppix 3.8; everything works, it's a 633 PIII with 256MB
RDRAM. Pretty darn nippy, and uber quiet. Can hardly hear the fans or
the drives (4x4GB SCSI in RAID0,1 + 1x8GB SCSI). He pretty much chucked
it at me after I expressed a passing interest in some other kit he was
getting rid of.

Ron House

unread,
Nov 17, 2005, 10:01:50 PM11/17/05
to
Erik Funkenbusch wrote:
> On Thu, 17 Nov 2005 16:02:43 GMT, AZ Nomad wrote:
>
>
>>Windows can't be moved once it installs. If it installs to drive 'L:',
>>anything that changes that assignment will destroy the system. It goes back
>>to that pile of shit called the registry.
>
>
> Of course it can. There are a number of options. you can, for example,
> copy everything to a new drive, then change the drive letter of the new
> drive to the old one. Or, you can copy just the windows files to a new
> partition and install that partition into your filesystem via a mountpoint.

There used to be files that had to stay contiguous, files that had to be
first, etc. Has XP got around all that stuff? Is it _really_ safe to
file-copy a Windows partition and expect it to work?

--
Ron House ho...@usq.edu.au
http://www.sci.usq.edu.au/staff/house

Arkady Duntov

unread,
Nov 17, 2005, 10:24:10 PM11/17/05
to
On Thursday 17 November 2005 13:00, The Ghost In The Machine
<ew...@sirius.tg00suus7038.net> (<i22v43-...@sirius.tg00suus7038.net>)
wrote:

> In comp.os.linux.advocacy, Erik Funkenbusch
>>


>> Of course it can. There are a number of options.
>> you can, for example, copy everything to a new
>> drive, then change the drive letter of the new
>> drive to the old one. Or, you can copy just the
>> windows files to a new partition and install that
>> partition into your filesystem via a mountpoint.
>
> OK. Any magic that has to be done to boot.ini?

Don't bother with this advice.

If a file with a "long" name is copied, Microsoft offers no guarantee that
it will get the same short name (e.g., Micros~1) in its new location. If
the short name changes, all references to the old name in the registry are
now broken.

Windows: No Warranty, Express or Implied.

Erik Funkenbusch

unread,
Nov 18, 2005, 12:13:55 AM11/18/05
to

By booting from a livecd or by doing it before you reboot.

Erik Funkenbusch

unread,
Nov 18, 2005, 12:20:41 AM11/18/05
to
On Fri, 18 Nov 2005 00:00:03 GMT, The Ghost In The Machine wrote:

> I don't know if this can resize
> active partitions.

You can only resize partitions on "dynamic" disks, which use an alternate
partition structure that typically isn't compatible with non-NT OS's.

> Of course the Amiga actually handled
> (gasp) multiple letters and numbers in its partition names,

Yes, Amiga drive labels were quite nice.

Erik Funkenbusch

unread,
Nov 18, 2005, 12:23:15 AM11/18/05
to

I'm not sure what you're talking about. I know of no files that need to be
contiguous, and as long as your BIOS can see the boot sector of a partition,
you can boot from it.

Kelsey Bjarnason

unread,
Nov 18, 2005, 4:40:02 AM11/18/05
to
[snips]

On Wed, 16 Nov 2005 15:31:29 +0000, Aragorn wrote:

> Even MS-DOS was by that standard "multi-tasking" as it allowed you to
> print a document in the background, from within DOS or from within an
> application such as WordPerfect.

Not quite, as DOS lacked the process management features to accomplish
this. What DOS offered was the ability for applications to chain
interrupt handlers in such a manner that they could respond independent of
main application flow - e.g. a TSR which could handle file transfers while
your main application, such as a word processor, ran in the foreground.

Even then, many apps didn't use this approach quite so directly and
instead opted to simply install their own interrupt handlers within the
application - terminal apps did this with serial interrupts, some document
processing apps did this with timer and other interrupts, etc.


> True multitasking - i.e. with the necessary provisions into the
> hardware itself - only became available on the i80286

True multitasking was available long before the 286. You could do it with
the 8086. Or 8088. Or, indeed, virtually any processor - you don't need
any sort of hardware support for it whatsoever.

All multitasking means, in essence, is the ability to launch and manage
separate tasks, such that they can share processor time and other
resources, rather than running serially. No need for special hardware for
this, it's entirely possible to do it in software. And preemptive,
cooperative, round-robin or others are all equally real.

What hardware additions bring to multitasking isn't "true" multitasking,
but rather, the ability to make multitasking more efficient and effective.
It's cheaper to switch contexts if the CPU can manage it. It's almost
impossible to do flexible memory management or protection unless the
hardware supports it. None of these are required for multitasking, but
they do make it more efficient and effective.

About the only thing hardware adds in terms of extending the basic concept
of multitasking is if the processor - or processors - actually allow
multiple simultaneous execution of disparate code branches, something you
can't do with a single CPU (or single CPU thread, as the case may be); you
require multiple physical execution paths, which does require hardware
support which can't be accomplished in software.


Aragorn

unread,
Nov 18, 2005, 10:44:12 AM11/18/05
to
On Friday 18 November 2005 10:40, Kelsey Bjarnason stood up and spoke
the following words to the masses in /comp.os.linux.advocacy...:/

> [snips]
>
> On Wed, 16 Nov 2005 15:31:29 +0000, Aragorn wrote:
>
>> Even MS-DOS was by that standard "multi-tasking" as it allowed you to
>> print a document in the background, from within DOS or from within an
>> application such as WordPerfect.
>
> Not quite, as DOS lacked the process management features to accomplish
> this. What DOS offered was the ability for applications to chain
> interrupt handlers in such a manner that they could respond
> independent of main application flow - e.g. a TSR which could handle
> file transfers while your main application, such as a word processor,
> ran in the foreground.
>
> Even then, many apps didn't use this approach quite so directly and
> instead opted to simply install their own interrupt handlers within
> the application - terminal apps did this with serial interrupts, some
> document processing apps did this with timer and other interrupts,
> etc.

I actually had no idea how DOS was able to do things in the
background. My experiences with DOS date back to my very first
experiences with a computer in general, which was around 1990. ;-)

>> True multitasking - i.e. with the necessary provisions into the
>> hardware itself - only became available on the i80286
>
> True multitasking was available long before the 286. You could do it
> with the 8086. Or 8088. Or, indeed, virtually any processor - you
> don't need any sort of hardware support for it whatsoever.

Well, that's not what I was saying, but I may have expressed myself
poorly. I tend to do that, for which I apologize. :-)

> All multitasking means, in essence, is the ability to launch and
> manage separate tasks, such that they can share processor time and
> other resources, rather than running serially. No need for special
> hardware for this, it's entirely possible to do it in software. And
> preemptive, cooperative, round-robin or others are all equally real.

I'm aware of this, yes.

> What hardware additions bring to multitasking isn't "true"
> multitasking, but rather, the ability to make multitasking more
> efficient and effective. It's cheaper to switch contexts if the CPU
> can manage it. It's almost impossible to do flexible memory
> management or protection unless the hardware supports it. None of
> these are required for multitasking, but they do make it more
> efficient and effective.

That was my point. I was talking of the hardware multi-tasking support
in a microprocessor, not of the actual ability of an operating system
to multi-task any which way or on any hardware.

> About the only thing hardware adds in terms of extending the basic
> concept of multitasking is if the processor - or processors - actually
> allow multiple simultaneous execution of disparate code branches,
> something you can't do with a single CPU (or single CPU thread, as the
> case may be); you require multiple physical execution paths, which
> does require hardware support which can't be accomplished in software.

Well, okay, but I guess it was pretty obvious what I meant, even if my
words got scrambled and did not reflect my thoughts properly. ;-)

See, the above _is_ one of the reasons why I repeatedly feel the need to
remind people of my AS. What I'm thinking and how I channel those
thoughts into words are two different things.

Of course, I also realize that the above is the dream excuse for
any troll to dismiss what I'm saying or pin me down on an incorrect
phrasing. And then there will of course be others who consider me an
inferior being because of this poor synchronization between my thoughts
and my typing. It's easier for them than putting in the effort of
trying to understand what I mean.

In addition, the USA is such a big country that its citizens often
forget that there really are people out there who natively speak other
languages, even if their English is good.

In light of the above, I have discovered that many words that I had
already been using previously - not necessarily here - actually mean
something other than what I thought they did.

I suppose that's what you get from instinctively learning the meaning of
a word by speaking the language and observing it as spoken by others,
rather than literally looking up the words in a dictionary.

The Ghost In The Machine

unread,
Nov 18, 2005, 12:00:06 PM11/18/05
to
In comp.os.linux.advocacy, Jim
<ja...@the-computer-shop.co.uk>
wrote
on Fri, 18 Nov 2005 00:12:49 GMT
<509ff.622$f9....@newsfe3-win.ntli.net>:

The main annoyance is that this has dual-CPU capabilities,
but only one CPU (866 MHz) in-box. It also has some
issues with the display, though that's mostly the fault
of my sticking my ATI 9000 card in it. (The removed
card is a Rage128.)

Considering that the +5 is reading a little low (OK, so
that's with an ancient Radio-Shack voltmeter), I'm not
sure I'd want to convert it to dual-CPU. I'm also not
horribly happy since it has no ISA slots.

Still, all in all, it's a nice system, for a box that's
a few years old. I bought it used so can't complain
overly much. (I guess I figured at the time it would
be easier to get that second CPU. Ah well...)

And I definitely like the patented plastic rails; makes it
very easy to slide things in and out. :-)

The Ghost In The Machine

unread,
Nov 18, 2005, 12:00:06 PM11/18/05
to
In comp.os.linux.advocacy, Erik Funkenbusch
<er...@despam-funkenbusch.com>
wrote
on Thu, 17 Nov 2005 23:20:41 -0600
<zbigt411...@funkenbusch.com>:

> On Fri, 18 Nov 2005 00:00:03 GMT, The Ghost In The Machine wrote:
>
>> I don't know if this can resize
>> active partitions.
>
> You can only resize partitions on "dynamic" disks, which uses an alternate
> partition structure that typically isn't compatible with non-NT OS's

<mode voice="Marvin"> Oh no, not another one. </mode>

Fortunately, ntfsresize works quite nicely off of the
Gentoo LiveCD -- and I suspect it's on most other
LiveCDs as well. Makes converting a box from pure
1-partition Windows XP to a dualboot a cinch. :-)
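
A typical run, as a sketch (device and size are examples; back up
first):

    ntfsresize --info /dev/hda1       # check how far the NTFS can shrink
    ntfsresize --size 20G /dev/hda1   # shrink the filesystem itself
    # then shrink the partition to match, using fdisk, and reboot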

>
>> Of course the Amiga actually handled
>> (gasp) multiple letters and numbers in its partition names,
>
> Yes, Amiga drive labels were quite nice.

Aye.

The Ghost In The Machine

unread,
Nov 18, 2005, 1:00:06 PM11/18/05
to
In comp.os.linux.advocacy, Aragorn
<str...@telenet.invalid>
wrote
on Fri, 18 Nov 2005 15:44:12 GMT
<gFmff.51614$fH6.3...@phobos.telenet-ops.be>:

> On Friday 18 November 2005 10:40, Kelsey Bjarnason stood up and spoke
> the following words to the masses in /comp.os.linux.advocacy...:/
>
>> [snips]
>>
>> On Wed, 16 Nov 2005 15:31:29 +0000, Aragorn wrote:
>>
>>> Even MS-DOS was by that standard "multi-tasking" as it allowed you to
>>> print a document in the background, from within DOS or from within an
>>> application such as WordPerfect.
>>
>> Not quite, as DOS lacked the process management features to accomplish
>> this. What DOS offered was the ability for applications to chain
>> interrupt handlers in such a manner that they could respond
>> independent of main application flow - e.g. a TSR which could handle
>> file transfers while your main application, such as a word processor,
>> ran in the foreground.
>>
>> Even then, many apps didn't use this approach quite so directly and
>> instead opted to simply install their own interrupt handlers within
>> the application - terminal apps did this with serial interrupts, some
>> document processing apps did this with timer and other interrupts,
>> etc.
>
> I had no idea actually as to how DOS was able to do things in the
> background. My experiences with DOS date back to my very first
> experiences with a computer in general, which was around 1990. ;-)

Background things in DOS were probably done with the INT 27
(Terminate and Stay Resident) interrupt call. Basically, this
call returns control to the command shell (COMMAND.COM). I'm
not entirely sure how memory is managed (DOS gives it all
to the running command so there was presumably a way to give
some of it back before this interrupt). Of course such utilities
came to be called TSRs. Most TSRs would save the old interrupt
value (INT 10H, for example, was a CS:IP vector at location
0000:0040) somewhere, and "hook" (store) a new value in,
pointing into their own code.

The most obvious form of TSR nowadays might be DOSKEY.

It's the closest thing to a "background" that DOS had, although there
was at one point a rather cute Switcher-like hack that allowed
the user to flip between DOS "processes". But given the
INT 27, that's easily implemented in a 286-era system with
some fiddling of the memory control registers.

>
>>> True multitasking - i.e. with the necessary provisions into the
>>> hardware itself - only became available on the i80286
>>
>> True multitasking was available long before the 286. You could do it
>> with the 8086. Or 8088. Or, indeed, virtually any processor - you
>> don't need any sort of hardware support for it whatsoever.
>
> Well, that's not what I was saying, but I may have expressed myself
> poorly. I tend to do that, for which I apologize. :-)

I'm not sure how to properly define the term myself. In a more
logical world one might call it "time-slicing" -- since that's
exactly how one creates the illusion of the processor working
on multiple things at once.

Of course, with true multiprocessor nodes one has additional
issues, such as keeping track of what each processor is doing
and ensuring that the processors don't try to clobber a
semaphore.
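
To make the time-slicing idea concrete, here's a toy round-robin
dispatcher in plain C -- each task voluntarily yields by returning
from its step function, which is all cooperative multitasking
amounts to at its core (an illustrative sketch; the task names are
made up):

#include <stdio.h>

static void task_a(void) { puts("task A step"); }
static void task_b(void) { puts("task B step"); }

int main(void)
{
    /* A table of tasks, each run for one "time slice" in turn. */
    void (*tasks[])(void) = { task_a, task_b };
    int slice, i;

    for (slice = 0; slice < 4; slice++)   /* four rounds of slices */
        for (i = 0; i < 2; i++)
            tasks[i]();                   /* round-robin dispatch */
    return 0;
}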

>
>> All multitasking means, in essence, is the ability to launch and
>> manage separate tasks, such that they can share processor time and
>> other resources, rather than running serially. No need for special
>> hardware for this, it's entirely possible to do it in software. And
>> preemptive, cooperative, round-robin or others are all equally real.
>
> I'm aware of this, yes.
>
>> What hardware additions bring to multitasking isn't "true"
>> multitasking, but rather, the ability to make multitasking more
>> efficient and effective. It's cheaper to switch contexts if the CPU
>> can manage it. It's almost impossible to do flexible memory
>> management or protection unless the hardware supports it. None of
>> these are required for multitasking, but they do make it more
>> efficient and effective.
>
> That was my point. I was talking of the hardware multi-tasking support
> in a microprocessor, not of the actual ability of an operating system
> to multi-task any which way or on any hardware.

In a 486 the multitasking support is primarily a method by which
the processor can save the registers in memory and reload them.
However, there are other issues, such as trapping to kernel mode.

I'm frankly not sure how to answer your implied question here,
beyond referring you to Intel's technical literature (which
I'd have to do myself) and such things as the LOCK prefix (one
of Intel's rather weirder innovations) and the Test And Set
instruction (TAS) -- that one is Motorola's, from the 68000
family -- which allowed a bit of memory to be loaded into a
register and set, all in one non-interruptible memory cycle.

Such instructions allow for synchronization of memory areas
in privileged kernel storage amongst multiple processors,
allowing one processor at a time to modify them (the concept
is called a "monitor", an unfortunate choice of terms since
most others will think of a display device, but think
"hall monitor" or "gatekeeper"; user threads must also use
such monitors on occasion, to keep themselves from falling
all over each other).
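
As a sketch of what such an instruction buys you, here's a minimal
spinlock in C. The __sync_lock_test_and_set builtin is a GCC
extension standing in for the hardware's atomic test-and-set or
locked exchange -- an assumption of convenience here; the real
point is the loop around one atomic read-modify-write:

#include <stdio.h>

static volatile int lock = 0;      /* 0 = free, 1 = held */

static void spin_lock(volatile int *l)
{
    /* Atomically store 1 and get the old value back in one
       non-interruptible operation; loop while it was held. */
    while (__sync_lock_test_and_set(l, 1))
        ;   /* busy-wait: some other processor has the lock */
}

static void spin_unlock(volatile int *l)
{
    __sync_lock_release(l);        /* atomically store 0 */
}

int main(void)
{
    spin_lock(&lock);
    puts("one processor at a time gets here");
    spin_unlock(&lock);
    return 0;
}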

>
>> About the only thing hardware adds in terms of extending the basic
>> concept of multitasking is if the processor - or processors - actually
>> allow multiple simultaneous execution of disparate code branches,
>> something you can't do with a single CPU (or single CPU thread, as the
>> case may be); you require multiple physical execution paths, which
>> does require hardware support which can't be accomplished in software.
>
> Well, okay, but I guess it was pretty obvious what I meant, even if my
> words got scrambled and did not reflect my thoughts properly. ;-)

I've seen worse English from some native speakers. :-)
I don't know whether anyone still remembers the "Valley
Girl" stereotype (the San Fernando Valley is in the Los
Angeles, California area, and was famous for a certain
speaking/behavior pattern; it probably also spawned a
certain number of rather silly "dumb blonde" jokes), for
example; a more contemporary example might be gangsta rap
(not to be confused with the Chicago mob gangstERs of the
1930s, although the methods aren't all that different;
today's "drive-by" was yesterday's "hit"). No doubt others
can find even stranger examples. Neither is quite English,
though both are derived therefrom.

And of course written English is quite different from the
spoken word -- mostly because the spoken word may not
always be complete sentences and there may be hand-waving
or pointing.

And then there's advertising, which corrupts the language:
"Winston tastes good like a cigarette should" is probably
illustrative; the "like" should be "as" in proper English,
though nobody seems to bother much with such nowadays. A
more recent idea is "Fahrvergnügen", which is actually a
German word ("driving pleasure") but is now firmly in the
user's mind -- at least, in this user's mind.

And English is full of such borrowed words anyway:
"rendezvous" and "limousine" being two obvious examples
(French) but the language is littered with borrowings.
In due course "perestroika" and "glasnost" may very well
be swallowed as well, though more likely candidates will
be "kleenex", "xerox", and "frisbee", despite the efforts
of Kleenex, Xerox, and Wham-O! to prevent it. (So far,
they've succeeded in not being genericized, but eternity
is a long time. :-) )

In any event, I learned some German in my youth, and have
never regretted it (though I can't say I've actually used it
since high school); it sharpens one's understanding of what
is really meant by nouns, verbs, adverbs, prepositions,
sentence clauses, and such, and just, at a deep level,
suggests that English is *not* the only language in the
world and that to communicate effectively one might have
to think about the other's language capabilities. :-)

>
> See, the above _is_ one of the reasons why I repeatedly feel the need to
> remind people of my AS. What I'm thinking and how I channel those
> thoughts into words are two different things.
>
> Of course, I also do realize that the above is the dream excuse for
> any troll to dismiss what I'm saying or to pin me down on an incorrect
> phrasing. And then there will of course be others who consider me an
> inferior being because of this poor synchronization between my thoughts
> and my typing. It's easier for them than putting in the effort of
> trying to understand what I mean.
>
> In addition, the USA is such a big country that its citizens often
> forget that there really are people out there who natively speak other
> languages, even if their English is good.
>
> In light of the above, I have discovered that many words I had
> already been using - not necessarily here - actually mean
> something other than what I thought they did.

English is like that. :-/ Fortunately, in the technical arena
most of the words such as "microprocessor", "multitasking", and
"DOS" don't change all that much. :-)

>
> I suppose that's what you get from instinctively learning the meaning of
> a word by speaking the language and observing it as spoken by others,
> rather than literally looking the words up in a dictionary.
>

Dictionaries can be a useful tool, and once learned, a
word can become part of one's vocabulary. I do know that
a non-native speaker (usually an adult) learns languages
in a different manner than a precocious 3-4 year old who
gets immersed in his or her parents' conversation (and,
later on, that of teachers and peers/other students).
However, I don't remember the details, and in any event,
I'm not a neurologist or brain specialist.

Erik Funkenbusch

unread,
Nov 18, 2005, 1:52:05 PM11/18/05
to
On Fri, 18 Nov 2005 17:00:06 GMT, The Ghost In The Machine wrote:

> In comp.os.linux.advocacy, Erik Funkenbusch
> <er...@despam-funkenbusch.com>
> wrote
> on Thu, 17 Nov 2005 23:20:41 -0600
> <zbigt411...@funkenbusch.com>:
>> On Fri, 18 Nov 2005 00:00:03 GMT, The Ghost In The Machine wrote:
>>
>>> I don't know if this can resize
>>> active partitions.
>>
>> You can only resize partitions on "dynamic" disks, which use an alternate
>> partition structure that typically isn't compatible with non-NT OSes.
>
> <mode voice="Marvin"> Oh no, not another one. </mode>

I thought I had heard that the Linux NTFS support now handles dynamic
disks, but I'm not certain.

The Ghost In The Machine

unread,
Nov 18, 2005, 3:00:04 PM11/18/05
to
In comp.os.linux.advocacy, Erik Funkenbusch
<er...@despam-funkenbusch.com>
wrote
on Fri, 18 Nov 2005 12:52:05 -0600
<gkd9m3vh6jp0$.d...@funkenbusch.com>:

There is a CONFIG_LDM_PARTITION setting in /usr/src/linux/.config;
I'm not entirely sure what it does, but the file
/usr/src/linux/Documentation/ldm.txt might be enlightening to
those who need it. It certainly looks like it should be what
we're discussing here.

There is also a CONFIG_LDM_DEBUG for those who like to get
even deeper into the issues here.

(This is as of 2.6.12.)
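
For reference, the relevant lines in a 2.6-era .config would
presumably look something like this (only the two option names
above are taken from the kernel tree; treat the rest as a sketch):

CONFIG_LDM_PARTITION=y
# CONFIG_LDM_DEBUG is not set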

I don't know what Linux's fdisk will do with such partition types.

Kelsey Bjarnason

unread,
Nov 18, 2005, 4:25:02 PM11/18/05
to
[snips]

On Fri, 18 Nov 2005 15:44:12 +0000, Aragorn wrote:

> I actually had no idea how DOS was able to do things in the
> background. My experiences with DOS date back to my very first
> experiences with a computer in general, which was around 1990. ;-)

Most stuff - background printing, etc. - was actually in-app, just tied
to an interrupt handler, say for the parallel port, or a timer.

More advanced things - backgroundable apps such as file transfer apps
that returned control and let you run other apps - were generally TSRs.

> Well, that's not what I was saying, but I may have expressed myself
> poorly. I tend to do that, for which I apologize. :-)

It's not you, it's the concept; for some reason, most folks have a
really weird idea of what constitutes "multitasking", and so tend to be
led astray by irrelevancies such as what the CPU supports.

> See, the above _is_ one of the reasons why I repeatedly feel the need to
> remind people of my AS. What I'm thinking and how I channel those
> thoughts into words are two different things.

Bah. You do better than a lot of folks. :)


Mark Kent

unread,
Dec 8, 2005, 6:36:30 PM12/8/05
to
begin oe_protect.scr
The Ghost In The Machine <ew...@sirius.tg00suus7038.net> espoused:

I wrote a few apps based around this capability. The problem is that
if you wanted to do something big, you really needed to create a
semaphore technique: there wasn't a lot of time in the ISR to do much,
and the DOS kernel wasn't reentrant in any way, so taking too long
would either lose interrupts or cause a re-entry, depending on whether
you'd re-enabled the interrupts or not. This was a pretty big
problem... There was a library created to ease this, I think called
Tesseract, and I think Borland had a strong interest in it at the
time.
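
A sketch of that semaphore technique, again in the old Turbo C
dialect (getvect/setvect are Borland calls; 0x1C is the BIOS user
tick hook, firing roughly 18.2 times a second): the ISR does
nothing but raise a flag, and the main loop, where DOS may safely
be entered, does the slow work. Illustrative only:

#include <dos.h>
#include <stdio.h>

static void interrupt (*old_int1c)(void);
static volatile int tick_pending = 0;   /* the crude "semaphore" */

static void interrupt my_isr(void)
{
    tick_pending = 1;    /* note the event and get out fast */
    old_int1c();         /* chain so the clock keeps running */
}

int main(void)
{
    unsigned long handled = 0;

    old_int1c = getvect(0x1C);
    setvect(0x1C, my_isr);

    while (handled < 18UL * 10) {        /* run for ~10 seconds */
        if (tick_pending) {
            tick_pending = 0;
            handled++;
            printf("tick %lu\r", handled);  /* DOS call: safe here,
                                               fatal inside the ISR */
        }
    }

    setvect(0x1C, old_int1c);   /* always restore the vector */
    return 0;
}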

It was all a bit of a mess, to be honest! I did write a synchronous
floppy drive writer to allow data collection on a per-second basis to
co-exist with an interruptible floppy writer. It worked rather well,
but it could take *ages* to write a file. I abandoned it in the end,
thinking that I'd got it wrong, but I saw FreeDOS a couple of years
ago, which has a floppy driver written much the same way, with the
same speed challenges, but is interruptible.


Hmm, must be getting old...

--
end
| Mark Kent -- mark at ellandroad dot demon dot co dot uk |
Economists state their GNP growth projections to the nearest tenth of a
percentage point to prove they have a sense of humor.
-- Edgar R. Fiedler

The Ghost In The Machine

unread,
Dec 9, 2005, 12:00:07 PM12/9/05
to
In comp.os.linux.advocacy, Mark Kent
<mark...@demon.co.uk>
wrote
on Thu, 8 Dec 2005 23:36:30 +0000
<uksm63-...@ellandroad.demon.co.uk>:

FreeDOS is still out there, though. :-)

http://www.freedos.org/

Maybe it'll clean up a little of the mess DOS was, though there's
only so much one can do for a design that was pre-MMU. But it's
simple, free, and, within its many limitations, robust.

But then, Linux is also free, though not quite as simple, and
has fewer limitations. :-) (FreeDOS, however, runs very well
within plex86, VMware, and DOSBox, as I understand it.)
