
[News] Choose Vista, Reduce Productivity


Roy Schestowitz

Apr 9, 2007, 11:44:33 PM
Vista slower than XP at start-up, shutdown, gripe users

,----[ Quote ]
| Windows Vista users are complaining on Microsoft Corp.'s support
| forums about long start-up, shutdown and application load times
| compared with Windows XP.
`----

http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9015905&source=rss_news50
http://tinyurl.com/364c9z

On a related note, Vista still does not handle de/fragmentation
automatically.

The fastest way to defragment your hard-drive

,----[ Quote ]
| Many are saying that Microsoft said it is unnecessary to defragment NTFS.
| While that may be true, many are noticing an increase in performance once
| they defrag their system, including myself. This article is a tutorial on
| how to speed up the defragmentation process, not one that is asking you to
| defragment your drive if you don't think you need to. To defragment or not
| to defragment is entirely up to you. Sorry for all those confused.
`----

http://vistarewired.com/2007/02/15/defragment/

It is amazing that after 5-6 years of work, the O/S is still unable to manage
its own filesystem properly. To an enterprise, this means wasted time.
People do not do what they ought to be doing, which is to do real work
(rather than reboot, defrag, and sometimes disinfect/wipe).


Related:

My View: Vista Takes Too Many Clicks to do the Job

,----[ Quote ]
| I think in this case Microsoft engineers really missed the mark
| in this regard and I'll write more soon about similar issues
| with too many keystrokes in Office 2007.
`----

http://byronmiller.typepad.com/byronmiller/2007/03/my_view_vista_t.html


Vista Irritations

,----[ Quote ]
| According to this Slashdot article, copying, moving and deleting
| files is slower under Vista. At least now I know why extracting a
| compressed file under Vista is like watching paint dry/grass grow
| (I've only tried using Winzip 11).
|
| [...]
|
| Now we name our directory and it's done right? Not quite, because
| after typing your directory name and pressing enter, it's time
| for yet more prompts...
`----

http://harrisben.wordpress.com/2007/03/29/vista-irritations/


Vista: Slow and Dangerous

,----[ Quote ]
| Most of the time I spent testing Vista was with sluggish pre-release
| versions. I expected things to improve when I ran the finished software
| on PCs configured for the new Windows version. I now realize that
| Vista really is slow unless you throw a lot of hardware at it.
| Microsoft claims it will run with 512 megabytes of memory. I had
| recommended a minimum of a gigabyte, but 2 GB is more like it if
| you want snappy performance.
|
| [...]
|
| The most exasperating thing about Vista, though, is the security
| feature called User Account Control. UAC, satirized in an Apple
| ad as a security guy who constantly interrupts a conversation,
| appears as a pop-up asking permission before Windows...
`----

http://www.keepmedia.com/pubs/BusinessWeek/2007/03/26/3124001


Copying files across LAN with Vista is deathly slow

,----[ Quote ]
| Copying files from my XP video capture pc to my Vista pc is 3 times
| slower than copying from my XP video capture PC to my old XP PC.
`----

http://episteme.arstechnica.com/eve/forums/a/tpc/f/99609816/m/109009593831


The copy process may stop responding when you try to copy files from a server
on a network to a Windows Vista-based computer

,----[ Quote ]
| On a Windows Vista-based computer, when you try to copy files from a
| server on a network, the copy process may stop responding (hang), and
| you may receive a message that resembles the following:
|
| Calculating Time Remaining
|
| 0 minutes remaining
`----

http://support.microsoft.com/default.aspx/kb/931770


Your expense = my revenue

,----[ Quote ]
| "Windows is a money making machine for everyone involved" - but
| describing it as really a kind of work for welfare scheme in which
| everyone wins - except the customer to whom it's a cost, and the
| national economy for which it's a productivity sink.
`----

http://blogs.zdnet.com/Murphy/?p=803


Analyst slams Vista's 'backward' UI

,----[ Quote ]
| Windows Vista is a step back in usability, researcher claims
`----

http://www.macworld.co.uk/news/index.cfm?RSS&newsID=17334

Erik Funkenbusch

Apr 10, 2007, 7:05:55 AM
On Tue, 10 Apr 2007 04:44:33 +0100, Roy Schestowitz wrote:

> On a related note, Vista still does not handle de/fragmentation
> automatically.
>
> The fastest way to defragment your hard-drive
>
> ,----[ Quote ]
>| Many are saying that Microsoft said it is unnecessary to defragment NTFS.
>| While that may be true, many are noticing an increase in performance once
>| they defrag their system, including myself. This article is a tutorial on
>| how to speed up the defragmentation process, not one that is asking you to
>| defragment your drive if you don't think you need to. To defragment or not
>| to defragment is entirely up to you. Sorry for all those confused.
> `----
>
> http://vistarewired.com/2007/02/15/defragment/
>
> It is amazing that after 5-6 years of work, the O/S is still unable to manage
> its own filesystem properly. To an enterprise, this means wasted time.
> People do not do what they ought to be doing, which is to do real work
> (rather than reboot, defrag, and sometimes disinfect/wipe).

Since the "Defend Roy at all costs, no matter how much he lies" or the
"Object to anything Erik says no matter if he is right or not" patrols will
quickly come to Roy's defense I will point out explicitly the ways Roy has
lied here:

1) Nowhere in the article does it claim that Vista cannot automatically
defrag its drives. Roy made this up. Completely.

2) Vista does in fact automatically defrag its drives. The defrag
process, by default, runs after installation, and then, again by default, is
set to run at 4am every Sunday. This is the default configuration. If you
don't believe me, read this:

http://www.winsupersite.com/showcase/winvista_ff_auto_defrag.asp

3) This is not a mistake. He made this comment up, with absolutely no
support from the article. He had to KNOW he was lying.

Linonut

Apr 10, 2007, 7:44:25 AM
After takin' a swig o' grog, Erik Funkenbusch belched out this bit o' wisdom:

> 2) Vista does in fact automatically defrag its drives. The defrag
> process, by default, runs after installation, and then, again by default, is
> set to run at 4am every Sunday.
>

> http://www.winsupersite.com/showcase/winvista_ff_auto_defrag.asp

He doesn't say what happens if your computer is not powered on at 4am Sunday.

--
The Microsoft Solution -- Apply money liberally. Re-apply as necessary.

Roy Schestowitz

Apr 10, 2007, 7:47:36 AM
__/ [ Linonut ] on Tuesday 10 April 2007 12:44 \__

> After takin' a swig o' grog, Erik Funkenbusch belched out this bit o'
> wisdom:
>
>> 2) Vista does in fact automatically defrag its drives. The defrag
>> process, by default, runs after installation, and then, again by default,
>> is set to run at 4am every Sunday.
>>
>> http://www.winsupersite.com/showcase/winvista_ff_auto_defrag.asp
>
> He doesn't say what happens if your computer is not powered on at 4am
> Sunday.

I was not lying. The subject line perfectly aligns with the main item which
is at the top of the OP. Vista slows things down (startup and shutdown).

--
~~ With kind regards

Roy S. Schestowitz | WARNING: /dev/null running out of space
http://Schestowitz.com | RHAT GNU/Linux | PGP-Key: 0x74572E8E
run-level 5 Mar 11 15:57 last=S
http://iuron.com - help build a non-profit search engine

chrisv

Apr 10, 2007, 8:41:46 AM
Erik Funkenbusch trolled:

>Since the "Defend Roy at all costs, no matter how much he lies" or the
>"Object to anything Erik says no matter if he is right or not" patrols will
>quickly come to Roy's defense

Poor Erik. Feeling bad because everyone just tears you a new one
whenever you lie?

AB

Apr 10, 2007, 11:44:36 AM
On 2007-04-10, chrisv <chr...@nospam.invalid> claimed:

I don't see the lie. I saw what Erik wrote. But the title I see that he
changed appears to be accurate to me. Using Vista to defrag slows down
everything. That would include productivity. In fact, it's a factor of
10x longer on Vista ON AN ALREADY-DEFRAGGED DRIVE than using their
program, which did the defragging that Vista took 10x longer to do.

Now I'm no math genius. But if something takes 10x longer to do than
something else that's similar or the same, couldn't that extra delay be
expected to have somewhat of a negative effect on productivity?

--
Vista: Six years to make, six minutes to break.

7

Apr 10, 2007, 2:55:10 PM
Asstroturfer Erik Funkenbusch wrote on behalf of micoshaft corporation:


BWAHAHAHAHAHHAHAHAHAHAHHAHAHAAAAAAA!!!!!!!!!

What's 'defrag'?

Clippy doesn't enter into correspondence on what it does and why.

Help me, my Linux computer doesn't seem to have that feature.


The Ghost In The Machine

Apr 10, 2007, 3:59:36 PM
In comp.os.linux.advocacy, 7
<website_...@www.enemygadgets.com>
wrote
on Tue, 10 Apr 2007 18:55:10 GMT
<i4RSh.8925$NK2....@text.news.blueyonder.co.uk>:

I've wondered about this on occasion. Obviously, Windows
filesystem defragmentation -- it's either a gigantic design
flaw, inherited from DOS's FAT failings, or a feature;
I think it's a mix of the two -- is a fact of life,
even in Vista. (That Vista automates the process is a
tradeoff: is it better to fix the failings, or to bodge
something that works around them? UNIX(tm)/Linux 'find'
and 'locate' are themselves a bit of bodging, though 'find'
is far more general and has been around since the 1970's,
and 'locate' is the observer side of the 'locate/updatedb'
system, which scans the entire disk on a repeated basis
(once every day or so) to build a file list. It's a
tradeoff: if a system's idle it might as well do something.
I'm not sure about beagle/beagled yet though it's an
unobtrusive puppy at present -- and hasn't really made
any sort of mess on the carpet in my mind yet :-) .)
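
As an aside, the locate/updatedb split described above is easy to
picture in code. A minimal sketch in Python (the index path and scan
root here are arbitrary choices for the example, not what any distro
actually uses):

,----[ Code (Python) ]
| # Minimal sketch of the updatedb/locate split: a periodic scanner
| # builds a flat file list; queries only read that list.
| import os
|
| INDEX = "/tmp/locate.db"  # arbitrary for this example
|
| def updatedb(root="/"):
|     """The slow half: walk the whole tree (run from cron, e.g. daily)."""
|     with open(INDEX, "w") as db:
|         for dirpath, _dirs, files in os.walk(root):
|             for name in files:
|                 db.write(os.path.join(dirpath, name) + "\n")
|
| def locate(pattern):
|     """The fast half: answer queries from the prebuilt list."""
|     with open(INDEX) as db:
|         return [line.rstrip("\n") for line in db if pattern in line]
|
| # updatedb("/home")        # scheduled scan, like the nightly cron job
| # print(locate("report"))  # interactive query, no disk scan needed
`----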

For its part Linux has defragmentation built-in, at least
AFAIK, into the filesystem block/inode allocation methods,
and the grouping methodology therein. It probably depends
on the filesystem.

I'm also given to think Linux has better caching, since
it doesn't bother to cache DLLs and EXEs, but arbitrary
file pages. (That presumably cuts down on the corner
cases and such.)

As for Clippy -- well, we do have the OpenOffice smiling
light bulb [*], but he's not quite as annoying. ;-)
Certainly he doesn't:

- take notes as one fills out a requester form
("It looks like you're writing a suicide note" being a
somewhat popular joke satirizing on Clippy's antics;
"vigor" is a clippy look-alike running around for
"vi" users)

- convert into a motorbike when asked to leave

- tap on the monitor glass to get one's attention

- look quite as stupid, especially when taking notes,
converting into a motorbike, or tapping on the monitor
glass (since said bulb does not take notes, convert into
a motorbike, tap on the glass, or even move; clicking on
the bulb -- a natural reaction to an "idea" metaphor --
does take one to the help pages, which is about what one
would expect, or one can simply close his window with
a provided little "x" in the upper right hand corner).

No doubt someone will hack it at some point to allow for
arbitrary personalized light bulbs, as an inside joke.
(Certainly Microsoft BOB allowed for icon personalization,
from Rover the dog to Scuzzo the rat to Speaker the
... electronic noise device? XP also has search icon
personalization -- and bathed the dog.)

One might even put a variant of Clippy or Rover in
there...though at some point one would have to ask
"why bother".

[*] as I recall it used to just be a diagonally-positioned
light bulb; presumably it changed in version 2.

--
#191, ewi...@earthlink.net
If your CPU can't stand the heat, get another fan.

--
Posted via a free Usenet account from http://www.teranews.com

7

Apr 10, 2007, 4:17:10 PM


Defragging is practically non-existent until the drive
begins to get really full. The reason is the clever way
files are stored, which eliminates the need to defrag.
Google for it - there is at least one neat article out there that describes
this clever system.

Erik Funkenbusch

Apr 10, 2007, 4:37:43 PM
On Tue, 10 Apr 2007 10:44:36 -0500, AB wrote:

> On 2007-04-10, chrisv <chr...@nospam.invalid> claimed:
>> Erik Funkenbusch trolled:
>>
>>>Since the "Defend Roy at all costs, no matter how much he lies" or the
>>>"Object to anything Erik says no matter if he is right or not" patrols will
>>>quickly come to Roy's defense
>>
>> Poor Erik. Feeling bad because everyone just tears you a new one
>> whenever you lie?
>
> I don't see the lie. I saw what Erik wrote.

The lie was where Roy wrote:

>> On a related note, Vista still does not handle de/fragmentation
>> automatically.

> But the title I see that he changed appears to be accurate to me.

This time it wasn't the title where he lied, he lied in the contents where
he commented on Vista's defragging.

> Using Vista to defrag slows down everything.

Whether or not that's true, it's irrelevant to the fact that Roy claimed
Vista does handle defragging automatically.

The Ghost In The Machine

Apr 10, 2007, 5:58:37 PM
In comp.os.linux.advocacy, 7
<website_...@www.enemygadgets.com>
wrote
on Tue, 10 Apr 2007 20:17:10 GMT
<ahSSh.9002$NK2....@text.news.blueyonder.co.uk>:

A simplified explanation appears to be at

http://geekblog.oneandoneis2.org/index.php/2006/08/17/why_doesn_t_linux_need_defragmenting

Interestingly,

http://www.kdedevelopers.org/node/2270

appears to be of a different opinion, though neither
mentions issues such as rotational latency -- for a 10,000
RPM drive that makes an average of 3 ms difference, if
the next sector's halfway around the disk at the moment
the program wants to read it.

Both are wrong, to a certain extent -- as it's not Linux
that's managing the problem; it's one of Linux's modules,
which could (with a little work) be plugged into FreeBSD,
HURD, or even Windows.

Neither one mentions -- though it's probably not all
that relevant -- the outright lying most interface cards
do to the CPU regarding disk size. How many disks have
255 heads? That would be a pancake stack more than a
foot in length, assuming 5 mm platter-to-platter spacing.

(And I think they're slightly thicker than that.)

In fact, I believe ext2 has been plugged into all three;
certainly Win95 had "fsdext2", which was buggy but functioned
after a fashion (it was of course a third-party add-on),
allowing Windows to read ext2-formatted drive partitions.

For its part FreeBSD has some interesting executables in
/usr/sbin for mounting "foreign" partition data types.

http://www.linuxquestions.org/questions/showthread.php?t=543142

mentions the notebook analogy, which is probably why ext2
et al don't need defragging; if a file extends the data
clusters in that file can sop it up until the group (or
"notebook page" in the analogy) is full. The journaling,
which the module intelligently implements, also helps,
as the write is deferred until later, when the module can
intelligently decide where to put things (and has an idea
on how big said things are likely to get).
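
The "notebook page" policy is simple enough to sketch. A toy Python
model of group-based allocation (the group size and bookkeeping are
made up for illustration; this is not ext2's actual algorithm):

,----[ Code (Python) ]
| # Toy model: each file gets a reserved group of blocks, and appends
| # fill that group before a new one is opened elsewhere on the disk.
| GROUP_SIZE = 8  # blocks per group; real block groups are far larger
|
| class Allocator:
|     def __init__(self):
|         self.next_group = 0
|         self.groups = {}  # file name -> (group start, blocks used)
|
|     def append(self, name, nblocks):
|         start, used = self.groups.get(name, (None, 0))
|         if start is None or used + nblocks > GROUP_SIZE:
|             start, used = self.next_group, 0  # open a fresh group
|             self.next_group += GROUP_SIZE
|         self.groups[name] = (start, used + nblocks)
|         return list(range(start + used, start + used + nblocks))
|
| alloc = Allocator()
| # Interleaved appends to two files stay contiguous within each group:
| print(alloc.append("a", 2))  # [0, 1]
| print(alloc.append("b", 2))  # [8, 9]
| print(alloc.append("a", 2))  # [2, 3] -- lands next to a's first blocks
`----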

There's also the issue of file fragmentation versus
data ordering. The first refers to how many contiguous
pieces a single file might have scattered around the disk;
the second is an issue for applications that need to read
several or many pieces of data, scattered around various
files and directories.

The best file defragmenter won't help a whit if it puts
the file data in the wrong order, :-) even if the files
themselves are contiguous.

Tools also contribute to the lack of a need. I'll admit
I'm far from certain what the state of the art is here
at this point, but in Linux tools such as make are more
likely to be used, and make isn't horribly intelligent
when it comes to logging -- which might be a plus here as
one can simply type "make" and usually generate a lot of
little files -- .o files, libraries, executables, and other
such -- which are almost guaranteed contiguous, are usually
fairly small, and might even be reasonably clustered (since
they are generally created at more or less the same time).

The log, such as it is, is in memory (a scrollback buffer,
perhaps), but not in the filesystem, unless the user puts
it there with a redirect -- and it might be in /tmp, which
is most likely a different filesystem altogether.

Ant is about the same, in the Java world.

I'm not sure regarding Visual Studio, but it appears VS
has at least one logging file, which it leaves open during
builds. That way lies fragmentation madness. It doesn't
help that the underlying filesystem doesn't manage the
problem well -- but I'll admit I've not done a lot of work
in Windows lately, so haven't a clue as to how badly WinXP
fragments now. There's also .pch and other such files,
which appear to be work files of some sort used for
optimization of the build -- and fragmentation of the disk.

The implementation used in Linux isn't perfect, and
benchmarks might show some flaws therein -- but it's a
far sight better than Windows.

--
#191, ewi...@earthlink.net
Warning: This encrypted signature is a dangerous
munition. Please notify the US government
immediately upon reception.
0000 0000 0000 0000 0001 0000 0000 0000 ...

Freeride

Apr 10, 2007, 10:37:10 PM
On Tue, 10 Apr 2007 06:05:55 -0500, Erik Funkenbusch wrote:

> 2) Vista does in fact automatically defrag its drives. The defrag
> process, by default, runs after installation, and then, again by default, is
> set to run at 4am every Sunday. This is the default configuration. If you
> don't believe me, read this:

Why does NTFS still need to be defragged? Can Microsoft not design an
efficient file system?

Freeride

Apr 10, 2007, 10:47:16 PM
On Tue, 10 Apr 2007 15:37:43 -0500, Erik Funkenbusch wrote:

> The lie was where Roy wrote:
>
>>> On a related note, Vista still does not handle de/fragmentation
>>> automatically.

Maybe you misread what he was trying to say.

Maybe he is saying that Vista and NTFS fragments and requires time and
resource consuming external defragmentation! Hence the file system
fragments and sucks!

Erik Funkenbusch

Apr 10, 2007, 11:00:07 PM

NTFS is a multi-user filesystem. Multiuser filesystems actually work
better when fragmented because sectors are more distributed across the
disk. Multiple users aren't likely to be constantly accessing the exact
same file, so you get disk head thrashing when files are on opposite ends
of the disk and multiple processes reading and writing them simultaneously.

And i'm not the only one that says this. So do Linux users:

http://www.salmar.com/pipermail/wftl-lug/2002-March/000603.html

Roy Schestowitz

Apr 10, 2007, 10:45:32 PM
__/ [ AB ] on Tuesday 10 April 2007 16:44 \__

Look at the OP. He conveniently (and probably deliberately) snipped out what
actually justifies the subject line (there are 2 items, not just one). He's
quote mining.

--
~~ With kind regards

Roy S. Schestowitz | "I feed my 3 penguins with electricity and love"
http://Schestowitz.com | Open Prospects | PGP-Key: 0x74572E8E
Tasks: 127 total, 1 running, 120 sleeping, 0 stopped, 6 zombie
http://iuron.com - knowledge engine, not a search engine

Erik Funkenbusch

Apr 10, 2007, 11:34:26 PM
On Wed, 11 Apr 2007 03:45:32 +0100, Roy Schestowitz wrote:

> __/ [ AB ] on Tuesday 10 April 2007 16:44 \__
>
>> On 2007-04-10, chrisv <chr...@nospam.invalid> claimed:
>>> Erik Funkenbusch trolled:
>>>
>>>>Since the "Defend Roy at all costs, no matter how much he lies" or the
>>>>"Object to anything Erik says no matter if he is right or not" patrols
>>>>will quickly come to Roy's defense
>>>
>>> Poor Erik. Feeling bad because everyone just tears you a new one
>>> whenever you lie?
>>
>> I don't see the lie. I saw what Erik wrote. But the title I see that he
>> changed appears to be accurate to me. Using Vista to defrag slows down
>> everything. That would include productivity. In fact, it's a factor of
>> 10x longer on Vista ON AN ALREADY-DEFRAGGED DRIVE than using their
>> program, which did the defragging that Vista took 10x longer to do.
>>
>> Now I'm no math genius. But if something takes 10x longer to do than
>> something else that's similar or the same, couldn't that extra delay be
>> expected to have somehwat of a negative effect on productivity?
>
> Look at the OP. He conveniently (and probably deliberately) snipped out what
> actually justifies the subject line (there are 2 items, not just one). He's
> quote mining.

Moron, i'm not talking about the subject line, and you damn well know it.
I'm talking about the actual commentary you made about Vista not being able
to automatically defrag.

Erik Funkenbusch

Apr 10, 2007, 11:36:24 PM

Vista doesn't require external programs to defragment. The defragmenter is
built-in. Why would you claim otherwise?

Roy Schestowitz

Apr 10, 2007, 11:33:40 PM
__/ [ Freeride ] on Wednesday 11 April 2007 03:47 \__

But EF spins like a ballerina. Spin, spin, spin, rather than fix the bug that
you somehow come to convince yourself is a feature.

--
~~ With kind regards

Roy S. Schestowitz | Prevalence does not imply ideali$M
http://Schestowitz.com | Free as in Free Beer | PGP-Key: 0x74572E8E
Load average (/proc/loadavg): 1.35 1.43 1.36 2/134 11708
http://iuron.com - semantic search engine project initiative

Freeride

Apr 11, 2007, 12:02:13 AM
On Tue, 10 Apr 2007 22:36:24 -0500, Erik Funkenbusch wrote:

> Vista doesn't require external programs to defragment. The defragmenter is
> built-in. Why would you claim otherwise?

Are you really that stupid, Erik? Is Defrag.exe built into the NTFS
file system? :) Or is that not an external defragmentation program for the
crappy NTFS file system?

Roy Schestowitz

Apr 10, 2007, 11:48:02 PM
__/ [ Erik Funkenbusch ] on Wednesday 11 April 2007 04:34 \__

Watch your language, Erik. I believe there was a subtle attempt to deceive by
exclusion, so I pointed that out.

Why do you insist on modifying the subject line to get past the filters and
attach a label to my name? Do you think that if you repeat this often enough
they will become a stereotype and hurt me? You have sunk to the bottom of
the barrel, Erik. You should be ashamed of yourself. So who's paying you (or
'compensates' you) to do this? Come on, Erik, tell us. You have joined
ranks with MOG, Loyns, Enderle, and Didio. But you're 10 levels below them
because while they are responsible for Microsoft placements in the
mainstream press, you just do UseNet.

--
~~ With kind regards

Roy S. Schestowitz

http://Schestowitz.com | Open Prospects | PGP-Key: 0x74572E8E

Tasks: 127 total, 1 running, 119 sleeping, 0 stopped, 7 zombie

AB

Apr 11, 2007, 1:26:16 AM
On 2007-04-10, Erik Funkenbusch <er...@despam-funkenbusch.com> claimed:

> On Tue, 10 Apr 2007 10:44:36 -0500, AB wrote:
>
>> On 2007-04-10, chrisv <chr...@nospam.invalid> claimed:
>>> Erik Funkenbusch trolled:
>>>
>>>>Since the "Defend Roy at all costs, no matter how much he lies" or the
>>>>"Object to anything Erik says no matter if he is right or not" patrols will
>>>>quickly come to Roy's defense
>>>
>>> Poor Erik. Feeling bad because everyone just tears you a new one
>>> whenever you lie?
>>
>> I don't see the lie. I saw what Erik wrote.
>
> The lie was where Roy wrote:
>
>>> On a related note, Vista still does not handle de/fragmentation
>>> automatically.
>
>> But the title I see that he changed appears to be accurate to me.
>
> This time it wasn't the title where he lied, he lied in the contents where
> he commented on Vista's defragging.

I see. Then it's OK that it might take 10x as long to do what needs to
be done, as long as it can do it without asking?

And how do you know he *knew* it could do it automatically? Doesn't it
require having knowledge to the contrary to constitute a lie?

>> Using Vista to defrag slows down everything.
>
> Whether or not that's true, it's irrelevant to the fact that Roy claimed
> Vista does handle defragging automatically.

"My aircraft won't fly because the wings keep cracking in half, the
engine always explodes, the body usually splits in two and the tail
folds up in flight. Stop lying and saying the wheels fall off!"

--
Windows: Because you have too much free time.

Peter Kai Jensen

Apr 11, 2007, 2:13:19 AM
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Erik Funkenbusch wrote:

>> Why does NTFS still need to be defragged? Can Microsoft not design an
>> efficient file system?
>
> NTFS is a multi-user filesystem.

Yet predominantly used on single-user systems (or at least single
concurrent user). Surely they can optimize a file-system to their
largest user base?

> Multiuser filesystems actually work better when fragmented because
> sectors are more distributed across the disk.

Utter bull-crap. They definitely won't work *better*, though perhaps
the *impact* is not as horrible if multiple concurrent users are
accessing the disk.

> Multiple users aren't likely to be constantly accessing the exact same
> file, so you get disk head thrashing when files are on opposite ends
> of the disk and multiple processes reading and writing them
> simultaneously.

But NTFS fragments in a most horrible way (with small fragments), so
even during one user's time-slice, you'll have lots of disk head
thrashing.

> And i'm not the only one that says this. So do Linux users:
>
> http://www.salmar.com/pipermail/wftl-lug/2002-March/000603.html

The difference being how Linux file systems fragment (among other
things, into larger contiguous blocks) and the fact that most Linux systems
are actually true multi-user systems (servers). At least at the time
this was written.

Nice attempt at Newspeak, though.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFGHHy0d1ZThqotgfgRAvTLAJ4u8R5Vhj8f2SXFCwPLnMfxN319OgCfenLW
Nz9uZvllAAaLXod0lnTORN8=
=ZIpi
-----END PGP SIGNATURE-----
--
PeKaJe

BOFH Excuse #215:
High nuclear activity in your area.

William Poaster

Apr 11, 2007, 5:05:07 AM

Having to defrag an OS's file system in the 21st century? Unbelievable!
And supposing Joe Bloggs at home doesn't even have his PC turned on at 4AM
on Sunday morning, what then? Has he to run it when he *does* turn his
machine on? What a waste of time.

Furthermore, it appears Fista doesn't tell you anything about
the process: how long it takes, no indication of elapsed time, how long it
has to go, etc., etc.
http://www.geekzone.co.nz/freitasm/529

http://www.thegline.com/windows/2006/11/about-vista-defrags-sudden-los.html

"But it's (defragging) something that should happen regularly in order to
keep your system performing optimally."
Shouldn't that be: your *windoze* system performing optimally?

And how about this?
"Automatic disk defrag is an absolutely essential feature in a modern
operating system."
Again, shouldn't that be: ".....in a modern *windoze* operating system."

Disk defragging is NOT required in a *truly* modern operating system like
Linux.

M$ has yet to catch up in this sphere too.

--
Contrary to popular belief, the M$ trolls & shills
*can* tell the difference between their arse
& their elbow.
They can't talk out of their elbow.

The Ghost In The Machine

Apr 11, 2007, 12:23:09 PM
In comp.os.linux.advocacy, Erik Funkenbusch
<er...@despam-funkenbusch.com>
wrote
on Tue, 10 Apr 2007 22:00:07 -0500
<342qdgam...@funkenbusch.com>:

> On Wed, 11 Apr 2007 02:37:10 GMT, Freeride wrote:
>
>> On Tue, 10 Apr 2007 06:05:55 -0500, Erik Funkenbusch wrote:
>>
>>> 2) Vista does in fact automatically defrag its drives. The defrag
>>> process, by default, runs after installation, and then, again by default, is
>>> set to run at 4am every Sunday. This is the default configuration. If you
>>> don't believe me, read this:
>>
>> Why does NTFS still need to be defragged? Can Microsoft not design an
>> efficient file system?
>
> NTFS is a multi-user filesystem.

In the same way that Windows is a multi-user operating system.
Yep. Got it.

> Multiuser filesystems actually work better when fragmented
> because sectors are more distributed across the disk.

That's the silliest logic I've heard in a while. While
there are multiple factors regarding file/disk access, the
sad truth of the matter is that access to the disk is in
fact single threaded (at least, single-threaded per disk).
Therefore this dog doesn't hunt very well. (It might lift
up its head and look at you, but that's about it.)

Granted, there's issues if there are actually multiple
users, but apart from IIS server setups it's rather
unlikely that there will be multiple users per Windows
box, and in any event, an IIS setup (like an Apache setup)
performs best when the entire website fits into physical
RAM, which makes head thrashing a non-issue since no disk
I/O is occurring.

> Multiple users aren't likely to be constantly accessing the exact
> same file, so you get disk head thrashing when files are on opposite ends
> of the disk and multiple processes reading and writing them simultaneously.
>
> And i'm not the only one that says this. So do Linux users:
>
> http://www.salmar.com/pipermail/wftl-lug/2002-March/000603.html

Personally, I prefer reiserfs; it's higher performance,
at least according to the tests I did many moons ago on
my equipment. However, it may have concurrency issues of
its own, according to yttrx. I've not studied the matter.

--
#191, ewi...@earthlink.net
Murphy was an optimist.

The Ghost In The Machine

Apr 11, 2007, 12:25:21 PM
In comp.os.linux.advocacy, Freeride
<free...@maillinux.org>
wrote
on Wed, 11 Apr 2007 04:02:13 GMT
<95ZSh.183158$6P2.1...@newsfe16.phx>:

It's built into the Windows offering, perhaps. Certainly
it's an external program -- it was Executive Software's
Diskeeper or some such in a previous incarnation, AIUI.

--
#191, ewi...@earthlink.net
fortune: not found

Linonut

Apr 11, 2007, 8:05:06 PM
After takin' a swig o' grog, The Ghost In The Machine belched out this bit o' wisdom:

>> Multiuser filesystems actually work better when fragmented
>> because sectors are more distributed across the disk.

What.

The.

Fuhhhhhh?

> That's the silliest logic I've heard in awhile. While
> there are multiple factors regarding file/disk access, the
> sad truth of the matter is that access to the disk is in
> fact single threaded (at least, single-threaded per disk).

Not to mention that more than one sector can be read in a time slice,
I believe.

>> Multiple users aren't likely to be constantly accessing the exact
>> same file, so you get disk head thrashing when files are on opposite ends
>> of the disk and multiple processes reading and writing them simultaneously.
>>
>> And i'm not the only one that says this. So do Linux users:
>>
>> http://www.salmar.com/pipermail/wftl-lug/2002-March/000603.html

The posting there needs some numbers. And it's limited to a couple of
old file systems.

--
It is easier to fix Unix than to live with NT.

The Ghost In The Machine

Apr 12, 2007, 10:23:11 AM
In comp.os.linux.advocacy, Linonut
<lin...@bone.com>
wrote
on Wed, 11 Apr 2007 19:05:06 -0500
<odadnSOFmPIv6oDb...@comcast.com>:

> After takin' a swig o' grog, The Ghost In The Machine belched out this bit o' wisdom:
>

[Erik Funkenbusch stated]

>>> Multiuser filesystems actually work better when fragmented
>>> because sectors are more distributed across the disk.
>
> What.
>
> The.
>
> Fuhhhhhh?

That was more or less my reaction. Frankly, I'm not
entirely certain, and it probably depends on a number
of issues -- not the least of which include the usual
track-to-track, head-settle, rotational latency, and disk
usage pattern.

Regrettably, most interface boards lie through their teeth,
and disks are now variable geometry, which basically
makes exact calculations by the OS impossible without
a lot more information -- info that they're likely not
currently using.

However, this doesn't make logic such as the above sensible.

Currently, average rotational latency [*] and seek-to-seek
are roughly equal (both are about 5 ms or so, on a 5400 RPM
drive -- I'd have to look to be sure as new drives keep coming
out). Any seek will reduce one's throughput by roughly
half, if one assumes random sector reads. If one wants to
read contiguous sectors in a file, and has a fast enough
interface (nowadays, no problem; old drives, however,
had to interleave sectors), a track-to-track seek slows
things down even more.

On disk RAM cache helps a little. I frankly don't know how much.
But if one has to seek -- it won't help.

>
>> That's the silliest logic I've heard in a while. While
>> there are multiple factors regarding file/disk access, the
>> sad truth of the matter is that access to the disk is in
>> fact single threaded (at least, single-threaded per disk).
>
> Not to mention that more than one sector can be read in a time slice,
> I believe.

Correct. However, because of said interface board there's
no real good way to know if sector K and K+1 are on the
same physical cylinder or not, without detailed knowledge
of the disk geometry.

>
>>> Multiple users aren't likely to be constantly accessing the exact
>>> same file, so you get disk head thrashing when files are on opposite ends
>>> of the disk and multiple processes reading and writing them simultaneously.
>>>
>>> And i'm not the only one that says this. So do Linux users:
>>>
>>> http://www.salmar.com/pipermail/wftl-lug/2002-March/000603.html
>
> The posting there needs some numbers. And it's limited to a couple of
> old file systems.
>

Windows is so limiting at times. :-)

[*] average rotational latency is 180 degrees of rotation.
For 5400 RPM that works out to 1/2 * (60 seconds/minute / 5400 RPM)
= 5.56 ms. For 7200 RPM one gets 4.17 ms. 10,000 RPM disks are
the fastest I've seen for consumer markets, and one
gets 3.00 ms.
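
For anyone who wants to plug in other spindle speeds, the footnote's
arithmetic in a few lines of Python:

,----[ Code (Python) ]
| # Average rotational latency = half a revolution, as in the footnote.
| def avg_rotational_latency_ms(rpm):
|     return 0.5 * (60.0 / rpm) * 1000.0
|
| for rpm in (5400, 7200, 10000):
|     print(rpm, "RPM:", round(avg_rotational_latency_ms(rpm), 2), "ms")
| # 5400 RPM: 5.56 ms, 7200 RPM: 4.17 ms, 10000 RPM: 3.0 ms
`----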

--
#191, ewi...@earthlink.net
Windows. When it absolutely, positively, has to crash.

Tim Smith

Apr 12, 2007, 9:21:17 PM
In article <odadnSOFmPIv6oDb...@comcast.com>,

Linonut <lin...@bone.com> wrote:
> After takin' a swig o' grog, The Ghost In The Machine belched out this bit o'
> wisdom:
>
> >> Multiuser filesystems actually work better when fragmented
> >> because sectors are more distributed across the disk.
>
> What.
>
> The.
>
> Fuhhhhhh?

That was usually given, in fact, as the main reason Unix filesystems did
not have defrag tools, even though they did become fragmented over time.
(Any file system that doesn't actively rearrange already written data
will tend toward fragmentation, in fact, under a typical general purpose
access pattern).

The idea is that a program reads some data, then computes for a while,
then reads some more data, and then computes for a while, and so on. On
a multiuser system, there will be several programs doing this. So when
your program goes off and does some computing, before issuing its next
read, the system is going to be doing a read for another process.

That other process will be reading a file that is likely totally
unrelated to your process.

Think about what that means. It means that when your process gets time
again and issues its next read, the disk head is off positioned where it
was after reading someone else's data, not yours. You will incur a seek
and some rotational latency to get back to the next section of your file.

What this means is that it really doesn't matter, on a sufficiently
active multiuser system, if that next section of your file is contiguous
with the previous section or not. As long as your fragments are large
enough that the individual I/O requests can be satisfied from within a
fragment, you'll get the same throughput you would with a contiguous
file.

However, an important thing to note is that on such a system, if your
files ARE contiguous, it won't hurt. That is, a sufficiently active
multiuser load can make it so that you can have heavy fragmentation
without a performance penalty, but performance isn't likely to be better
than contiguous, so it is probably best for filesystems to try for
contiguous files. And if you are single user, without a lot of separate
processes doing heavy I/O, the contiguous should win.
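
One back-of-the-envelope way to see the argument: count head
relocations for an interleaved request stream. The block layouts below
are invented purely for the example:

,----[ Code (Python) ]
| # Count head relocations (non-sequential jumps) in a request stream.
| def seeks(requests):
|     moves, pos = 0, None
|     for block in requests:
|         if pos is not None and block != pos + 1:
|             moves += 1  # head had to relocate
|         pos = block
|     return moves
|
| file_a = list(range(0, 8))        # one contiguous extent
| file_b = list(range(1000, 1008))  # contiguous too, but far away
| # Two active processes: their reads arrive interleaved.
| mixed = [blk for pair in zip(file_a, file_b) for blk in pair]
|
| print(seeks(file_a))  # 0  -- single user reading a contiguous file
| print(seeks(mixed))   # 15 -- multiuser: a seek on nearly every read,
|                       #       even though both files are contiguous
`----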

OS X takes an interesting approach here. For files smaller than some
certain size (20 meg, I think), the filesystem will defragment them if
they are fragmented. For larger files, it does not do this. (I'm not
sure if it completely ignores larger files, or if it just ignores
fragments that are over 20 meg). So, for the files that are small
enough that fragmentation will make you incur seeks in the middle of I/O
operations, it defrags them, but for files that are more likely to not
suffer a noticeable slowdown due to fragmentation, it leaves them be.
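
The shape of that policy, sketched using the hedged 20 MB figure from
above (the threshold and the relocation step are simplified
placeholders, not Apple's implementation):

,----[ Code (Python) ]
| # Sketch of an "on-open" defrag policy for small files.
| SMALL_FILE_LIMIT = 20 * 1024 * 1024  # bytes; the text's hedged figure
|
| def maybe_defragment(size_bytes, extents, relocate):
|     """On file open: rewrite small fragmented files contiguously."""
|     if size_bytes < SMALL_FILE_LIMIT and len(extents) > 1:
|         return relocate(extents)  # yields one contiguous extent
|     return extents                # big or already contiguous: leave it
|
| # A 5 MB file in three pieces gets coalesced into one extent...
| coalesce = lambda ex: [(0, sum(n for _, n in ex))]
| print(maybe_defragment(5 << 20, [(10, 4), (90, 3), (400, 5)], coalesce))
| # ...while a 100 MB file is left alone, however fragmented:
| print(maybe_defragment(100 << 20, [(10, 4), (90, 3)], coalesce))
`----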

--
--Tim Smith

Tim Smith

Apr 12, 2007, 9:36:15 PM
In article <fvr1f4-...@sirius.tg00suus7038.net>,

The Ghost In The Machine <ew...@sirius.tg00suus7038.net> wrote:
> That was more or less my reaction. Frankly, I'm not
> entirely certain, and it probably depends on a number
> of issues -- not the least of which include the usual
> track-to-track, head-settle, rotational latency, and disk
> usage pattern.
>
> Regrettably, most interface boards lie through their teeth,
> and disks are now variable geometry, which basically
> makes exact calculations by the OS impossible without
> a lot more information -- info that they're likely not
> currently using.
>
> However, this doesn't make logic such as the above sensible.
>
> Currently, average rotational latency [*] and seek-to-seek
> are roughly equal (both are about 5 ms or so, on a 5400 RPM
> drive -- I'd have to look to be sure as new drives keep coming
> out). Any seek will reduce one's throughput by roughly
> half, if one assumes random sector reads. If one wants to
> read contiguous sectors in a file, and has a fast enough
> interface (nowadays, no problem; old drives, however,
> had to interleave sectors), a track-to-track seek slows
> things down even more.
>
> On disk RAM cache helps a little. I frankly don't know how much.
> But if one has to seek -- it won't help.

You are looking at the problem from the wrong end. The idea that
fragmentation doesn't hurt comes from the days when a computer would
have dozens (or hundreds, or more) users on at once. When your
processes did I/O, it received part of the data from the file, and then
it was someone else's turn. By the time the system would get back to
you to let you issue another I/O request, dozens of others would have
had their turn. The disk is now essentially randomly positioned
compared to where you need it to be. So, the low level details of
track-to-track time, head-settle, etc, don't matter. It comes down to
on average each of your I/Os incurring the average access time of the
disk.

That said, fragmentation usually doesn't help, so avoiding it was good,
even on those big multiuser systems.

--
--Tim Smith

Maverick

Apr 13, 2007, 1:57:02 AM
Linonut wrote:

> After takin' a swig o' grog, The Ghost In The Machine belched out this bit o' wisdom:
>
>
>>>Multiuser filesystems actually work better when fragmented
>>>because sectors are more distributed across the disk.
>
>
> What.
>
> The.
>
> Fuhhhhhh?
>

Yeup. When you have multiple user accounts and multiple users logged
in, the system sets up things differently on the hard drive if that hard
drive is a single-actuator type (which is what we all use these days).
It is different on a multiple-actuator head assembly in the bigger
units. We used to have the Veritas defragger for VMS, which ran in the
background taking samples of who was doing what and how often, to
optimize the various user accounts. It took a while, about a
week, before things were working well, though. It is one of those things
that hasn't trickled down to the PCs yet.

Hadron Quark

Apr 13, 2007, 4:58:15 AM
Tim Smith <reply_i...@mouse-potato.com> writes:

> In article <odadnSOFmPIv6oDb...@comcast.com>,
> Linonut <lin...@bone.com> wrote:
>> After takin' a swig o' grog, The Ghost In The Machine belched out this bit o'
>> wisdom:
>>
>> >> Multiuser filesystems actually work better when fragmented
>> >> because sectors are more distributed across the disk.
>>
>> What.
>>
>> The.
>>
>> Fuhhhhhh?
>
> That was usually given, in fact, as the main reason Unix filesystems did
> not have defrag tools, even though they did become fragmented over time.
> (Any file system that doesn't actively rearrange already written data
> will tend toward fragmentation, in fact, under a typical general purpose
> access pattern).

Which is a con. Most distros do "disk checks" every X boots anyway.

In addition, the fragmented access is generally only potentially better
for the situation you describe - chunk & compute. Clearly it's better to
have non-fragmented files for the program code.

JDS

Apr 13, 2007, 2:06:44 PM
On Tue, 10 Apr 2007 12:47:36 +0100, Roy Schestowitz wrote:

>
> I was not lying. The subject line perfectly aligns with the main item which
> is at the top of the OP. Vista slows things down (startup and shutdown).


I have heard at least two personal stories of how Vista is much slower to
use in every way than XP. And this on brand-spankin' new machines.

Erik Funkenbusch

Apr 13, 2007, 4:37:00 PM
On Thu, 12 Apr 2007 18:21:17 -0700, Tim Smith wrote:

> However, an important thing to note is that on such a system, if your
> files ARE contiguous, it won't hurt. That is, a sufficiently active
> multiuser load can make it so that you can have heavy fragmentation
> without a performance penalty, but performance isn't likely to be better
> than contiguous, so it is probably best for filesystems to try for
> contiguous files. And if you are single user, without a lot of separate
> processes doing heavy I/O, the contiguous should win.

Actually, not really true. Think of a worst-case scenario where you are
running a file server with large files. User 1 needs file A at the
beginning of the drive, and User 2 needs file B located at the end of the
disk. The disk head will thrash back and forth as it reads these files,
based on each request from the users.

Now, imagine if those files are fragmented across the disk. Then the elevator
algorithms built into most OSes these days can more efficiently prioritize
requests based on the location of the disk head and the request. Now,
multiply that by 100 or 1000 users.
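
For reference, "elevator algorithm" here means servicing pending
requests in head-sweep order rather than arrival order. A generic SCAN
sketch, not any particular OS's scheduler:

,----[ Code (Python) ]
| # Minimal SCAN ("elevator"): sweep toward one end of the disk,
| # servicing everything on the way, then reverse direction.
| def elevator(head, pending):
|     up = sorted(b for b in pending if b >= head)
|     down = sorted((b for b in pending if b < head), reverse=True)
|     return up + down  # one sweep up, then back down
|
| # Arrival order would thrash the head; sweep order visits blocks
| # en route, which is what lets fragments get picked up cheaply.
| requests = [950, 30, 700, 40, 910, 60]
| print(elevator(500, requests))  # [700, 910, 950, 60, 40, 30]
`----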

Robert Parsonage

Apr 13, 2007, 6:49:19 PM
Freeride wrote:

Why indeed. Unix had filesystems that rarely if ever needed defragging
long before NTFS was even conceived. In the book 'The Design and
Implementation of the 4.3 BSD Unix Operating System' (1989) there's
enough detail for any competent SW 'architect' to come up with a
filesystem where defragging wouldn't be an issue. Of course there's
lots more in that book that Microsoft should have paid attention to
when designing NT. Seems all they can do is re-invent the wheel ...
badly.

Maverick

Apr 13, 2007, 6:55:22 PM
Erik Funkenbusch wrote:

That's what we had to put up with. We bought into Veritas' solution to
help alleviate the problem. It wasn't a cheap solution but it sure
helped a lot. They had a disk optimization scheme for large hard drives
that were multi-headed and multi-actuated. It ran in the background
trying to optimize the system.
