
NTFS file searches / tree view pathetically slow.


SteveL

14 Jun 2002, 14:04:27
I've got two 80 gig drives with over a million files in total.

One is FAT32, the other is NTFS.

The FAT32 drive has about 500 directories and about 660,000 files.
The NTFS drive has about 350 directories and about 450,000 files.

If I do a simple file search "for files or folders" (just the name, not
the contents) over both drives, the FAT drive takes about 45 seconds,
but the NTFS drive takes a whopping three and a half minutes, despite
having significantly less to search through. This is first-time access,
though; subsequently the directory information will be in memory
(thank god) and the search will take only about 7 seconds.

It's not the search program because any program opening a tree view
will take just as long.
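For what it's worth, the search amounts to nothing more than a name-only
recursive scan, roughly like this (a Python sketch of the equivalent logic;
the root and pattern are placeholders, not my actual search):

```python
import fnmatch
import os
import time

def find_files(root, pattern):
    """Recursively collect file names matching pattern (names only; contents are never read)."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if fnmatch.fnmatch(name.lower(), pattern.lower()):
                matches.append(os.path.join(dirpath, name))
    return matches

# First run hits the disk for every directory; repeat runs hit the cache.
start = time.perf_counter()
hits = find_files(".", "*.txt")
print(f"{len(hits)} matches in {time.perf_counter() - start:.2f}s")
```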

The directories are nested no more than two deep on either drive. Each
directory contains several thousand files.

The NTFS drive is almost new and the data unfragmented (yet you can
hear the heads moving all over the place, as though it were). There is
nothing wrong with the drive.

Ordinary access is not noticeably slower.

I didn't convert to NTFS from FAT32, just formatted (4k blocks) and
plonked the files onto it from a backup.

One of the reasons I chose NTFS was I'd read that NTFS comes into its
own when you have a lot of files and directories. This appears to be
grossly false on my system.

Can anyone suggest why NTFS should take 5 times longer than FAT32 to
search 2/3 of the number of filenames?

Thanks.

Bill Todd

14 Jun 2002, 15:01:26

"SteveL" <Ste...@stevelon.demon.co.uk> wrote in message
news:LeqO8.239445$Gs.20...@bin5.nnrp.aus1.giganews.com...

Because it may be performing something like 5 times as many disk accesses.

NTFS will perform a file *look-up* in even far larger directories than yours
with reasonable speed and regardless of directory fragmentation, while FAT32
will get slower and slower as the directory size and/or fragmentation
increases. But you're not doing file look-ups but instead (I suspect) full
directory scans (e.g., if you're looking for something like '*.ini'; if
you're instead looking for a fully-specified file like 'accounts.txt', I
don't know why NTFS would take nearly that long).

In a scan, both FAT32 and NTFS will effectively have to read the entire
directory sequentially (in the NTFS case, it's actually reading the bottom
level of its directory b-tree, but that works out about the same if the root
level is cached). When NTFS reads its directory sequentially, it does so in
4 KB chunks; when FAT32 reads its directory sequentially, it does so in
(IIRC) 32 KB chunks (for an 80 GB partition). So for equal-size
directories, NTFS may have to do 8 times as many accesses (and 2/3 of 8 is
close enough to 5 to be suggestive).
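The arithmetic, using the chunk sizes above and the file counts from the
original post (the average entry size is an arbitrary assumption, taken as
equal on both file systems for the sake of the argument, and cancels out):

```python
# Back-of-envelope check: sequential directory reads needed per file system.
NTFS_CHUNK = 4 * 1024     # NTFS reads its directory b-tree leaves in 4 KB chunks
FAT32_CHUNK = 32 * 1024   # FAT32 cluster size on an 80 GB partition (IIRC)

ntfs_files = 450_000      # files on the NTFS drive
fat32_files = 660_000     # files on the FAT32 drive
ENTRY_BYTES = 100         # assumed average bytes per directory entry; cancels out

ntfs_reads = ntfs_files * ENTRY_BYTES / NTFS_CHUNK
fat32_reads = fat32_files * ENTRY_BYTES / FAT32_CHUNK
print(f"NTFS/FAT32 access ratio ~ {ntfs_reads / fat32_reads:.1f}")  # ~ 5.5
```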

If you use a competent disk defragmenter, it should eliminate the
fragmentation that is very likely present in both sets of directories and
allow the disk's read-ahead buffer to pre-fetch the balance of each
directory after its first access. This should speed up both scans
dramatically and likely make them much closer to equal in duration.

- bill

David J. Craig

14 Jun 2002, 15:22:46
Maybe the slowness is due to NTFS being a substandard file system? If it
were as good and efficient as marketed, the source would be in the IFS Kit.
For many years we were told NTFS didn't have any problems with
fragmentation.

If you need security at the file system level, then NTFS is the only
solution supported by Windows.

"Bill Todd" <bill...@metrocast.net> wrote in message
news:a4rO8.242419$Kp.20...@bin7.nnrp.aus1.giganews.com...

Bill Todd

14 Jun 2002, 17:40:53

"David J. Craig" <Dri...@yoshimuni.com> wrote in message
news:epMrEj9ECHA.2388@tkmsftngp07...

> Maybe the slowness is due to NTFS being a substandard file system? If it
> was as good and efficient as marketed, the source would be in the IFS Kit.

I suspect that the source is not included in the IFS Kit precisely because
Microsoft considers it to be valuable intellectual property. By contrast,
the FAT file system is relatively simple, so the FAT code exposes most of
what other file systems need to know to interact with the rest of NT without
giving away much else.

> For many years we were told NTFS didn't have any problems with
> fragmentation.

The only file systems that don't have some degree of problem with
fragmentation are those that defragment continually in the background (or
don't allow an existing file to be extended...).

- bill

SteveL

14 Jun 2002, 21:58:27
On Fri, 14 Jun 2002 19:01:26 GMT, "Bill Todd" <bill...@metrocast.net>
wrote:

>>
>> Can anyone suggest why NTFS should take 5 times longer than FAT32 to
>> search 2/3 of the number of filenames?
>
>Because it may be performing something like 5 times as many disk accesses.
>
>NTFS will perform a file *look-up* in even far larger directories than yours
>with reasonable speed and regardless of directory fragmentation, while FAT32
>will get slower and slower as the directory size and/or fragmentation
>increases. But you're not doing file look-ups but instead (I suspect) full
>directory scans (e.g., if you're looking for something like '*.ini'; if
>you're instead looking for a fully-specified file like 'accounts.txt', I
>don't know why NTFS would take nearly that long).

Yes, I'm doing full directory scans, and there are usually wildcards
in the search (not quite as bad as *.ini though). However,
"accounts.txt" would be a wildcard search anyway because the find
program assumes it's a substring search, e.g. a search for fred.doc
would also return alfred.doc.
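In other words, the find program effectively wraps every query in wildcards
before matching, roughly like this (Python sketch; the helper name is mine):

```python
import fnmatch

def substring_search(query, names):
    # The find tool implicitly treats the query as a substring,
    # i.e. it matches as if the user had typed '*query*'.
    pattern = f"*{query.lower()}*"
    return [n for n in names if fnmatch.fnmatch(n.lower(), pattern)]

print(substring_search("fred.doc", ["fred.doc", "alfred.doc", "fred.txt"]))
# -> ['fred.doc', 'alfred.doc']
```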

>
>In a scan, both FAT32 and NTFS will effectively have to read the entire
>directory sequentially (in the NTFS case, it's actually reading the bottom
>level of its directory b-tree, but that works out about the same if the root
>level is cached). When NTFS reads its directory sequentially, it does so in
>4 KB chunks; when FAT32 reads its directory sequentially, it does so in
>(IIRC) 32 KB chunks (for an 80 GB partition). So for equal-size
>directories, NTFS may have to do 8 times as many accesses (and 2/3 of 8 is
>close enough to 5 to be suggestive).

That makes sense (and yes, it's 32 KB blocks). Shame the decrease in
slack translates into such a decrease in performance as well.

>
>If you use a competent disk defragmenter, it should eliminate the
>fragmentation that is very likely present in both sets of directories and
>allow the disk's read-ahead buffer to pre-fetch the balance of each
>directory after its first access. This should speed up both scans
>dramatically and likely make them much closer to equal in duration.

I take your point, but the FAT32 drive will be far more fragmented than
the NTFS one. It's had files and directories added and removed
willy-nilly over the course of months, while the NTFS one was formatted
and restored from a backup immediately before I first noticed the search
slowness. I'd expect *no* fragmentation there (and indeed the checker
program in Properties said the drive didn't need defragmenting).

I suspect if I defragged the FAT32 drive the speed discrepancy would
become even more obvious :-)

Thanks for the response. I suspect I'll just have to live with it as
it is.

>
>- bill
>
>

Bill Todd

14 Jun 2002, 23:59:53

"SteveL" <Ste...@stevelon.demon.co.uk> wrote in message
news:7bxO8.225901$%y.205...@bin4.nnrp.aus1.giganews.com...

...

> Take your point but the FAT32 drive will be far more fragmented than
> the NTFS one. It's had files and directories added and removed willy
> nilly over the course of months, while the NTFS one was formatted and
> restored from a backup immediately before I first noticed the search
> slowness. I'd expect *no* fragmentation here (and certainly the
> checker program in properties said the drive didn't need it).

Without being intimately familiar with the restore facility you're using,
I'll observe that while it's trivially easy for a (file-by-file, rather than
disk-image) restore to leave completely unfragmented *files*, it may very
well leave severely fragmented *directories* (which is of course what's
relevant here): unless it checks the size of each input directory and
preallocates the output directory to that size (or defragments each
directory after finishing restoring its files), a naive approach that just
allows the output directory to grow as files are added to it will leave its
clusters scattered throughout its files. Wouldn't hurt to *try* a defrag
run, just for fun...
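A toy model of that naive growth pattern (all numbers invented; real cluster
allocation is more complicated, but the interleaving effect is the point):

```python
def naive_restore(n_files, entries_per_cluster=4, clusters_per_file=2):
    """Simulate a restore that lets the output directory grow file by file:
    each time the directory fills a cluster, a new one is allocated *after*
    the file data written so far, scattering the directory across the disk."""
    disk = []  # clusters in on-disk allocation order
    for i in range(n_files):
        if i % entries_per_cluster == 0:
            disk.append("DIR")              # directory grows by one cluster
        disk.extend(["file"] * clusters_per_file)
    return disk

print(naive_restore(8))
# The 'DIR' clusters come out interleaved with file data instead of contiguous.
```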

- bill

SteveL

15 Jun 2002, 09:29:55
On Sat, 15 Jun 2002 03:59:53 GMT, "Bill Todd" <bill...@metrocast.net>
wrote:


>Without being intimately familiar with the restore facility you're using,
>I'll observe that while it's trivially easy for a (file-by-file, rather than
>disk-image) restore to leave completely unfragmented *files*, it may very
>well leave severely fragmented *directories* (which is of course what's
>relevant here): unless it checks the size of each input directory and
>preallocates the output directory to that size (or defragments each
>directory after finishing restoring its files), a naive approach that just
>allows the output directory to grow as files are added to it will leave its
>clusters scattered throughout its files. Wouldn't hurt to *try* a defrag
>run, just for fun...
>
>- bill
>
>

Well I'll be...

Spot on about the directories, Bill. I defragged it, immediately
noticed the reduced rattling and the time is down to a much more
reasonable 45 seconds.

Good call.

Thanks.
