
Poor file access performance


jeff

Jul 7, 2002, 9:57:17 PM
NetWare 3.2. IBM server, ServeRAID adapter, RAID 5 array. Poor file access
performance with more than 2000 files in a single directory. Performance is
really bad. If I move the file to another location, performance is as it
should be. What should I do?

Thanks
Jeff


Barry Schnur

Jul 7, 2002, 10:38:54 PM
How much storage space is being supported on the server? How much memory?

Barry Schnur
Novell Support Connection Sysop

Please post replies ONLY via the Newsgroup

jeff

Jul 7, 2002, 10:55:06 PM
9.4 GB single NetWare partition across six Barracudas, with 260 MB server
working memory. PIII 1 GHz, RCC LE chipset.

Barry Schnur <bsc...@cox.net> wrote in message
news:5ruhiukj2hvmeg1sp...@4ax.com...

Felton Green (SysOp)

Jul 8, 2002, 12:13:46 AM
Hi

Several issues come to mind...

1. Do you have the controllers set up correctly for the drives?

2. Did you low-level format the drives prior to installation?

3. Do you have the latest drivers?

4. Do you have verify-after-write enabled?


Have you seen the performance tuning TID?


I recommend that you run the config utility and post the resulting
config.txt file here WITHOUT the serial number.


--
Felton Green (SysOp)
Novell Support Connection Forums

Barry Schnur

Jul 8, 2002, 1:04:27 AM
>9.4 gig single NW partiion accross six barracuda's with 260Meg server
>working memory. PIII 1GHz RCC LE chipset.
>
OK - enough memory for this, though it sounds like the drives are a lot older
than the rest of the hardware (sounds like 2 GB drives here).

Has this problem just surfaced? Is the server configuration new?

Might be simply a case of some maintenance (tuning, running purge, that sort
of thing).

jeff

Jul 8, 2002, 10:09:55 AM
Purged yesterday when I went on site. Yes, the server is new; the problem is
older than the server but relatively new, since about mid-March. We changed
from a P133 IBM server to a current-technology unit. Yes, the drives are
2.1 GB 32550Ws, good performance and very reliable as you know; the storage
and controller were integrated into the new server without breaking down the
array. The performance has changed only in directories with more than 2000
files. NetWare won't give me a complete file listing; it says there are too
many files. Stripe size is 16K; the average file size is 30 to 190K. Drives
were formatted. I have seen the aforementioned poor performance on arrays
built without formatting the drives.


Barry Schnur <bsc...@cox.net> wrote in message

news:4a7iiusskh07lim8g...@4ax.com...

Dave Kearns-NSCV

Jul 8, 2002, 12:32:47 PM
When you talk about performance, are you talking about the time to do
a file copy, or the time to load a file into an application?

--
Dave Kearns
Novell Support Connection Volunteer


geoffs....@otcnetworks.invalid.com

Jul 8, 2002, 12:47:43 PM
> Purged yesterday when I went on site. Yes the server is new the problem is
> older than the server but relatively new. About mid march. We changed from a
> P133 IBM Server to a current technology unit. Yes the drives are 2.1 gig
> 32550W's good performance very reliable as you know, while the storage and
> controller were integrated into the new server without breaking down the
> array.. The performace has changed only in directories with more than 2000
> files. Netware won't give me a complete file listing; say's there's to many
> files. Strip size is 16k their average file size is 30 to 190K. Drives were
> formated. I have seen the aformentioned poor performance base on array's
> built without formatting the drives.
>
Have you tried increasing these settings:
SET MAXIMUM DIRECTORY CACHE BUFFERS =
SET MINIMUM DIRECTORY CACHE BUFFERS =
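
For example (the values below are illustrative, not recommendations; check the
current and peak figures in MONITOR.NLM before raising them):

```
# In AUTOEXEC.NCF (or at the system console); values are illustrative only
SET MINIMUM DIRECTORY CACHE BUFFERS = 150
SET MAXIMUM DIRECTORY CACHE BUFFERS = 700
```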

You purged, which frees the space and directory entries used by deleted
files, but it does NOT remove the unused directories from the DET. Try
running VREPAIR with these settings enabled:
Write all directory and FAT entries out to disk
Purge all deleted files

Alternatively, you can create a new directory, set the trustee and IRF to
the same as the directory in which you're seeing slow access, move (do not
copy) all the files/directories from the old directory to the new
directory, delete the old directory, and rename the new directory to have
the same name as the old one. This process will free the unused directory
entries. Repeat for each directory that has slow access. The advantage
of this method is that it does not require purging all files as the
VREPAIR method does.

If either of the above helps (and even if they don't), you should perform
regular purges of selected files and directories to minimize the overhead
of tracking deleted files. You can automate this using TOOLBOX.NLM and
CRON.NLM. I have examples of this on my web site at
http://www.otcnetworks.com/nwtips.htm#purge
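
As a rough sketch of what an automated nightly purge might look like (the
crontab path, field layout, and the TOOLBOX purge syntax here are assumptions
for illustration; the volume and directory names are placeholders, and the
page above has the authoritative examples):

```
# SYS:ETC/CRONTAB - read by CRON.NLM
# Fields: minute hour day month weekday console-command
# Purge a busy directory every night at 2:00 AM (paths are placeholders)
0 2 * * * purge VOL1:DATA/BUSYDIR /all
```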


Barry Schnur

Jul 8, 2002, 12:44:08 PM
It is possible that there is a performance issue regarding the stripe size
-- but that's not an area I am familiar with.

Have you changed any of the set commands from the defaults? If not, perhaps
the comments Felton made regarding tuning might be effective here.

jeff

Jul 8, 2002, 1:48:15 PM
Loading a file into any number of apps from this directory results in slow
retrieval performance. If I move the file (not copy) to another directory,
performance appears to be correct. Only retrieving files from a directory
with more than 2000 files results in greatly reduced performance. All file
sizes in this directory range from 30-190K, with no text content.

Is there a set command for "maximum directory entries", or is there a way to
alter the hashing with set commands? I didn't change NetWare's default block
size from 4K, so in setting the array stripe to 16K, could this be causing
the hashing or DET to malfunction? There are long periods of seeks, then a
short halt between seeks, as if to CRC the file entries.

Dave Kearns-NSCV <dke...@nomail.to.me> wrote in message
news:P8jW8.24$TU5...@prv-forum2.provo.novell.com...

Dave Kearns-NSCV

Jul 8, 2002, 5:23:09 PM
When an app loads a file, it's using DOS/Windows calls which first sort
the files alphabetically. 2000 files seems to be a "magic number" for
them (this has been a problem since at least DOS 3). Increasing the
maximum number of directory cache buffers (if they've reached the
current max) with
SET MAXIMUM DIRECTORY CACHE BUFFERS =

might help.

jeff

Jul 9, 2002, 5:09:22 PM
Thanks, guys, for all your advice. I'll let you know.


Felton Green (SysOp)

Jul 9, 2002, 6:32:45 PM
Hi

Ok... we'll be here. %^ )
