Who/what is abusing my fileserver


feen...@gmail.com

May 6, 2017, 10:53:45 AM
Usually our TrueNAS fileservers (really just FreeBSD with a GUI) perform well, with

iostat -x

showing hundreds of megabytes per second read or written while %b (%busy or %utilization) sits at only a few percent for each disk. But every few months performance goes to hell: total throughput drops to 1 or 2 MB/s, %b for a group of disks pegs at 99 or 100%, and qlen grows from 0 or 1 to a dozen or twenty on some disks. CPU utilization stays very low. While this is happening a simple ls command can take 5 minutes. Eventually the problem solves itself.
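A quick way to spot the saturated disks during one of these episodes is to parse the `iostat -x` output programmatically. A minimal sketch, assuming only that the device name is the first column and %b the last (the middle columns vary between FreeBSD releases):

```python
def saturated_disks(iostat_text, busy_threshold=90.0):
    """Return (device, %b) pairs for disks at or above the threshold.

    Relies only on the device name being the first field and %b the
    last field of each data line of `iostat -x`; header lines are
    skipped because their last field is not numeric.
    """
    hot = []
    for line in iostat_text.splitlines():
        fields = line.split()
        if len(fields) < 2:
            continue
        try:
            busy = float(fields[-1])
        except ValueError:
            continue  # header line such as "device ... %b"
        if busy >= busy_threshold:
            hot.append((fields[0], busy))
    return hot
```

Run in a loop (e.g. against `iostat -x 5` samples), this flags the disks whose %b is pinned while throughput is low.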

We believe this is because a client is doing a lot of random I/O that
keeps the heads moving for very little data transfer, and that with all
that seeking none of the other clients get much attention. How do we
locate that job among the many jobs from many users on many NFS clients?
On the client computers we can find out how many bytes are transferred by
each process, but that number is small for all jobs; the one doing random
I/O doesn't get more bytes than the jobs doing sequential I/O, it just
exercises the heads more. We need more information so we can contact the
user doing random I/O and work with them to do something else.
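Since per-process byte counts on the clients don't reveal the culprit, the request *rate* per client might: a client issuing many small random reads generates far more NFS requests per byte than one streaming sequentially. One sketch, under the assumption that NFS runs on the standard port 2049, is to capture traffic on the server with `tcpdump -n dst port 2049` and tally packets per source address (IPv4 only; the addresses below are hypothetical):

```python
import re
from collections import Counter

def count_nfs_clients(tcpdump_lines):
    """Tally packets per client IP from `tcpdump -n` text output.

    Lines look like:
      10:53:45.100 IP 10.0.0.7.1021 > 10.0.0.1.2049: Flags [P.], ...
    The source address is the first dotted quad after "IP".
    """
    counts = Counter()
    pat = re.compile(r"IP (\d+\.\d+\.\d+\.\d+)\.\d+ > \S+\.2049:")
    for line in tcpdump_lines:
        m = pat.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common()
```

The client at the top of the list during a slowdown is the one to investigate first; from there the per-process I/O tools on that one machine can narrow it to a job.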

Alternatively, is there some adjustment of the server that will downgrade
the priority of random access? That user might self-identify if his jobs
took forever to complete.

Daniel Feenberg
NBER

Mark F

May 7, 2017, 11:11:43 AM
On Sat, 6 May 2017 07:53:44 -0700 (PDT), feen...@gmail.com wrote:

> Usually our TrueNAS fileservers (really just FreeBSD with a GUI) perform well, with
>
> iostat -x
>
> showing hundreds of megabytes per second read or written while %b (%busy or %utilization) sits at only a few percent for each disk. But every few months performance goes to hell: total throughput drops to 1 or 2 MB/s, %b for a group of disks pegs at 99 or 100%, and qlen grows from 0 or 1 to a dozen or twenty on some disks. CPU utilization stays very low. While this is happening a simple ls command can take 5 minutes. Eventually the problem solves itself.
>
> We believe this is because a client is doing a lot of random I/O that
> keeps the heads moving for very little data transfer, and that with all
It could also be error recovery on a couple of blocks.
I don't know about TrueNAS specifically, but many filesystems and
operating systems don't fix ECC problems until there is a complete
failure, and the disks themselves try to avoid actually rewriting
(and possibly relocating) data to fix the problems.

You could scan the disks and see whether any performance problems arise
during the scan. Save the SMART data before and after the scan to see
whether there is any evidence of excessive error correction taking
place, though not all disks (or SSDs) report such information. You
might see counts for on-the-fly error recovery (which will seldom be
zero even when there are no real problems), and perhaps second- or
even third-level recovery counts, even if the drive never goes into a
full retry cycle (which can take several minutes).

The SMART data may even include a count of sectors known to be bad
but not being fixed.
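One way to do that before/after comparison is to capture `smartctl -A /dev/adaN` output (from the smartmontools port) around the scan and diff the raw values. A sketch, assuming the usual ATA attribute-table layout where the attribute ID leads the line and RAW_VALUE is the tenth column; the sample attributes in the test are illustrative:

```python
def parse_smart_attrs(text):
    """Extract {attribute_name: raw_value} from `smartctl -A` output.

    Only lines beginning with a numeric attribute ID are parsed; header
    and informational lines fail the isdigit() check and are skipped.
    """
    attrs = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[0].isdigit():
            attrs[fields[1]] = fields[9]
    return attrs

def smart_delta(before, after):
    """Attributes whose raw value changed between two captures."""
    return {name: (before[name], val)
            for name, val in after.items()
            if name in before and before[name] != val}
```

A jump in something like Current_Pending_Sector or the read-error raw counts across the scan would support the error-recovery theory; no change points back at a client workload.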