Hi VxWorks Users:
I've been doing some performance testing writing data to/from SCSI disks
on embedded CPUs, and I have come to the conclusion that the VxWorks
implementation of NFS is very, very slow. My hosts are Sun-4s and UltraSPARCs,
all connected to VxWorks CPUs through an Ethernet switch. The hosts can
transfer files between each other at about 700 kbytes/sec (pretty close to
the max on 10 Mbit/sec Ethernet, I'd say), but when they transfer
to the VxWorks disks (one at a time) it bogs down to 100 kbytes/sec.
I've tried a VxWorks 'copy' from a CPU's local disk device to the same
device and I get a throughput of about 1.3 Mbytes/sec, so it doesn't appear
to be a SCSI blocking/access problem. When a UNIX host copies data to/from
the VxWorks NFS disk, the host disk LED barely comes on and the VxWorks disk
LED is constantly on (so much so I thought I'd crashed the disk!). When
a VxWorks 'copy' is executed from the VxWorks shell '->' to transfer a
file from the local disk to the UNIX NFS disk, the local disk LED is barely
on and the UNIX disk LED is constantly on... this indicates to me that
VxWorks is smashing the big chunks of disk data into tiny pieces for
transport to the UNIX host.
Since I'm not an NFS expert: is this just 'the way it is' with NFS (I don't
think so, since the UNIX machines smoke)? Or is the VxWorks implementation
of NFS just that bad? I've searched the VxWorks documentation and
config files to no avail. Is there something I've missed? I'm running
VxWorks version 5.2 -- if somebody knows that this has been fixed in a more
recent version, or that there is some configuration I need to change, I'd
appreciate a shout (I tried to reinstate our maintenance with WRS a while
ago but they never responded to my RFQ!).
Thanks in advance,
Brent.
1) Is your VxWorks box the NFS client or the NFS server?
2) What read and write size are you using for NFS?
I don't have access to VxWorks source code and I don't know how well
or how poorly the NFS code is implemented. There are _many_ variables
that come into play for a high-performance NFS server. The UNIX NFS
servers are probably tuned like crazy -- it is part of their bread
and butter. There are many tricks to make NFS servers fast which range
from how the server responds to NFS requests (e.g., delayed writes) to
what type of filesystem is used as the persistent store of the server.
Anyway, the first thing to verify is what read/write size you are using
for NFS.
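For reference, on a VxWorks 5.x target the client-side mount is usually made
from the shell with nfsMount() after setting the UNIX credentials with
nfsAuthUnixSet(). A sketch of the sequence (the host name, UID/GID, export
path, and file names below are made up for illustration):

```
-> nfsAuthUnixSet "sunhost", 2001, 100, 0      /* UID/GID the server will accept */
-> nfsMount "sunhost", "/export/data", "/nfs"  /* mount the UNIX export locally */
-> copy "/sd0/bigfile", "/nfs/bigfile"         /* then re-measure the transfer rate */
```

I don't recall where (or whether) VxWorks 5.2 exposes the NFS transfer-size
parameter, so check the nfsLib/nfsDrv pages of your reference manual for the
actual knob before assuming 8192 is in effect.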
If your VxWorks box is the NFS client, then make sure that you mount the
UNIX nfs server with a read/write size of 8192. I assume that you can
do that because presumably VxWorks supports IP fragmentation and
reassembly. One reason the NFS client can be slow is if it makes write
requests synchronously. That is, it writes 8192 bytes and then waits for
a response from the server. NFS clients on many commercial
implementations of UNIX don't do that.
Finally, I don't know what you mean by a VxWorks "copy". (Sorry, I have
not worked with VxWorks since 1992). I'll assume you are referring to
VxWorks' implementation of the file copy program. It could also be that
this "copy" program is pretty dumb and slow.
Hope this helps.
I'm not saying what you see in your performance tests is
invalid, just that your figures might be distorted a little
by the fact that you're doing disk I/O.