10 S nodes (48GB RAM)
While tracking down a "slow SMB read" problem reported by multiple OS X users, I came across some findings I'd like to share and discuss with you.
1. What version of OneFS?
v6.5.5.22
2. Have you tested with 10.9 yet? Just curious what iperf shows. Is it on par with Windows SMB2?
We have a bunch of 10.8 transfer machines whose sole purpose is to transfer content from Thunderbolt and FireWire drives to our Isilon cluster (specifically NL nodes, via a SmartPools folder policy), and we get 60MB/s plus over SMB and, as you know, better with NFS.
All Mac versions should be running with TCP delayed ACKs disabled:
sudo sysctl -w net.inet.tcp.delayed_ack=0
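A minimal sketch of checking and persisting that tweak (the /etc/sysctl.conf step is an assumption that your OS X release still reads that file at boot, which as far as I know is true for the 10.8/10.9 era):

  # check the current value first
  sysctl net.inet.tcp.delayed_ack
  # disable delayed ACKs for the running system
  sudo sysctl -w net.inet.tcp.delayed_ack=0
  # make it persistent across reboots (assumes /etc/sysctl.conf is honoured at boot)
  echo "net.inet.tcp.delayed_ack=0" | sudo tee -a /etc/sysctl.conf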
I can't really comment from personal experience, but perhaps this is useful.
When I asked our implementation engineer about recommendations for accessing the cluster from a Mac, he consulted some higher-level support team. The final answer: I was told in no uncertain terms to "use NFS, not SMB".
Please upgrade to OneFS 7.x to be able to handle "large MTU", as Windows calls it. Basically, OneFS 6.5 uses SMB v2.002 (which is what Vista SP1 and Server 2008 non-R2 use), while 7.0 and 7.1 use SMB v2.1, which introduced support for 1MB windows.
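If you want to verify which dialect a Mac actually negotiated after the upgrade, OS X 10.9 and later can report it per mounted share (this assumes 10.9+; older releases don't have the subcommand as far as I remember):

  # shows SMB_VERSION and other negotiated attributes for all mounted SMB shares
  smbutil statshares -a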
Have you checked the Isilon & Mac Best Practices paper?
It suggests a couple of good tweaks and is more
recent than 2005 ;-)
On Thu, 21 Nov 2013, at 22:08, Youssef Ghorbal <youssef...@gmail.com> wrote:
>
>
> - iperf with -F reading a 40GB file on an NL node : ~8MB/s tops
> - iperf with -F reading a 40GB file on an X node : ~8MB/s tops
> - iperf with -F reading a 40GB file on an S node : ~75MB/s tops
> - iperf with -F reading a 40GB file on an SSD : ~110MB/s tops
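(For anyone trying to reproduce the quoted numbers: iperf's file-input mode presumably looked something like the sketch below. The receiver host and the /ifs paths are placeholders, not the actual setup.)

  # receiver: another node or a workstation
  iperf -s
  # sender, run where the file lives: stream an existing large file from the pool under test
  iperf -c receiver.example.com -F /ifs/data/nl_pool/40GB.file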
Nevertheless:
that's truly amazing (assuming that the network paths
to the nodes and the nodes' background loads
are equivalent for NL/X nodes vs. S nodes).
A few thoughts:
Can you send the outputs of
isi statistics client -nall --long
for these scenarios? (using SMB)
By analysing op rates, request sizes, latencies
and throughput rates in one context I hope we
can see what is going on,
and what makes the big difference here.
> We had these same figures even between two nodes
> on the same Isilon cluster (and even using the infiniband backend)
> and no matter which nodes act as the iperf client and server (X, NL, S)
That’s weird because you also said that with NFS everything is fine.
What happens if you simply read the file(s)
locally on the cluster with ‘cat’ or ‘dd bs=1024k’?
You can even do that as a cross-test:
run the cat or dd command on an S node
accessing data on the X or NL pool, and vice versa.
That would confirm the problem is
on the disk side (or the actual target node pool).
(The same is possible with SMB mounting, of course.)
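A sketch of that cross-test, assuming hypothetical /ifs paths that are pinned to the respective pools by SmartPools policy:

  # run on an S node, reading a file that lives on the NL (or X) pool
  dd if=/ifs/data/nl_pool/40GB.file of=/dev/null bs=1024k
  # and the reverse: run on an NL (or X) node against a file on the S pool
  dd if=/ifs/data/s_pool/40GB.file of=/dev/null bs=1024k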
Finally I would try to check the
cache and prefetch hit rates,
to see the difference between your actual
S nodes and X/NL nodes:
Maybe your S nodes have enough RAM for caching,
while the X/NL nodes' caches are too busy.
Or prefetching works very poorly on the X/NL nodes:
is it disabled, or is there too much fragmentation?
isi_cache_stats (-v) interval
is a great tool for this (reports for one node though),
and requires a pretty idle cluster to see a
clear signal from the test load.
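For example, something like this on the node you are testing (the 5-second interval is just an example value):

  # verbose cache/prefetch hit statistics for this node, printed every 5 seconds
  isi_cache_stats -v 5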
And I totally agree with Saker: OneFS 7 + OS X 10.9 is a completely new game…
Yeah, for me NFS (and SMB2) work fine because they use pipelining. Instead of sending one read request (offset + bytes to read) at a time, they send many at once. The first read is served from disk, and while that response is going out on the wire the system has already loaded the following requests from disk (into RAM), so they are served from cache when their turn comes on the wire. That's what I'd call deterministic read-ahead.

> What happens if you simply read the file(s)
> locally on the cluster with ‘cat’ or ‘dd bs=1024k’?

dd bs=1024k if=/my/file of=/dev/null

On NL/X files I get ~50MB/s
On S files I get ~270MB/s
On SSD files I get ~500MB/s

=> It's much better indeed. In fact, the network is not involved here.
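As a rough sanity check on the pipelining point above: if each SMB read is capped at 64KiB (which, as far as I know, is the per-request limit before the 2.1 "large MTU" feature) and only one request is outstanding at a time, then 8MB/s works out to roughly 128 requests per second, or about 8ms per synchronous round trip. That is in the ballpark of a single disk seek, which fits the picture of the reads getting no benefit from prefetch.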