Thanks for the input, Sven.
Yes, this is the upper limit (currently; we're trying to get dual 10GbE feeders to the system), but in testing various combinations we're seeing that while we can push about 1.3 GB/s to a single 24-spindle RAIDZ2 via dd on the storage server, in real life we get about half of that, presumably because the mix of large and small files is causing too much overhead.
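For context, by "via dd" I mean a plain sequential streaming write, something along these lines (mountpoint and sizes are placeholders, not our exact invocation):

  # ~100 GB sequential write; fdatasync so the figure isn't just cache speed
  dd if=/dev/zero of=/mnt/tank/ddtest bs=1M count=100000 conv=fdatasync
  # caveat: with lz4 compression on, zeros compress to nothing - use a file of
  # random data instead if the dataset has compression enabled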
We'll try splitting the single vdev into multiple vdevs to see if that helps, but we're still worried about pushing all the data through a single PCIe3 bus. Theoretically it has plenty of bandwidth, but in Real Life?
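To be concrete about splitting the vdev: the idea is to go from one wide 24-disk raidz2 to something like three 8-disk raidz2 vdevs striped in the same pool, so ZFS has more independent top-level vdevs to spread the small-file traffic across. Roughly (pool name and device names are placeholders):

  # current: one 24-disk raidz2 vdev
  #   zpool create tank raidz2 sda sdb ... sdx
  # proposed: three 8-disk raidz2 vdevs in one pool
  zpool create tank \
    raidz2 sda sdb sdc sdd sde sdf sdg sdh \
    raidz2 sdi sdj sdk sdl sdm sdn sdo sdp \
    raidz2 sdq sdr sds sdt sdu sdv sdw sdx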
So the question is:
If we add more JBODs to the storage server, each on its own HBA, will total I/O increase, or will there still be a bottleneck, either in the ZFS/BeeGFS layer or from contention on the PCIe3 bus?
'It depends' is almost certainly the answer, but as a general rule of thumb...?
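For what it's worth, the back-of-envelope numbers on the bus side, assuming each HBA sits in its own PCIe3 x8 slot (the usual case for SAS HBAs):

  PCIe3 x8 slot:  8 lanes x ~985 MB/s usable  ~= 7.9 GB/s per HBA
  dual 10GbE in:  2 x 1.25 GB/s               ~= 2.5 GB/s of ingest, max

So on paper even one HBA slot has headroom over the network feed; the real question is how much of that survives the ZFS/BeeGFS layer and contention on the PCIe3 bus.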
Regardless, we'll try it out and report back.
Harry
---
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
415 South Circle View Dr, Irvine, CA, 92697 [shipping]
XSEDE 'Campus Champion' - ask me about your research computing needs.
Map to Office | Map to Data Center Gate
[the command line is the new black]
---