G'day Andrew,
Would love to help, but perhaps you could elaborate on exactly what you need help with?
- Are you trying to build and/or use the open storage pod architecture to meet your technical requirements?
- Have you looked at other solutions before going down this path (e.g. from name-brand vendors)?
- What do you define as 'fast read/write' speed - i.e. what does your application actually work best with? A firehose (as much as you can throw at it), or some optimal value (above which you get no benefit)?
Doing some math (based around your 200TB figure), you may be better served with a chassis that can do ~90 drives (@4T) to give you around ~270T usable - I'm using a rough rule of thumb that you'll get 75% of the raw total as usable. There are a few chassis out there now that can do this density. Whether that meets your performance figure is another matter.
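A quick sketch of that back-of-envelope (the 75% figure is just my rule of thumb, and the drive count/size are the assumed values above):

```python
# Rough usable-capacity estimate: raw total minus ~25% overhead
# (redundancy/parity, filesystem overhead, TB-vs-TiB, spares).
# The 75% factor is a rule of thumb, not a guarantee.
drives = 90
drive_size_tb = 4
usable_fraction = 0.75

raw_tb = drives * drive_size_tb          # 360 TB raw
usable_tb = raw_tb * usable_fraction     # ~270 TB usable
print(f"{raw_tb} TB raw -> ~{usable_tb:.0f} TB usable")
```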
Of course, if you instead spread this into a Ceph cluster (i.e. multiple servers) you'd likely get a lot better overall performance.
I'm also not sure why you need to limit yourself to 10GbE anymore. Mellanox do really cost-effective 40GbE and 56GbE (if you buy one of their switches), and if I were working with 100GB files I'd prefer to do it at 5Gbyte/sec rather than 1Gbyte/sec.. if you can read/write that fast, that is (but 90 or more drives should be able to give you that).
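To put rough numbers on that, here's the time to move a single 100GB file at different line rates (line-rate figures only, ignoring protocol overhead, and assuming your disks can keep up):

```python
# Time to transfer one 100 GB file at various Ethernet line rates.
# Ignores protocol overhead, so real-world times will be somewhat worse.
file_gb = 100  # gigabytes

for name, gbit in [("10GbE", 10), ("40GbE", 40), ("56GbE", 56)]:
    gbyte_per_sec = gbit / 8            # line rate in GB/s
    seconds = file_gb / gbyte_per_sec
    print(f"{name}: {gbyte_per_sec:.2f} GB/s -> {seconds:.0f} s per 100 GB file")
```

So roughly 80 seconds per file at 10GbE versus 20 at 40GbE.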
You haven't talked about how exactly you'd plan to back up this data (when, where, and how).
You possibly need to think about data integrity too (so something with ZFS, if you don't go down the Ceph path?)
How long do you plan to keep this bit of kit around? Do you need to grow it?
What happens if you lose power to it and have to restore the whole thing from your backups (it might take a few days.. weeks?)
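As a rough feel for that restore-time question (assuming ~270TB of data and a single 10GbE pipe running flat out the whole time, which it won't):

```python
# Back-of-envelope full-restore time over a single network link.
# Assumes the link sustains line rate for the duration (optimistic),
# and that the backup source can feed it that fast.
data_tb = 270
link_gbit = 10

tb_per_sec = (link_gbit / 8) / 1000      # line rate in TB/s
seconds = data_tb / tb_per_sec
print(f"~{seconds / 86400:.1f} days to restore {data_tb} TB over {link_gbit}GbE")
```

Call it two and a half days in the best case - any slower backup medium and you're well into "weeks" territory.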
How long do you think it would take to fsck a filesystem that size?
Sorry if I haven't answered your initial question, but hopefully I've allowed you to think about the scope a bit better.
So, some quick and dirty example scenarios to think about:
1 x open storage pod (or vendor pod) server with all your disks in it, connected with one or more 10GbE or 40GbE interfaces (the monolithic server approach), with ZFS underneath and NFS on top (ugh)
N x Ceph servers with the disks spread around and some investment in networking it all together (the distributed approach), using CephFS, Ceph volumes, or object storage.
regards,
-jason
-----
M: +61 402 489 637
E: jason....@gmail.com