Pod building newbie with questions and sharing some of my learnings so far...


The O.G. X

May 17, 2013, 7:29:37 AM
to opensto...@googlegroups.com
I recently started building a 3.0 Pod as a prototype project for work. It's been very interesting and the more I get into it, the more questions I have. I'm hoping the community here can help me with some of the questions.

First thing I want to share is about the PSU. The PSU listed by Backblaze and Protocase seems to be very hard to source, perhaps because of all the Pod builders out there. We ended up ordering the 850W version of the PSU made by Zippy, but we didn't use it: as we started looking into how to wire the entire system up, we realized we'd be better off with a fully modular PSU. We went with the Seasonic X-850, which, by the way, was cheaper than the Zippy and is 80 Plus Gold rated for efficiency. The nice thing about the Seasonic is that the cables are fully modular AND you can buy the 18-pin and 10-pin connectors to build a custom harness. That's what we did: we built a power wiring harness cut to exact length and specification, so we had no excess wires to deal with. The results were great and made for a very clean build. It also made me wish there were more holes or tie-down points in the chassis to zip-tie wires so they stay neatly tucked away. We were really happy with this decision, although it took some time to build the harness ourselves. I'm happy to share the details if anyone else is attempting the same.

Another thing we're doing concerns the boot drive. The 3.0 Pod now has mounting holes in the boot drive location for 1x 3.5" drive or 2x 2.5" (laptop or SSD) drives. We're actually not using that location for the boot drive; instead we added two CF-to-SATA adapters that sit in the empty PCI slots and hold 16GB or 32GB CompactFlash cards, which serve as the boot drive. In Linux, we mirror the two CF cards for some level of redundancy. This makes it a very simple matter to pull the CF cards out to swap boot drives. Right now we have a couple of sets of CF cards with a variety of OSes (CentOS 5/6, OpenFiler, etc.), and it takes 5 seconds to swap the CF cards and reboot.
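
For anyone who wants to copy the setup, here's a minimal sketch of how the two CF cards can be mirrored with Linux md RAID1. The device names and the run-as-root provisioning-script framing are illustrative assumptions, not our exact procedure:

    # Sketch: mirror the two CF-to-SATA boot devices with Linux md RAID1.
    # /dev/sdb and /dev/sdc are assumed names -- check yours before running anything.
    import subprocess

    CF_DEVICES = ["/dev/sdb", "/dev/sdc"]   # the two CF cards on the CF-to-SATA adapters
    BOOT_MIRROR = "/dev/md0"                # resulting mirrored boot device

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # RAID1 across the two CF cards; metadata 1.0 keeps the md superblock at the
    # end of the device so the bootloader still sees a normal filesystem up front.
    run(["mdadm", "--create", BOOT_MIRROR, "--level=1",
         "--raid-devices=2", "--metadata=1.0"] + CF_DEVICES)

    # Record the array so it assembles automatically at boot (CentOS path shown).
    scan = subprocess.run(["mdadm", "--detail", "--scan"],
                          capture_output=True, text=True, check=True)
    with open("/etc/mdadm.conf", "a") as conf:
        conf.write(scan.stdout)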

Now, that means the holes for the boot drive aren't being used, which gave us the idea of putting a pair of SSDs in that location as additional high-IOPS storage. We added two 256GB Samsung 840 Pro SSDs. They're currently connected to the motherboard's SATA controller, which is SATA-II only and not really fast enough for the SSDs.

So, the last part really got us thinking about alternative options for SATA connectivity, which leads to some of my questions:

1. Especially for the SSDs, we realize we need SATA-3 controllers that can really handle the bandwidth. For example, a SATA-3 controller sitting on a single PCI-E 2.0 lane wouldn't work, since one lane tops out around 500MB/s and can barely feed a single fast SSD. This could be an add-on PCI-E card or onboard SATA-3 on the motherboard. So this got us thinking: what are some good SATA-3 controllers with a faster PCI-E connection (at least 2 or 4 lanes of PCI-E 2.0, or better), or a motherboard where SATA-3 is implemented correctly (connected to the PCI-E bus in a way that can sustain full SATA-3 speeds)? Such a controller could also open up the pipe to the backplanes, but if it's connected to the backplanes it must also support the port multipliers. Anyone have any suggestions on this? (Rough bandwidth numbers are in the first sketch after question 3.)

2. We plan to make the storage available over the network. The current X9SCL-F mobo has 2x 1Gbps connections, which even if fully utilized will only get us about 120MB/s on each 1Gbps link. We're exploring the idea of a mobo with 10Gbps copper links; with some network stack tuning and jumbo frames, we're hoping that can give us 1GB/s speeds. Of course, this points back to #1, since the 45 HDDs need to be able to deliver that level of bandwidth or this is all pointless. Anyway, has anyone tried other motherboards with 10Gbps connections? (The second sketch after question 3 has the back-of-the-envelope numbers.)

3. One thing I didn't mention yet: the Backblaze hardware spec calls for 12V fans with no speed control, so they run at full speed all the time. We experimented with PWM fans for the 3 rear fans that blow over the motherboard, and that seems to be OK. It got us thinking about using PWM fans for the 3 front fans too, but we're not sure that would provide sufficient cooling, since the PWM signal would come from the motherboard (with the fans powered directly from the PSU). Any thoughts on this topic? (A rough temperature-to-PWM control loop is sketched below as well.)
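
To put rough numbers on question 1 (these are theoretical per-lane and per-port ceilings, so real throughput will be somewhat lower):

    # Rough ceilings for question 1: how many PCIe 2.0 lanes a SATA-3 controller needs.
    PCIE2_LANE_MBS = 500      # PCIe 2.0: ~500 MB/s per lane after 8b/10b encoding
    SATA3_PORT_MBS = 600      # SATA 6Gb/s: ~600 MB/s per port after 8b/10b encoding
    SSD_SEQ_MBS = 500         # ballpark sequential rate of an 840 Pro class SSD

    for lanes in (1, 2, 4, 8):
        slot = lanes * PCIE2_LANE_MBS
        print(f"x{lanes} PCIe 2.0 slot: ~{slot} MB/s total -> "
              f"roughly {slot // SSD_SEQ_MBS} SSD(s) worth of bandwidth")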
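
And the back-of-the-envelope arithmetic behind question 2 (wire-speed maximums, before TCP and protocol overhead):

    # Back-of-the-envelope numbers for question 2 (wire speed, before protocol overhead).
    GBE_MBS = 1000 / 8        # 1 Gb/s  -> ~125 MB/s (about 110-120 MB/s in practice)
    TEN_GBE_MBS = 10000 / 8   # 10 Gb/s -> ~1250 MB/s
    N_DRIVES = 45

    print(f"2 x 1GbE, both maxed: ~{2 * GBE_MBS:.0f} MB/s")
    print(f"1 x 10GbE:            ~{TEN_GBE_MBS:.0f} MB/s")
    # Spread across all 45 drives, saturating 10GbE only needs ~28 MB/s per drive,
    # so the controller / port-multiplier path is the bottleneck, not the disks.
    print(f"per-drive rate to fill 10GbE: ~{TEN_GBE_MBS / N_DRIVES:.0f} MB/s")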
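
On question 3, the bare-bones version of a temperature-to-PWM loop would look something like the sketch below. The sysfs paths are placeholder assumptions that vary by motherboard and sensor driver (and the PWM channel has to be switched to manual mode via pwm1_enable first); in practice, the stock lm-sensors fancontrol script does the same job:

    # Sketch: drive a motherboard PWM fan header from a CPU temperature reading.
    # The sysfs paths are placeholders -- they depend on the board's sensor driver.
    import time

    TEMP_INPUT = "/sys/class/hwmon/hwmon0/temp1_input"  # CPU temp in millidegrees C (assumed)
    PWM_OUTPUT = "/sys/class/hwmon/hwmon0/pwm1"         # 0-255 duty cycle (assumed)

    def read_temp_c():
        with open(TEMP_INPUT) as f:
            return int(f.read().strip()) / 1000.0

    def set_pwm(duty_0_to_255):
        with open(PWM_OUTPUT, "w") as f:
            f.write(str(int(max(0, min(255, duty_0_to_255)))))

    while True:
        temp = read_temp_c()
        # Simple linear ramp: never below 40% duty, full speed by 65 C.
        fraction = min(1.0, max(0.4, (temp - 35.0) / 30.0))
        set_pwm(255 * fraction)
        time.sleep(5)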

TIA for any tips/suggestions.
-James

Francis Au

May 17, 2013, 7:45:52 AM
to opensto...@googlegroups.com

Take a look at the Open Compute Project.


patt...@gmail.com

May 18, 2013, 7:05:34 PM
to opensto...@googlegroups.com
Unless it's going to sit under your desk, running the fans at full tilt is not a problem. The Backblaze design is made to be as cheap-shit as possible, frankly to the point of absurdity IMO. If you want to get real performance, you have to spend a few dollars more and get non-shit components. The only *right* answer to dense storage is to use SAS backplanes and SAS controllers. You can still use SATA drives in the slots if you like.

IMO you're FAR better off going with the Supermicro 36-drive server case + 48-drive JBOD unit(s). You get a proper motherboard with plenty of bandwidth to run 10Gb Ethernet or 40Gb InfiniBand and a handful of professional (e.g. LSI) disk controllers, and the power supplies aren't a total farce either. You don't need hardware RAID; these days, with Linux md-raid plus dm-cache (auto-tiering) or bcache, you'll have one heck of a good-performing unit that gets surprisingly close to the EMC/NetApp/3PARs of the world. The one weakness of the SM case design is not enough airflow. There are MUCH better professional cases out there, but they'll cost you $8000 each for a 60+ drive unit. You can use HP's MDS600 ($1500 off eBay), but it only does 3Gb SAS, which in this day and age is probably sub-optimal, and they can be picky about controllers - LSI works, but otherwise stick to HP. You could also get adventurous with a drill and swiss-cheese the SAS backplanes SM uses to help increase airflow; it's a simple single-layer board with decently concentrated traces...
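
To make the md-raid half of that concrete, here's a minimal sketch of carving one RAID6 set out of a shelf of drives. The device names and chunk size are illustrative assumptions, not a tuned recipe:

    # Sketch: one RAID6 array across 15 drives of a shelf with Linux md.
    # Device names and chunk size are illustrative, not a tuned recommendation.
    import subprocess

    drives = [f"/dev/sd{chr(c)}" for c in range(ord("b"), ord("b") + 15)]  # /dev/sdb .. /dev/sdp

    subprocess.run(
        ["mdadm", "--create", "/dev/md/shelf0",
         "--level=6", "--chunk=256",
         f"--raid-devices={len(drives)}"] + drives,
        check=True,
    )
    # Put XFS/ext4 straight on /dev/md/shelf0, or hand it to bcache/dm-cache
    # as the backing device and let SSDs absorb the random I/O.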

patt...@gmail.com

May 18, 2013, 7:14:39 PM
to opensto...@googlegroups.com
Hmm, it looks like HP didn't abandon the MDS600 after all. Hopefully the new models that run 6Gb SAS use LSI as the chipset supplier instead of the no-name they used in the first generation. Look for part number QQ695A. But I'm sure it won't be all that cheap, even on the secondary market.

patt...@gmail.com

May 19, 2013, 10:08:30 AM
to opensto...@googlegroups.com
Since the I/O module is self-contained, I would bet you could simply replace the 3G I/O module with the QQ696A, which is the 6G version. HP tends to be pretty good about modularizing their equipment. It'll still cost you $1500 for the new modules, though, since you'll have to buy them new.

Ouroboros

May 19, 2013, 7:41:15 PM
to opensto...@googlegroups.com
There's a new SuperMicro chassis that looks like the 36-disk version but packs 72 drives by double-packing each of the new extended-length disk trays.

Currently they are only listing the single-path SAS version, but if their typical versioning holds, a proper dual-path version should be out soon (the classic difference between the E16 and E26 versions). Couple that with a SuperMicro motherboard with dual-path SAS output and you can get a ghetto-rig monster going somewhat cheap-ish through continuous daisy chaining, with a SuperMicro warranty, for whatever that's worth.
Higher performance probably requires a good LSI HBA/RAID card with at least 6 ports, 2 to every major backplane segment (I don't know how the dual-path version will work out, though; it might end up needing 12 ports if you want to avoid daisy chaining and still get high performance).


On Sun, May 19, 2013 at 11:08 PM, <patt...@gmail.com> wrote:
Since the I/O module is self-contained I would bet you could simply replace the 3G I/O module with QQ696A  which is the 6G version. HP tends to be pretty good about modularizing their equipment. It'll still cost you $1500 for the new modules though since you'll have to buy them new.


John White

May 20, 2013, 12:36:17 PM
to opensto...@googlegroups.com
HP has an off-the-shelf Gen 8 ProLiant, the SL4540, which packs 60 LFF drives in 4.3U on a SAS backplane.

The O.G. X

May 24, 2013, 5:06:18 AM
to opensto...@googlegroups.com
OK, I guess this is totally unexpected... so maybe I have the wrong expectations. I thought this group was about building open-design storage servers/pods? If I wanted to buy an off-the-shelf solution, I wouldn't be here... I haven't seen the price on the HP gear, but I don't imagine it's cheap? Plus, do they accept generic OEM hard drives, or will they only take HP drives with HP firmware? I recall the latter being true way back when I worked with HP gear many years ago... not sure if that's still the case.

Is no one here interested in building their own storage systems? Not what I was expecting...

The O.G. X

May 24, 2013, 5:36:31 AM
to opensto...@googlegroups.com


On Saturday, May 18, 2013 4:05:34 PM UTC-7, patt...@gmail.com wrote:
You don't need hardware RAID; these days, with Linux md-raid plus dm-cache (auto-tiering) or bcache, you'll have one heck of a good-performing unit that gets surprisingly close to the EMC/NetApp/3PARs of the world.

Thanks to your comments, we're now looking into bcache, dm-cache, flashcache, and ZFS caching on the SSDs.
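
In case it helps anyone else, here's roughly what the bcache route would look like as a provisioning sketch. The device names are placeholder assumptions and this isn't a tested recipe for the Pod; it needs a bcache-enabled kernel plus bcache-tools:

    # Sketch: put one of the 840 Pros in front of the big spinning-disk array with bcache.
    # Device names are placeholders; requires a bcache-enabled kernel and bcache-tools.
    import subprocess

    BACKING = "/dev/md/shelf0"   # the large md array of spinning disks (assumed name)
    CACHE = "/dev/sdb"           # one Samsung 840 Pro SSD (assumed name)

    # Format backing and cache devices in one step; make-bcache attaches them together.
    subprocess.run(["make-bcache", "-B", BACKING, "-C", CACHE], check=True)

    # Register both devices so /dev/bcache0 shows up (udev often does this already).
    for dev in (BACKING, CACHE):
        try:
            with open("/sys/fs/bcache/register", "w") as reg:
                reg.write(dev)
        except OSError:
            pass  # already registered by udev

    # Switch to writeback caching for the random-write IOPS win (default is writethrough).
    with open("/sys/block/bcache0/bcache/cache_mode", "w") as mode:
        mode.write("writeback")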

Lucas Bañuelos

May 30, 2013, 9:07:56 AM
to opensto...@googlegroups.com
Hello, I am interested in cooperating with you. How can I help?

Regards.

Lucas


2013/5/24 The O.G. X <theorig...@gmail.com>


On Saturday, May 18, 2013 4:05:34 PM UTC-7, patt...@gmail.com wrote:
You don't need hardware RAID; these days, with Linux md-raid plus dm-cache (auto-tiering) or bcache, you'll have one heck of a good-performing unit that gets surprisingly close to the EMC/NetApp/3PARs of the world.

Thanks to your comments, we're now looking into bcache, dm-cache, flashcache, and ZFS caching on the SSDs.




--
Lucas Bañuelos