Hi,
So, it turns out that IB SDP is deprecated; see this post:
http://comments.gmane.org/gmane.network.openfabrics.enterprise/5371
Another couple of interesting posts, specifically about IPoIB, SDP, and DRBD:
From Florian Haas:
--snip--
We serve customers that use both, and in general recent distributions
support both OFED (for IB) and 10 GbE quite well. If your main pain
point is latency, you'll want to go with IB; if it's throughput,
you're essentially free to pick and choose -- although of course _not_
having to install any of the OFED libraries may be a plus for 10 GbE.
Cost of switches is usually not much of a factor in the decision, as
most people tend to wire their DRBD clusters back-to-back, but if
you're planning on a switched topology you may have to factor that in,
also.
Both IB and 10 GbE do require a fair amount of kernel and DRBD tuning
so that DRBD can actually max them out. Don't expect to be able to use
your distro's standard set of sysctls, and default DRBD config, and
then everything magically goes a million times faster.
Generally speaking, also don't expect too much of a performance boost
when using SDP (Sockets Direct Protocol) over IB. In general, we've
found that the performance effect in comparison to IPoIB is negligible
or even negative, but that's fine -- chances are you'll likely max out
your underlying storage hardware with IPoIB anyhow. :) SDP is also
currently suffering from a module refcount issue that is fixed in git
(http://git.drbd.org/gitweb.cgi?p=drbd-8.3.git;a=commit;h=c2c2067c661c7cba213b0301e2b39f17c1419e51)
but as yet unreleased, so that's a bit of an SDP show-stopper too...
but as pointed out, IPoIB does do the trick nicely.
--snip--
http://lists.linbit.com/pipermail/drbd-user/2012-April/018331.html
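To give an idea of what that tuning looks like in practice: the usual knobs are the buffer and in-flight request limits in the resource's net section (and the activity log size in the syncer section). Here's a rough sketch in DRBD 8.3 config syntax -- the values are just illustrative starting points I'd try, not recommendations, so benchmark against your own hardware:

```
resource r0 {
  net {
    sndbuf-size      2M;    # larger send buffer for fat pipes
    max-buffers      8000;  # more receive/DMA buffers on the peer
    max-epoch-size   8000;  # allow bigger write bursts between barriers
    unplug-watermark 16;
  }
  syncer {
    al-extents 3389;        # bigger activity log for random-write workloads
  }
  # ... the usual on/disk/address sections go here ...
}
```

The point from the quote stands either way: the distro-default sysctls plus a default drbd.conf will not saturate IB or 10 GbE on their own.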
From a Linbit employee (Kavan Smith):
--snip--
It all depends on what you are looking to accomplish and what hardware
you are comparing.
Newer Infiniband QDR cards are rated at 40 Gbit/s, which is a bit
quicker than the 10GbE alternative.
Today, IPoIB provides great performance, but would be better with native
RDMA support. 10GbE is a great solution, but you really miss out on low
latency high bandwidth capabilities that Infiniband brings to the table.
Right now, IPoIB is the best solution with DRBD if you want to exceed
current 10GbE benchmarks.
We have a tech guide that will be announced this week, but since you
asked so kindly, please check this out at your leisure:
http://www.linbit.com/en/education/tech-guides/infiniband-and-drbd-technical-guide/
Also...stay tuned... :)
In regards to how DRBD is going to support this in the future:
LINBIT is working to develop native RDMA support for Infiniband (this
will make DRBD on Infiniband much much quicker), but we still need
assistance from the community to make this feature-set possible.
HA and DRBD experts, we could always use the help! Email
feedback at linbit.com if you would like to assist in developing or
sponsoring this feature for the DRBD Community. Don't just claim you're
an expert, show it! :)
--snip--
http://lists.linbit.com/pipermail/drbd-user/2012-May/018335.html
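One practical note on getting IPoIB anywhere near those numbers: the default datagram mode caps the MTU at around 2044 bytes, so you generally want connected mode and a large MTU on the IPoIB interface. Something along these lines (the interface name ib0 and the addresses are just examples; your distro will have its own way to make this persistent):

```
# load the IPoIB driver if it isn't loaded already
modprobe ib_ipoib
# connected mode permits a much larger MTU than datagram mode
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520
ip addr add 10.10.10.1/24 dev ib0
ip link set ib0 up
```

DRBD then just points at the IPoIB address in its resource config, same as it would for any Ethernet interface.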
So, it sounds like IPoIB can work quite well, especially if tuned
correctly. Yes, it's not RDMA, but if your application doesn't support
RDMA, what else are you going to do? Initially there was no SRP
support for VMware ESXi 5.x, but it was eventually added back:
http://communities.vmware.com/thread/393784?start=30&tstart=0
Perhaps SRP support in Windows Server 2012 will be added at some point
in the future.
--Marc