Good. This should be your exploratory phase. I have this preference as well, but to my knowledge this is not the standard way to use storage at all. IMO, shared SAS on commodity hardware with open source frameworks is for one thing: a clustered filesystem.
But let's not lose sight of what we are trying to achieve here; we have a need to satisfy: we hope to build an HA SAN using two controllers (servers) running open source software. There's a reason not many projects like this exist - they are fucking hard to do - which is why the market is served primarily by proprietary products with very expensive price tags.
If you want something easier, more standard, and better organized, I'd highly recommend an ESOS Commander license. It is the only way to get a "standard" HA ESOS deployment without all the fuss.
RAID cards... I don't see how they could possibly work in an HA deployment unless they were designed to do so - LSI Syncro (now defunct) had this - but the crux of the matter is: how can one RAID card coordinate a parity array without stepping on the opposite card's toes? If you can answer that, please let me know.
I use mdadm with LVM on top; it's lightweight compared to ZFS in terms of stopping and starting an array, but they are functionally similar. I recently deployed ZFS at home and I'm so impressed by it that I would love to experiment there - I just can't anymore, since I'm already locked into my mdadm/LVM setup.
I'd love it if someone conclusively benchmarked both and posted results.
In either case, coming back to our goals here and how to satisfy them... with a few exceptions, you are trying to manage the sharing of storage types that are fundamentally unshareable. In all my research the limiting factor seems to be parity - neither mdadm nor ZFS can support a shared parity array. Is it a fundamental incompatibility? Who knows; maybe.
mdadm does support shared RAID10 arrays, and on top of that you can layer lvmlockd VGs, which allow either exclusively activated LVs (with the majority of LVM features intact) or shared-activated LVs (with almost no features intact). What lvmlockd does get you is the ability to take your mdraid10 array, chop it into whatever LVs you want, and have each server activate LVs exclusively as they wish.
mdraid5 - well, there you can only activate the array on one server at a time, and by extension the VG you create on that array can only have LVs activated on the server where the array is currently active. This is the same limitation present in ZFS (except for ZFS it is ALWAYS the case, no matter what vdevs you use in the pool). I am in this case equating a VG to a zpool.
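As a rough sketch of the mdraid10 + lvmlockd approach, here is roughly what the setup looks like on the command line. Device, VG, and LV names are hypothetical, and this assumes dlm and lvmlockd are already running on both nodes (usually as cluster resources):

```shell
# On one node: build a shared RAID10 array from the shared SAS disks
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Create a shared VG on it (requires lvmlockd + dlm on all nodes)
vgcreate --shared vg_shared /dev/md0

# On every node that will use the VG, start its lockspace
vgchange --lockstart vg_shared

# Carve out LVs; activate each one exclusively on whichever node needs it
lvcreate -L 100G -n lun1 vg_shared
lvchange -aey vg_shared/lun1   # -aey = activate exclusively on this node
```

With the lockspace started on both servers, each one can exclusively activate whichever LVs it should be serving, which is exactly the "chop it into LVs" flexibility described above.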
So, if you want a simple setup, make one (or multiple) md arrays out of your disks, create a VG on each array, and slice out LVs as you see fit! In your cluster config you would basically have:
-start md array
-activate VG (on the same node the array is active on)
-activate LVs (on the same node the VG is active on, unless you auto-activate all LVs when the VG activates)
During a failover you work backwards:
-deactivate LVs
-deactivate VG
-stop md array
-start md array on the new server
-activate VG on the new server
-activate LVs
This entire process, under most circumstances, happens fast enough that clients connected to targets don't freak out.
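The start/stop ordering above maps naturally onto a Pacemaker resource group, since groups start members in order and stop them in reverse. A hypothetical sketch using pcs (resource names, device paths, and the VG name are illustrative; the agents `ocf:heartbeat:Raid1` and `ocf:heartbeat:LVM-activate` come from the standard resource-agents package):

```shell
# md array managed by the Raid1 agent (it handles md arrays generally)
pcs resource create md_array ocf:heartbeat:Raid1 \
    raidconf=/etc/mdadm.conf raiddev=/dev/md0 --group grp_storage

# VG activation on the same node, in exclusive mode
pcs resource create vg_act ocf:heartbeat:LVM-activate \
    vgname=vg_shared vg_access_mode=system_id activation_mode=exclusive \
    --group grp_storage

# On failover, Pacemaker stops the group in reverse (VG, then array)
# on the old node, then starts it in order on the new node - the exact
# sequence listed above.
```

Whether you use `system_id` or `lvmlockd` for `vg_access_mode` depends on which locking scheme you settled on; treat the parameter values here as a starting point, not gospel.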
But hell, maybe instead of doing this, cut half your disks into one array and the other half into a second. Repeat the VG steps. In the cluster, make server1 active for array1 and standby for array2, and server2 active for array2 and standby for array1.
I really recommend looking at how quickly the ZFS OCF agent can fail over a pool, though. I'm kind of starting to fall in love with it, and it simplifies things in some ways, because all you have to do to fail over is export the pool on one node and import it on the other.
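For a sense of how little is involved, this is the manual equivalent of what the agent does during failover (the pool name "tank" is hypothetical):

```shell
# On the node giving up the pool
zpool export tank

# On the node taking it over (-f forces import if the export
# didn't complete cleanly, e.g. the old node died)
zpool import -f tank
```

Under Pacemaker you would wrap this in the ZFS resource agent rather than running it by hand, but the two-step export/import is the whole failover story, compared with the LV/VG/array teardown sequence above.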
Herein lies the problem. No, the target/LUN mappings are not synced; you have to configure it all manually on each server. There is a silver lining to this, though. The initial configuration is tricky, because you need complementary but not completely identical configurations on each host - this means you can't simply sync the .conf and expect it to work. Once the initial config is done, however, and your ALUA groups are configured correctly, adding additional LUNs is easy. I won't get into the details of how and why; you'll just have to take my word for it.
Part of what makes it easy is, again, Marc's incredible work on his OCF agent... All state information is handled by the clustering software. In your conf, all devices start inactive and all ALUA target groups start offline - this is your base .conf. When Pacemaker starts up with a properly configured OCF agent, IT will handle activating LUNs and marking target ports active or standby for you, based on the cluster configuration and the current state of the cluster.
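To make the "everything starts inactive/offline" idea concrete, a base scst.conf might look something like the fragment below. This is only an illustrative sketch - the device name, LV path, and IQNs are made up, and you should check the exact attribute names against the SCST docs for your version:

```
# Hypothetical base scst.conf: device inactive, both ALUA target
# groups offline; the OCF agent flips these states at runtime.
HANDLER vdisk_blockio {
    DEVICE lun1 {
        filename /dev/vg_shared/lun1
        active 0
    }
}

DEVICE_GROUP esos {
    DEVICE lun1

    TARGET_GROUP local {
        group_id 1
        state offline
        TARGET iqn.2024-01.example:server1
    }

    TARGET_GROUP remote {
        group_id 2
        state offline
        TARGET iqn.2024-01.example:server2
    }
}
```

The "complementary but not identical" part mentioned above comes in here: each host lists the same DEVICE_GROUP, but which TARGET_GROUP is "local" versus "remote" is mirrored between the two servers.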
If this hasn't frightened you off, then we can discuss more. Before we proceed, I suggest that if you have VMware or Hyper-V, you create two ESOS VMs with 4 or 5 x 10 GB shared virtual disks (in addition to the OS disk). This will let you get a better feel for how things work and whether or not it's a solution you are ready to fuck around with.
Otherwise, there's always ESOS Commander, which automates and GUI-ifies all this insane madness for you.
Andrei