My environment uses free ESXi 5.1 with iSCSI, and I don't mind using a VM-based measurement, so I use the VMware IO Analyzer OVA appliance, which is basically IOmeter packaged up nicely. I had originally wanted to build a proper pod for the environment, but the stars didn't line up to make it happen.
I am now using a single Supermicro 847 chassis loaded with Nexenta (an OpenSolaris variant) running ZFS, exporting iSCSI over 4x1Gbit, with SSD ZIL/L2ARC and a single pool of 12 disks (two RAIDZ2 vdevs). Nexenta has internal plugin packages for IOmeter and bonnie++, but since this is network storage I would rather see the performance at the VM. The VMware analyzer appliance is clocking a max of 60K read/write IOPS, though to be fair, VAAI acceleration along with iSCSI ATS command support is helping performance. The new 847D chassis (OEM only, unfortunately, and single-path SAS at that) that holds 72 disks in 4U seems to be a monster, but I bet it runs really hot.
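For reference, a 12-disk pool laid out as two RAIDZ2 vdevs could be sketched like this (pool name `tank` and all device names are hypothetical, not my actual layout):

```shell
# Sketch only: two 6-disk RAIDZ2 vdevs in one pool.
# Each vdev can lose any 2 disks without data loss.
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0

# Verify the resulting layout.
zpool status tank
```

Writes stripe across the two vdevs, so two smaller RAIDZ2 vdevs give better IOPS than one wide 12-disk RAIDZ2 at the cost of two extra parity disks.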
While ZFS loves main memory, it never hurts to have good internal capacitor-backed SSDs for the ZIL if you are on a ZFS version high enough to allow removing a log device (use enterprise SATA/SAS SSDs, and even then read the spec sheet for power-loss protection). A dedicated ZIL speeds up and smooths out the HDD writes, and L2ARC is usually very helpful provided its mapping info fits fully in main memory (if it can't, performance can drop). As a general rule, you shouldn't combine ZIL and L2ARC on the same SSD via partitions; dedicate a whole device to each. If you were going to partition, keep each SSD to multiples of the same usage type: say, two SSDs with multiple ZIL partitions on one device and multiple L2ARC partitions on the other. Using whole devices typically means needing at least two SSDs per ZFS pool, though you can specialize the SSD types: smaller/faster SSDs for ZIL (typically in the 40GB or less range) and bigger/slower, non-enterprise SSDs for L2ARC (typically in the 200GB+ range).
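The whole-device approach above could look something like this (again, `tank` and the device names are placeholders for illustration):

```shell
# Sketch only: dedicate whole SSDs rather than partitions.
# Small, fast, capacitor-backed SSDs for the ZIL, mirrored so a
# log-device failure can't lose in-flight synchronous writes.
zpool add tank log mirror c2t0d0 c2t1d0

# A bigger, cheaper SSD as L2ARC. Cache devices hold no unique data,
# so they need no redundancy and can be removed at any time.
zpool add tank cache c2t2d0

# On a recent enough pool version the log can also be removed later:
#   zpool remove tank mirror-1
```

The mirror on the log vdev is my hedge; a single log device works too, but on older pool versions losing it can be painful, which is exactly why the "removable ZIL" version check matters.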
I can see the merit of AoE export to the hypervisors, with the hypervisors RAID-mirroring the AoE feeds, as an alternative to trying to run HA clustering software on the pods themselves. ESXi doesn't natively have the capability to RAID-mirror network storage at the hypervisor, and there is no generic AoE driver for it (Coraid has an AoE driver, but it is tied to their HBA network cards).
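On a Linux hypervisor (KVM/Xen rather than ESXi) the idea is straightforward with the in-kernel `aoe` initiator and mdadm; a sketch, assuming two pods exporting equal-sized AoE targets as shelf.slot addresses `e1.1` and `e2.1` (those addresses are hypothetical):

```shell
# Sketch only: mirror two AoE feeds at the hypervisor.
modprobe aoe          # load the generic in-kernel AoE initiator
aoe-discover          # from aoetools: scan the LAN for AoE targets

# md RAID1 across the two pods; either pod can then fail or be
# serviced without taking down the VMs using /dev/md0.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/etherd/e1.1 /dev/etherd/e2.1
```

This pushes the HA problem out of the pods and into each hypervisor, which is exactly the appeal: the pods stay simple, and there is no cluster software to keep in sync between them.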