Performance tuning for ESXi iSCSI initiator?


Dan Swartzendruber

Jul 24, 2015, 2:09:34 PM
to esos-users

So I have 8 7200rpm 1TB SAS NL drives in RAID10 on an Areca 1883 in write-back mode.  ESOS exports this LUN to ESXi over 10GbE.  A Windows guest with storage on that datastore gets about 350MB/sec sequential read and write in CrystalDiskMark.  I also have a Windows Server 2008 R2 guest using the exact same 10GbE hardware via PCI passthrough, talking to ESOS with the Microsoft iSCSI initiator over that passed-in device.  It gets 900MB/sec read and 800MB/sec write (I assume the read number has caching artifacts?), with the write taking advantage of write-back mode.  But here's the thing: shouldn't ESXi be getting the same benefits?  So I looked at the default ESXi software iSCSI initiator parameters, and they were insanely low for 10GbE (R2T = 1, 256KB for the send and receive buffers).  I found a tuning guide for the Oracle ZFS appliance that recommended R2T = 8 and 16MB-1 for the others, so I tried that.  Re-running the first test, writes went to 500MB/sec.  Nice improvement, but it could be better, and there was no real gain in read performance.  Looking at the SCST parameters, I see:

                MaxBurstLength 1048576
                MaxOutstandingR2T 32
                MaxRecvDataSegmentLength 1048576
                MaxXmitDataSegmentLength 1048576

So that's 1MB each, correct?  It's hard to imagine anything on the ESOS end being at fault, since the exact same code talks to both ESXi and Server 2008 R2.  I'm wondering: both Server 2008 R2 and ESXi are using the same 10GbE adapter, but could Windows have some tuning parameter(s) set for it that ESXi doesn't?
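
For anyone who wants to try the ESXi-side change, it was roughly the following from the ESXi shell.  The adapter name (vmhba33) is a placeholder -- find yours with `esxcli iscsi adapter list` -- and the exact parameter key names can vary by ESXi build, so check the `param get` output first:

                # list current values for the software iSCSI adapter
                esxcli iscsi adapter param get -A vmhba33

                # values from the Oracle ZFS tuning guide: R2T = 8, 16MB-1 buffers
                esxcli iscsi adapter param set -A vmhba33 -k MaxOutstandingR2T -v 8
                esxcli iscsi adapter param set -A vmhba33 -k FirstBurstLength -v 16777215
                esxcli iscsi adapter param set -A vmhba33 -k MaxBurstLength -v 16777215
                esxcli iscsi adapter param set -A vmhba33 -k MaxRecvDataSegment -v 16777215

These take effect for new sessions, so a rescan (or reboot) of the software iSCSI adapter is needed afterward.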

Dan Swartzendruber

Jul 24, 2015, 2:51:15 PM
to esos-users, dsw...@druber.com

Another possible variable here: the Windows guest with the inferior performance is dealing with a VMDK on a VMFS5 partition.  Maybe an alignment issue?

Marc Smith

Jul 24, 2015, 4:40:06 PM
to esos-...@googlegroups.com, dsw...@druber.com
Just to confirm what I think you're saying:
- Are you exporting two different LUNs (SCST devices mapped as LUNs), or the same SCST device (LUN) to both initiators?
- If the LUNs are different SCST devices, are they the same type (e.g., both vdisk_fileio pointing to similar block devices, or both vdisk_fileio pointing to virtual disk files)?

If both cases use the same back-end storage + SCST configuration, then you're saying the only difference is the ESXi iSCSI initiator, on which you put a VMFS volume and then create a VMDK file, vs. the Windows iSCSI initiator inside the guest OS, getting an iSCSI volume that way and putting NTFS on it?
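
Just to illustrate the distinction I'm asking about, in scst.conf terms it would look roughly like this (device names, paths, and the IQN are all made up):

                # vdisk_fileio backed by a raw block device on the RAID10 volume
                HANDLER vdisk_fileio {
                        DEVICE disk_blk {
                                filename /dev/sdb
                        }
                        # vdisk_fileio backed by a virtual disk file on a filesystem
                        DEVICE disk_file {
                                filename /mnt/vol/lun0.img
                        }
                }

                TARGET_DRIVER iscsi {
                        TARGET iqn.2015-07.com.example:tgt0 {
                                LUN 0 disk_blk
                                LUN 1 disk_file
                        }
                }

If one initiator is hitting a device like disk_blk and the other a device like disk_file, that alone could explain some of the difference.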

If that is what you're saying: I believe the SCST documentation has a blurb about this exact scenario, and it identifies using an iSCSI initiator directly inside the guest OS as always the top performer... yours differs by a large margin, but there is a decent amount of overhead in the first scenario (VMFS + VMDK).  I wonder whether, if you ran the same benchmark with multiple test VMs on top of the VMFS volume (not direct from inside the guest), you could squeeze more performance out overall (not necessarily individually).  In any performance benchmarks I've seen for VMware ESXi, they never use just one VM to push numbers; they run a number of test instances and measure the performance as a whole.

That being said, maybe the tuning you're looking for isn't in storage parameters per se, but in the resource scheduling done in the hypervisor -- maybe it's intentionally throttling the one VM so there isn't a chance of contention?  Just guesses; I don't know anything for sure.


--Marc

On Fri, Jul 24, 2015 at 2:51 PM, Dan Swartzendruber <dsw...@druber.com> wrote:

Another possible variable here: the Windows guest with the inferior performance is dealing with a VMDK on a VMFS5 partition.  Maybe an alignment issue?


Dan Swartzendruber

Jul 24, 2015, 4:45:26 PM
to esos-users, dsw...@druber.com

Sorry for being unclear.  The Windows 2008 R2 LUN is a different file from the vSphere LUN, but they are on the same RAID10 volume.  Thinking more about it, I seem to recall comments elsewhere that vSphere is designed to make a bunch of guests run fast in aggregate, not one guest run fast.  My concern wasn't about a specific guest (which would be amenable to tweaking), but all of them in general (to improve the 'experience' lol...)  I guess it's not really a concern; I was hoping for low-hanging fruit :)  I have 3-4 100GB SSDs, which I might throw into a single RAID0 volume on the same controller and play around with EnhanceIO, which I notice you support by default :)
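
If I go that route, my understanding is the EnhanceIO setup would look something like this (device names are placeholders for my RAID10 volume and the SSD RAID0; double-check the flags against the eio_cli help on your build):

                # cache /dev/sdb (RAID10 source) with /dev/sdc (SSD RAID0),
                # write-back mode, LRU replacement policy
                eio_cli create -d /dev/sdb -s /dev/sdc -m wb -p lru -c ssd_cache

                # verify the cache came up
                eio_cli info

Write-back mode on a cache device would obviously want the SSD volume to be reliable, since dirty blocks live only on the cache until flushed.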