ESOS SCST Target support on ESXi 6.5 or above - Do not use the nv_cache=1 vdisk_blockio option!


최재훈

May 7, 2018, 8:37:48 AM
to esos-users


Hi! again!


I installed the new ESXi 6.7 last week.

But ESXi 6.7 refuses to create a VMFS6 datastore on a 128KB thin zvol on ZoL.



ESXi 6.7 does allow creating a VMFS6 datastore on a 4KB thin zvol on ZoL.
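
For reference, those zvols were created on ZoL roughly like this (a minimal sketch - the 1T size is just a placeholder, and you would create one or the other, not both):

# sparse (thin) zvol with 128K volblocksize - ESXi 6.7 refuses to create VMFS6 on this
zfs create -s -V 1T -o volblocksize=128K Storage/ESOS01_SRP_LUN11

# sparse zvol with 4K volblocksize - VMFS6 creation succeeds
zfs create -s -V 1T -o volblocksize=4K Storage/ESOS01_SRP_LUN11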

ESXi 6.5 and above support in-guest unmap, but it causes a huge performance impact on my ESOS target whenever in-guest unmap is active.

* e.g. whenever disk defragmentation runs on Windows Server 2012 R2 or above, I see very long unmap durations.



* esxtop - v - f - L shows the unmap command rates on the ESXi 6.7 console (UMP counters)

Some weeks later I found a configuration issue and performance improved - but it is still lower than with the 1MB or 128KB thin zvols I had previously created on ESXi 6.0.

The combination of the vdisk_blockio option nv_cache=1 and ZFS on Linux shows a terrible performance impact for ESOS users.


scstadmin -open_dev SRP_LUN11 -handler vdisk_blockio -attributes filename=/dev/Storage/ESOS01_SRP_LUN11,thin_provisioned,nv_cache=1


After removing the nv_cache=1 option on the 4K zvol everything works well, but ESXi 6.5 and above no longer support VMFS6 on the 128KB zvols I used for performance.

When I used a 128KB zvol, my SRP target showed 6GB/s or more throughput.
But when I used a 4KB zvol, my SRP target showed only 2.xGB/s.
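
You can check whether an existing device was created with NV cache by reading its SCST sysfs attribute (a quick sketch, assuming the standard SCST sysfs layout):

# prints 1 when nv_cache is enabled, 0 when it is not
cat /sys/kernel/scst_tgt/devices/SRP_LUN11/nv_cache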


If the ESOS SCST target could export zvols on ZoL to ESXi initiators in 512e or 512n sector format (as FreeNAS or OmniOS do), that would be very good for ESOS users.

Is there any solution?

Regards,
Jaehoon Choi

Marc Smith

May 7, 2018, 4:15:10 PM
to esos-...@googlegroups.com
I believe this is related to the conversation we had some time back regarding logical block size vs. physical block size and how SCST handles those settings. I'd recommend working on a patch for upstream SCST, or talking to the SCST developers about this functionality... allowing a user-settable physical block size would be the solution. But I'm not sure of the negative implications of allowing this in SCST (if any). IIRC the other project you referenced that had this ability was LIO.

--Marc





최재훈

May 7, 2018, 10:01:58 PM
to esos-users
I think this problem comes from SCST's VAAI unmap behavior with ZoL, not from the block size negotiation between the ESXi initiator and the SCST target.

I made a ZFS sparse volume on ZoL and then created the SCST thin-provisioned device with these commands.

(example)

01. create thin block device

scstadmin -open_dev SRP_LUN11 -handler vdisk_blockio -attributes filename=/dev/Storage/ESOS01_SRP_LUN11,thin_provisioned

02. assign a device product ID for VAAI XCOPY between different LUNs and SCST targets
echo SCST_BIO > /sys/kernel/scst_tgt/devices/SRP_LUN11/prod_id
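
03. (for completeness) map the device to a LUN on the SRP target - only a sketch, the target name below is a placeholder for the real ib_srpt port GUID on my system

scstadmin -add_lun 11 -driver ib_srpt -target ib_srpt_target_0 -device SRP_LUN11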


Regards,
Jaehoon Choi


최재훈

Jun 14, 2018, 2:59:06 AM
to esos-users
Update on 06-14-2018

More tests are now complete.

It turns out the nv_cache=1 option does not affect in-guest unmap performance.

I have now disabled in-guest unmap in the Windows guest and only use automatic datastore unmap every 12 hours with VMFS 6.
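
For anyone who wants to do the same, I believe in-guest unmap can be switched off inside the Windows guest with fsutil (run as Administrator in the guest):

fsutil behavior set DisableDeleteNotify 1

(fsutil behavior query DisableDeleteNotify shows the current setting.)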

Everything works well with the 4K zvol, which helps random IOPS, but the throughput does not match what the 128K zvol delivered.

Initially my ZoL configuration had the line options zfs zfs_prefetch_disable=1 in /etc/modprobe.conf.

Changing it to options zfs zfs_prefetch_disable=0 helped ZoL random read IOPS.
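
The parameter can also be checked or changed at runtime through the module parameter, without reloading the zfs module (a small sketch):

cat /sys/module/zfs/parameters/zfs_prefetch_disable
echo 0 > /sys/module/zfs/parameters/zfs_prefetch_disable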

Regards,
Jaehoon Choi

inbusine...@gmail.com

Feb 26, 2024, 8:42:20 PM
to esos-users
I found in the SCST 3.8.0 release notes that this new SCST version includes an lb_per_pb_exp=0 option.

That can help an SCST target export a 128KB or larger zvol to ESXi 7.0.x or above.
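
If I read the release notes correctly, the attribute should be usable at device creation time just like thin_provisioned - this is only an untested sketch based on my old command:

scstadmin -open_dev SRP_LUN11 -handler vdisk_blockio -attributes filename=/dev/Storage/ESOS01_SRP_LUN11,thin_provisioned,lb_per_pb_exp=0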

The latest TrueNAS SCALE includes SCST 3.8.0 and I tested it successfully.

But TrueNAS SCALE always shows the disks at different, random locations after a system reboot.

ESOS always shows me the same disk locations after every system reboot, and it also includes the lsscsi command, which helps identify disk locations.
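
For example, lsscsi -s lists every SCSI device with its [host:channel:target:lun] address and size, so a LUN can be matched to the same location after every reboot (a quick sketch):

lsscsi -s
lsscsi -g

(-g also shows the matching /dev/sg* nodes.)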

I'll wait for a new ESOS distro with SCST 3.8.0.

Jaehoon Choi