VMware Storage Maps Failed To Serialize Response

Gifford Brickley

Jul 16, 2024, 5:45:02 AM
to deoterboso

Many larger enterprises, service providers, and cloud deployments often reach the vSphere limit of 64 hosts per VMFS or NFS datastore. With the release of Update 3, we have increased the number of hosts that may connect to a VMFS-6 or NFS datastore from 64 to 128. This alleviates the need for special approval for a larger number of hosts accessing VMFS or NFS datastores. Note: this is not a hosts-per-cluster increase; it is an increase in the number of hosts that can access a single VMFS or NFS datastore.
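
As a quick illustration, a short pyVmomi script can report how many hosts currently mount a given datastore so you know where you stand against that limit. This is only a sketch; the vCenter address, credentials, and datastore name below are placeholders.

```python
# Sketch: count the hosts currently mounting a datastore (pyVmomi).
# The vCenter address, credentials, and datastore name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        if ds.name == "shared-vmfs-01":
            # ds.host lists every host mount of this datastore
            mounted = [m for m in ds.host if m.mountInfo.mounted]
            print(f"{ds.name}: {len(mounted)} hosts connected")
finally:
    Disconnect(si)
```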

In vSphere 7, VMware updated the Affinity Manager, which handles first writes for thin or lazy-zeroed thick provisioned disks. The new Affinity Manager, 2.0, maintains a map of all free storage Resource Clusters. Resource Clusters are regions of available space for new writes, which enables quicker first writes.

In U3, we added further enhancements with Affinity 3.0, which now supports CNS persistent volumes, or FCDs (First Class Disks). We have also added support for the higher number of vSphere hosts accessing a datastore.

With the potential scale vVols offers, ensuring operational efficiency is key. As engineering continues to enhance and develop vVols, we have improved the procedure for processing large numbers of vVol snapshots by making snapshot operations a batch process. By grouping large numbers of snapshot operations, we reduce the serialized actions used for snapshots, making the process more efficient and reducing the effect on the VMs and the storage environment.

One capability RDMs can currently provide over other shared-disk options is the ability to hot-extend shared disks. In vSphere 7 Update 1, we have validated support for online disk/LUN expansion for pass-through RDMs used with Windows Server Failover Clustering (WSFC).

There are numerous customers using clustered applications, Oracle RAC for example. As NVMeoF continues to gain support, especially for database instances, we want to ensure we validate the various deployments.

With VMware Cloud Foundation (VCF), your management domain requires vSAN, which can easily be managed using policy-based management or SPBM. SPBM allows simplified operational management of your storage capabilities. Although you can use tag-based policies with external storage for VCF, it is not something that scales easily and requires quite a bit of manual operations. When you think about the possible scale VCF enables, manually tagging datastores can become daunting. Subsequently, being able to programmatically manage all your VCF storage simplifies your operations, freeing valuable time for other tasks.
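
As an example of what that programmatic management can look like, the sketch below lists the SPBM storage policies defined in vCenter through the PBM endpoint that ships with pyVmomi. The connection bootstrap follows the pattern used in the pyvmomi community samples; the vCenter address and credentials are placeholders.

```python
# Sketch: list SPBM storage policies via the PBM (SPBM) endpoint.
# Connection pattern as in the pyvmomi community samples; address and
# credentials are placeholders.
import ssl
from http import cookies
from pyVim.connect import SmartConnect
from pyVmomi import pbm, VmomiSupport, SoapStubAdapter

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)

# Reuse the vCenter session cookie for the PBM service endpoint.
session_cookie = si._stub.cookie.split('"')[1]
http_context = VmomiSupport.GetHttpContext()
cookie = cookies.SimpleCookie()
cookie["vmware_soap_session"] = session_cookie
http_context["cookies"] = cookie
VmomiSupport.GetRequestContext()["vcSessionCookie"] = session_cookie

pbm_stub = SoapStubAdapter(host=si._stub.host.split(":")[0],
                           version="pbm.version.version1",
                           path="/pbm/sdk", poolSize=0, sslContext=ctx)
pbm_content = pbm.ServiceInstance("ServiceInstance", pbm_stub).RetrieveContent()

pm = pbm_content.profileManager
profile_ids = pm.PbmQueryProfile(
    resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
    profileCategory="REQUIREMENT")
for profile in pm.PbmRetrieveContent(profileIds=profile_ids):
    print(profile.name, "-", profile.description or "")
```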

The VCF engineering team has been diligently working internally and with our storage partners to enable vVols as principal storage. With the 4.1 release, we support NFS 3.x, FC, and limited iSCSI protocols for vVols. For iSCSI, there are a few pre-tasks that must be completed: you must set up the software iSCSI initiator on all hosts in the new workload domain (WLD), and your VASA provider must be listed as a Dynamic Target.
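
A rough per-host sketch of those iSCSI pre-tasks with pyVmomi is shown below. The target address is a placeholder for your array's iSCSI target, and your array and VASA documentation remain the authoritative source for exactly what must be configured.

```python
# Sketch: per-host iSCSI pre-tasks for a vVols workload domain (pyVmomi).
# The target address below is a placeholder for your array's iSCSI target.
from pyVmomi import vim

def prepare_sw_iscsi(host, target_ip, target_port=3260):
    ss = host.configManager.storageSystem

    # 1. Enable the software iSCSI initiator if it is not already enabled.
    if not ss.storageDeviceInfo.softwareInternetScsiEnabled:
        ss.UpdateSoftwareInternetScsiEnabled(True)

    # 2. Find the software iSCSI HBA and add the array as a dynamic (send) target.
    for hba in ss.storageDeviceInfo.hostBusAdapter:
        if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased:
            target = vim.host.InternetScsiHba.SendTarget(address=target_ip,
                                                         port=target_port)
            ss.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device,
                                          targets=[target])
            ss.RescanHba(hba.device)
            print(f"{host.name}: dynamic target {target_ip}:{target_port} "
                  f"added on {hba.device}")
```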

NVMe continues to become more and more popular because of its low latency and high throughput. Industries such as Artificial Intelligence, Machine Learning, and IT continue to advance, and the need for increased performance continues to grow. Typically, NVMe devices are local, attached via the PCIe bus. So how can you take advantage of NVMe devices in an external array? The industry has been advancing external connectivity options using NVMe over Fabrics (NVMeoF). Connectivity can be either IP or FC based. There are some requirements for external connectivity to maintain the performance benefits of NVMe, as typical connectivity is not fast enough.

With NVMeoF, targets are presented to a host as namespaces, which are equivalent to SCSI LUNs, in Active/Active or Asymmetrical Access modes. This enables ESXi hosts to discover and use the presented NVMe namespaces. ESXi emulates NVMeoF targets as SCSI targets internally and presents them as active/active SCSI targets or implicit SCSI ALUA targets.
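
If you want to see how those emulated targets surface on a host, a generic multipathing listing like the pyVmomi sketch below (not NVMe-specific, and assuming you already have a connected HostSystem object) dumps each device's path states.

```python
# Sketch: dump each device's path states on a host, which is one way to see
# how NVMeoF namespaces emulated as SCSI targets are presented (pyVmomi).
from pyVmomi import vim

def dump_path_states(host):
    storage = host.configManager.storageSystem.storageDeviceInfo
    luns = {lun.key: lun for lun in storage.scsiLun}
    for mp_lun in storage.multipathInfo.lun:
        device = luns.get(mp_lun.lun)
        name = device.canonicalName if device else mp_lun.lun
        states = [p.state for p in mp_lun.path]   # e.g. 'active', 'standby'
        print(f"{name}: {len(mp_lun.path)} paths, states={states}")
```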

This technology maps NVMe onto the FC protocol enabling the transfer of data and commands between a host computer and a target storage device. This transport requires an FC infrastructure that supports NVMe.

To enable and access NVMe over FC storage, install an FC adapter supporting NVMe in your ESXi host. There is no configuration required for the adapter; it will automatically connect to an appropriate NVMe subsystem and discover all shared NVMe storage devices. You may, at a later time, reconfigure the adapter and disconnect its controllers or connect other controllers.

This technology uses Remote Direct Memory Access (RDMA) transport between two systems on the network. The transport enables in-memory data exchange, bypassing the operating system and processor of either system. ESXi supports RDMA over Converged Ethernet v2 (RoCE v2).

To enable and access NVMe storage using RDMA, the ESXi host uses an RDMA-capable NIC (RNIC) together with a software NVMe over RDMA storage adapter. You must configure both adapters before using them for NVMe storage discovery.

In vSphere 7, VMware added support for SCSI-3 Persistent Reservations (SCSI-3 PR) at the virtual disk (VMDK) level. What does this mean? You now have the ability to deploy a Windows Server Failover Cluster (WSFC), using shared disks, on VMFS. This is yet another move to reduce the requirement of RDMs for clustered systems. With supported hardware, you may now enable support for clustered virtual disks (VMDKs) on a specific datastore, allowing you to migrate off your RDMs to VMFS and regain many of the virtualization benefits lost with RDMs.

When you navigate to your supported datastore, under the Configure tab, you will see a new option to enable Clustered VMDK. If you are going to migrate or deploy a Microsoft WSFC cluster using shared disks, then you would enable this feature. Once the feature is enabled, you can then follow the Setup for Windows Server Failover Clustering documentation to deploy your WSFC on the VMFS6 datastore.

In cases where customers are using numerous pRDMs in their environment, host boot times or storage rescans can take a long time. The reason for the longer scan times is that each LUN attached to a host is scanned at boot or during a storage rescan. Typically, RDMs are provisioned to VMs for Microsoft WSFC and are not directly used by the host. During the scan, ESXi attempts to read the partitions on all the disks, but it is unable to for devices persistently reserved by the WSFC. The more of these devices there are, the longer it can take a host to boot or rescan storage. The WSFC uses SCSI-3 persistent reservations to control locking between its nodes, which blocks the hosts from being able to read those devices.

With the release of vSphere 7, setting the Perennially Reserved flag to true was added to the UI under storage devices. There has also been a field added to show the current setting for the Perennially Reserved flag.

Setting the Perennially Reserved flag on the pRDMs used by your WSFC is recommended in the clustering guides. When set, ESXi no longer tries to scan the devices, which can reduce boot and storage rescan times. I have added links below to resources on clustering guides and the use of this flag. Another benefit of flagging RDMs is you can easily see which devices are RDMs and which are not.
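
As a sketch of how this could be automated across hosts, the snippet below marks a known set of WSFC RDM devices as perennially reserved via pyVmomi, assuming the MarkPerenniallyReserved method exposed by the host storage system in recent vSphere API versions; the naa IDs are placeholders for your cluster's RDM devices.

```python
# Sketch: flag the pRDM LUNs used by a WSFC as perennially reserved on a host.
# Assumes the MarkPerenniallyReserved API; the naa IDs are placeholders.
from pyVmomi import vim

WSFC_RDM_DEVICES = {"naa.600a098038304731183f4d6f64590001",   # placeholder
                    "naa.600a098038304731183f4d6f64590002"}   # placeholder

def mark_rdms_perennially_reserved(host):
    ss = host.configManager.storageSystem
    for lun in ss.storageDeviceInfo.scsiLun:
        if lun.canonicalName in WSFC_RDM_DEVICES:
            ss.MarkPerenniallyReserved(lunUuid=lun.uuid, state=True)
            print(f"{host.name}: {lun.canonicalName} marked perennially reserved")
```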

Some of the benefits of deploying VMs as Thin Provisioned VMDKs are the effective use of space and space reclamation. A thin VMDK is a file on VMFS where Small File Blocks (SFBs) are allocated on demand at the time of the first write IO. There can be an overhead cost to this process, which can affect performance. In some cases, for maximum performance, it is recommended that Eager Zeroed Thick (EZT) disks be used to avoid the overhead of allocating space for new data.
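
For reference, the provisioning types come down to a couple of flags on the virtual disk backing when a disk is created. The pyVmomi sketch below shows only the backing portion of a disk spec, purely as an illustration.

```python
# Sketch: backing flags that distinguish thin, lazy-zeroed thick, and
# eager-zeroed thick (EZT) VMDKs when adding a disk to a VM (pyVmomi).
from pyVmomi import vim

def disk_backing(provisioning):
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    if provisioning == "thin":
        backing.thinProvisioned = True        # SFBs allocated at first write
    elif provisioning == "eagerZeroedThick":
        backing.thinProvisioned = False
        backing.eagerlyScrub = True           # all blocks allocated and zeroed up front
    else:                                     # lazy-zeroed thick
        backing.thinProvisioned = False
        backing.eagerlyScrub = False
    return backing
```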

vFRC (vSphere Flash Read Cache) currently has a minimal customer base, and with VAIO (vSphere APIs for I/O Filtering), third-party vendors can create custom caching solutions instead. When you upgrade to vSphere 7, you will receive a warning message that vFRC will no longer be available: "vFRC will be gone with this upgrade, please deactivate vFRC on a VM if using it."

CBRC 1.0 has a maximum cache size of 2GB, whereas 2.0 has a maximum of 32GB. As of vSphere 6.5, CBRC 2.0 is the default for Content-Based Read Cache. Starting in vSphere 7, CBRC 1.0 has been removed to ensure it is not used, especially in Horizon environments. This also eliminates the building and compiling of unused code.

Because vVols uses array-based replication, it is very efficient. Array-based replication is a preferred method of replicating data between arrays. With vVols and SPBM, you can easily manage which VMs are replicated rather than everything in a volume or LUN. With the release of Site Recovery Manager 8.3, you can now manage your DR process with SRM while using the replication efficiency and granularity of vVols and SPBM.

With vVols and SRM, you can have independent vVols replication-groups/SRM protection-groups for a single VM, application, or group of VMs. Another benefit is each replication-group/protection-group can have different RPOs, and all use array-based replication.

A feature that has been requested for a while is finally available: support for vVols datastores in vROps! With the release of vROps 8.1, you can now utilize vROps monitoring on your vVols datastores the same as any other datastore, giving you alerting, planning, troubleshooting, and more for your vVols datastores. For more information, read about the new release on the vROps 8.1 announcement blog.

VMware Cloud Foundation allows organizations to deploy and manage their private and public clouds. VCF currently supports vSAN, VMFS, and NFS principal storage. Customers are asking for support of vVols as principal storage, and while the VCF team continues to evaluate and develop that option, it is not yet available. In the meantime, vVols can be used as supplemental storage after the Workload Domain build has completed. Support for vVols as supplemental storage is a partner-supported option.
