VMware 7.0u3


Landers Piechotka

Aug 5, 2024, 2:15:06 PM8/5/24
to abstocringee
vSphere Memory Monitoring and Remediation, and support for snapshots of PMem VMs: vSphere Memory Monitoring and Remediation collects data and provides visibility of performance statistics to help you determine if your application workload is regressed due to Memory Mode. vSphere 7.0 Update 3 also adds support for snapshots of PMem VMs. For more information, see vSphere Memory Monitoring and Remediation.

Use vSphere Lifecycle Manager images to manage a vSAN stretched cluster and its witness host: Starting with vSphere 7.0 Update 3, you can use vSphere Lifecycle Manager images to manage a vSAN stretched cluster and its witness host. For more information, see Using vSphere Lifecycle Manager Images to Remediate vSAN Stretched Clusters.


vSphere Cluster Services (vCLS) enhancements: With vSphere 7.0 Update 3, vSphere admins can configure vCLS virtual machines to run on specific datastores by configuring the vCLS VM datastore preference per cluster. Admins can also define compute policies to specify how the vSphere Distributed Resource Scheduler (DRS) should place vCLS agent virtual machines (vCLS VMs) and other groups of workload VMs.


Integrate NSX-T plug-in with vSphere: vSphere 7.0 Update 3 lets you install the NSX-T plug-in directly from the vSphere Client, with seamless authentication between vSphere and the installed plug-in. vSphere admins can trigger the installation flow from the dedicated NSX page accessible from the main navigation menu and, once the installation completes, continue with the post-install configuration on the same page. Note that the simplified installation is supported for NSX-T version 3.2.0 and later.


Improved interoperability between vCenter Server and ESXi versions: Starting with vSphere 7.0 Update 3, vCenter Server can manage ESXi hosts from the previous two major releases and any ESXi host from version 7.0 and its update releases. For example, vCenter Server 7.0 Update 3 can manage ESXi hosts of versions 6.5, 6.7, and 7.0, all 7.0 update releases, including those later than Update 3, and any mixture of hosts across these major and update versions.


MTU size greater than 9000 bytes: With vCenter Server 7.0 Update 3, you can set the size of the maximum transmission unit (MTU) on a vSphere Distributed Switch to up to 9190 bytes to support switches with larger packet sizes.
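
As a rough illustration, here is a minimal pyVmomi sketch that raises the MTU of a distributed switch to the new 9190-byte maximum. The vCenter address, the credentials, and the switch name dvs-core are placeholders, not values from this release.

# Minimal pyVmomi sketch: raise the MTU of a vSphere Distributed Switch to 9190.
# The hostname, credentials, and the switch name "dvs-core" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "dvs-core")
    view.Destroy()

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion  # required to avoid concurrent-edit conflicts
    spec.maxMtu = 9190                             # new upper limit in vCenter Server 7.0 Update 3
    task = dvs.ReconfigureDvs_Task(spec)
    print("Reconfigure task submitted:", task.info.key)
finally:
    Disconnect(si)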


Zero downtime, zero data loss for mission critical VMs in case of Machine Check Exception (MCE) hardware failure: With vSphere 7.0 Update 3, mission critical VMs protected by VMware vSphere Fault Tolerance can achieve zero downtime and zero data loss in case of Machine Check Exception (MCE) hardware failure, because such VMs fall back to the secondary VM instead of failing. For more information, see How Fault Tolerance Works.


For internationalization, compatibility, installation, upgrade, open source components and product support notices, see the VMware vSphere 7.0 Release Notes.

For more information on vCenter Server supported upgrade and migration paths, please refer to VMware knowledge base article 67077.


Before upgrading to vCenter Server 7.0 Update 1, you must confirm that the Link Aggregation Control Protocol (LACP) mode is set to enhanced, which enables the Multiple Link Aggregation Control Protocol (the multipleLag parameter) on the VMware vSphere Distributed Switch (VDS) in your vCenter Server system.


If the LACP mode is set to basic, indicating One Link Aggregation Control Protocol (singleLag), the distributed virtual port groups on the vSphere Distributed Switch might lose connection after the upgrade and affect the management vmknic, if it is on one of the dvPort groups. During the upgrade precheck, you see an error such as Source vCenter Server has instance(s) of Distributed Virtual Switch at unsupported lacpApiVersion.

For more information on converting to Enhanced LACP Support on a vSphere Distributed Switch, see VMware knowledge base article 2051311. For more information on the limitations of LACP in vSphere, see VMware knowledge base article 2051307.
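
You can script this check before the upgrade. The sketch below, again using pyVmomi with placeholder connection details, reports the lacpApiVersion of every distributed switch so you can spot singleLag switches that still need to be converted to enhanced LACP.

# Minimal pyVmomi sketch: list the LACP API version of every vSphere Distributed
# Switch before the upgrade. "singleLag" means basic mode and should be converted
# to "multipleLag" (enhanced); connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    for dvs in view.view:
        lacp = dvs.config.lacpApiVersion or "unknown"  # "singleLag" or "multipleLag"
        status = "OK" if lacp == "multipleLag" else "CONVERT BEFORE UPGRADE"
        print(f"{dvs.name}: lacpApiVersion={lacp} -> {status}")
    view.Destroy()
finally:
    Disconnect(si)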


In the vSphere Client, when you navigate to the Updates tab of a container object, such as a host, cluster, data center, or vCenter Server instance, to check VMware Tools or VM Hardware compliance status, you might see a status 500 error. The check works only if you navigate to the Updates tab of a virtual machine.


The SNMP firewall ruleset is dynamic state that is handled at runtime. When a host profile is applied, the configuration of the ruleset is managed simultaneously by Host Profiles and SNMP, which can modify the firewall settings unexpectedly.


The NoAccess or NoCryptoAdmin roles might be modified during export of a host profile in a 7.0.x vCenter Server system, and the import of such a host profile might fail with a reference host error. In the vSphere Client, you see a message such as There is no suitable host in the inventory as reference host for the profile Host Profile.


This issue is resolved in this release. However, for versions earlier than vCenter Server 7.0 Update 3, you must edit the host profile XML file and remove the privileges in the NoAccess or NoCryptoAdmin roles before an import operation.


The CNS QueryVolume API enables you to obtain information about the CNS volumes, such as volume health and compliance status. When you check the compliance status of individual volumes, the results are obtained quickly. However, when you invoke the CNS QueryVolume API to check the compliance status of multiple volumes, several tens or hundreds, the query might perform slowly.


After patching or upgrading your system to vCenter Server 7.0 Update 2, all I/O filter storage providers might display with status Offline or Disconnected in the vSphere Client. vCenter Server 7.0 Update 2 supports the Federal Information Processing Standards (FIPS), and certain environments might face the issue because their provider certificates are signed with the SHA-1 hashing algorithm, which is not FIPS-compliant.
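
One way to see whether a given provider certificate is affected is to inspect its signature hash algorithm. The sketch below assumes you have exported the certificate to a PEM file (the file name is a placeholder) and uses the Python cryptography package.

# Minimal sketch: report whether an exported provider certificate is SHA-1 signed.
# Requires the "cryptography" package; "iofilter-provider.pem" is a placeholder path.
from cryptography import x509

with open("iofilter-provider.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

algo = cert.signature_hash_algorithm.name if cert.signature_hash_algorithm else "none"
if algo == "sha1":
    print("Certificate is SHA-1 signed: re-register the provider with a SHA-2 certificate.")
else:
    print(f"Certificate signature hash is {algo}: compatible with FIPS mode.")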


In a mixed-version vCenter Server 7.0 system, such as a transitional environment with both vCenter Server 7.0 Update 1 and Update 2 instances and Enhanced Linked Mode enabled, tasks such as image, host, or hardware compliance checks that you trigger from the vSphere Client might show no progress, even though the tasks actually run.


Prior to vSphere 7.0 Update 2, vSphere DRS had no awareness of read locality for vSAN stretched clusters, and the DRS Awareness of vSAN Stretched Cluster feature requires all hosts in a vCenter Server system to be of version ESXi 7.0 Update 2 to work as expected. If you manage ESXi hosts of versions earlier than 7.0 Update 2 in a vCenter Server 7.0 Update 2 system, some read locality stats might be read incorrectly and result in improper placements.


This issue is resolved in this release. The fix ensures that if ESXi hosts of versions earlier than 7.0 Update 2 are detected in a vSAN stretched cluster, read locality stats are ignored and vSphere DRS uses the default load balancing algorithm for initial placement and load balancing of workloads.


If you use both vSphere Auto Deploy and vCenter Server High Availability in your environment, rsync might not sync some short-lived temporary files created by Auto Deploy quickly enough. As a result, in the vSphere Client you might see vCenter Server High Availability health degradation alarms. In the /var/log/vmware/vcha file, you see errors such as rsync failure for /etc/vmware-rbd/ssl. The issue does not affect the normal operation of any service.


In rare cases, vSphere Storage DRS might over-recommend some datastores, leading to an overload of those datastores and an imbalance of datastore clusters. In extreme cases, power-on of virtual machines might fail due to swap file creation failure. In the vSphere Client, you see an error such as Could not power on virtual machine: No space left on device. You can trace the error in the /var/log/vmware/vpxd/drmdump directory.


Configuration of the vSphere Authentication Proxy service might fail when NTLMv2 response is explicitly enabled on vCenter Server, and a core.lsassd file is generated under the /storage/core directory.


The default name for new vCLS VMs deployed in a vSphere 7.0 Update 3 environment uses the pattern vCLS-UUID. vCLS VMs created in earlier vCenter Server versions continue to use the pattern vCLS (n). Because the use of parentheses () is not supported by many solutions that interoperate with vSphere, you might see compatibility issues.
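
If your own automation filters inventory by VM name, a pattern that accepts both naming schemes avoids breakage during the transition. The following is a minimal sketch with a made-up UUID.

# Minimal sketch: match both vCLS VM naming patterns when filtering inventory.
# "vCLS (n)" is the pre-7.0 Update 3 pattern; "vCLS-<UUID>" is the new one.
import re

VCLS_NAME = re.compile(
    r"^vCLS[ -]"                                        # common prefix, old or new separator
    r"(?:\(\d+\)"                                       # old style: vCLS (1), vCLS (2), ...
    r"|[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})$",  # new style: vCLS-<UUID>
    re.IGNORECASE,
)

for name in ["vCLS (1)", "vCLS-0e2f5b8a-1d3c-4a5e-9f7b-2c4d6e8a0b1c", "app-vm-01"]:
    print(name, "->", bool(VCLS_NAME.match(name)))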


If you configure vCenter Enhanced Linked Mode and customize the rhttpproxy settings from the default ports 80 for HTTP and 443 for HTTPS, you might see an error such as You have no privileges to view object when you first log in to the vSphere Client.


In the vSphere Client, when you navigate to Monitor > Tasks, you see an error such as vslm.vcenter.VStorageObjectManager.deleteVStorageObjectEx.label - A specified parameter was not correct: in the Status field. The issue occurs in vSphere with Tanzu environments when you deploy a backup solution that uses snapshots. If the snapshots are not cleaned up, some operations in Tanzu Kubernetes clusters might not complete and cause the error.


In rare cases, you might not be able to delete services such as NGINX and MinIO from supervisor clusters in your vSphere environment from the vSphere Client. After you deactivate the services, the Delete modal remains stuck in a processing state.


If you try to enable or reconfigure a vSphere Trust Authority cluster on a vCenter Server system of version 7.0 Update 3 with ESXi hosts of earlier versions, encryption of virtual machines on such hosts fails.


If you create a vSphere Lifecycle Manager cluster and configure NSX-T Data Center on that cluster by using the NSX Manager user interface, the configuration might fail as the upload of an NSX depot to the vSphere Lifecycle Manager depot fails. In the NSX Manager user interface, you see an error such as 26195: Setting NSX depot(s) on Compute Manager: 253b644a-4ea5-4025-9c47-6cd00af1d75f failed with error: Unable to connect ComputeManager. Retry Transport Node Collection at cluster. The issue occurs when you use a custom port to configure the vCenter Server that is associated with the NSX-T Data Center as a compute manager in the NSX Manager.
