In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.
The typical way to apply patches to ESXi 7.x hosts is by using vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.
You can also update ESXi hosts without using the vSphere Lifecycle Manager plug-in by using an image profile instead. To do this, you must manually download the patch offline bundle ZIP file from VMware Customer Connect. From the Select a Product drop-down menu, select ESXi (Embedded and Installable), and from the Select a Version drop-down menu, select 7.0. For more information, see Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
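For example, after you copy the offline bundle to a datastore on the host, you can list the image profiles it contains and apply one with ESXCLI. This is a minimal sketch; the bundle path and profile name are placeholders for the actual patch release:

    # Place the host in maintenance mode before patching.
    esxcli system maintenanceMode set --enable true
    # List the image profiles in the downloaded offline bundle (path is a placeholder).
    esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3-XXXXXX-depot.zip
    # Apply the chosen profile (name is a placeholder), then reboot the host.
    esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3-XXXXXX-depot.zip -p ESXi-7.0U3-XXXXXX-standard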
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
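As a hedged sanity check before you build such a baseline, you can inspect which VIBs a downloaded depot delivers, so that you can confirm the ESXi and esx-update payloads travel together; the depot path is a placeholder:

    # List the VIBs the offline depot provides (path is a placeholder).
    esxcli software sources vib list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3-XXXXXX-depot.zip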
Starting with 7.0 Update 2, ESXi supports Posted Interrupts (PI) on Intel CPUs for PCI passthrough devices to improve the overall system performance. In some cases, a race between PIs and the VMkernel scheduler might occur. As a result, virtual machines that are configured with PCI passthrough devices with normal or low latency sensitivity might experience soft lockups.
In rare cases, VM events might incorrectly report the template property, which indicates whether a virtual machine is marked as a template. As a result, you might see the template property as true even if the VM is not a template VM, or as false when a VM is marked as a template.
Rarely, due to the fast suspend resume mechanism used to create a VM snapshot or revert a VM to a snapshot, the internal state of the VMX process might reinitialize without notifying the upper layers of the virtual infrastructure management stack. As a result, all guest-related performance counters that VMware Tools provides stop updating, and in all interfaces to the ESXi host you continuously see the last recorded values.
When the rhttpproxy service performs multiple operations on incoming URIs, it might miscalculate the buffer offset of each connection, which potentially leads to errors such as buffer overflows and negative reads. As a result, the service fails.
By default, modifying hypercall options by using commands such as $vm | Get-AdvancedSetting -Name isolation.tools.autoInstall.disable works only when the VM is powered off. For powered-on VMs, such calls trigger the error The attempted operation cannot be performed in the current state (Powered on). This is expected behavior.
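A minimal PowerCLI sketch of the working flow, assuming a hypothetical VM named Win10-01, is to power the VM off before creating or changing the option:

    # Hypothetical VM name; adjust to your inventory.
    $vm = Get-VM -Name 'Win10-01'
    # Power the VM off first; the change fails while it is powered on.
    Stop-VM -VM $vm -Confirm:$false
    # Create or overwrite the hypercall option on the powered-off VM.
    New-AdvancedSetting -Entity $vm -Name 'isolation.tools.autoInstall.disable' -Value 'TRUE' -Force -Confirm:$false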
After you update an ESXi 7.0 Update 3c host to a later version of 7.0.x, or install or remove an ESXi 7.0 Update 3c VIB and reboot the host, you might see all security-related advanced options on the host revert to their default values. The affected advanced settings, which you can reapply as shown in the example after this list, are:
Security.AccountLockFailures
Security.AccountUnlockTime
Security.PasswordQualityControl
Security.PasswordHistory
Security.PasswordMaxDays
Security.SshSessionLimit
Security.DefaultShellAccess
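If you kept a record of your custom values, you can reapply them with ESXCLI after the reboot; this is a minimal sketch, and the values shown are placeholders, not recommendations:

    # Reapply custom values (placeholders) for the reverted options.
    esxcli system settings advanced set -o /Security/AccountLockFailures -i 5
    esxcli system settings advanced set -o /Security/AccountUnlockTime -i 900
    # Verify the current value of an option.
    esxcli system settings advanced list -o /Security/AccountLockFailures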
The command esxcli hardware pci list, which reports the NUMA node for ESXi host devices, returns the correct NUMA node for the Physical Functions (PFs) of an SR-IOV device, but returns zero for its Virtual Functions (VFs).
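To see the values the host reports, you can filter the per-device output down to the address and NUMA node fields; the grep pattern assumes the default field labels:

    # Show each PCI device address with its reported NUMA node.
    esxcli hardware pci list | grep -E 'Address:|NUMA Node:'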
After an upgrade to ESXi 7.0 Update 2 or later, when you migrate Windows virtual machines to the upgraded hosts by using vSphere vMotion, some VMs might fail with a blue diagnostic screen after the migration. In the screen, you see the error OS failed to boot with no operating system found. The issue occurs due to a fault in the address optimization logic of the Virtual Machine File System (VMFS).
If the client application has no registered default request handler, requests with a path that is not present in the handler map might cause the execution of vmacore.dll to fail. As a result, you see the ESXi host as disconnected from the vCenter Server system.
Some allocation reservation operations might go over the limit of 128 parallel reservation keys and exceed the allocated memory range of an ESXi host. As a result, ESXi hosts might fail with a purple diagnostic screen during resource allocation reservation operations. In the error screen, you see messages such as PSOD BlueScreen: #PF Exception 14 in world 2097700:SCSI period IP.
If you run an unclaim command on a device or path while virtual machines on the device still have active I/Os, the ESXi host might fail with a purple diagnostic screen. In the screen, you see a message such as PSOD at bora/modules/vmkernel/nmp/nmp_misc.c:3839 during load/unload of lpfc.
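For reference, an unclaim of this kind typically takes the following form; the device identifier is a placeholder, and you should first confirm that no VMs are issuing I/O to the device:

    # Placeholder device ID; quiesce all I/O to the device before running this.
    esxcli storage core claiming unclaim -t device -d naa.xxxxxxxxxxxxxxxx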
When TXT is enabled on an ESXi host, attempts to power on a VM might fail with an error. In the vSphere Client, you see a message such as This host supports Intel VT-x, but Intel VT-x is restricted. Intel VT-x might be restricted because 'trusted execution' has been enabled in the BIOS/firmware settings or because the host has not been power-cycled since changing this setting.
Due to a rare issue with handling AVX2 instructions, a virtual machine on an ESXi 7.0 Update 3f host might fail with an ESX unrecoverable error. In the vmware.log file, you see a message such as: MONITOR PANIC: vcpu-0:VMM fault 6: src=MONITOR ....
The issue is specific to virtual machines with hardware version 12 or earlier.
If you use an ESXi .iso image created by using the Image Builder to make a vSphere Lifecycle Manager upgrade baseline for ESXi hosts, upgrades by using such baselines might fail. In the vSphere Client, you see an error such as Cannot execute upgrade script on host. On the impacted ESXi host, in the /var/log/vua*.log file, you see an error such as ValueError: Should have base image when an addon exists.
The error occurs when the existing image of the ESXi host has an add-on, but the Image Builder-generated ISO provides no add-on.
Many parallel requests for memory regions by virtual machines using the Data Plane Development Kit (DPDK) on an ESXi host might exceed the XMAP memory space on the host. As a result, the host fails with a purple diagnostic screen and an error such as: Panic Message: @BlueScreen: VERIFY bora/vmkernel/hardware/pci/config.c:157.
After an upgrade of ESXi hosts to ESXi 7.0 Update 3 and later, you might no longer see some performance reports for virtual machines with NVMe controllers. For example, you do not see the Virtual Disk - Aggregate of all Instances chart in VMware Aria Operations.
In rare cases, such as a scheduled reboot of the primary VM in a Fault Tolerance (FT) pair with FT encryption that runs heavy workloads, the secondary VM might not have a sufficient buffer to decrypt more than 512 MB of dirty pages in a single FT checkpoint, and a buffer overflow occurs. As a result, the ESXi host on which the secondary VM resides might fail with a purple diagnostic screen.
Even when a device or LUN is in a detached state, the Pluggable Storage Architecture (PSA) might still attempt to register the object, and it logs each path evaluation step of such attempts at every path evaluation interval. As a result, you might see multiple identical messages such as nmp_RegisterDeviceEvents failed for device registration, which are unnecessary while the device or LUN is detached.
If you change a device configuration at runtime, changes might not be reflected in the ESXi ConfigStore that holds the configurations for an ESXi host. As a result, the datastore might not mount after the ESXi host reboots.
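If you hit this after a reboot, a hedged remediation sketch is to list the host's filesystems and mount the missing VMFS volume manually; the volume label is a placeholder:

    # Show all filesystems, including unmounted VMFS volumes.
    esxcli storage filesystem list
    # Mount the volume by its label (placeholder shown).
    esxcli storage filesystem mount --volume-label=datastore1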
Starting with ESXi 6.0, mClock is the default I/O scheduler for ESXi, but some environments might still use legacy schedulers from ESXi versions earlier than 6.0. As a result, upgrades of such hosts to ESXi 7.0 Update 3 and later might fail with a purple diagnostic screen.
Starting with ESXi 7.0 Update 1, the configuration management of ESXi hosts moved from the /etc/vmware/esx.conf file to the ConfigStore framework, which makes an explicit segregation of state and configuration. Tokens in the esx.conf file such as implicit_support or explicit_support, which indicate a state, are not recognized as valid tokens and are ignored by the satp_alua module. As a result, when you upgrade ESXi hosts to ESXi 7.0 Update 3d or later by using a host profile with tokens indicating ALUA state, the operation might fail with a purple diagnostic screen. In the screen, you see an error such as Failed modules: /var/lib/vmware/configmanager/upgrade/lib/postLoadStore/libupgradepsadeviceconfig.so.
A helper mechanism that caches FB resource allocation details in the background might unexpectedly stop and block FB resource allocation during I/O operations to the ESXi host. In some cases, this issue might affect other processes working on the same file and block them as well. As a result, the ESXi host might become unresponsive.
vSAN File Service requires hosts to communicate with each other, but File Service might incorrectly use an IP address in the witness network for this inter-host communication. If you have configured an isolated witness network for vSAN, each host can communicate with the witness node over the witness network, but hosts cannot communicate with each other over it. As a result, communication between hosts for vSAN File Service cannot be established.
If an ESXi host is in a low memory state, insufficient heap allocation to a network module might cause the port bitmap to be set to NULL. As a result, the ESXi host might fail with a purple diagnostic screen when attempting to forward a packet.
This issue is resolved in this release. The fix makes sure that bit vectors in the portsBitmap property are set only when heap allocation is successful. However, you still need to make sure that ESXi hosts have sufficient RAM to operate and forward packets successfully.
Windows Server 2012 and later use SCSI-3 reservations for resource arbitration to support Windows Server Failover Clustering (WSFC) on ESXi in cluster-across-box (CAB) configurations. However, if you configure the bus sharing of the SCSI controller on such a VM to Physical, the SCSI RESERVE command causes the ESXi host to fail with a purple diagnostic screen. SCSI RESERVE is a SCSI-2 semantic and is not supported with WSFC clusters on ESXi.
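To check whether a clustered VM is configured this way, you can inspect its controllers' bus sharing mode with PowerCLI; the VM name is a placeholder:

    # Placeholder VM name; lists each SCSI controller and its bus sharing mode.
    Get-ScsiController -VM (Get-VM -Name 'WSFC-Node1') | Select-Object Name, BusSharingMode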