There has been a lot of great technical content from both VMware and the broader community since the announcement of vSphere 8, which happened a few weeks ago. I know many of you are excited to get your hands on both vSphere 8 and vSAN 8 and while we wait for GA, I wanted to share some of my own personal experiences but also some of the considerations for those interested in running vSphere 8 in their homelab.
As with any vSphere release, you should always carefully review the release notes when they are made available and verify that all of your hardware and the underlying components are officially listed on the VMware HCL, which will be updated when vSphere 8 and vSAN 8 reach GA. This is the only way to ensure that you will have the best possible experience and a supported configuration from VMware.
UPDATE (10/05/23) - ESXi 8.0 Update 2 requires a CPU that supports the XSAVE instruction or you will not be able to upgrade, which means you will need hardware with a minimum of an Intel Sandy Bridge or AMD Bulldozer processor or later.
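As a quick sanity check before attempting the upgrade, you can look for the xsave flag in your CPU feature list. A minimal sketch, assuming a POSIX shell; the helper function and the sample flag strings below are illustrative (on a Linux box you could instead pass in the output of grep -m1 '^flags' /proc/cpuinfo):

```shell
# Hypothetical helper: report whether a CPU flags string contains xsave,
# which ESXi 8.0 Update 2 requires
has_xsave() {
  case " $1 " in
    *" xsave "*) echo "xsave supported" ;;
    *) echo "xsave missing - ESXi 8.0 U2 will not install" ;;
  esac
}

# Illustrative flag strings, not taken from a real host
has_xsave "fpu vme sse2 xsave avx"
has_xsave "fpu vme sse2"
```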
ProTip: With that said, there is a workaround for those who wish to forgo official support from VMware, or for homelab and testing purposes: you can add the following ESXi kernel option (SHIFT+O): allowLegacyCPU=true
ProTip: If you are using Intel 12th Generation or newer consumer CPUs, an additional workaround is required due to the fact that ESXi does not support the new big.LITTLE CPU architecture found in these CPUs, which I initially discovered when working with the Intel NUC 12 Extreme (Dragon Canyon).
The ESXi kernel option cpuUniformityHardCheckPanic=FALSE still needs to be appended to the existing kernel options line by pressing SHIFT+O during the initial boot up. Alternatively, you can add this entry to boot.cfg when creating your ESXi bootable installer. Again, you must append the entry; do not delete or modify the existing kernel options or ESXi will boot into ramdisk only. If this entry is not added, booting ESXi on processors that contain both P-Cores and E-Cores will result in a purple screen of death (PSOD) with the message "Fatal CPU mismatch on feature".
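For reference, here is what the kernelopt line in the installer's boot.cfg looks like after appending the option; the surrounding entries are illustrative and will differ slightly between installer images:

```
bootstate=0
title=Loading ESXi installer
kernel=/b.b00
kernelopt=cdromBoot runweasel cpuUniformityHardCheckPanic=FALSE
```

Note that the existing options (cdromBoot runweasel in this example) are kept and the new option is simply appended to the end of the line.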
Note: Once ESXi has been successfully installed, you can permanently set the kernel option by running the following ESXCLI command before rebooting: localcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE. Alternatively, you can reboot the host, remove the USB device, and manually edit EFI\boot\boot.cfg to append the kernel option, which will ensure subsequent reboots contain the required kernel option.
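If you are preparing the bootable installer from a workstation, the boot.cfg append can also be scripted. A minimal sketch against a stand-in boot.cfg (the file contents are illustrative; on the real media the file lives at EFI/boot/boot.cfg):

```shell
# Create a stand-in boot.cfg with illustrative contents
cat > boot.cfg <<'EOF'
bootstate=0
kernel=/b.b00
kernelopt=cdromBoot runweasel
EOF

# Append the option to the existing kernelopt line, keeping the
# options that are already there intact
sed -i 's/^kernelopt=.*/& cpuUniformityHardCheckPanic=FALSE/' boot.cfg

# Show the resulting line
grep '^kernelopt=' boot.cfg
```

Note that sed -i as written assumes GNU sed; on macOS you would use sed -i '' instead.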
Similarly, for I/O devices such as networking, storage, etc. that are not supported with vSphere 8, the ESXi 8.0 installer will also list the type of device and its respective Vendor & Device ID (see screenshot above for an example).
To view the complete list of unsupported I/O devices for vSphere 8, please refer to VMware KB 88172 for more information. I know many in the VMware Homelab community make use of the Mellanox ConnectX-3 for networking, so I wanted to call out that it is no longer supported; folks should look at using either the ConnectX-4 or ConnectX-5 as an alternative.
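One way to spot-check your devices against the KB 88172 list is to compare PCI Vendor:Device IDs. A minimal sketch with made-up sample data: the device list below is illustrative (on a live ESXi host you could generate the real one with lspci -n in the ESXi Shell), and the unsupported list is a tiny assumed subset of the KB, with 15b3:1003 being a ConnectX-3 device ID:

```shell
# Illustrative PCI device list for a host, in vendor:device form
cat > devices.txt <<'EOF'
8086:1533
15b3:1003
EOF

# Illustrative subset of IDs deprecated in vSphere 8 - see VMware
# KB 88172 for the authoritative list (15b3:1003 = ConnectX-3)
cat > unsupported.txt <<'EOF'
15b3:1003
15b3:1007
EOF

# Print any of this host's devices that appear on the unsupported
# list (grep exits non-zero if there are no matches)
grep -F -x -f unsupported.txt devices.txt
```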
ProTip: A very easy and non-impactful way to check whether your existing CPU and I/O devices will run ESXi 8.0 is to simply boot the ESXi 8.0 installer from USB and check whether it detects all devices. You do NOT have to perform an installation to check for compatibility, and you can also drop into the ESXi Shell (Alt+F1) using root and no password to perform additional validation. If you are unsure whether ESXi 8.0 will run on your platform, this is the easiest way to validate without touching your existing installation and workloads.
For those that require the Community Networking Driver for ESXi Fling to detect onboard networking, as on some of the recent Intel NUC platforms, folks should be happy to learn that this Fling has been officially productized as part of vSphere 8 and a custom ESXi ISO image will no longer be needed. For those that require the USB Network Native Driver for ESXi Fling, a new version of the Fling that is compatible with vSphere 8 will be required, and folks should wait for that to be available before installing and/or upgrading to vSphere 8.
Last year, VMware published revised guidance in VMware KB 85685 regarding installation media for ESXi, which includes ESXi 8.0, specifically when using an SD or USB device. While ESXi 8.0 will continue to support installation and upgrade using an SD/USB device, it is highly recommended that customers consider more reliable installation media like an SSD, especially for the ESXi OSData partition. Post-ESXi 8.0, installation and upgrades using an SD/USB device will no longer be supported, and it is better to have a solution in place now than to wait for that to happen, if you ask me.
While this is not an exhaustive list of hardware platforms that can successfully run ESXi 8.0, I did want to share the list of systems that I have personally tested and hope others may also contribute to this list over time to help others within the community.
With all the new capabilities in vSphere 8, it should come as no surprise that additional resources are required for the vCenter Server Appliance (VCSA). Compared to vSphere 7, the only change is the amount of memory for each of the VCSA deployment sizes, which has increased by 2GB. For example, in vSphere 7, a "Tiny" configuration required 12GB of memory; in vSphere 8, it now requires 14GB.
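To put the change in numbers, here is a small sketch that prints the vSphere 7 memory sizes alongside the vSphere 8 values after the across-the-board 2GB increase. Only the Tiny value comes from above; the remaining vSphere 7 figures are my recollection of the documented sizing and should be verified against the official docs:

```shell
# vSphere 7 VCSA memory sizes in GB (Tiny is from the post; the other
# figures are assumptions - verify against the vSphere sizing docs)
out=$(
  while read name mem7; do
    printf '%-8s v7: %2dGB  v8: %2dGB\n' "$name" "$mem7" "$((mem7 + 2))"
  done <<'EOF'
Tiny 12
Small 19
Medium 28
Large 37
X-Large 56
EOF
)
printf '%s\n' "$out"
```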
ProTip: It is possible to change the memory configurations after the initial deployment and from my limited use, I have been able to configure a Tiny configuration with just 10GB of memory without noticing any impact or issues. Depending on your usage and feature consumption, you may need more memory but so far it has been working fine for a small 3-node vSAN cluster.
Using Nested ESXi is still by far the easiest and most efficient way to try out all the cool new features that vSphere 8 has to offer. If you plan to kick the tires on the new vSAN 8 Express Storage Architecture (ESA), at least from a workflow standpoint, make sure you can spare at least 16GB of memory per ESXi VM, which is the minimum required to enable this feature.
Note: If you intend to only use vSAN 8 Original Storage Architecture (OSA), then you can ignore the 16GB minimum as that only applies to enabling vSAN ESA. For basic vSAN OSA enablement, 8GB (per ESXi VM) is sufficient and if you plan to run workloads, you may want to allocate more memory but behavior should be the same as vSphere 7.x.
A bonus capability that I think is worth mentioning is that configuring MAC Learning on a Distributed Virtual Portgroup is now possible through the vSphere UI as part of vSphere 8. The MAC Learning feature was introduced back in vSphere 6.7 but was only available using the vSphere API and I am glad to finally see this available in the vSphere UI for those looking to run Nested Virtualization!
You mentioned the HP 530sfp+ but this card is not on the ESXi 8 HCL. So how would this be a good replacement for the ConnectX-3? If it's not supported, we're in the same boat. Can you verify that this card does indeed work with 8? Otherwise, this post could lead people to purchase something and be in the same boat as the X-3.
ESXi is not aware of the big.LITTLE CPU architecture that contains P/E cores. You will need to apply the same workaround as vSphere 7.x with ESXi kernel boot option: cpuUniformityHardCheckPanic=FALSE to allow ESXi to boot and install
Hi,
I installed ESXi 8.0u2 on my HP Elite Mini 800 with Intel i9 12900 processor and i219-LM network card and, to get around the Efficiency Core problem, I followed your procedure (cpuUniformityHardCheckPanic=FALSE).
The system starts correctly, but when I try to upload a file larger than 2-3 KB the connection freezes and the system also becomes less responsive. This problem also occurs with version 7.0U3n.
On vmkwarning.log and vmkernel.log I found many warning messages.
Do you have any suggestions on this problem?
Hi William,
This is a great post, and I am really looking forward to the vSphere 8.0 GA. Could you please also suggest homelab setups that we can use for testing out smart nics and DPU configurations?
Truly appreciate all the knowledge you've always shared with us. My question: does vSphere 8 require DPUs for installation? I've searched many blogs and no one has directly answered the question or speaks of v8 without added hardware cost.
vSphere 8 does NOT require a DPU for installation. vSphere 8 is the first release to support DPUs, if you have a need for them, which can help by offloading services that would typically run on your ESXi host (x86) onto the DPU. See vSphere DSE for more details -new-vsphere-8#sec21112-sub1
This worked great with vSphere 7, but with the vSphere 8 beta I need to add "allowLegacyCPU=true" somewhere because my server is a bit too old. If I put it as-is at the end of the line, it doesn't seem to work and the install gets into a loop after showing the unsupported CPU warning.
Hi Franck, I can confirm I am seeing the same. I have previously used allowLegacyCPU=true on ESXi 7.0 without issue. However, it seems to have no effect with 8.0; I always get a warning prompt that I have to press Enter past. This is preventing automated deployments in my lab currently.
Further update... Digging about in UPGRADE\PRECHECK.PY and comparing it to ESXi 7.0, I can see that a new CPU_OVERRIDE message has been added in addition to the CPU_WARNING and CPU_ERROR messages. CPU_OVERRIDE is not evaluated against allowLegacyCPU, which I suspect is not the intended behaviour and is possibly a bug. If the CPU falls into the CPU_OVERRIDE condition, the allowLegacyCPU boot option will not do anything.