There are a ton of new features with the latest release of vSphere 5.1, but the one "unsupported" feature I always test first is "Nested Virtualization" (aka Nested ESXi) and with the latest release, it seems to have gotten even better. You will still need to have the same physical CPU prerequisites as you did in the past to run "Nested Virtualization" as well as nesting 64-bit VMs.
You will need to log in with your root credentials and then look for the "nestedHVSupported" property. If it states false, you may still be able to install nested ESXi or another hypervisor, but you will not be able to run nested 64-bit VMs, only 32-bit VMs, assuming you have either Intel VT-x or AMD-V support on your CPUs.
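The property lives in the host's Managed Object Browser (MOB). A quick sketch for checking it from a workstation — the hostname is a placeholder, and -k skips validation of the default self-signed certificate:

```shell
# Query the host capability object via the MOB (prompts for the root password).
curl -sk -u root "https://esxi01.example.com/mob/?moid=ha-host&doPath=capability" \
  | grep -i nestedHVSupported
```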
There are some changes to Nested Virtualization in vSphere 5.1, which is now officially known as VHV (Virtual Hardware-Assisted Virtualization). If you are using vSphere 5.0 to run nested ESXi or other nested hypervisors, then please take a look at the instructions in this article. With vSphere 5.1, there have been a few minor changes to how you enable VHV.
Note: If this box is grayed out, it means either that your physical CPU does not support Intel VT-x + EPT or AMD-V + RVI, which is required to run VHV, or that you are not using Virtual Hardware 9. If your CPU only supports Intel VT-x or AMD-V, then you can still install nested ESXi, but you will only be able to run nested 32-bit VMs, not nested 64-bit VMs.
Step 4 - It is still recommended that you change the guestOS version to "VMware ESXi 5.x" after you have created the VM shell, as there are some special settings that are applied automatically. Unfortunately, with the new vSphere Web Client you will not be able to modify the guestOS after creation, so you will need to use the C# Client or manually edit the .VMX and set guestOS = "vmkernel5"
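The manual .VMX edit can be sketched with sed; the path below is a stand-in for illustration — on ESXi the file would live under /vmfs/volumes/&lt;datastore&gt;/&lt;vm&gt;/:

```shell
# A minimal sketch: rewriting the guestOS key in a .vmx with sed.
VMX=/tmp/NestedESXi.vmx
printf 'guestOS = "otherlinux-64"\n' > "$VMX"       # stand-in for the value the wizard wrote
sed -i 's/^guestOS = .*/guestOS = "vmkernel5"/' "$VMX"
grep '^guestOS' "$VMX"                              # prints: guestOS = "vmkernel5"
```

Edit the file with the VM powered off so the change is not overwritten on shutdown.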
If you have followed my previous article about How to Enable Support for Nested 64bit & Hyper-V VMs in vSphere 5, you may recall a diagram about the levels of "Inception" that can be performed with nested ESXi, that is, the number of times you can nest ESXi and still have it in a "functional" state. With vSphere 5.0, the limit I was able to push was 2 levels of nested ESXi. With the latest release of vSphere 5.1, I have been able to push that limit to an extraordinary 3 levels of inception!
You might ask why someone would want to do this ... well, I don't have a good answer other than ... because I can? VHV is one of the coolest "unsupported" features in my book, and I'm glad it works beyond what it was designed for.
For proper networking connectivity, also ensure that your standard vSwitch or Distributed Virtual Switch has both promiscuous mode and forged transmits enabled, either globally on the vSwitch or on the portgroup/distributed portgroup your nested ESXi hosts are connected to.
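On a standard vSwitch, the two settings can also be applied from the ESXi shell; this is a sketch assuming the switch is vSwitch0 (adjust to the switch your nested hosts actually use — the same options can be set per portgroup in the client instead):

```shell
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 \
    --allow-promiscuous=true \
    --allow-forged-transmits=true
```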
One thing to note: don't set vhv.enable on the physical host if you plan on running VSE on it; it will complain that it doesn't support Virtual Hardware-Assisted Virtualization. Just enable it per VM.
I have enabled Intel VT in the BIOS on a PowerEdge 1950 server and added vhv.enable = "TRUE" as well, yet I cannot select the "Hardware Virtualization" box to enable VHV when creating a virtual machine using the Web Client. Where am I going wrong? Appreciate your input.
Same thing here on a Dell Precision T5400 with dual X5460s: "Hardware Virtualization" is grayed out, and when I start a VM I get a message stating "Virtualized Intel VT-x/EPT is not supported on this platform".
HP DL380 G5 with one dual-core Intel 5160 CPU.
In Bios, "Intel Virtualisation" is enabled.
But "Hardware Virtualisation" on vSphere Web Client 5.1 is greyed out for any newly created HW V.9 Virtual Machine.
Having the same problem on Dell 1950 -
In Bios, "Intel Virtualisation" is enabled.
But "Hardware Virtualisation" on vSphere Web Client 5.1 is greyed out for any newly created HW V.9 Virtual Machine.
Used to work ok on vSphere 5.0.
Yes, thank you. You just need to create the VM in the Web Client and then the option is not grayed out. If you create the VM in the VI Client and then go to the Web Client, you cannot select it. Still strange that there is a difference between the web and Windows clients, but on the other hand it's probably part of moving away from the MS Windows dependency.
1. Enabled vhv on physical esxi 5.1
2. Enabled promiscuous mode to accept on physical esxi 5.1
3. Created Virtual machine using web client 5.1 - Guest OS - Other Linux x64.
4. At Customize hardware - Expanded CPU to select Hardware Virtualization - Greyed out.
Log in using the root credentials and then look for "nestedHVSupported" and see what it says. This lists the capabilities of the host and should tell you whether vSphere believes you can run VHV.
It looks like the issue is that you don't have EPT support. I know you pointed to ark.intel.com, but I was told that even their site is not always 100% accurate; these actual bits are detected on the host, and that is why you see nestedHVSupported = false.
What this means is you won't be able to check the "Hardware Virtualization" box, but you WILL still be able to install nested ESXi or other hypervisors; you just won't be able to run 64-bit guest OSes, only 32-bit, just like in previous releases.
Failed to install nested Red Hat Enterprise Virtualization Hypervisor v6.3 on ESXi v5.1:
I enabled promiscuous mode on the portgroup and used Web Client to create a new VM HW v9 as Linux/Other Linux (64-bit) and successfully enabled Hardware Virtualization.
When I boot VM from ISO and select "Install or Upgrade" option - it just stays frozen there.
Any thoughts?
Thank you!
Found the workaround: hit to get further to boot: prompt, enter "install" and the installation will continue.
Ended up with fully installed and configured RHEV Hypervisor and RHEV Manager VMs on ESXi 5.1, but inner-guest VMs (Win7 & WinXP) will not boot at all (stalling with a "Wait For Launch" message)...
Any hint would be appreciated.
prezha
prezha was missing a VERY important word. You must press the "Escape" key at the install screen. That will present you with a "boot:" prompt at which point you must enter "install". Wizard continues from then on as normal.
Hi, I have installed Server 2K8 R2 Hyper-V on ESXi 5.1. I am able to create a VM in Server Manager, but when I try to power it on I get the following error message: "the virtual machine could not be started because the hypervisor is not running". I checked that the Hyper-V services are running. Thanks in advance.
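Editor's note: a commonly cited extra tweak for running Hyper-V as the nested hypervisor on ESXi (not confirmed against this poster's setup) is hiding the "running under a hypervisor" CPUID bit in addition to enabling VHV. The .vmx pair usually mentioned is:

```
vhv.enable = "TRUE"
hypervisor.cpuid.v0 = "FALSE"
```

Without the second entry, Hyper-V sees it is itself virtualized and refuses to start its VMs.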
Have you been able to create a filer at level 1 and present shared storage to subsequent levels? I cannot get this to work. I'd like to run iSCSI over vmxnet3 from level 2 to level 3 so that I don't lose as much disk I/O, ideally.
Has this changed again? The advanced option setting "vhv.enable='TRUE'" for the VM does not persist from the vSphere Client. However, after tweaking the option in the Web Client, the exposed configuration parameter was "featMask.vm.hv.capable='Min:1'" and the nested feature works... (vSphere 5.1)
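For reference, the two per-VM .vmx entries discussed in this thread look like this — the legacy flag and the 5.1-era feature mask; use whichever your build actually persists:

```
vhv.enable = "TRUE"
featMask.vm.hv.capable = "Min:1"
```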
In my nested ESXi, everything is OK, but I have a problem after building 50 ESXi VMs. The 50 ESXi VMs can ping outside and anywhere else, but they can't ping each other. I don't know why; I used both E1000 & E1000E NICs for testing, but the result is the same. Can you help me?
William, thank you for sharing this information. I'm a newbie on VMware and recently started playing around with ESXi 5.5 on two self built, i7 980X 6-core & i7 950 4-core based servers. I need virtualization for my network labs (exam prep). On the fly I try to pick up some knowledge of Windows Server, Linux and VMware.
Looking for a way to share my hardware with others I stumbled upon your article about nested 64 bit on vSphere v5. After following all the leads and reading some more documentation I was able to install ESXi 5.5 as guest on my ESXi host. But I got the "HARDWARE_VIRTUALIZATION WARNING: Hardware Virtualization is not a feature of the CPU ..." message.
- I create my guest based on VMware Hardware Version 9 in accordance with your articles
- I do some tweaking in ESXi shell
- I finalize my configuration with vSphere Client 5.5 (that comes with ESXi 5.5); this INDEED removes vhv.enable="TRUE" from my guest's VMX file
- when I'm done with configuring I add vhv.enable="TRUE" through ESXi shell; as long as I refrain from further reconfiguration through vSphere Client it will stay there (I guess)
- After bootup of my ESXi 5.5 guest, the MOB capability page ("/mob/?moid=ha-host&doPath=capability") shows me "nestedHVSupported boolean true", so I think I'm OK with this procedure
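The append-then-reload flow described above can be sketched as follows; the datastore path and VM ID are placeholders, and the commands run in the ESXi shell:

```shell
# Sketch, assuming a hypothetical .vmx path; append the flag only if absent.
VMX=/vmfs/volumes/datastore1/NestedESXi55/NestedESXi55.vmx
grep -q '^vhv.enable' "$VMX" || echo 'vhv.enable = "TRUE"' >> "$VMX"

# Then make the host re-read the edited .vmx without a reboot:
#   vim-cmd vmsvc/getallvms        # find the VM's numeric ID
#   vim-cmd vmsvc/reload <vmid>    # reload its configuration
```

Reloading via vim-cmd avoids the problem of the client rewriting the file later, since the host's in-memory config now matches what is on disk.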
Now trying to script the same steps from a Packer build server and upload the image to ESX 5.1, the vhv.enable flag is being stripped from my .vmx! Everything looks good before the upload with ovftool 4.0, but when I check the contents on the ESX server the settings are gone.
I have an ESXi 5.5 Update 3 host and I created a Windows 10 VM, following the 3 steps above for configuring a Hyper-V guest on ESXi. Windows 10 is at the latest build/update level. Then I enabled the Hyper-V feature on Windows 10 and created a Hyper-V VM. The Hyper-V VM is not able to start.
I'm using ESXi 6.x and I want to enable KVM in one of our guests. The Web Client is not working, hence the only option is the vSphere Client. Following earlier .vmx files, I set the parameter featMask.vm.hv.capable = "Min:1"; however, it's still not working. Is there anything else I need to set up?
Hello everyone. I have been running nested Hyper-V inside of ESXi 6 for some time now, and I also have an ESXi 5.5 nested inside of the same ESXi 6 environment. I am able to install the Hyper-V role no problem and create VMs, etc. The issue is that I have not been successful in installing the Windows Server RDS Virtualization Host role on a nested Windows Server 2016 Technical Preview 4 VM in my ESXi 6 environment. It keeps giving me an error that Hardware Virtualization is not present. Keep in mind I have made all of the changes needed, and this very server has Hyper-V installed and running VMs with no problem at all. It is just the RDS VDI role that cannot be installed. I have not been able to find any information on whether this is possible. If someone knows how to get this to work, that would be great. Thanks