SLAT, or Second Level Address Translation, is a technology that works with Hyper-V. It is supported by both Intel and AMD processors: Intel calls it Extended Page Table (EPT), and AMD calls it Rapid Virtualization Indexing (RVI). In this post we will see what SLAT is, how to check whether your computer supports it, and how to enable Second Level Address Translation in the BIOS.

Second Level Address Translation (SLAT)

SLAT is supported on Nehalem-architecture processors and newer for Intel, and on Barcelona processors and newer for AMD. These processors include a Translation Lookaside Buffer (TLB), a cache that holds the most recently used mappings from the processor's page tables. When a virtual address needs to be converted to a physical address, the TLB is consulted first. If the mapping is not found there, a TLB miss occurs and the mapping is looked up in the page table; once found, it is written into the TLB and the address translation completes. With SLAT, this hardware walk also covers the guest-physical to host-physical translation, which removes much of the overhead of translating guest addresses in software, so physical resources are freed up for other work.

How to check if the computer supports SLAT

There are two common ways to check whether your computer supports SLAT: running the systeminfo command, or using the Sysinternals Coreinfo utility.
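As a minimal sketch of the systeminfo route (assuming a Windows machine where Hyper-V is not yet enabled, since systeminfo only prints the Hyper-V Requirements section in that case), the SLAT line can be pulled out programmatically:

```python
import subprocess

def slat_supported():
    """Parse systeminfo output for the SLAT line of its Hyper-V
    Requirements section. Returns True/False, or None if the line is
    absent (systeminfo prints "A hypervisor has been detected" instead
    once Hyper-V is already running)."""
    output = subprocess.run(["systeminfo"], capture_output=True,
                            text=True, check=True).stdout
    for line in output.splitlines():
        if "Second Level Address Translation" in line:
            return "Yes" in line
    return None

print(slat_supported())
```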
Hyper-V has specific hardware requirements, and some Hyper-V features have additional requirements. Use the details in this article to decide what requirements your system must meet so you can use Hyper-V the way you plan to. Then, review the Windows Server catalog. Keep in mind that requirements for Hyper-V exceed the general minimum requirements for Windows Server 2016 because a virtualization environment requires more computing resources.
If you're already using Hyper-V, it's likely that you can use your existing hardware. The general hardware requirements haven't changed significantly from Windows Server 2012 R2, but you will need newer hardware to use shielded virtual machines or discrete device assignment. Those features rely on specific hardware support, as described below. Other than that, the main difference in hardware is that second-level address translation (SLAT) is now required instead of recommended.
For details about maximum supported configurations for Hyper-V, such as the number of running virtual machines, see Plan for Hyper-V scalability in Windows Server 2016. The list of operating systems you can run in your virtual machines is covered in Supported Windows guest operating systems for Hyper-V on Windows Server.
A 64-bit processor with second-level address translation (SLAT). To install the Hyper-V virtualization components such as the Windows hypervisor, the processor must have SLAT. However, SLAT is not required to install the Hyper-V management tools, such as Virtual Machine Connection (VMConnect), Hyper-V Manager, and the Hyper-V cmdlets for Windows PowerShell. See "How to check for Hyper-V requirements," below, to find out if your processor has SLAT.
Hardware-assisted virtualization. This is available in processors that include a virtualization option, specifically processors with Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V) technology (see the sketch after this list for a quick way to check).
The firmware tables must expose the I/O MMU to the Windows hypervisor. Note that this feature might be turned off in the UEFI or BIOS. For instructions, see the hardware documentation or contact your hardware manufacturer.
Devices used for discrete device assignment must be GPUs or non-volatile memory express (NVMe) devices. For GPUs, only certain devices support discrete device assignment. To verify, see the hardware documentation or contact your hardware manufacturer. For details about this feature, including how to use it and considerations, see the post "Discrete Device Assignment -- Description and background" in the Virtualization blog.
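As a quick cross-check of the hardware-assisted virtualization requirement, here is a small sketch assuming a Linux host, where CPU feature flags are exposed in /proc/cpuinfo (on Windows, the systeminfo output shown earlier reports the equivalent "Virtualization Enabled In Firmware" line):

```python
def virtualization_flags(path="/proc/cpuinfo"):
    """Report which hardware virtualization extension the CPU advertises:
    the 'vmx' flag means Intel VT-x, 'svm' means AMD-V."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"Intel VT (vmx)": "vmx" in flags,
                        "AMD-V (svm)": "svm" in flags}
    return {}

print(virtualization_flags())
```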
Second Level Address Translation (SLAT), also known as nested paging, is a hardware-assisted virtualization technology which makes it possible to avoid the overhead associated with software-managed shadow page tables.
AMD has supported SLAT through the Rapid Virtualization Indexing (RVI) technology since the introduction of its third-generation Opteron processors (code name Barcelona). Intel's implementation of SLAT, known as Extended Page Table (EPT), was introduced in the Nehalem microarchitecture found in certain Core i7, Core i5, and Core i3 processors.
ARM's virtualization extensions also support SLAT, in the form of Stage-2 page tables served by a Stage-2 MMU; the guest manages its own translations through the Stage-1 MMU. Support was added as an optional feature in the ARMv7ve architecture and is also present in the ARMv8 (32-bit and 64-bit) architectures.
To make the guest-virtual to host-physical translation efficient before such hardware existed, hypervisor engineers implemented software-based shadow page tables. A shadow page table translates guest-virtual addresses directly to host-physical addresses. Each VM has a separate shadow page table, and the hypervisor is in charge of managing them. But the cost is high: every time a guest updates its page table, the hypervisor must intervene to keep the shadow table and its allocations in sync.
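To make that bookkeeping concrete, here is a toy sketch (hypothetical names, single-level tables instead of real multi-level x86 tables) of how a shadow table collapses the two translations into one, at the price of hypervisor work on every guest mapping change:

```python
guest_page_table = {}   # guest-virtual page  -> guest-physical page (guest-managed)
host_page_table = {}    # guest-physical page -> host-physical page (hypervisor-managed)
shadow_page_table = {}  # guest-virtual page  -> host-physical page (hypervisor-managed)

def guest_maps_page(gva_page, gpa_page):
    """The guest updates its page table; the write traps to the
    hypervisor, which must rebuild the matching shadow entry."""
    guest_page_table[gva_page] = gpa_page
    shadow_page_table[gva_page] = host_page_table[gpa_page]  # hypervisor work

def translate(gva_page):
    # The MMU walks only the shadow table: one flat translation.
    return shadow_page_table[gva_page]

host_page_table[7] = 42    # hypervisor backs guest-physical page 7
guest_maps_page(3, 7)      # guest maps its virtual page 3 to physical page 7
assert translate(3) == 42  # hardware resolves guest-virtual straight to host-physical
```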
In order to make this translation more efficient, processor vendors implemented technologies commonly called SLAT. By treating each guest-physical address as a host-virtual address, a slight extension of the hardware used to walk a non-virtualized page table (now the guest page table) can walk the host page table. With multilevel page tables the host page table can be viewed conceptually as nested within the guest page table. A hardware page table walker can treat the additional translation layer almost like adding levels to the page table.
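Continuing the toy model above (same hypothetical names, still single-level tables), this is what the hardware does instead under SLAT: every guest-physical address produced during the walk is itself translated through the host table, with no shadow table and no trap to the hypervisor:

```python
guest_page_table = {3: 7}  # guest-managed: guest-virtual page -> guest-physical page
host_page_table = {7: 42}  # hypervisor-managed: guest-physical -> host-physical page

def nested_translate(gva_page):
    """What a SLAT-capable MMU does in hardware on a TLB miss: walk the
    guest table, then translate the resulting guest-physical page through
    the host table. The hypervisor is not involved when the guest edits
    guest_page_table."""
    gpa_page = guest_page_table[gva_page]  # first-level (guest) walk
    return host_page_table[gpa_page]       # second-level (host) walk

assert nested_translate(3) == 42
```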
Using SLAT and multilevel page tables, the number of levels needed to be walked to find the translation doubles when the guest-physical address is the same size as the guest-virtual address and the same size pages are used. This increases the importance of caching values from intermediate levels of the host and guest page tables. It is also helpful to use large pages in the host page tables to reduce the number of levels (e.g., in x86-64, using 2 MB pages removes one level in the page table). Since memory is typically allocated to virtual machines at coarse granularity, using large pages for guest-physical translation is an obvious optimization, reducing the depth of look-ups and the memory required for host page tables.
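To see why this caching matters, here is a back-of-the-envelope count of worst-case memory references on a TLB miss (a sketch that assumes every level misses the paging-structure caches; compare with 4 references for a native, non-virtualized 4-level walk):

```python
def worst_case_refs(guest_levels, host_levels):
    # Each guest-table entry is addressed guest-physically, so fetching
    # it costs a full host walk plus the fetch itself; the final
    # guest-physical data address costs one more host walk.
    return guest_levels * (host_levels + 1) + host_levels

assert worst_case_refs(4, 4) == 24  # x86-64, 4-level guest and host tables
assert worst_case_refs(3, 3) == 15  # 2 MB pages dropping one level on each side
```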
Rapid Virtualization Indexing (RVI), known as Nested Page Tables (NPT) during its development, is an AMD second generation hardware-assisted virtualization technology for the processor memory management unit (MMU).[1][2] RVI was introduced in the third generation of Opteron processors, code name Barcelona.[3]
A VMware research paper found that RVI offers up to 42% gains in performance compared with software-only (shadow page table) implementation.[4] Tests conducted by Red Hat showed a doubling in performance for OLTP benchmarks.[5]
Extended Page Tables (EPT) is an Intel second-generation x86 virtualization technology for the memory management unit (MMU). EPT support is found in Intel's Core i3, Core i5, Core i7 and Core i9 CPUs, among others.[6] It is also found in some newer VIA CPUs. EPT is required in order to launch a logical processor directly in real mode, a feature called "unrestricted guest" in Intel's jargon, and introduced in the Westmere microarchitecture.[7][8]
According to a VMware evaluation paper, "EPT provides performance gains of up to 48% for MMU-intensive benchmarks and up to 600% for MMU-intensive microbenchmarks", although it can actually cause code to run slower than a software implementation in some corner cases.[9]
Mode Based Execution Control (MBEC) is an extension to x86 SLAT implementations, first available in Intel Kaby Lake and AMD Zen+ CPUs (known on the latter as Guest Mode Execute Trap, or GMET).[10] It splits the execute permission bit in each extended page table entry into two bits: one for user-mode execute and one for supervisor-mode execute.[11]
MBEC was introduced to speed up the execution of unsigned usermode code in guests that enforce kernelmode code integrity. Under this configuration, unsigned code pages can be marked executable in usermode but must be marked no-execute in kernelmode. To guarantee that all guest kernelmode executable code is signed even when the guest kernel is compromised, the guest kernel is not permitted to modify the execute bit of any memory page; modification of the execute bit, or switching of the page table that contains it, is delegated to a more privileged entity, in this case the host hypervisor. Without MBEC, each transition from unsigned usermode execution to signed kernelmode execution must be accompanied by a VM exit to the hypervisor to switch to the kernelmode page table, and each transition back from signed kernelmode to unsigned usermode requires another VM exit for the reverse switch. VM exits significantly impact code execution performance.[12][13] With MBEC, the same page table can be shared between unsigned usermode code and signed kernelmode code, with two sets of execute permissions selected by the execution context, so VM exits are no longer necessary when execution switches between unsigned usermode and signed kernelmode.
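A minimal sketch (invented names, modeling only the permission check, not real EPT entry encodings) of how the two execute bits let one set of tables serve both contexts:

```python
from dataclasses import dataclass

@dataclass
class EptEntry:
    # Under MBEC the single execute bit becomes two:
    user_execute: bool        # may execute while the guest is in user mode
    supervisor_execute: bool  # may execute while the guest is in kernel mode

def may_execute(entry, guest_in_kernel_mode):
    """The hardware check on instruction fetch: pick the execute bit that
    matches the current guest privilege level. No VM exit or page-table
    switch is needed when the guest changes privilege level."""
    return entry.supervisor_execute if guest_in_kernel_mode else entry.user_execute

unsigned_user_page = EptEntry(user_execute=True, supervisor_execute=False)
signed_kernel_page = EptEntry(user_execute=False, supervisor_execute=True)

assert may_execute(unsigned_user_page, guest_in_kernel_mode=False)
assert not may_execute(unsigned_user_page, guest_in_kernel_mode=True)  # integrity holds
assert may_execute(signed_kernel_page, guest_in_kernel_mode=True)
```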