New to the forums, but a long time fan. Great stuff! Not sure if this is simple or not (probably is and I just messed up --- I hope). I'm using Microsoft's Solution Accelerator for integrating MDT 2010 Update 1 with SCCM 2007 SP2. I'm able to create a bootable task sequence, PXE boot a Hyper-V virtual machine, and actually get it to start blowing down an image, but it dies with a 0x80004005 error. When I look at the SMSTS.log, it seems like it's driver related (I attached the log). But the errors that caught my eye were:
(The customsettings.ini was modified to include a "DoCapture=Yes" parameter per the MS doc, but I don't recall specifying an unattend setting.) Other than that, the other logs were clean when I nabbed them from the WinPE environment before the VM rebooted following the failure.
Anyway, I tried extracting the drivers from the Hyper-V Integration Components *.cab files and then creating a driver package to be included in the task sequence, but that just renders the subsequent ISO unbootable (I get an error like, "Corrupt vmbus.sys" or something).
It's weird because I thought for Hyper-V VMs you didn't need to inject drivers. (Unless the issue is something else. I'm kinda lost at this point.) Is anyone using MDT 2010 with SCCM 2007 SP2 integration to create task sequences and bootable media, and then using the resulting ISO to PXE boot and install a reference image in a Hyper-V virtual machine?
Just an FYI: I decided to rebuild from scratch to eliminate any confounding variables, and that appears to have cleared those driver errors. I'm getting a new one regarding IIS on Server 2008, but at least this is documented ( ). Will work the issue from here. It appears to be related to the Server 2008 IIS version disallowing *.config files to pass through.
Thanks to everyone for listening, though! For anyone who was having this issue, I have to confess I did screw up the initial build by installing SCCM 2007 and then integrating MDT before upgrading to SP2. Screw-up on my part, but I was too lazy to go back and redo it. This time, I installed SCCM, upgraded to SP2, then went so far as to put on R3 (figured I'd go all in), and after rebooting and letting the dust settle, I installed MDT and ran the integration script.
In the first article we investigated the use of physical memory maps as a way of distinguishing between real and virtualized hardware. While hardware discrepancies are a rich source of VM detection tricks, it is also worth looking at the guest-side software used by virtualization solutions.
The trick, put simply, involves detecting kernel-mode drivers by the threads that they create. Since a driver may create a predictable number of threads with predictable properties, these attributes can be used to fingerprint it and build heuristics that are useful for detection.
While looking through various system information in Process Explorer, I noticed that thread information for the System process (PID 4) was shown even when it was run without administrative privileges. This information includes the starting address of the thread, which Process Explorer helpfully translates to a name and offset using symbols.
My test VM had 112 logical processors, and 336 of the System process's threads shared a single start address inside vmbus.sys. Multiply 112 by 3 and you get 336, which seems too neat to be a coincidence. The vmbus.sys driver appears to spawn three threads per logical processor, all at the same starting address, plus two other threads with addresses a little lower down. To confirm this, I powered the VM down, changed the number of CPUs it was given, and tried again. The thread count at this offset was always three times the number of logical processors.
To make sure I fully understood the driver behavior, I threw vmbus.sys into Ghidra. The imports section of the driver showed that both PsCreateSystemThread and PsCreateSystemThreadEx were imported, either of which could be used to start threads. The first API is documented, but the Ex variant is not. It is, however, mentioned in the Xbox OpenXDK code, and the signature should be compatible. By looking through the call sites I could see that this thread creation behavior was coming from AwInitializeSingleQueue. After some quick reverse engineering I was presented with this:
This function creates at most three threads per queue, as per the condition on line 43. Each thread's start routine is AwWorkerThread. This queue creation function is called exclusively by AwInitializeQueues, which I also reverse engineered:
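Purely as an illustration of the pattern described above (this is not the actual decompiled vmbus.sys code), a WDM-style routine that creates up to three system threads per queue, all sharing one start routine, might look roughly like this; the AW_QUEUE structure and the parameter choices are assumptions:

```cpp
// Illustrative sketch only, not the vmbus.sys decompilation: a driver creating up
// to three system threads per queue, all starting at the same routine.
#include <ntddk.h>

#define AW_MAX_THREADS_PER_QUEUE 3

typedef struct _AW_QUEUE {
    HANDLE Threads[AW_MAX_THREADS_PER_QUEUE];
    ULONG  ThreadCount;
    // ... work-item list, synchronization, etc. (hypothetical)
} AW_QUEUE;

KSTART_ROUTINE AwWorkerThread; // shared start routine: every per-queue thread begins here

NTSTATUS AwInitializeSingleQueue(AW_QUEUE* Queue, ULONG RequestedThreads)
{
    // Cap the number of worker threads at three per queue.
    ULONG count = (RequestedThreads < AW_MAX_THREADS_PER_QUEUE)
                      ? RequestedThreads : AW_MAX_THREADS_PER_QUEUE;

    for (ULONG i = 0; i < count; i++) {
        // Threads created this way run in the System process (PID 4), which is
        // why they show up under it in Process Explorer.
        NTSTATUS status = PsCreateSystemThread(&Queue->Threads[i],
                                               THREAD_ALL_ACCESS,
                                               NULL,            // ObjectAttributes
                                               NULL,            // ProcessHandle
                                               NULL,            // ClientId
                                               AwWorkerThread,  // same start address
                                               Queue);          // StartContext
        if (!NT_SUCCESS(status))
            return status;
        Queue->ThreadCount++;
    }
    return STATUS_SUCCESS;
}
```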
Detecting these driver threads programmatically requires a little research. We need to figure out how Process Explorer reads the system thread information, use the same technique to extract the start addresses, and then use those in a signature-based approach to detecting the VM driver.
One of the most common use cases for NtQuerySystemInformation is to get a list of the processes running on the system, using the SystemProcessInformation information class. A developer might wish to retrieve such a list for any number of benign reasons, so it is rarely considered a red flag in terms of application behavior. The official documentation on MSDN describes this particular information class as follows:
These structures contain information about the resource usage of each process, including the number of threads and handles used by the process, the peak page-file usage, and the number of memory pages that the process has allocated.
Geoff has meticulously documented the SYSTEM_PROCESS_INFORMATION and SYSTEM_THREAD_INFORMATION structures for both 32-bit and 64-bit versions of Windows, going all the way back to Windows NT 3.1. This saved me a ton of reverse engineering work. Thanks, Geoff!
There are two particularly useful fields here. The first is StartAddress, which contains the memory address at which the thread was started. For kernel and driver threads, this is the address in kernel memory space. The second useful field is ClientId, which contains the ID of the thread and its parent process.
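To make this concrete, here is a minimal user-mode sketch that queries SystemProcessInformation, locates the System process (PID 4), and buckets its threads by start address. It assumes a 64-bit build; THREAD_INFO64 is my own definition mirroring the documented 64-bit per-thread layout, not an SDK type.

```cpp
#include <windows.h>
#include <winternl.h>
#include <cstdio>
#include <map>
#include <vector>

#pragma comment(lib, "ntdll.lib")  // NtQuerySystemInformation lives in ntdll

// 64-bit layout of the per-thread records that follow each process entry in the
// SystemProcessInformation buffer (field names are mine; layout per the docs above).
typedef struct _THREAD_INFO64 {
    LARGE_INTEGER KernelTime;
    LARGE_INTEGER UserTime;
    LARGE_INTEGER CreateTime;
    ULONG         WaitTime;
    PVOID         StartAddress;   // kernel-space address for system threads
    HANDLE        UniqueProcess;  // ClientId.UniqueProcess
    HANDLE        UniqueThread;   // ClientId.UniqueThread
    LONG          Priority;
    LONG          BasePriority;
    ULONG         ContextSwitches;
    ULONG         ThreadState;
    ULONG         WaitReason;
} THREAD_INFO64;

int main()
{
    // Grow the buffer until the full process/thread snapshot fits.
    std::vector<BYTE> buf(0x10000);
    ULONG needed = 0;
    NTSTATUS status;
    while ((status = NtQuerySystemInformation(SystemProcessInformation, buf.data(),
                                              (ULONG)buf.size(), &needed))
           == (NTSTATUS)0xC0000004 /* STATUS_INFO_LENGTH_MISMATCH */) {
        buf.resize(needed + 0x10000);
    }
    if (status != 0)
        return 1;

    auto* proc = reinterpret_cast<SYSTEM_PROCESS_INFORMATION*>(buf.data());
    for (;;) {
        if ((ULONG_PTR)proc->UniqueProcessId == 4) {  // the System process
            // The thread records immediately follow the fixed-size process entry.
            auto* threads = reinterpret_cast<THREAD_INFO64*>(proc + 1);
            std::map<PVOID, ULONG> groups;  // start address -> thread count
            for (ULONG i = 0; i < proc->NumberOfThreads; i++)
                groups[threads[i].StartAddress]++;
            for (const auto& g : groups)
                printf("%p : %lu thread(s)\n", g.first, g.second);
            break;
        }
        if (proc->NextEntryOffset == 0)
            break;
        proc = reinterpret_cast<SYSTEM_PROCESS_INFORMATION*>(
            reinterpret_cast<BYTE*>(proc) + proc->NextEntryOffset);
    }
    return 0;
}
```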
By iterating through these structures we can group them by the thread start address and look for any group that happens to contain the right number of threads. On my VM I found two candidate thread groups:
Why the Windows built-in VPN miniport driver needs so many threads is a question best left to performance analysts, but it tells us that our simple count-based detection heuristic is a little too crude.
In order to make the heuristic even more resistant to future changes, I modified it to look for any thread group whose count is a whole multiple (two or more) of the logical processor count. So, on our 112-logical-processor system, it would consider any thread group with 224, 336, 448, 560, etc. threads.
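As a rough sketch of that check (continuing from the enumeration above, with `count` being the size of one start-address group):

```cpp
// Hypothetical helper: flag any start-address group whose thread count is a whole
// multiple (two or more) of the logical processor count. GetActiveProcessorCount
// with ALL_PROCESSOR_GROUPS also covers machines with more than 64 logical CPUs.
bool LooksLikePerCpuThreadPool(ULONG count)
{
    DWORD cpus = GetActiveProcessorCount(ALL_PROCESSOR_GROUPS);
    return cpus != 0 && count >= 2 * cpus && count % cpus == 0;
}
```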
Just for fun, I built a version of this executable with no console output and renamed some of the functions to more innocuous names, then ran it through a number of online threat analysis tools. None of them detected anything more suspicious than the process enumeration.
We have a Windows Server 2012 host with a Hyper-V virtual machine installed, and we have a problem every time we need to shut down the VM: it gets stuck. When network traffic is much lower (late at night) it shuts down in about half an hour, but during the day it takes many, many hours to shut down. The VM is connected to the host using a virtual switch and it has a static IP address (it must have one). I read that disabling the option "Allow management operating system to share this network adapter" fixes this issue, but on the other hand it hides the VM from the network, and the VM needs to be connected to our LAN because we have devices connected to the service we're running in the VM, where we have installed CentOS. It's especially a problem when Windows needs to install an update and gets stuck closing the Hyper-V processes for a long time, so we always have to hard reset the server (which is unacceptable). I googled this problem and still haven't found a solution, so is there a proper fix for this?
The replication works 100% UNTIL I make a backup with Windows Server Backup (full backup) to an attached USB disk (this backup completes successfully). As soon as the backup finishes, the following events are logged:
After this, replication AND backup fail until I shut down all virtual machines - at that point (with the VMs turned off) the merge succeeds, and after I start the machines again everything works fine until the next backup.
I am experiencing an ongoing issue with Hyper-V 2016: when I try to expand a VHDX via Failover Cluster Manager or Hyper-V Manager, the expansion does not complete. Furthermore, when I attempt to stop the virtual machine, it hangs at "Stopping" and becomes unmanageable. This is happening on Hyper-V 2016 and the virtual machine is running Windows Server 2016.
When I launch Hyper-V Manager I type the IP address of the Hyper-V server and the administrator credentials. When I click the Connect button it asks me if I want to enable credential delegation, and I allow it. Then it returns the error:
"Delegation of credentials to the server 192.168.1.XXX could not be enabled. CredSSP authentication is currently disabled on the local client. You must be running with administrator priviledges in order to enable CredSSP"
- On the client side I installed Windows 10 1909, ran the PowerShell commands "Start-Service WinRM" and "Set-Item WSMan:\localhost\Client\TrustedHosts -Value "SERVER IP ADDRESS"", then enabled the local group policy Computer Configuration > Administrative Templates > System > Credentials Delegation > Allow delegating fresh credentials with NTLM-only server authentication and added "wsman/SERVERIPADDRESS" in the Show tab, then installed the Hyper-V feature and disabled the Windows 10 firewall.
Hi,
We are having some network issues regarding our Nested Virtualization setup.
* The physical host has no problem with retransmission of TCP packets to the network
* The Nested Virtualization host has no problem with retransmission of TCP packets to the network
* The VM on the Nested Virtualization host has a problem with retransmission of TCP packets to the network (3-6% retransmission - the rest of the packets behave just fine).