0xc004f00f Windows 7


Theodor Urena

Aug 4, 2024, 11:37:37 PM
to walkjonyre
Under Automatic VM Activation client information, the host machine name shown is the other node in the cluster (not the one it's currently running on). I don't think it matters, but I thought I'd mention it. Server Manager also shows "Not activated" for product activation. Any help would be appreciated, thanks!

I have been trying to upgrade our Windows Server 2012 installation that is running Hyper-V, but have a problem with the physical NIC. The issue is that upon finalising the upgrade, Windows Server 2012 R2 presents a BSOD (IRQL_NOT_LESS_OR_EQUAL), and the only hardware device I cannot remove prior to the upgrade is the NIC, so I assume this is what is giving me grief.


I am having an odd issue enabling replication between my Hyper-V hosts. We don't have a central CA, so I am following the TechNet blog post "Hyper V Replica Certificate Based Authentication Makecert" (sorry, my account has not been verified, so I can't post links).




"The Replica server's name for Hyper-V does not match the received certificate's subject common name (CN) or subject alternative name (DNS Name): The certificate's CN name does not match the passed value (0x800B010F)."
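For what it's worth, the 0x800B010F error usually means the certificate's CN doesn't match the Replica server name exactly as it is configured in the Hyper-V replication settings. A rough sketch of the makecert recipe that blog post describes, with a hypothetical FQDN (replica01.contoso.local) and root name standing in for your own:

```powershell
# Sketch only, assuming the hypothetical names below -- the server cert's CN must
# match the Replica server name exactly as Hyper-V knows it, or you get 0x800B010F.

# 1. A self-signed test root, imported into Trusted Roots on both hosts:
makecert -pe -n "CN=ReplicaTestRootCA" -ss root -sr LocalMachine `
         -sky signature -r "ReplicaTestRootCA.cer"

# 2. A server/client-auth certificate for the replica host, signed by that root:
makecert -pe -n "CN=replica01.contoso.local" -ss my -sr LocalMachine `
         -sky exchange -eku 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 `
         -in "ReplicaTestRootCA" -is root -ir LocalMachine `
         -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 "replica01.cer"
```

If the hosts are clustered, the certificate would need to name the Hyper-V Replica Broker's client access point rather than the individual node.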


2 x Server and client vSwitch (incl. management), teamed. The client and server networks are on the same VLAN; I have advised against this, but they are happy with how it is!

2 x iSCSI Multipathed (Not Teamed)

1 x Live Migration

1 x Cluster Network


I've recently replaced a PERC H310 RAID controller with a PERC H710p RAID controller on a Dell PowerEdge R820 running Windows Server 2012 R2 and started experiencing this problem. Everything I've investigated so far doesn't point to this being the issue, but wanted to mention it in case someone encountered something similar.


We were having terrible disk performance until the RAID card was swapped out, and as a precaution I moved all existing VMs off of this Hyper-V host server. Once everything was back up and running, I began a live migration back to the affected host server and it blue screened. Tried again, same result. I then tried copying the VM images manually through a UNC share and hit the same problem. It doesn't always happen at the same point during the copy: I've had blue screens 4-5 GB into a transfer, and 200 GB into a transfer.


I've updated the RAID controller driver and firmware to the latest available from Dell, and have installed the latest BIOS and chipset driver. The server has Broadcom 5720 series NICs, updated with the latest drivers and firmware provided by Dell. All Windows/Microsoft updates have been applied.


After all these firmware/driver updates, the blue screens still kept occurring during network transfers. All the minidumps show a 0x133 DPC_WATCHDOG_VIOLATION error, where the DPC time allotment is 500 ticks and the blue screens occur at 501 ticks. Running the minidumps through the Windows Debugger initially pointed to tcpip.sys, netio.sys, and vmswitch.sys. Since tcpip.sys and netio.sys aren't typically the culprits, I looked around for anything pointing to vmswitch.sys as the problem.


I disabled the Hyper-V vSwitch in the OS and completed a transfer 100% successfully, with the traffic running through the same NIC the vSwitch was configured to use. Once I re-enabled the vSwitch and transferred more files, the blue screen came back.
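One way to script that isolation test, assuming a hypothetical external switch named "External" (the vSwitch's host-side adapter name will differ in your environment):

```powershell
# Take the vSwitch's host vNIC out of the path (VMs on the switch lose
# connectivity while it is disabled -- lab/maintenance window only):
Disable-NetAdapter -Name "vEthernet (External)" -Confirm:$false

# ...run the large file copy over the UNC share and watch for the 0x133...

# Put the vSwitch's host vNIC back in the path:
Enable-NetAdapter -Name "vEthernet (External)"
```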


In researching, I found reports of this issue when a Hyper-V host was using NIC Teaming. We don't have any of the NICs on this server teamed, but I figured it wouldn't hurt to apply the latest hotfix that addressed the issue (KB3031598). Even after applying it, I was getting more blue screens. I couldn't find a way to use the updated vmswitch.sys that came with the hotfix (6.3.9600.17714); I tried deleting and recreating the vSwitch, but the old driver (6.3.9600.16384) is what gets applied, and searching the OS for an updated driver doesn't turn up anything. I also can't find any info online about manually updating the driver after applying a hotfix.
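A quick way to confirm which vmswitch.sys binary the OS actually has in place after the hotfix (the WinSxS sweep below is just an assumption about where the staged copy might live):

```powershell
# If this still reports 6.3.9600.16384 after installing KB3031598, the updated
# binary never made it into the drivers directory:
(Get-Item "$env:SystemRoot\System32\drivers\vmswitch.sys").VersionInfo.FileVersion

# Look for any staged copies in the component store (may take a while):
Get-ChildItem "$env:SystemRoot\WinSxS" -Recurse -Filter vmswitch.sys -ErrorAction SilentlyContinue |
    ForEach-Object { "$($_.VersionInfo.FileVersion)  $($_.FullName)" }
```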


I'm fairly certain vmswitch.sys is the issue, but I don't know where to go from here. Are there any NIC or vSwitch settings I can adjust to help with this? Has anyone encountered something similar, or can anyone lend a hand in diagnosing it? I found some good resources on debugging and troubleshooting this further (2 URLs below), but this has gone from "a good learning experience" to "this needs to get done" in the few weeks I've been troubleshooting.


I'm trying to rename a VM, and I've succeeded everywhere except the name in the failover cluster summary: Zsap03 is the name I want, and it appears in the menu and everywhere else, but the virtual machine summary still shows Zsap03V1.
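The cluster group and the "Virtual Machine" cluster resource keep their own names, independent of what Hyper-V Manager shows, which is likely where the leftover Zsap03V1 lives. A sketch using the names from the post (run on a cluster node, and verify in a lab first):

```powershell
# Find the group/resource still carrying the old name:
Get-ClusterGroup | Where-Object { $_.Name -like "*Zsap03*" }
Get-ClusterResource | Where-Object { $_.Name -like "*Zsap03*" }

# The Name property on cluster objects is writable, so the rename is an assignment:
(Get-ClusterResource -Name "Virtual Machine Zsap03V1").Name = "Virtual Machine Zsap03"
```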


I turned on Hyper-V in Windows 2012 Standard so that I could install Windows 8.1 in a VM and test. The first problem I had was that the VM could not get to the outside world; I believe it had a 10.100.1.x address. I changed the filtering setting on the virtual switch so that it could see the outside world and rebooted 8.1. After the reboot it had a proper address and was able to see the outside world. Today, though, most of the computers are getting 10.100.1.x addresses, which means they don't work. I have tried a number of things to no avail, and have just completely uninstalled Hyper-V and rebooted the server; I will be checking shortly to see if that fixed things. Does Hyper-V have its own DHCP server? If so, how do I turn it off? I assume uninstalling will get rid of it, unless it is using the built-in Windows DHCP server in some weird way that is not showing up in the config.


I am trying to enable VMQ on the nodes. Everything is okay, but 4 TMG VMs don't get VMQ for their NLB virtual adapters and lose connectivity through NLB. Other NLB clusters (SharePoint, Exchange, AD RMS, CRL) work as expected. The TMG VMs get VMQ only on the third, non-NLB adapter (TMG intra-array communication).
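To see where the queues are (or aren't) landing, the host-side VMQ state and the per-vNIC settings can be inspected from PowerShell. The VM and adapter names below ("TMG01", "NLB") are hypothetical stand-ins:

```powershell
# Which physical adapters have VMQ enabled, and which queues are allocated:
Get-NetAdapterVmq
Get-NetAdapterVmqQueue

# Per-vNIC VMQ weight on the guest; NLB's MAC handling is a common reason a
# queue is never allocated for that adapter:
Get-VMNetworkAdapter -VMName "TMG01" |
    Select-Object Name, VmqWeight, MacAddressSpoofing

# Setting VmqWeight to 0 excludes a vNIC from VMQ entirely (one workaround):
Set-VMNetworkAdapter -VMName "TMG01" -Name "NLB" -VmqWeight 0
```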


When starting an 8th VM I get the error indicated in the title. Additional message: "The system was unable to create the memory contents file on 'E:\Hyper-V\....xxxx.bin' with the size of 1025 MB." There is plenty of space on all disks, and when I shut down another VM, this VM starts just fine. The same goes for other combinations of VMs; there somehow is a limit I run into at about half the host's physical memory size.

The eventlog has no additional entries related to this error (except the ones I already stated).
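Since the .bin memory-contents file is sized to the VM's RAM, the error often reflects a memory reservation failing rather than disk space. A quick tally of what is already committed to running VMs versus host RAM (Server 2012+ Hyper-V module assumed):

```powershell
# Compare memory assigned to running VMs against physical RAM to see whether
# the ~half-of-physical ceiling lines up with what's already committed:
$assigned = (Get-VM | Where-Object State -eq 'Running' |
             Measure-Object -Property MemoryAssigned -Sum).Sum
$hostRam  = (Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory
'{0:N1} GB assigned to VMs of {1:N1} GB physical' -f ($assigned/1GB), ($hostRam/1GB)
```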


I am experiencing an intermittent issue with the backup of an Exchange 2013 VM. Normally, backups take less than an hour on both the host and the Exchange VM. The host is backed up first, then the Exchange VM several hours later. After a number of days a backup will fail, and the VM's status changes to Running-Critical with the backup still in progress. At this point it appears to create a 4 KB AVHD and becomes inaccessible until that is merged back in. Looking in the event logs I can't see anything in particular that is triggering it, though I may not be looking in the right place. I get event IDs 18190 and 18200 at 5-minute intervals on the host during this period. On the VM, the last events are that the virtual disk service has stopped and that the Exchange databases have been frozen by VSS. This is at the time the host backup takes place.


We do regular reboots every week for our virtual machines, but we've found that sometimes a virtual guest will suddenly stop requesting additional memory. At the time of writing, I have identified 10 machines, on different hosts and even different clusters, which are exhibiting this behaviour. They're also running OSes ranging from Windows 7 to Windows Server 2008 R2 and Windows Server 2012.


Rebooting the virtual machine does not resolve the issue, but live migrating the virtual machine from the host it is on to another host suddenly allows it to demand more memory. One can then live-migrate it back to the original host and the problem is no longer there.




I do not know of any diagnostic tool that can give me more information about the memory status of a server, specifically one that can tell me more about memory demand and why it stops increasing.


Considering that the problem is resolved when I live-migrate the affected machine, it would suggest the issue is with the host; but since live-migrating it back no longer exhibits the issue, that seems to negate the theory. Additionally, the other 103 virtual machines running on this host do not show this issue.
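As a starting point for capturing what "demand stopped" looks like, the host already exposes the dynamic memory figures per VM, both as cmdlet output and as performance counters:

```powershell
# Snapshot of dynamic memory state for every VM on this host:
Get-VM | Sort-Object MemoryDemand -Descending |
    Select-Object Name, State, MemoryAssigned, MemoryDemand, MemoryStatus |
    Format-Table -AutoSize

# The same data over time, via the Hyper-V dynamic memory counters:
Get-Counter '\Hyper-V Dynamic Memory VM(*)\Guest Visible Physical Memory',
            '\Hyper-V Dynamic Memory VM(*)\Current Pressure'
```

Logging the counters on an affected guest before and after the live migration might show whether demand is genuinely flat or just not being honoured.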


We changed the MTU from 1500 to 1472 on the Hyper-V VMs. We are unsure whether we also need to change the MTU on the Hyper-V host server's NICs. If so, please help with the steps we need to follow on the interfaces as well as on the team.
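On the host side, the IP-layer MTU is set per interface with netsh, and the team interface carries its own setting separate from the member NICs. Interface names below are hypothetical:

```powershell
# List current MTUs to identify the interfaces involved (members and team):
netsh interface ipv4 show subinterfaces

# Set the IP-layer MTU on the team's interface; repeat for each relevant one.
# Note this is distinct from the per-NIC jumbo-packet advanced property.
netsh interface ipv4 set subinterface "vEthernet (Team1)" mtu=1472 store=persistent
```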


Hello,



I have two Windows Server 2008 x64 servers running Hyper-V, with about 15 VMs on each server, all managed through Hyper-V Manager. When I try to connect to Hyper-V Manager and manage my VMs, I get "Connecting to Virtual Machine Management Service." This times out after a few minutes, and then Hyper-V Manager displays: The operation on computer 'localhost' failed.
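A common first diagnostic step here (an assumption, not a known fix for this particular case) is to check and restart the Virtual Machine Management Service that the console is timing out against; restarting vmms does not stop running VMs, since each VM runs in its own worker process:

```powershell
# Check and restart the Hyper-V Virtual Machine Management service:
Get-Service vmms
Restart-Service vmms
```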


Why am I seeing both bridged NICs on the new Hyper-V server? I would prefer that the network connection displayed the network to which the server is connected, just like you can see on the current Hyper-V server.


So our networking team asked me what NTP version our domain controllers are serving, and I thought it would be a walk in the park to look up a compatibility list for this, but to my surprise that is nowhere near the truth, it seems.


I have run into vast numbers of this question, and almost all are answered with links on how to set up NTP in your environment, but those do not answer the initial (seemingly) simple question: which NTP version, exactly, is used by which Windows Server version?


But the text used in this article comes from another link, and there the exact text that should point out that NTP version 3 (with parts of NTP version 4) is used has been removed from the article: -us/previous-versions/windows/it-pro/windows-server-2003/cc773013(v=ws.10)?redirectedfrom=MSDN
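What can be checked directly, regardless of the missing documentation, is the Windows Time service's own configuration and status on a DC; the version number actually sent on the wire is also visible in the version field of the NTP header in a packet capture:

```powershell
# Windows Time service configuration, sync status, and peers on this machine:
w32tm /query /configuration
w32tm /query /status
w32tm /query /peers
```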
