HP Ethernet 10Gb 2-port 560FLR-SFP+ Adapter Driver Download


Mark Tracy

Aug 5, 2024, 2:13:26 PM
to difounlige
NIC resetting has been occurring as far back as I can trace, and each reset generates one of the following warnings in the System Event Log. The resets have occurred during general operations and during Hyper-V Live Migrations of VMs. The most recent one occurred on 5/17/2020 at 8:57 AM, during a Live Migration of VMs to the node on which the NIC reset occurred.

The network interface "HPE Ethernet 10Gb 2-port 560FLR-SFP+ Adapter #2" has begun resetting. There will be a momentary disruption in network connectivity while the hardware resets.

Reason: The network driver did not respond to an OID request in a timely fashion.


The network interface "HPE Ethernet 10Gb 2-port 562FLR-SFP+ Adapter" has begun resetting. There will be a momentary disruption in network connectivity while the hardware resets.

Reason: The network driver detected that its hardware has stopped responding to commands.


This is what happens: the VMMS service becomes so broken that I have to shut down every VM on the affected node. I then try to restart the node, but it gets stuck shutting down and I have to force a power-off; when it comes back up, the VMs move to a different node and start again. Not good.


I continued to have 252 events and 10400 NIC resets, particularly during live migrations, after switching to a converged networking model. I decided to move the live migration traffic to a separate team of NICs to keep live migrations from pushing Hyper-V into an unusable state. NIC resets during live migration have stopped since I made that change in May. My HPE engineer also recommended setting the "Maximum Number of RSS Queues" to a higher value to help alleviate the 252 events.
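
For anyone wanting to try the same tweak, a setting like that can be changed per adapter from PowerShell. This is only a sketch; the adapter name and the value of 8 are examples, and the exact display name can vary by driver:

    # Check the current RSS queue count for the adapter (adapter name is an example)
    Get-NetAdapterAdvancedProperty -Name "SLOT 2 Port 1" -DisplayName "Maximum Number of RSS Queues"

    # Raise it to a higher value (8 is only an example; valid values depend on the driver)
    Set-NetAdapterAdvancedProperty -Name "SLOT 2 Port 1" -DisplayName "Maximum Number of RSS Queues" -DisplayValue "8"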


On 8/24 at 2:55 PM on Node 4, one of the 10Gb NICs in the team used by the Hyper-V VM switch reset (a 10400 event). No issues occurred with Hyper-V or the cluster because it was only one NIC of the team. One thing to note: this was the first day of classes for the fall semester on our campus.


As you can see from the list of events below, I continued to have some 252 and 10400 events, but they did not break the Hyper-V Virtual Machine Management service until 9/23/20. On that day, two nodes of the cluster, Virtualsrv3 and Virtualsrv4, experienced NIC resets on both 10Gb NICs of the NIC team used by the Hyper-V VM switch.
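
For reference, a list like that can be pulled from the System log with PowerShell; just a quick sketch, filtering only on the two event IDs discussed here:

    # Pull the NIC reset (10400) and related 252 warnings from the System event log
    Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 252, 10400 } |
        Select-Object TimeCreated, Id, ProviderName, Message |
        Sort-Object TimeCreated |
        Format-Table -AutoSize -Wrap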


I have had support cases open with Microsoft and HPE, but no one has been able to find out why this continues to happen. Microsoft said to increase the "Receive Buffers" NIC setting from 512 to 2048, but that did not help either.
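
For completeness, that change can be made the same way as the RSS tweak above; again only a sketch, with the adapter name as an example (the display name can vary by driver):

    # Bump the receive buffers from the default 512 to 2048 (adapter name is an example)
    Set-NetAdapterAdvancedProperty -Name "SLOT 2 Port 1" -DisplayName "Receive Buffers" -DisplayValue "2048"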


I had a case open with Microsoft last year for this and it didn't solve anything, so I am hesitant to open another one at $330/hour when many hours would be spent without much progress; this is a very complicated issue and no fix has ever been posted for it. That is why I currently have a case open with our hardware vendor, in hopes something will come out of that. I posted here hoping that someone who has encountered this will respond with some helpful insight. I have a lot to share with anyone who can help with this situation.


Monarch, have you been able to get any information as to why this is happening? I am also experiencing the exact same issues using Intel servers and Intel X710 network adapters. I have not been able to figure out what is causing this.


Ken, Monarch,

We have exactly the same issue, with Intel X722 and Windows Server 2019, in almost the same infrastructure. Did you find any solution?

We are going crazy because it seems to happen only when a VM is live migrated but not every time.

Any reply is appreciated


Dell's advice also included disabling PROSet, which we didn't have installed. The full advice from Dell is below:

To implement the workaround, run the following commands on each cluster node (as admin):


The first command disables Intel DMIX, also known as Intel PROSet. The network driver continues to function; only the PROSet feature is disabled. Due to its parameters, this command will return an error if run from a PowerShell prompt but will run correctly from a command prompt.


We had the exact same issue with a PowerEdge R640, Intel X710, and Windows Server 2019 cluster. I worked with Dell for about two months trying to get the issue resolved before the root cause was determined. Paul B.'s instructions above from Dell will give you a workaround for the issue. There was no mention of this issue in any Intel tech notes. However, I was told by Dell that the issue was supposed to be resolved in an upcoming driver release. That was in 9/22. I've found no mention of this fix in any driver release notes, so I can't say whether it still exists with the current driver version.


I've been working cross-silo to troubleshoot a packet loss issue and had a quick question. From the "esxcli network nic stats get -n vmnic" command we're seeing "Receive Over Errors." Does anyone know what this term means? My bread and butter is network engineering and I've never come across it. I've looked all over the internet and can't find an answer.
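
For context, this is the kind of thing we're running to collect that counter from every uplink; a minimal sketch from an ESXi shell (it assumes the usual two header lines in the esxcli network nic list output):

    # Dump the "over errors" counters for every physical NIC on the host
    for nic in $(esxcli network nic list | awk 'NR>2 {print $1}'); do
        echo "== $nic =="
        esxcli network nic stats get -n "$nic" | grep -i "over errors"
    done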


I am trying to calculate the best balance between HA and DR with VADC. End users will have the View Client configured to connect to one and only one virtual desktop. That desktop has a mapped relationship with a physical phone on the user's desk. I am using the View Connection Broker only as a management platform so I can use instant clones to save time and space. My initial thought was to have half of the desktops running in DC1 and the other half in DC2, with a disabled pool in each that mirrors the other. That way, in case of a failure in one DC, the entire recovery effort would be to enable the mirror pool in the surviving DC; the instant clones would build out quickly, everyone could reconnect, and at most half of the workforce would be affected during such an event.


I thought I'd post a small "FYI" note for those interested in Log Insight. I've created some free blueprints and shared them with the community on VMware Code that automate the installation of the Log Insight agent for Linux and Windows in VMs deployed from vRealize Automation. One nice thing about these blueprints is that they download the agent directly from your Log Insight environment, so there is no need to stage the installation binaries somewhere in your estate. Find the download links below.


I am trying to get benchmark results for VMware View 7 and I am facing the same issue people have been reporting since 2013: the PDF report does not generate results. When running the report script manually on the View Planner appliance, it will only generate reports when local mode is specified, even though all tests are run in remote mode, as is required for benchmarking. Yes, the setup passes the compliance check and everything is set up as described in the View Planner documentation.


One thing I am trying to look into is that the View Planner agent shows a failed status for the pre-install check, even though it appears to install just fine. There is no install log, although the documentation suggests there should be one. There are no installation requirements listed in the manual either, so there is nowhere to check what the pre-install check is actually looking for.


I was working on retiring a few datastores in my ESXi 5.5 cluster to repurpose them, and noticed that one datastore has a few folders with names of servers that no longer exist or have been renamed and migrated. When I browse these folders, there is one file in each, always named imcf-. They are all the same size (728.76 KB). What are these files, and can they be deleted?


The customer's leasing contract will expire and the old (actually overkill) hardware will be taken away. Right now, on the current hardware, they have 3 ESXi hosts and approximately 10-20 VMs. There is a vCenter Server VM and a Veeam server VM.


I have been running macOS Sierra successfully for quite a while. Since I keep getting macOS updates every now and then, I tried to update to the new Workstation version, and since then I am not able to log in. When I click on macOS Sierra, I keep getting a black screen. Here is my screenshot:


The last time I set up an entire vSphere infrastructure was at my previous company, back in June 2012. I used a Dell EqualLogic PS4110XV 10Gb array and Dell PowerEdge servers with 10Gb SFP+ adapters. I remember that the original ESXi installer CD downloaded from VMware did not include drivers for the 10Gb SFP+ adapters, and I had to download the Dell-customized vSphere 5.1 installer to make them work.


I will soon be joining a new company that has 3 brand new HP DL380 Gen9 servers, but they are not configured for virtualization. That means I will need to purchase an extra processor, upgrade the RAM to 64GB, and add an HP Ethernet 10Gb 2-port 560FLR-SFP+ Adapter to each of the 3 servers. We will be using the Essentials Plus license for these hosts.


My question is whether I also need to download and use the HP-customized vSphere 6.5 installer to get these HP servers working. Does anyone have experience using the DL380 Gen9 with the HP Ethernet 10Gb 2-port 560FLR-SFP+ Adapter on vSphere 6.5? I am particularly worried about the driver for the HP Ethernet 10Gb 2-port 560FLR-SFP+ Adapter in the vSphere 6.5 installer.
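
If it helps anyone answering, this is how I plan to verify which driver and firmware the adapter actually ends up with once a host is built; just a sketch, with vmnic0 as an example name:

    # List the physical NICs and the driver bound to each
    esxcli network nic list

    # Show driver and firmware details for one adapter (vmnic0 is an example)
    esxcli network nic get -n vmnic0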


As a new joiner, I do not want to propose something that does not work and cause the new company to lose confidence in my abilities. As I've said, my previous company used only Dell PowerEdge servers, not HP servers.


I will also be proposing 2 x Dell Networking X4012 switches for the 10Gb SFP+ connections. I understand that the X4012 does not have stacking features, unlike the 8024F, which I've previously used in stacking mode, but the budget does not allow for more, so we have to make do with the X4012. If I do not stack the X4012 switches, will it still work? Does anyone know whether the X4012 supports jumbo frames at MTU 9000?
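
Whatever the switch side supports, I understand the MTU also has to be raised on the vSphere side; a small sketch with esxcli, where the vSwitch and vmkernel names are only examples:

    # Raise the MTU on a standard vSwitch (vSwitch1 is an example name)
    esxcli network vswitch standard set -v vSwitch1 -m 9000

    # Raise the MTU on the vmkernel interface carrying the jumbo-frame traffic (vmk1 is an example name)
    esxcli network ip interface set -i vmk1 -m 9000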


I have configured LACP on one of my hosts connected to a Cisco SG300-20 (firmware 1.4.1.3) in L3 mode. Everything on both sides looks good. However, the virtual machine is not able to ping the default gateway on the SG300 nor is the SG300 able to ping the IP of the virtual machine. I know the IP configuration of the virtual machine is good because when I move it to a standard port group on the same VLAN all is good.
