HP NC552SFP Driver

Muredac Ford

Aug 5, 2024, 9:07:33 AM
to arverewa
Indeed, it is possible. The Rock 5B allows you to plug an M.2 M-key 2280 SATA controller in directly. Modules based on the ASM1166 and JMB585 chips are easy to find. No specific drivers are required; all of these modules work out of the box with every Armbian or Debian/Ubuntu CLI release I have tested.

This works fine with software RAID, and I am about to finalize a NAS (including a surveillance station) based on a Rock 5B board in a Fractal Design Node 304 NAS case. In the end it is not really cheaper than a low-power x86 ITX board, but it is perfectly possible!
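For anyone setting up something similar, here is a minimal sketch of what the software RAID side can look like with mdadm; the /dev/sd[a-d] device names and the RAID level are assumptions, so adjust them to whatever your controller actually enumerates:

    # create a RAID5 array across four SATA disks on the M.2 controller (device names assumed)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    # put a filesystem on the array and persist its definition (Debian/Ubuntu config path)
    mkfs.ext4 /dev/md0
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf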


Fibre Channel over Ethernet (FCoE) software has been removed from Red Hat Enterprise Linux 8. Specifically, the fcoe.ko kernel module is no longer available for creating software FCoE interfaces over Ethernet adapters and drivers. This change is due to a lack of industry adoption for software-managed FCoE.


In Red Hat Enterprise Linux 8, the e1000 network driver is not supported. This affects both bare metal and virtual environments. However, the newer e1000e network driver continues to be fully supported in RHEL 8.
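If you are not sure which of the two drivers a given adapter is using, a quick check from the shell will tell you (the interface name eno1 is just an example):

    # "driver: e1000e" in the output means you are on the still-supported driver
    ethtool -i eno1
    # or list every Ethernet adapter with the kernel driver currently bound to it
    lspci -nnk | grep -iA3 ethernet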


With this update, the tulip network driver is no longer supported. As a consequence, when using RHEL 8 on a Generation 1 virtual machine (VM) on the Microsoft Hyper-V hypervisor, the "Legacy Network Adapter" device does not work, which causes PXE installation of such VMs to fail.


Going by the drivers on this page for it ( =5154775), there is a 64-bit driver available for Red Hat, which is Linux-based, so I think it should work as long as you have a PCIe x8 slot available (or x4, since it is backwards compatible). Wait for someone else to confirm, but that is my thought process on it.


Thanks Connor Moloney for the link. Looking at the OS support page, it looks like it will work as you said. I think I will have to try it. Do you know if I can have both the onboard motherboard 1Gb connection and the PCIe 10GbE running at the same time?
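For what it's worth, nothing special is normally needed to run the onboard 1GbE port and a PCIe 10GbE card at the same time; they simply appear as separate interfaces. A quick way to check that both are up (the interface names below are placeholders):

    ip -br link                      # lists every interface and its state
    ethtool eno1 | grep Speed        # onboard port, should report 1000Mb/s
    ethtool enp3s0 | grep Speed      # 10GbE card, should report 10000Mb/s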


Many devices outside the HCL in fact work very well with XCP-ng. Being outside the HCL simply means that no tests have been run to ensure that they work. Most hardware support depends on the Linux kernel, so support for hardware outside the HCL depends on how well the drivers are supported by the Linux kernel included in XCP-ng.


There are several USB 5Gbps NICs based on this chipset available on the market. A dedicated kernel module is available to add XCP-ng support for (supposedly) all NICs based on the Marvell (originally Aquantia) AQC111U over USB 3. It should not be confused with the generic AQC111 driver, which supports the whole family of NICs based on the AQC111 chipset but NOT the ones connected over USB 3. This kernel module provides support only for AQC111U-based NICs.
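Once the module package is installed, it is worth confirming that the USB NIC is actually driven by it rather than by a generic driver. The module name and interface name below are assumptions; check the package documentation for the exact module name:

    lsmod | grep -i aqc111           # the dedicated module should appear once loaded
    ethtool -i eth1                  # the "driver:" line shows which module claims the NIC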


Although AQC111U-based adapters support the IEEE 802.3bz standard (AKA 5GBASE-T) and will correctly negotiate a 5Gbps link with compatible peripherals, the actual bandwidth will not exceed about 3.5Gbps due to the overhead of encapsulating the Ethernet protocol over the 5Gbps USB 3.0 (AKA USB 3.1 Gen 1) connection.
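A simple way to see that ceiling in practice is an iperf3 run across the 5GBASE-T link, assuming iperf3 is available on both ends; the address below is a placeholder:

    iperf3 -s                        # on the machine at the far end of the link
    iperf3 -c 192.168.1.10 -t 30     # on the host with the USB NIC; expect roughly 3.5Gbit/s rather than 5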


Upgrades using the installation ISO will not retain the alternate driver package, so remember to re-install it after the upgrade if it's still needed (the main driver may have been updated too, which could make the alternate driver unnecessary in your case).


Upgrades using the yum method will retain the alternate driver package, unless we stop providing it (usually, because the main driver will have been updated too). If the alternate driver is retained, it may have changed versions, so you may still need to consider going back to the main driver. If after an upgrade no driver works correctly for your system anymore, open a bug report.
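After a yum upgrade it is easy to check what you ended up with. Alternate driver packages typically carry an "-alt" suffix in their name, so something like the following shows whether one is still installed and which module version would actually be loaded (the ixgbe module is only an example):

    yum list installed | grep -- '-alt'      # any alternate driver packages still present
    modinfo ixgbe | grep -i ^version         # version of the module the kernel would load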


Additional kernel modules are a lot like alternate drivers (most of the above section applies to them) except that they don't replace an existing driver from the system. They add a new one that didn't exist at all.


We provide an "alternate Linux kernel" on XCP-ng 8.0 and above, named kernel-alt. Like the main kernel it is kernel 4.19, but with all updates from the Linux 4.19 branch applied. By construction it should therefore be stable. However, it receives less testing, so we cannot fully guarantee against regressions (we would of course work on a fix quickly for any detected regression). We also backport security fixes from the main kernel to the alternate kernel when needed.


This will boot the installer with the alternate kernel and also install the alternate kernel on the system in addition to the main one (but will not make the alternate kernel the default boot option for the installed system).


This will install the kernel and add a grub boot menu entry to boot from it. It will not default to it unless you change the default in grub.cfg (/boot/grub/grub.cfg or /boot/efi/EFI/xenserver/grub.cfg depending on whether you are in BIOS mode or UEFI mode).
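For reference, installing it and pointing grub at it looks roughly like this; the exact wording of the default line in grub.cfg can vary, so locate it first rather than assuming an entry index:

    yum install kernel-alt
    grep -n 'menuentry\|default' /boot/grub/grub.cfg   # find the kernel-alt entry and the default setting
    # then edit the default line to select the kernel-alt entry; on UEFI systems
    # the file is /boot/efi/EFI/xenserver/grub.cfg instead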


This "Fix" was a no go for us. We have 20 servers with 2 10Gb NC523SFPs and we continue to have an issue with random NIC flapping. One of the 2 nics will randomly lose connection to the switch for about 2 seconds, long enough to trigger that network redundancy was lost. We are running DL380 G7s and we have two separate datacenters with 10 Esx hosts in each one. We even tried putting two NC522SFPs in a server and when it "flapped" the only way I could get the network connection to stop bouncing was to reboot the server.


I can believe that. I know a number of customers still have ongoing problems with their NC522 and NC523 NICs and are still experiencing some disconnections, although much less frequently and more minor on the whole. The real solution would be to change to NICs that don't have these problems. I use the Intel X520 dual-port cards and have not had any issues, but in a lot of cases it may not be possible to change. I was made aware recently of another firmware revision for these cards; I'm not sure about the driver. So it's even more important with these cards to keep up to date with firmware and drivers, but really these problems should not happen with equipment that has been tested and certified to work.


I tried going to support.hp.com, but the drivers are not accessible that way. Instead, go to and then, in the search box at the top right of the page, enter the card that you are looking for. A list where you can select your operating system should come up. Click "VMware ESX/ESXi 4.1" and download the HP QLogic P2P Flash Update Kit. You have to create a CD/DVD to boot off of, and it will patch the firmware from that.


Hi Simon, The customers I've helped with issues related to this advisory got the firmware and drivers directly from HP support (and had HP techs update them). It is a bit of a process to go through when doing the update, though, as you have to boot from CD or USB stick. HP definitely does support the servers if they are models HP has certified and that are on the VMware HCL. Your colleagues in NZ IT experienced exactly the same issue and I helped them resolve it.


Hi Simon, Just so you are aware, there are a bunch of other BIOS settings that are recommended in addition to the drivers and firmware. It's recommended that you configure static high-performance power management and enhanced cooling. One of the main reasons for this is that the cards overheat under load, which is one of the problems. With these settings and the new drivers/firmware the symptoms should be greatly reduced, although I'm not 100% sure it's a complete fix.


HP techs have confirmed to me that for some customers their firmware and drivers fixed the flapping, but for others it continues. Our solution was to replace our NC523SFP cards with Intel X520-DA2 cards. We have had no more issues and are now able to move on with our migration plans.


HP does have some techs collecting firmware dumps from the network cards to try to capture what is causing the problem, but we had already replaced the NICs when HP sent this to me. When calling them, make sure you ask about this.


Hi Simon, Thanks for your update today and for the link to the download for the service pack ( _packs/en/index.html), which I have included in the post now. I hope this fixes the problems. Please post a comment back here if it is successful.


I notice HP haven't updated the advisory I linked to above, which still suggests people go to QLogic for the firmware. That is strange considering that the QLogic firmware hangs on these servers (our 6-core CPU builds, anyway), while the HP SPP is bootable and does update the firmware successfully.


Hi J, thanks for the feedback; it's unfortunate how many customers have been impacted by these problems. The first release of vSphere 5 from HP had issues with the Emulex OneConnect NICs (mainly in blade infrastructures). This has been fixed in the latest patch releases. For a while the OneConnect wasn't on the HCL and wasn't supported by HP. So when the time is right to upgrade to ESXi 5, it would pay to run a test for a while with the OneConnect cards and make sure you use the latest patch release.


We have never had reliable networking with these servers (ESX 4.1, ESXi 4.1, ESXi 5.0), and it is a constant battle between ESXi versions and updates, driver updates, firmware updates and hope, just to keep things running. The only reliable way to ensure network connectivity is to reboot the servers every 4 weeks or so, before the network cards poop themselves.
