qlnativefc driver VMware

Cortney Ruic

Jul 31, 2024, 7:18:13 AM
to techktechtonews

Dell has not verified any firmware after 09.01.07 or any driver above version 4.1.44.0 for QLE256X FC HBAs because this card has been EOL since September 2021.
Dell will not verify drivers and firmware for EOL devices, so they will not be updated on the VMware I/O Compatibility site.

The retail version of the QLogic QLE2560 card supports the latest driver version, 5.3.80.0, which QLogic engineers have confirmed. Hence, if we select QLogic as the vendor on the VMware I/O Compatibility site, the supported qlnativefc driver version listed is 5.3.80.0. However, if we select Dell as the vendor, the site lists OEM firmware version 09.01.07 and qlnativefc driver versions 4.1.14.0, 4.1.22.0, and 4.1.44.0.

From QLogic engineering's perspective, customers can update the QLE256X driver to 5.3.80.0 and, if it behaves normally, keep running it indefinitely. If a problem does occur with the QLE256X driver, it is recommended to downgrade to 4.1.44.0; otherwise VMware Technical Support will consider the configuration outside the HCL and refuse support.
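
Before deciding whether to run 5.3.80.0 or stay on 4.1.44.0, it helps to confirm which qlnativefc version a host is actually running. A minimal check from the ESXi shell (a sketch, assuming SSH access is enabled; output will vary by host):

    # Show the installed qlnativefc driver VIB and its version
    esxcli software vib list | grep -i qlnativefc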

I have HP Systems Insight Manager 7.1 Update 1 installed on a Windows 7 x64 system. I have discovered several VMware ESXi 5.5 hosts running on HP ProLiant DL380 G7 servers. All of these servers were installed using the HP custom .ISO for VMware ESXi 5.5, so they are running the HP agents.

I have WBEM credentials configured and it is correctly reading the hardware, with the exception that it is throwing the following warning/error on all systems under the Health Status --> WBEM header on the System Status page:

! FC HBA

If I click on the FC HBA link it generates the following detailed information:

FC HBA Ports
Port Status | Port Name | Controller ID | Port Type | Port Type Description | Current State
Stopped | Fibre Channel Port 1 | QLogic QLE8152 | Other | Fabric | There is a minor problem that is causing limited interference.
Stopped | Fibre Channel Port 2 | QLogic QLE8152 | Other | Fabric | There is a minor problem that is causing limited interference.

FC Port Statistics
Controller ID | Bytes Transmitted | Bytes Received | Pkts Transmitted | Pkts Received | CRC Errors | Link Failures
QLogic QLE8152 | 0 Bytes | 0 Bytes | 35741347 | 24021341 | 1 | 0
QLogic QLE8152 | 0 Bytes | 0 Bytes | 35741562 | 24021616 | 1 | 0

I can see entries for the following under Storage Adapters in vSphere:

ISP81xx-based 10 GbE FCoE to PCI Express CNA
vmhba1
vmhba2

I assume this is the hardware causing the FC HBA events in HP SIM. I am looking for a way to stop getting these events or ignore them.

Possible solutions:
1. Filter/ignore the FC HBA within HP SIM (it is not an option under "WBEM Health Inclusion Status").
2. Disable the hardware on the VMware ESXi 5.5 server so that it no longer registers with VMware or responds to HP SIM WBEM queries.

Does anyone have steps for either of these, or an alternate solution that stops the FC HBA events?

What appears to be happening, as best I can tell, is that the VMware ESXi kernel is detecting the QLogic QLE8152 10Gb NIC both as an iSCSI NIC device (loading the system driver "qlge" for it) and as a Fibre Channel device (loading the system driver "qlnativefc" for it). HP Systems Insight Manager sees the Fibre Channel driver loaded but does not see any successful connectivity for it, so it generates an alert (similar to what it does for NICs that aren't connected, but with no option in the "WBEM Health Inclusion Status" to ignore it).
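
One quick way to see this from the ESXi shell is to list the loaded ql* modules and check which driver has claimed each vmhba (a sketch; module and adapter names will differ per host):

    # Show the ql* kernel modules and whether they are loaded/enabled
    esxcli system module list | grep -i ql

    # List each storage adapter (vmhba) and the driver that claimed it
    esxcfg-scsidevs -a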

7. Re-establish an SSH connection after the host reboots and verify that the qlnativefc system driver is both disabled and did not load by running the same list command from step 3. Output should look something like:

NOTE: Not entirely sure why it started loading a new qla2xxx driver after I disabled the qlnativefc driver...could be a lower detect order driver in the kernel that matches the same hardware id string as the qlnativefc driver. In any event, having it load did not cause the same HP SIM alerts for "FC HBA".
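
For reference, the disable-and-verify sequence referred to in steps 3 and 7 looks roughly like this from an SSH session (a sketch only; confirm the module names on your own host before disabling anything):

    # Disable the native QLogic FC driver so it no longer binds to the QLE8152
    esxcli system module set --enabled=false --module=qlnativefc

    # Reboot the host, then confirm the module is disabled and did not load
    esxcli system module list | grep -i ql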

We had a similar problem today, but only one FC adapter on one host (out of 16 identical ESXi hosts, each with a dual port "ISP2532-based 8Gb Fibre Channel to PCI Express HBA") reported this error "a minor problem that is causing limited interference".

A reboot of the ESXi host fixed the problem for now, but that is not a solution if it happens again, especially because the vSphere Client showed NO alarm (hardware status all green) in this case! Only HP SIM reported the disabled adapter!

NPIV is the mechanism by which a Fibre Channel switch port can advertise multiple port WWPNs for the same Fibre Channel connection. This allows a single switch port to represent both the WWPN of an ESXi host port and a virtualized WWPN assigned directly to a VM. Cisco UCS also does this, with the physical ports of the edge device representing the different virtual WWPNs of the different virtual HBAs on the service profiles provisioned on the Cisco hardware.

This is useful when you want to identify a particular VM's Fibre Channel traffic by its own unique WWPN. Also, with Hitachi Server Priority Manager (QoS), you have to assign QoS limits by selecting a WWPN to assign a quota to. Without NPIV, the best you could do is apply a limit to the ESXi server's HBA, thereby limiting ALL VMs on the ESXi host.

We use VirtualWisdom, a Fibre Channel performance and health monitoring software package made by Virtual Instruments. With NPIV, we can now trace IOPS, MB/s, and latency right to the VM as opposed to the ESXi HBA port.

Connect your RDM to it at this time. When we are done, the VM should be accessing your RDM via your NPIV-generated WWPNs. If this fails for some reason, it will fall back on the WWPNs of the ESXi host HBAs. Remember, it will ONLY see RDMs this way, not .vmdk disks sitting in a datastore. Those ALWAYS go through the ESXi host WWPNs.
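
As a sanity check, the WWNs that vCenter generates for the VM are recorded in the VM's .vmx file. A hedged example of what to look for (the path is a placeholder and the wwn.* key names are from my recollection of VMware's NPIV documentation, so verify against your own .vmx):

    # Placeholder path; substitute your datastore and VM folder
    grep -i "wwn" /vmfs/volumes/<datastore>/<vm-name>/<vm-name>.vmx
    # You should see the generated node/port WWN entries (e.g. wwn.node and wwn.port)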

There are a number of ways to verify NPIV is working. I used my VirtualInstruments software to verify it could see the NPIV WWPN. I created a Host entity and added the two NPIV WWPNs as HBA_port objects to it, then graphed it:

Translated, this means VMK_NOT_FOUND. Basically, it means no LUN paths could be found via the NPIV WWPNs. In my case, this was due to a bad driver. On my Dell PowerEdge 710/720 servers, I had to install the qla2xxx driver as opposed to the qlnativefc driver to get NPIV to work. I have a separate post forthcoming that details this procedure.

Basically, you can see it creates vmhba64 (This is your virtual NPIV adapter. The number after vmhba varies). It tries to scan for your RDM LUN (Target 3, LUN id 1) and fails. After several retries, it gives up and deletes the vmhba.

The lines that start with GID_PT show the target ports that the NPIV WWPN sees (this is a separate discovery process from the one the ESXi HBA performs). Notice it only sees two of the target ports.
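
If you want to watch this discovery happen yourself, the relevant messages appear in the vmkernel log while the VM powers on (a sketch; the exact message text varies by driver and ESXi version):

    # Follow the vmkernel log and filter for NPIV/vport/fabric-discovery activity
    tail -f /var/log/vmkernel.log | grep -iE 'npiv|vport|gid_pt'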

Once I saw this, I traced the extra WWPNs back to a storage system I realized I was connected to and removed it from the zoning of the ESXi HBA ports. After a rescan of the storage on the ESXi cluster and a reboot of the hosts to be safe, I booted the VM up again and voila!
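
For reference, the rescan can also be kicked off per host from the ESXi shell (a sketch; rescanning every adapter on a busy host can briefly add I/O latency):

    # Rescan all storage adapters on this host
    esxcli storage core adapter rescan --all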

Make sure you are meeting all of the requirements for NPIV. VERIFY you are zoning your NPIV WWPNs to ALL STORAGE SYSTEMS the cluster can see, even if the VM will not use any storage from those systems. Be aware that if you connect any new storage systems to a cluster with one or more NPIV-enabled VMs, you will need to add the new zones so the NPIV WWPNs can see the new storage system target ports, or it will probably start failing.

This is really great info. One question: does the VM involved need to *only* have RDM-mapped drives? Or can it be a mix of regular storage mapped through the hosts (say the OS drive) and the needed drives (SQL data, index, log, and quorum drives) mapped as RDMs?

Are virtual WWNs somehow bound to a physical HBA, or can the WWNs jump from one HBA to another after a VM or host reboot?
Can WWNs be bound to a specific HBA, for example when you have 4 HBAs but want to use only a specific 2?

We are running vSphere 6.0u2 and I am connected to two fabrics, all current-model Cisco switches that are up to date. In this environment we are going 100% virtualized. Most of our Windows SQL clusters are being implemented via Always On, so they are not an issue; however, we have one cluster that needs to be implemented using traditional shared-disk clustering. I was going to implement this with RDMs; however, I was debating using NPIV instead. Please correct me if I am wrong, but using NPIV, can I not just bypass vSphere altogether and go straight to the SAN? I should be able to do the zoning using the WWPNs from the NPIV adapters and connect them directly to storage, thereby presenting LUNs to the VM without having to present them to vSphere first. If I have to first present them as RDMs to the VM, then that kind of defeats the purpose in my case.

Yeah, I was hoping it would be similar to iSCSI. I assumed that if I just zoned the LUN/SAN to both the physical HBA and the virtual HBA that I would then be able to present it to the VM and work with it as if it were zoned to a physical host. I may have to look into FCoE for possible future projects then to see if it can be done that way.

Thanks for the brief description. I also have a question related to the article about tape library FC drives: I have been having an issue integrating a tape library with a virtualized server via a SAN switch.
1. Server: Dell R930 rack server
2. Virtualization: ESXi 6.5U2, with vCenter 6.5 as centralized management
3. SAN switch: Brocade 3000
4. Tape library: Dell TL2000
Also, I have DellEMC NetWorker in my environment. If you can find any documents on the topics at hand, please share.
