There was some chatter a while back regarding MSI support:
One might hope that this would enable MSI-X support as well, but apparently not. My guess is that someone added a kludge to permit writes to the MSI enable bits in a device's PCI config space, but neglected to add the corresponding kludge for MSI-X.
I find that I need to flag the Mellanox as 'permissive', or else errors appear in the dom0 console log about attempts to write to PCI config space. Even once I do so, I am still unable to load the device driver for the ConnectX-4 Lx (mlx5_core): it fails when it attempts to allocate the set of MSI-X vectors, sized according to the number of online CPUs.
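For reference, here is how I set permissive mode, via xen-pciback's sysfs node in dom0. A minimal sketch; the BDF 0000:03:00.0 is a placeholder for your own device, which must already be bound to pciback:

    /* Mark a pciback-owned device as permissive, so config-space
     * writes from the guest are allowed through. Run as root in dom0.
     * The BDF 0000:03:00.0 is a placeholder -- substitute your own. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/bus/pci/drivers/pciback/permissive", "w");
        if (!f) {
            perror("pciback permissive");
            return 1;
        }
        fprintf(f, "0000:03:00.0\n");
        fclose(f);
        return 0;
    }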
The driver assumes MSI-X is available. There is no fallback to MSI, and no fallback to legacy INTx (INTA/INTB) line interrupts.
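For contrast, the generic kernel API does support exactly that kind of fallback. A sketch of what a driver could do; this is not what mlx5_core does, it is the pattern it lacks:

    /* Fallback IRQ allocation using the kernel's generic helper.
     * PCI_IRQ_ALL_TYPES tells the PCI core to try MSI-X first, then
     * MSI, then a single legacy INTx line. */
    #include <linux/pci.h>

    static int example_setup_irqs(struct pci_dev *pdev, unsigned int want)
    {
        int nvec = pci_alloc_irq_vectors(pdev, 1, want, PCI_IRQ_ALL_TYPES);

        if (nvec < 0)
            return nvec;          /* nothing available at all */

        /* nvec may be fewer than 'want'; size the queue set to match
         * rather than failing outright. */
        return nvec;
    }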
Are there some other magic knobs I need to tweak?
MSI-X and Xen do raise some interesting issues. I would like the option of spinning up a domU with, say, 20 vCPUs and, knowing that the Mellanox assigns queues to MSI-X vectors and that I can assign those vectors to CPUs, have dom0 pin each vCPU to a physical CPU so that the interrupt mapping works correctly. This would be for some network performance work I have to do occasionally.
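The dom0 half of that is just xl vcpu-pin. Inside the guest, steering a given MSI-X vector to a given vCPU is a write to the per-IRQ affinity node; a minimal sketch, where the IRQ number and CPU are placeholders, not real mlx5 vector numbers:

    /* Steer one IRQ (e.g. an mlx5 completion vector) to one CPU by
     * writing its smp_affinity_list. IRQ 140 and CPU 3 are
     * placeholders. */
    #include <stdio.h>

    static int set_irq_affinity(int irq, int cpu)
    {
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity_list", irq);
        f = fopen(path, "w");
        if (!f)
            return -1;
        fprintf(f, "%d\n", cpu);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        return set_irq_affinity(140, 3) ? 1 : 0;
    }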
I am also keen to enable the VF devices on the adapter (using some domU instance to enable them) so that those VFs can be passed through to other domU instances. I also want to see whether I can get hardware offload and OVS working in Qubes. Just for fun.
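On bare metal, at least, turning the VFs on is just a write to the PF's sriov_numvfs node; whether that works when the PF is mapped into a domU is part of what I want to find out. A sketch, where the BDF and VF count are placeholders:

    /* Enable 4 VFs on a PF via its sriov_numvfs sysfs node. Must run
     * in whichever domain owns the PF and its driver. The BDF
     * 0000:03:00.0 and the count of 4 are placeholders. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs", "w");
        if (!f) {
            perror("sriov_numvfs");
            return 1;
        }
        fprintf(f, "4\n");
        fclose(f);
        return 0;
    }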
Q: if a domU kernel enables VF devices on a PF that has been passed through to it, will the dom0 kernel discover the VF devices? I.e., what is the mechanism by which a kernel learns that it needs to re-walk the bus?
This has to work correctly for Xen, no?
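My understanding (which may be wrong) is that on Linux there is no spontaneous bus walk: VFs come into existence when the PF driver calls pci_enable_sriov(), which synthesizes the VF pci_dev entries in that same kernel. Roughly:

    /* The VFs appear in whichever kernel runs the PF driver, because
     * that driver calls pci_enable_sriov(); nothing prompts any other
     * kernel to re-enumerate on its own. */
    #include <linux/pci.h>

    static int example_enable_vfs(struct pci_dev *pf, int num_vfs)
    {
        return pci_enable_sriov(pf, num_vfs);
    }

A kernel that did not make that call would only notice the VFs after an explicit rescan (echoing 1 into /sys/bus/pci/rescan), and whether dom0 can usefully do that for VFs enabled behind pciback is exactly the question.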