[IVSHMEM]: Best way to discover BDF in IRQ handler


Luca Cuomo

Dec 11, 2017, 8:58:47 AM12/11/17
to Jailhouse
Hi,

I'm running the ivshmem-demo with a Linux root cell and a BM cell, which are connected via multiple PCI devices.

LINUX BM

PCI dev (bdf: 00e.0) <-> PCI dev (bdf: 00e.0)
PCI dev (bdf: 00f.0) <-> PCI dev (bdf: 00f.0)

and so on..

The link between the devices is correctly established, and I can see interrupts handled in both cells.

But while in the root cell I can distinguish which device received the IRQ using the UIO driver, in the BM cell the IRQ handler does not provide any such information.

What is the best (clean and fast) way to acquire this information?

Thanks in advance.

Best regards,

--Luca

Jan Kiszka

Dec 11, 2017, 9:47:01 AM12/11/17
to Luca Cuomo, Jailhouse
On 2017-12-11 14:58, Luca Cuomo wrote:
> Hi,
>
> I'm running the ivshmem-demo with a Linux root cell and a BM cell, which are connected via multiple PCI devices.

What is a "BM cell"?

>
> LINUX BM
>
> PCI dev (bdf: 00e.0) <-> PCI dev (bdf: 00e.0)
> PCI dev (bdf: 00f.0) <-> PCI dev (bdf: 00f.0)
>
> and so on..
>
> The link between the devices is correctly established, and I can see interrupts handled in both cells.
>
> But while in the root cell I can distinguish which device received the IRQ using the UIO driver, in the BM cell the IRQ handler does not provide any such information.
>
> What is the best (clean and fast) way to acquire this information?

Which architecture are you on? Depending on that, you either get
separate MSI-X interrupts anyway (x86, and only on a few ARM64 systems)
or need to assign separate legacy INTx to the devices. So there should
be no dispatching problem from that point of view.

Jan

--
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux

Luca Cuomo

Dec 11, 2017, 10:32:33 AM12/11/17
to Jailhouse
On Monday, December 11, 2017 at 15:47:01 UTC+1, J. Kiszka wrote:
> On 2017-12-11 14:58, Luca Cuomo wrote:
> > Hi,
> >
> > I'm running the ivshmem-demo with a Linux root cell and a BM cell, which are connected via multiple PCI devices.
>
> What is a "BM cell"?

Sorry, BARE METAL
>
> >
> > LINUX BM
> >
> > PCI dev (bdf: 00e.0) <-> PCI dev (bdf: 00e.0)
> > PCI dev (bdf: 00f.0) <-> PCI dev (bdf: 00f.0)
> >
> > and so on..
> >
> > The link between the devices is correctly established, and I can see interrupts handled in both cells.
> >
> > But while in the root cell I can distinguish which device received the IRQ using the UIO driver, in the BM cell the IRQ handler does not provide any such information.
> >
> > What is the best (clean and fast) way to acquire this information?
>
> Which architecture are you on? Depending on that, you either get
> separate MSI-X interrupts anyway (x86, and only on few ARM64 systems) or
> need to assign separate legacy INTx to the devices. So there should be
> no dispatching problem from that point of view.
>
Currently I'm on x86, but I'll do the same thing on ARM64.

Henning Schild

Dec 11, 2017, 12:17:51 PM12/11/17
to Luca Cuomo, Jailhouse
On Mon, 11 Dec 2017 07:32:33 -0800,
Luca Cuomo <l.c...@evidence.eu.com> wrote:
OK, you are probably using the simple demo. The demo does allow
multiple devices, and they all use different IRQ vectors, but in the
end they all call the same handler.

Just modify the example to have as many handlers as MAX_NDEV. Each of
these handlers then knows which index to use into devs[MAX_NDEV], and
there you have the bdf.

--
static void irq_handler(int index)
{
	struct ivshmem_dev_data *d = &devs[index];

	printk("IRQ for device with bdf %x\n", d->bdf);
}

static void irq_handler0(void)
{
	irq_handler(0);
}

/* ... one wrapper per device, up to: */

static void irq_handlerN(void)
{
	irq_handler(MAX_NDEV - 1);
}
--

Henning

Jan Kiszka

Dec 11, 2017, 12:18:50 PM12/11/17
to Luca Cuomo, Jailhouse
OK, BM means you need to hard-code things or provide the inmate with
other forms of configuration information to match devices with
connections and to select free IRQ vectors.

Luca Cuomo

Dec 18, 2017, 3:43:50 AM12/18/17
to Jailhouse
OK, first of all thanks for your reply, and I apologize for the delay in answering.
What you say is indeed correct.
But I'm wondering whether a scenario like the one I'm going to describe is feasible (for the moment, on an x86 platform):

* Let's suppose there are several PCI devices which connect the root cell (Linux) with the bare-metal cell.
* Let's suppose that the bare-metal cell can discover how many devices are mapped as ivshmem memories (in the same way this is currently done by ivshmem-demo).
* As you said before, I can install the same routine on different IRQs, and in the end this single routine is the one that is called.
* In the context of this routine, can I discover which device received the interrupt by inspecting the IntrStatus register of each device's ivshmem registers?
* If yes, what is the best way to reach this register? mmio_read32()?

Thanks in advance, again.

--Luca

Jan Kiszka

Dec 18, 2017, 4:20:21 AM12/18/17
to Luca Cuomo, Jailhouse
Henning suggested a pattern to achieve this: install different routines
for the different devices/interrupts and let them call a common
function, possibly with parameters, for the shared work.