Running ivshmem-demo in Jetson TK1.


hari....@gmail.com

Apr 18, 2017, 12:49:31 AM
to Jailhouse
Hi,
I have managed to bring up Jailhouse and create a non-root Linux cell on the Jetson TK1. I would like to run the ivshmem-demo on the Jetson TK1, but it is only available for the x86 architecture. What can I do to run it on the Jetson TK1?

Regards,
Harikrishnan

Ralf Ramsauer

Apr 18, 2017, 5:30:37 AM
to hari....@gmail.com, Jailhouse
Hi,

you can boot Linux with the ivshmem-net driver in your non-root cell.
Find the ivshmem-net driver for Linux here [1]. It will run on ARM.

Ralf

[1]
http://git.kiszka.org/?p=linux.git;a=shortlog;h=refs/heads/queues/jailhouse

Hari Krishnan

Apr 18, 2017, 8:41:15 AM
to Jailhouse
Hi,
Thanks for the reply.

I am a newbie and am still unclear on how exactly to run the ivshmem-demo available in inmates/demos/x86.

According to the documentation for inter-cell communication, "You can go ahead and connect two non-root cells and run the ivshmem-demo. They will send each other interrupts".
1) Do we need two non-root cells, or can this work with a root cell and a non-root cell?

2) If it is possible to communicate between a root cell and a non-root cell, how can we "connect" these two cells?

3) How should I run ivshmem-demo?
For example, for the ARM uart-demo I had a uart-demo.bin file, which I loaded into a non-root cell, jetson-demo.cell.
How can I proceed to run the ivshmem-demo on the Jetson TK1? Could you help me in a more comprehensive manner?

Regards,
Harikrishnan

Jan Kiszka

Apr 18, 2017, 9:09:57 AM
to Hari Krishnan, Jailhouse
On 2017-04-18 14:41, Hari Krishnan wrote:
> Hi,
> Thanks for the reply.
>
> I am a newbie and am still unclear of how to exactly run ivshmem-demo which is available in inmates->demos->x86.
>
> According to the documentation for inter-cell communication, "You can go ahead and connect two non-root cells and run the ivshmem-demo. They will send each other interrupts".
> 1) Do we need two non root cells or can this work with a root cell and a non root cell?

The root cell is also a cell, so, yes.

>
> 2) If it is possible to communicate between a root cell and a non root cell, how can we " connect" these two cells?

Look at the existing configs/, e.g. for the qemu-vm.c and the
linux-x86-demo.c. They both contain the config fragments (PCI device,
memory region) to establish a cell-to-cell link. A more complex scenario
(multiple links) can be found under configs/zynqmp-zcu102.c.

>
> 3) How should I run ivshmem-demo?
> As in, for a demo uart-demo for arm, I had a uart-demo.bin file which I had loaded in a non root cell,jetson-demo.cell.
> How can I proceed to run ivshmem-demo on Jetson TK1? Could you help me in a more comprehensive manner?

Before going into details here, let me ask you what your goals are: Is
the purpose to understand the details or more to achieve a certain
functionality? Do you want to establish a low-level ivshmem link between
the root cell and some bare-metal or a non-Linux OS in a non-root cell?
Or are you looking for ivshmem-net, a network link over ivshmem?

Documentation of ivshmem is in flux because the whole interface is in
flux. There is, e.g., a branch wip/ivshmem2 which contains some more
modifications to the virtual PCI device but also a specification of the
same [1]. Not yet set in stone, though.

Jan

[1]
https://github.com/siemens/jailhouse/blob/wip/ivshmem2/Documentation/ivshmem-v2-specification.md

--
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux

Hari Krishnan

Apr 20, 2017, 12:19:44 PM
to Jailhouse, hari....@gmail.com
> Before going into details here, let me ask you what your goals are: Is
> the purpose to understand the details or more to achieve a certain
> functionality? Do you want to establish a low-level ivshmem link between
> the root cell and some bare-metal or a non-Linux OS in a non-root cell?
> Or are you looking for ivshmem-net, a network link over ivshmem?

Hi Jan,

Thanks again for the reply.
Although I would like to implement specific functionality later, I am currently trying to establish a low-level ivshmem link between the root cell and a non-root cell. I want to see an interrupt sent from the root cell received and acknowledged by the other cell, and a second interrupt sent back to the root cell and acknowledged in turn. I believe this is what the ivshmem-demo is written for. I have been able to run the uart-demo on the Jetson TK1, but I am having difficulty with the ivshmem-demo. Could you help me establish a low-level ivshmem link between two cells on the Jetson TK1 and run the ivshmem-demo? Does the ivshmem-demo work on the ARM architecture? What modifications should I make to run it on the Jetson TK1?

Regards,
Harikrishnan

Jan Kiszka

Apr 21, 2017, 1:56:03 AM
to Hari Krishnan, Jailhouse
OK, understood, learning by doing - makes quite some sense.

There will probably be some details to sort out left and right, but the
basic steps should be like this:

- replicate the ivshmem-demo from x86 to inmates/demos/arm and hook it
up in the Makefile

- resolve build issues, maybe provide missing implementations for
inmates/lib/arm

- augment the jetson-tk1-demo.c config with an ivshmem device and a
  shared mem region, using configs/jetson-tk1-linux-demo.c as a
  reference

- check out https://github.com/henning-schild/ivshmem-guest-code,
validate on x86 that it still works as described in ivshmem-guest-
code/README.jailhouse (if not, report and/or fix)

- make ivshmem-guest-code build for arm, specifically the pieces
described in README.jailhouse

And just ask if you run into trouble.

Jan

Hari Krishnan

Apr 26, 2017, 1:30:26 PM
to Jailhouse, hari....@gmail.com
Hi,
Thanks for the reply.
So as you said,
1) I've augmented the jetson-tk1-demo.c config with an ivshmem device and a
shared mem region, using configs/jetson-tk1-linux-demo.c as a reference.

2) I replicated the ivshmem-demo from x86 to inmates/demos/arm and hooked it up in the Makefile.
I tried to cross-compile it and encountered a few errors.
From what I've observed, the errors are mainly about the PCI-related functions.
How can I proceed with this?
Please find the error log attached.
Thanks and regards,
Harikrishnan

error_log.txt

Henning Schild

Apr 27, 2017, 4:58:47 AM
to Hari Krishnan, Jailhouse
On Wed, 26 Apr 2017 10:30:25 -0700, Hari Krishnan <hari....@gmail.com> wrote:
As Jan said, you will have to move the pci library into the ARM inmate
as well. You will basically need a version of inmates/lib/x86/pci.c
that uses mmio instead of pio. So you will have to change the two
functions pci_(read|write)_config to use mmio.
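A rough sketch of what the mmio-based variants could look like. This is a hosted illustration, not the actual Jailhouse inmate-library code: the mmconfig base is passed in as a parameter (in a real inmate it would be a pointer derived from the root cell's .pci_mmconfig_base), and the helper names are assumptions.

```c
#include <stdint.h>

/* Hypothetical sketch of ECAM-style config-space access, replacing the
 * pio version in inmates/lib/x86/pci.c. Each bus:device.function
 * (16-bit bdf) owns one 4 KiB page of config space. */
static inline uintptr_t pci_cfg_offset(uint16_t bdf, unsigned int addr)
{
    return ((uintptr_t)bdf << 12) | addr;
}

static uint32_t pci_read_config(volatile uint8_t *mmcfg, uint16_t bdf,
                                unsigned int addr, unsigned int size)
{
    volatile void *p = mmcfg + pci_cfg_offset(bdf, addr);

    /* Keep the access aligned and sized exactly as requested. */
    switch (size) {
    case 1: return *(volatile uint8_t *)p;
    case 2: return *(volatile uint16_t *)p;
    default: return *(volatile uint32_t *)p;
    }
}

static void pci_write_config(volatile uint8_t *mmcfg, uint16_t bdf,
                             unsigned int addr, uint32_t value,
                             unsigned int size)
{
    volatile void *p = mmcfg + pci_cfg_offset(bdf, addr);

    switch (size) {
    case 1: *(volatile uint8_t *)p = value; break;
    case 2: *(volatile uint16_t *)p = value; break;
    default: *(volatile uint32_t *)p = value; break;
    }
}
```

The key difference from the x86 pio version is that there is no address/data port pair; the bdf and register offset select a location directly in the memory-mapped window.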

Henning

> Thanks and regards,
> Harikrishnan
>

jonas

Apr 27, 2017, 10:44:57 AM
to Jailhouse, hari....@gmail.com, henning...@siemens.com
> > Hi,
> > Thanks for the reply.
> > So as you said,
> > 1)I've augmented the jetson-tk1-demo.c config with an ivshmem device
> > and a shared mem region, using to configs/jetson-tk1-linux-demo.c as
> > reference.
> >
> > 2)I replicated the ivshmem-demo from x86 to inmates/demos/arm and
> > hooked it up in the Makefile I tried to cross compile the same and I
> > have encountered a few errors. From what I've observed, the errors
> > are mainly regarding the pci related functions. How can I proceed
> > with this? PFA the error log.
>
> As Jan said, you will have to move the pci library into the ARM inmate
> as well. You will basically need a version of inmates/lib/x86/pci.c
> that uses mmio instead of pio. So you will have to change the two
> functions pci_(read|write)_config to use mmio.
>
> Henning
>

Hi,

I'm also experimenting with ivshmem between the root cell and a bare-metal cell. In my case, however, on a Banana Pi M1.

Could you elaborate on modifying the functions pci_(read|write)_config to use mmio instead of pio?

I guess it's a matter of accessing the appropriate memory-mapped PCI configuration space of the (virtual) PCI devices available to the guest/inmate instead of accessing PCI_REG_ADDR_PORT and PCI_REG_DATA_PORT using the (out|in)[bwl] functions?

Best regards - Jonas Weståker

Henning Schild

Apr 27, 2017, 11:30:40 AM
to jonas, Jailhouse, hari....@gmail.com
On Thu, 27 Apr 2017 07:44:56 -0700, jonas <jo...@retotech.se> wrote:
Exactly: mmio = memory-mapped I/O, pio = port I/O (in|out). The outs and
ins will not work; instead, the whole config space will be in physical
memory. The location can be found in the root-cell configuration field
.pci_mmconfig_base.
Some more information can be found here:
http://wiki.osdev.org/PCI

The method currently implemented is called method #1 on that wiki. Make
sure to keep your access aligned with the size that is requested.

Code that is similar to what you will need can be found in the
hypervisor. hypervisor/pci.c include/jailhouse/mmio.h

Henning

Jan Kiszka

Apr 28, 2017, 2:28:12 AM
to Henning Schild, jonas, Jailhouse, hari....@gmail.com
And as this base address is different for each board, and we do not have
a device tree parser in our inmate library yet, I would suggest making
this value an inmate command-line parameter for now.
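As a minimal sketch of this idea: parse the base address out of the inmate command line. The Jailhouse inmate lib has its own cmdline helpers (treat the exact name, e.g. cmdline_parse_int, as an assumption); this standalone helper just shows the parsing logic.

```c
#include <stdlib.h>
#include <string.h>

/* Pull an unsigned value out of an inmate command line such as
 * "pci_mmconfig_base=0x48000000". Parameter name and format are
 * illustrative, not the actual Jailhouse convention. */
static unsigned long parse_ul_param(const char *cmdline, const char *name,
                                    unsigned long def)
{
    size_t len = strlen(name);
    const char *p = cmdline;

    while (p && *p) {
        if (strncmp(p, name, len) == 0 && p[len] == '=')
            return strtoul(p + len + 1, NULL, 0); /* base 0: accepts 0x... */
        p = strchr(p, ' ');
        if (p)
            p++; /* skip the separator, try the next parameter */
    }
    return def; /* fall back if the parameter is absent */
}
```

The inmate would call this once at startup and use the result as the mmconfig base pointer instead of a hard-coded, board-specific constant.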

Jan

> Some more information can be found here.
> http://wiki.osdev.org/PCI
>
> The method currently implemented is called method #1 on that wiki. Make
> sure to keep your access aligned with the size that is requested.
>
> Code that is similar to what you will need can be found in the
> hypervisor. hypervisor/pci.c include/jailhouse/mmio.h
>
> Henning
>
>
>> Best regards - Jonas Weståker
>>
>

Jonas Westaker

May 2, 2017, 11:35:25 AM
to Jailhouse, jo...@retotech.se, hari....@gmail.com, henning...@siemens.com
> > Hi,
> >
> > I'm also experimenting with ivshmem between the root-cell and a bare
> > metal cell. In my case, however, on BananaPi M1.
> >
> > Could you elaborate on modifying the functions
> > pci_(read|write)_config to use mmio instead of pio?
> >
> > I guess it's a matter of accessing the appropriate memory mapped PCI
> > configuration space of the (virtual) PCI devices available to the
> > guest/inmate instead of accessing PCI_REG_ADDR_PORT and
> > PCI_REG_DATA_PORT using functions(out|in)[bwl]?
>
> Exactly mmio = memory mapped IO, pio = port IO (in|out). The outs and
> ins will not work, instead the whole config space will be in physical
> memory. The location can be found in the root-cell configuration
> .pci_mmconfig_base.
> Some more information can be found here.
> http://wiki.osdev.org/PCI
>
> The method currently implemented is called method #1 on that wiki. Make
> sure to keep your access aligned with the size that is requested.
>
> Code that is similar to what you will need can be found in the
> hypervisor. hypervisor/pci.c include/jailhouse/mmio.h
>
> Henning
>
>
> > Best regards - Jonas Weståker
> >

Thanks for the fast response.
I've got a bit further in porting ivshmem-demo.c from x86 to arm, but a few new questions arise:
When scanning the configuration area of the (virtual) PCI device, the following is reported: "IVSHMEM ERROR: device is not MSI-X capable". Is this a problem?

jailhouse/inmates/lib/x86/mem.c:map_range() is used to map the IVSHMEM region and registers. Got any pointers to code doing the equivalent for ARM?

What is the expected behaviour when accessing unmapped memory in an inmate?

(E.g., I can see the inmate/cell gets shut down when touching memory outside .pci_mmconfig_base + 0x100000):
# Unhandled data read at 0x2100000(2)
FATAL: unhandled trap (exception class 0x24)
pc=0x00000ff4 cpsr=0x60000153 hsr=0x93400006
r0=0x00001834 r1=0x0000000d r2=0x00000000 r3=0x00006ed1
r4=0x02100000 r5=0x00000000 r6=0x00000002 r7=0x0000ffff
r8=0x00001000 r9=0x00000000 r10=0x00000000 r11=0x00000000
r12=0x00000000 r13=0x00006f80 r14=0x00000fc4
Parking CPU 1 (Cell: "ivshmem-demo")

What memory areas are made available by Jailhouse for a cell/inmate to access?

BR - Jonas

Jan Kiszka

May 2, 2017, 12:12:04 PM
to Jonas Westaker, Jailhouse, jo...@retotech.se, hari....@gmail.com, henning...@siemens.com
The demo was written with the assumption there is always MSI-X for
ivshmem interrupts. However, we only have this on ARM when there is also
a gic-v2m MSI controller physically available. That is not the case on
the Jetson.

We then fall back to line-based interrupts (INTx). The demo needs to be
extended in this regard. You will probably have to hard-code the GIC
interrupt number as well because the demos have no device tree support.
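The INTx extension Jan describes could look roughly like this. The GIC helper names (gic_setup, gic_enable_irq) and the hard-coded SPI number are assumptions standing in for whatever the ARM inmate lib and the gic-demo actually provide; they are stubbed here so the control flow can be checked off-target.

```c
/* Sketch: instead of MSI-X, register a handler for a hard-coded GIC
 * SPI and let the ivshmem device raise a line-based interrupt. */

#define IVSHMEM_IRQ 124 /* hard-coded SPI; board-specific assumption */

static volatile int irq_counter;

static void ivshmem_irq_handler(unsigned int irqn)
{
    if (irqn == IVSHMEM_IRQ)
        irq_counter++; /* the real demo would ack and ping back here */
}

/* Stubs standing in for the inmate lib's GIC support (see gic-demo). */
typedef void (*irq_handler_t)(unsigned int);
static irq_handler_t registered_handler;

static void gic_setup(irq_handler_t handler) { registered_handler = handler; }
static void gic_enable_irq(unsigned int irqn) { (void)irqn; }

static void inmate_init_irq(void)
{
    gic_setup(ivshmem_irq_handler);  /* where x86 does int_set_handler */
    gic_enable_irq(IVSHMEM_IRQ);
}
```

On real hardware the stubs would be replaced by the inmate lib's GIC distributor programming, and the SPI number would come from the cell config (or a command-line parameter) rather than a #define.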

>
> jailhouse/inmates/lib/x86/mem.c:map_range() is used to map the IVSHMEM region and registers. Got any pointers to code doing the equivalent for ARM?
>
> What is the expected behaviour when accessing unmapped memory in an inmate?
>
> (E.g., I can see the inmate/cell gets shut down when touching memory outside .pci_mmconfig_base + 0x100000):
> # Unhandled data read at 0x2100000(2)
> FATAL: unhandled trap (exception class 0x24)
> pc=0x00000ff4 cpsr=0x60000153 hsr=0x93400006
> r0=0x00001834 r1=0x0000000d r2=0x00000000 r3=0x00006ed1
> r4=0x02100000 r5=0x00000000 r6=0x00000002 r7=0x0000ffff
> r8=0x00001000 r9=0x00000000 r10=0x00000000 r11=0x00000000
> r12=0x00000000 r13=0x00006f80 r14=0x00000fc4
> Parking CPU 1 (Cell: "ivshmem-demo")

That is the expected behaviour: stop the CPU that performed the invalid
access.

>
> What memory areas are made available by Jailhouse for a cell/inmate to access?

On ARM, the GICV (as GICC) and everything you list in the config.

Henning Schild

May 3, 2017, 9:13:07 AM
to Jonas Westaker, Jailhouse, jo...@retotech.se, hari....@gmail.com
On Tue, 2 May 2017 08:35:25 -0700, Jonas Westaker <jonas.w...@gmail.com> wrote:
If you see that, the example will not do anything. Your PCI access code
might still not work. You can remove that sanity check to provoke more
accesses.

Does the rest of the output look like the pci-code is reading sane
values?
What did you set num_msix_vectors to?

> jailhouse/inmates/lib/x86/mem.c:map_range() is used to map the
> IVSHMEM region and registers. Got any pointers to code doing the
> equivalent for ARM?

I think on ARM the inmates run without paging, so the implementation
would be empty.

> What is the expected behaviour when accessing unmapped memory in an
> inmate?

As I said, I think you are running on physical memory, so everything is visible.

> (E.g., I can see the inmate/cell gets shut down when touching memory
> outside .pci_mmconfig_base + 0x100000): # Unhandled data read at
> 0x2100000(2) FATAL: unhandled trap (exception class 0x24)
> pc=0x00000ff4 cpsr=0x60000153 hsr=0x93400006
> r0=0x00001834 r1=0x0000000d r2=0x00000000 r3=0x00006ed1
> r4=0x02100000 r5=0x00000000 r6=0x00000002 r7=0x0000ffff
> r8=0x00001000 r9=0x00000000 r10=0x00000000 r11=0x00000000
> r12=0x00000000 r13=0x00006f80 r14=0x00000fc4
> Parking CPU 1 (Cell: "ivshmem-demo")

This is an access outside of memory that the hypervisor gave to the
cell.

> What memory areas are made available by Jailhouse for a cell/inmate
> to access?

They are described in the cell config. The virtual PCI bus is special,
though: only the base is in the config, and the size is calculated.
From pci_init in hypervisor/pci.c you can see the 0x100000; it is
1*256*4096
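Spelling out that arithmetic (constants named here for illustration only):

```c
/* The virtual PCI mmconfig window Henning derives: one bus, 256
 * device/function slots per bus, one 4 KiB config page per slot. */
#define MMCFG_BUSES         1
#define MMCFG_DEVFN_PER_BUS 256
#define MMCFG_PAGE          4096

enum { MMCFG_SIZE = MMCFG_BUSES * MMCFG_DEVFN_PER_BUS * MMCFG_PAGE };
/* MMCFG_SIZE == 0x100000, matching the trap at base + 0x100000 above */
```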

> BR - Jonas

jonas

May 5, 2017, 9:02:11 AM
to Jailhouse, jo...@retotech.se, hari....@gmail.com, henning...@siemens.com
Yes, I commented out the 'return;' after the printk.

> Does the rest of the output look like the pci-code is reading sane
> values?

IVSHMEM: Found 1af4:1110 at 00:00.0
IVSHMEM ERROR: device is not MSI-X capable
IVSHMEM: shmem is at 0x7bf00000
IVSHMEM: bar0 is at 0x7c000000
IVSHMEM: bar2 is at 0x7c004000
IVSHMEM: mapped shmem and bars, got position 0x00000001
IVSHMEM: Enabled IRQ:0x20
IVSHMEM: Vector set for PCI MSI-X.
IVSHMEM: 00:00.0 sending IRQ
IVSHMEM: waiting for interrupt.

> What did you set num_msix_vectors to?
>

'.num_msix_vectors = 1,'

> > jailhouse/inmates/lib/x86/mem.c:map_range() is used to map the
> > IVSHMEM region and registers. Got any pointers to code doing the
> > equivalent for ARM?
>
> I think on ARM the inmates run without paging, so the implementation
> would be empty.
>

OK. That simplifies/explains things... I commented out the call to 'map_pages()' as well.

> > What is the expected behaviour when accessing unmapped memory in an
> > inmate?
>
> As i said, i think you are running on physical so everything is visible.
>
> > (E.g., I can see the inmate/cell gets shut down when touching memory
> > outside .pci_mmconfig_base + 0x100000): # Unhandled data read at
> > 0x2100000(2) FATAL: unhandled trap (exception class 0x24)
> > pc=0x00000ff4 cpsr=0x60000153 hsr=0x93400006
> > r0=0x00001834 r1=0x0000000d r2=0x00000000 r3=0x00006ed1
> > r4=0x02100000 r5=0x00000000 r6=0x00000002 r7=0x0000ffff
> > r8=0x00001000 r9=0x00000000 r10=0x00000000 r11=0x00000000
> > r12=0x00000000 r13=0x00006f80 r14=0x00000fc4
> > Parking CPU 1 (Cell: "ivshmem-demo")
>
> This is an access outside of memory that the hypervisor gave to the
> cell.
>
> > What memory areas are made available by Jailhouse for a cell/inmate
> > to access?
>
> They are described in the cell config, however the virtual PCI bus is
> special there only the base is in the config and the size is calculated.
> From hypervisor/pci.c pci_init you can see the 0x100000, it is
> 1*256*4096
>

Actually, I think I spotted a bug here. In inmates/lib/pci.c:find_pci_device() there is a loop 'for (bdf = start_bdf; bdf < 0x10000; bdf++)', which will touch memory outside PCI_CFG_BASE_ADDR + 0x100000, hence the unhandled trap. Changing the loop to 'for (bdf = start_bdf; bdf < 0x1000; bdf++)' fixes the problem (0x1000 == 4096).

Why does this work on x86? Are bigger pages used by the hypervisor to map the PCI configuration area?

BR - Jonas

jonas

May 5, 2017, 9:08:45 AM
to Jailhouse, jonas.w...@gmail.com, jo...@retotech.se, hari....@gmail.com, henning...@siemens.com
I'm on BPi-M1, but as far as I've understood, it has a gic-v2 (Allwinner A20, 2* Cortex-A7).

> We then fall back to line-based interrupts (INTx). The demo needs to be
> extended in this regard. You will probably have to hard-code the GIC
> interrupt number as well because the demos have no device tree support.
>

I guess a command-line argument would be the way to go for this, as well as for the base address of the PCI configuration area, as you suggested earlier.

> >
> > jailhouse/inmates/lib/x86/mem.c:map_range() is used to map the IVSHMEM region and registers. Got any pointers to code doing the equivalent for ARM?
> >
> > What is the expected behaviour when accessing unmapped memory in an inmate?
> >
> > (E.g., I can see the inmate/cell gets shut down when touching memory outside .pci_mmconfig_base + 0x100000):
> > # Unhandled data read at 0x2100000(2)
> > FATAL: unhandled trap (exception class 0x24)
> > pc=0x00000ff4 cpsr=0x60000153 hsr=0x93400006
> > r0=0x00001834 r1=0x0000000d r2=0x00000000 r3=0x00006ed1
> > r4=0x02100000 r5=0x00000000 r6=0x00000002 r7=0x0000ffff
> > r8=0x00001000 r9=0x00000000 r10=0x00000000 r11=0x00000000
> > r12=0x00000000 r13=0x00006f80 r14=0x00000fc4
> > Parking CPU 1 (Cell: "ivshmem-demo")
>
> That is the expected behaviour: stop the CPU that performed the invalid
> access.
>
> >
> > What memory areas are made available by Jailhouse for a cell/inmate to access?
>
> On ARM, the GICV (as GICC) and everything you list in the config.
>
> Jan
> --
> Siemens AG, Corporate Technology, CT RDA ITP SES-DE
> Corporate Competence Center Embedded Linux

BR - Jonas

Jan Kiszka

May 5, 2017, 12:25:53 PM
to jonas, Jailhouse, hari....@gmail.com, henning...@siemens.com
Needs to be 0 for INTx operation.
On x86, the full mmconfig space is always accessible. On ARM, you need
to check what platform_info.pci_mmconfig_end_bus is set to. When we
emulate PCI, we keep it at 0, i.e. a single bus. The inmate lib is not
yet aware of such restrictions.
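A sketch of how the scan loop could respect that restriction instead of walking the full 0x10000 bdf space. The read callback is a stand-in for a config-space read of the vendor/device ID; the function shape is illustrative, not the actual inmate-lib signature.

```c
#include <stdint.h>

typedef uint32_t (*read_id_fn)(uint16_t bdf);

/* Bound the scan by the number of emulated buses
 * (pci_mmconfig_end_bus) instead of the full bdf space. */
static int find_pci_device(uint32_t id, uint8_t end_bus, read_id_fn read_id)
{
    /* bdf = bus(8) | dev(5) | fn(3): one bus covers 256 bdf values,
     * i.e. exactly the 0x100000-byte window (256 * 4096). */
    uint32_t max_bdf = ((uint32_t)end_bus + 1) << 8;

    for (uint32_t bdf = 0; bdf < max_bdf; bdf++)
        if (read_id((uint16_t)bdf) == id)
            return (int)bdf;
    return -1;
}

/* Example fake reader for exercising the scan off-target:
 * reports 1af4:1110 at 00:00.0 and nothing elsewhere. */
static uint32_t example_read_id(uint16_t bdf)
{
    return bdf == 0 ? 0x11101af4u : 0xffffffffu;
}
```

With end_bus = 0 the loop never leaves the single emulated bus, so it never touches memory past base + 0x100000 and never triggers the unhandled trap Jonas hit.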

jonas

May 8, 2017, 5:46:14 PM
to Jailhouse, jo...@retotech.se, hari....@gmail.com, henning...@siemens.com
> Needs to be 0 for INTx operation.

OK, when I remove '.num_msix_vectors = 1' from the root cell configuration, I can see the following in '/var/log/messages':
[ 69.760313] PCI host bridge //vpci@0 ranges:
[ 69.764807] MEM 0x02100000..0x02101fff -> 0x02100000
[ 69.774477] pci-host-generic 2000000.vpci: PCI host bridge to bus 0000:00
[ 69.781428] pci_bus 0000:00: root bus resource [bus 00]
[ 69.786705] pci_bus 0000:00: root bus resource [mem 0x02100000-0x02101fff]
[ 69.793830] pci_bus 0000:00: scanning bus
[ 69.794718] pci 0000:00:00.0: [1af4:1110] type 00 class 0xff0000
[ 69.794815] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x000000ff 64bit]
[ 69.794981] pci 0000:00:00.0: calling pci_fixup_ide_bases+0x0/0x50
[ 69.797231] pci_bus 0000:00: fixups for bus
[ 69.797283] PCI: bus0: Fast back to back transfers disabled
[ 69.803007] pci_bus 0000:00: bus scan returning with max=00
[ 69.803343] pci 0000:00:00.0: fixup irq: got 124
[ 69.803363] pci 0000:00:00.0: assigning IRQ 124
[ 69.803433] pci 0000:00:00.0: BAR 0: assigned [mem 0x02100000-0x021000ff 64bit]
[ 69.813181] uio_ivshmem 0000:00:00.0: enabling device (0000 -> 0002)
[ 69.819791] uio_ivshmem 0000:00:00.0: using jailhouse mode
[ 69.825511] uio_ivshmem 0000:00:00.0: regular IRQs
[ 69.836988] The Jailhouse is opening.

How does this IRQ number correlate to the INTx I should be using when generating interrupts from the bare-metal inmate to the root-cell?

Does the uio_ivshmem driver take care of generating interrupts from the root-cell to the bare metal cell, or do I need to modify this as well?

Slightly confused - Jonas

Henning Schild

May 16, 2017, 10:54:35 AM
to jonas, Jailhouse, hari....@gmail.com
On Mon, 8 May 2017 14:46:14 -0700, jonas <jo...@retotech.se> wrote:

> > Needs to be 0 for INTx operation.
>
> OK, when I remove '.num_msix_vectors = 1' from the root cell
> configuration, I can see the following in '/var/log/messages':
> [ 69.760313] PCI host bridge //vpci@0 ranges:
> [ 69.764807]   MEM 0x02100000..0x02101fff -> 0x02100000
> [ 69.774477] pci-host-generic 2000000.vpci: PCI host bridge to bus 0000:00
> [ 69.781428] pci_bus 0000:00: root bus resource [bus 00]
> [ 69.786705] pci_bus 0000:00: root bus resource [mem 0x02100000-0x02101fff]
> [ 69.793830] pci_bus 0000:00: scanning bus
> [ 69.794718] pci 0000:00:00.0: [1af4:1110] type 00 class 0xff0000
> [ 69.794815] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x000000ff 64bit]
> [ 69.794981] pci 0000:00:00.0: calling pci_fixup_ide_bases+0x0/0x50
> [ 69.797231] pci_bus 0000:00: fixups for bus
> [ 69.797283] PCI: bus0: Fast back to back transfers disabled
> [ 69.803007] pci_bus 0000:00: bus scan returning with max=00
> [ 69.803343] pci 0000:00:00.0: fixup irq: got 124
> [ 69.803363] pci 0000:00:00.0: assigning IRQ 124
> [ 69.803433] pci 0000:00:00.0: BAR 0: assigned [mem 0x02100000-0x021000ff 64bit]
> [ 69.813181] uio_ivshmem 0000:00:00.0: enabling device (0000 -> 0002)
> [ 69.819791] uio_ivshmem 0000:00:00.0: using jailhouse mode
> [ 69.825511] uio_ivshmem 0000:00:00.0: regular IRQs
> [ 69.836988] The Jailhouse is opening.
>
> How does this IRQ number correlate to the INTx I should be using when
> generating interrupts from the bare-metal inmate to the root-cell?

You do not need to know the number; the uio driver knows it. And the
bare-metal inmate does not need to know it either, since it just writes
to a register to trigger it.
It looks like it is working. After loading the driver you should see a
new entry in /proc/interrupts, and when the inmate runs you should see
the counter going up.
Getting an IRQ sent to the inmate will be trickier: you will need to
program the GIC where the x86 code does "int_set_handler". The gic-demo
should give a clue.

> Does the uio_ivshmem driver take care of generating interrupts from
> the root-cell to the bare metal cell, or do I need to modify this as
> well?

The uio driver does not actually do anything. It just makes the
resources of the "hardware" visible to userland. I suggest you have a
look at the Jailhouse-specific README:
https://github.com/henning-schild/ivshmem-guest-code/blob/jailhouse/README.jailhouse
If you did not come across this file yet, you might be on the wrong
branch of ivshmem-guest-code.

Henning

> Slightly confused - Jonas

jonas

May 17, 2017, 5:13:24 AM
to Jailhouse, jo...@retotech.se, henning...@siemens.com
On Tuesday, 16 May 2017 at 16:54:35 UTC+2, Henning Schild wrote:
> You do not need to know the number, the uio-driver knows it. And the
> bare metal inmate does not need to know it since it is just writing to
> a register to trigger it.
> It looks like it is working. After loading the driver you should see a
> new entry in /proc/interrupts. And when the inmate runs you should see
> the counter going up.

Unfortunately not (just yet...). I've commented out the part where the bare-metal ivshmem-demo inmate writes to the IO-mapped ivshmem register of the virtual PCI device. The last thing I see in the inmate terminal window (after adding the printout prior to writing to the ivshmem register area) is:
IVSHMEM: 00:00.0 sending IRQ (by writing to 0x7c00000c)

In the terminal window of the Linux root-cell I see:
FATAL: Invalid ivshmem register read, number 04
FATAL: forbidden access (exception class 0x24)
pc=0xbf00b018 cpsr=0x600c0193 hsr=0x93800006
r0=0x0000007c r1=0xdd4f3600 r2=0x00010001 r3=0xdf948000
r4=0xc08d0000 r5=0xdd144290 r6=0xc0959325 r7=0x0000007c
r8=0x0000007c r9=0xc08a3a40 r10=0x00000000 r11=0xc08d1e0c
r12=0xc08d1e10 r13=0xc08d1e00 r14=0xc03d4dfc
Parking CPU 0 (Cell: "Banana-Pi")

If I comment out the line in the bare-metal inmate where the register is written (in ivshmem_demo.c:send_irq(), mmio_write32(d->registers + 3, 1);), all seems to be well, and I am able to verify from within the root cell that the shared memory has been updated by the bare-metal inmate. I've also been able to verify that the contents of the shared memory area are picked up by the bare-metal inmate. No interrupts from the inmate to the root cell, though (of course).

Since I'm able to access the virtual PCI device register area using mmio_read32() from the inmate, it looks like the area has not been mapped for write access (by Jailhouse)? Am I missing some PCI device configuration entry?

I tried to find where the FATAL:-printouts come from and found traces to jailhouse/hypervisor/ivshmem.c:ivshmem_register_mmio() and jailhouse/hypervisor/arch/arm/traps.c:arch_handle_trap(). I don't know what to do with this information at the moment. Is it possible to dump some call-stack from the hypervisor when fatal errors occur?

> Getting an IRQ sent to the inmate will be more tricky, you will need to
> program the GIC where the x86 code does "int_set_handler". The gic-demo
> should give a clue.

Yep, I've started looking at this example. Thanks for verifying that this is the way forward.

>
> > Does the uio_ivshmem driver take care of generating interrupts from
> > the root-cell to the bare metal cell, or do I need to modify this as
> > well?
>
> The uio-driver does not actually do anything. It just makes the
> ressources of the "hardware" visible to userland. I suggest you have a
> look at the jailhouse specific README.
> https://github.com/henning-schild/ivshmem-guest-code/blob/jailhouse/README.jailhouse
> If you did not come across this file yet you might be on the wrong
> branch of ivshmem-guest-code.

I've seen it. I'm on the jailhouse branch of ivshmem-guest-code.

Thanks - Jonas

jonas

May 17, 2017, 6:27:33 AM
to Jailhouse, jo...@retotech.se, henning...@siemens.com
On Wednesday, 17 May 2017 at 11:13:24 UTC+2, jonas wrote:
> On Tuesday, 16 May 2017 at 16:54:35 UTC+2, Henning Schild wrote:
> > You do not need to know the number, the uio-driver knows it. And the
> > bare metal inmate does not need to know it since it is just writing to
> > a register to trigger it.
> > It looks like it is working. After loading the driver you should see a
> > new entry in /proc/interrupts. And when the inmate runs you should see
> > the counter going up.
>
> Unfortunately not (just yet...). I've commented out the part where the bare-metal ivshmem-demo inmate writes to the IO-mapped ivshmem register of the virtual PCI device. The last thing I see in the inmate terminal window (after adding the printout prior to writing to the ivshmem register area) is:
> IVSHMEM: 00:00.0 sending IRQ (by writing to 0x7c00000c)
>
> In the terminal window of the Linux root-cell I see:
> FATAL: Invalid ivshmem register read, number 04
> FATAL: forbidden access (exception class 0x24)
> pc=0xbf00b018 cpsr=0x600c0193 hsr=0x93800006
> r0=0x0000007c r1=0xdd4f3600 r2=0x00010001 r3=0xdf948000
> r4=0xc08d0000 r5=0xdd144290 r6=0xc0959325 r7=0x0000007c
> r8=0x0000007c r9=0xc08a3a40 r10=0x00000000 r11=0xc08d1e0c
> r12=0xc08d1e10 r13=0xc08d1e00 r14=0xc03d4dfc
> Parking CPU 0 (Cell: "Banana-Pi")
>
> If i comment out the line in the bare-metal inmate where the register is written (in ivshmem_demo.c:send_irq(), mmio_write32(d->registers + 3, 1);), all seems to be well and I am able to verify that the shared memory has been updated by the bare-metal inmate from within the root cell. I've also been able to verify that the contents of the shared memory area is picked up by the bare-metal inmate. No interrupts from the inmate to the root cell though (of course).
>
> Since I'm able to access the virtual PCI device register area using mmio_read32() from the inmate, it looks like the area has not been mapped for write access (by Jailhouse)? Am I missing some PCI device configuration entry?
>
> I tried to find where the FATAL:-printouts come from and found traces to jailhouse/hypervisor/ivshmem.c:ivshmem_register_mmio() and jailhouse/hypervisor/arch/arm/traps.c:arch_handle_trap(). I don't know what to do with this information at the moment. Is it possible to dump some call-stack from the hypervisor when fatal errors occur?

Hmm... Does the access originate from the root-cell, since the root cell is being parked, not the (non-root) cell in which the bare-metal inmate is running?

Can I turn on some more debugging info in the hypervisor?

/Jonas

Henning Schild

May 17, 2017, 6:44:07 AM
to jonas, Jailhouse
On Wed, 17 May 2017 02:13:24 -0700, jonas <jo...@retotech.se> wrote:

> On Tuesday, 16 May 2017 at 16:54:35 UTC+2, Henning Schild wrote:
> > You do not need to know the number, the uio-driver knows it. And the
> > bare metal inmate does not need to know it since it is just writing
> > to a register to trigger it.
> > It looks like it is working. After loading the driver you should
> > see a new entry in /proc/interrupts. And when the inmate runs you
> > should see the counter going up.
>
> Unfortunately not (just yet...). I've commented out the part where
> the bare-metal ivshmem-demo inmate writes to the IO-mapped ivshmem
> register of the virtual PCI device. The last thing I see in the
> inmate terminal window (after adding the printout prior to writing to
> the ivshmem register area) is: IVSHMEM: 00:00.0 sending IRQ (by
> writing to 0x7c00000c)
>
> In the terminal window of the Linux root-cell I see:
> FATAL: Invalid ivshmem register read, number 04
> FATAL: forbidden access (exception class 0x24)
> pc=0xbf00b018 cpsr=0x600c0193 hsr=0x93800006
> r0=0x0000007c r1=0xdd4f3600 r2=0x00010001 r3=0xdf948000
> r4=0xc08d0000 r5=0xdd144290 r6=0xc0959325 r7=0x0000007c
> r8=0x0000007c r9=0xc08a3a40 r10=0x00000000 r11=0xc08d1e0c
> r12=0xc08d1e10 r13=0xc08d1e00
> r14=0xc03d4dfc Parking CPU 0 (Cell: "Banana-Pi")

Seems like the INTx path was never really tested with the uio driver. I
think the problem is caused by the interrupt handler
ivshmem_handler in uio_ivshmem.c.
It is trying to read the IntrStatus register, which Jailhouse does not
implement. Just make the function a pure "return IRQ_HANDLED;" and you
should get further. Actually, your error indicates that the interrupt was
received, because Linux ran the basic uio handler.
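Henning's suggested fix, sketched with stand-in types so it is self-contained. In the real uio_ivshmem.c the handler has the kernel's irqreturn_t (*)(int, struct uio_info *) signature; the point is simply that it must not touch IntrStatus:

```c
#include <assert.h>

/* Sketch (not the real driver): Jailhouse's ivshmem device has no
 * IntrStatus register, so reading it traps with "FATAL: Invalid
 * ivshmem register read". With edge-style signaling there is nothing
 * to acknowledge in the handler anyway. */

typedef enum { IRQ_NONE = 0, IRQ_HANDLED = 1 } irqreturn_t;

struct uio_info;  /* opaque stand-in for the kernel structure */

static irqreturn_t ivshmem_handler(int irq, struct uio_info *dev_info)
{
    (void)irq;
    (void)dev_info;
    /* no IntrStatus read -- just acknowledge the interrupt to UIO */
    return IRQ_HANDLED;
}
```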

> If i comment out the line in the bare-metal inmate where the register
> is written (in ivshmem_demo.c:send_irq(), mmio_write32(d->registers +
> 3, 1);), all seems to be well and I am able to verify that the shared
> memory has been updated by the bare-metal inmate from within the root
> cell. I've also been able to verify that the contents of the shared
> memory area is picked up by the bare-metal inmate. No interrupts from
> the inmate to the root cell though (of course).
>
> Since I'm able to access the virtual PCI device register area using
> mmio_read32() from the inmate, it looks like the area has not been
> mapped for write access (by Jailhouse)? Am I missing some PCI device
> configuration entry?
>
> I tried to find where the FATAL:-printouts come from and found traces
> to jailhouse/hypervisor/ivshmem.c:ivshmem_register_mmio() and
> jailhouse/hypervisor/arch/arm/traps.c:arch_handle_trap(). I don't
> know what to do with this information at the moment. Is it possible
> to dump some call-stack from the hypervisor when fatal errors occur?

The function ivshmem_register_mmio was the right place to look. Now if
you look at the error, you see that Linux tried to read register 4, and
that register is not handled by Jailhouse; have a look at IVSHMEM_REG_*
in ivshmem.c.
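A toy model of that dispatch helps see why the trap fires. The offsets below are assumptions pieced together from this thread (INTxControl at 0x0, IVPosition at 0x8, Doorbell at 0xC); offset 0x4 is QEMU's classic IntrStatus register, which has no case here and is rejected, producing the "FATAL: Invalid ivshmem register read, number 04" seen above:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the read dispatch in hypervisor/ivshmem.c's
 * ivshmem_register_mmio(); register values themselves are elided. */

enum { REG_INTX_CTRL = 0x0, REG_IVPOS = 0x8, REG_DBELL = 0xC };

/* returns 0 if the register read is handled, -1 if it is invalid */
static int ivshmem_reg_read(uint32_t offset)
{
    switch (offset) {
    case REG_INTX_CTRL:
    case REG_IVPOS:
        return 0;   /* handled registers */
    default:
        return -1;  /* e.g. IntrStatus at 0x4: not implemented */
    }
}
```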

Henning Schild

unread,
May 17, 2017, 7:31:37 AM5/17/17
to jonas, KISZKA, JAN, Jailhouse
Am Wed, 17 May 2017 12:45:19 +0200
schrieb "[ext] Henning Schild" <henning...@siemens.com>:
I do not remember why the Status register is not implemented by
Jailhouse; maybe Jan does. Or I would have to read up in the archive
and see whether it was ever part of the patch sets that introduced
ivshmem.

I just pushed a patch to the jailhouse-next branch. It compiles, but I
did not test it... You could give it a try.

Henning

jonas

unread,
May 17, 2017, 7:54:07 AM5/17/17
to Jailhouse, jo...@retotech.se, jan.k...@siemens.com, henning...@siemens.com

Yes, that does do the trick!
Before starting the ivshmem-demo bare-metal inmate, the interrupt count for ivshmem as reported by /proc/interrupts is zero. After starting the inmate, it is one (I just write once to the LSTATE register from the inmate).

> I do not remember why the Status register is not implemented by
> jailhouse, maybe Jan does. Or i would have to read up in the archive
> and see whether it was ever part of the patchsets that introduced
> ivshmem.
>

Hehe - That was my next question...

> I just pushed a patch to the jailhouse-next branch, it compiles but i
> did not test it .... You could give it a try.
>

OK, I'm currently on v0.6. Do I want to switch to jailhouse-next in a hurry, or am I good for now on v0.6? Eventually I will move on to newer branches/tags, of course, and upstream my findings.

Thanks a bunch for the swift response - Jonas

jonas

unread,
May 17, 2017, 7:59:19 AM5/17/17
to Jailhouse, jo...@retotech.se, jan.k...@siemens.com, henning...@siemens.com
IVSHMEM_REG_DBELL, not LSTATE, sorry...

>
> > I do not remember why the Status register is not implemented by
> > jailhouse, maybe Jan does. Or i would have to read up in the archive
> > and see whether it was ever part of the patchsets that introduced
> > ivshmem.
> >
>
> Hehe - That was my next question...
>
> > I just pushed a patch to the jailhouse-next branch, it compiles but i
> > did not test it .... You could give it a try.
> >
>
> OK, I'm currently on v0.6. Do I want to switch to jailhouse-next in a hurry, or am I good for now on v0.6? Eventually I will move on to newer branches/tags, of course, and upstream my findings.
Ah, I see. The ivshmem-guest-code repo, not the jailhouse repo.

Jan Kiszka

unread,
May 17, 2017, 7:59:28 AM5/17/17
to jonas, Jailhouse, henning...@siemens.com
The interrupt sources are edge-triggered, and the event reasons are
usually stored in the data structures inside the shared memory. So there
is no point in implementing a sticky and costly (performance- and
implementation-wise) status bit.
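Jan's point can be illustrated with a toy protocol: the sender records *why* it is interrupting inside the shared memory, then rings the doorbell, so the receiver never needs a status register. The layout below is invented for this example; real protocols (e.g. ivshmem-net) define their own structures:

```c
#include <assert.h>
#include <stdint.h>

/* Invented example layout: event reason lives in shared memory. */
struct shmem_msg {
    volatile uint32_t reason;   /* why the peer was interrupted */
    volatile uint32_t payload;
};

static struct shmem_msg shmem;  /* stand-in for the shared region */
static uint32_t doorbell;       /* stand-in for the BAR0 doorbell */

static void notify_peer(uint32_t reason, uint32_t payload)
{
    shmem.reason = reason;      /* reason stored in shared memory */
    shmem.payload = payload;
    doorbell = 1;               /* edge interrupt: fire and forget */
}

static uint32_t peer_irq_handler(void)
{
    /* nothing to acknowledge; just read the reason from shmem */
    return shmem.reason;
}

/* helper so the round trip can be exercised in one call */
static uint32_t demo_notify_roundtrip(uint32_t reason, uint32_t payload)
{
    notify_peer(reason, payload);
    return peer_irq_handler();
}
```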

Henning Schild

unread,
May 17, 2017, 8:02:43 AM5/17/17
to jonas, Jailhouse, jan.k...@siemens.com
Am Wed, 17 May 2017 04:54:07 -0700
schrieb jonas <jo...@retotech.se>:
I was talking about ivshmem-guest-code, not jailhouse.

Henning

jonas

unread,
May 17, 2017, 11:47:59 AM5/17/17
to Jailhouse, jo...@retotech.se, jan.k...@siemens.com, henning...@siemens.com
Hi again,

Let's assume that I want to modify jailhouse/inmates/demos/arm/gic-demo.c to also handle ivshmem interrupts generated by the hypervisor to the bare-metal cell when writing the virtual PCI driver config area using uio_ivshmem/uio_send in the root-cell.

The first thing I would have to do is enable the IVSHMEM_IRQ in jailhouse/inmates/demos/arm/gic-demo.c:inmate_main() by calling gic_enable_irq(IVSHMEM_IRQ); in the same manner as gic_enable_irq(TIMER_IRQ);.

I would also have to check what irqn is passed in jailhouse/inmates/demos/arm/gic-demo.c:handle_IRQ(unsigned int irqn), in order to distinguish between TIMER_IRQ and IVSHMEM_IRQ, right?

TIMER_IRQ is defined (to 27) in jailhouse/inmates/lib/arm/include/mach.h. Where does this value come from?

How do I know what value to set IVSHMEM_IRQ to?

Still a bit confused - Jonas

Henning Schild

unread,
May 17, 2017, 12:45:16 PM5/17/17
to jonas, Jailhouse, jan.k...@siemens.com
Am Wed, 17 May 2017 08:47:59 -0700
schrieb jonas <jo...@retotech.se>:

> Hi again,
>
> Let's assume that I want to modify
> jailhouse/inmates/demos/arm/gic-demo.c to also handle ivshmem
> interrupts generated by the hypervisor to the bare-metal cell when
> writing the virtual PCI driver config area using uio_ivshmem/uio_send
> in the root-cell.
>
> The first thing I would have to do is enable the IVSHMEM_IRQ in
> jailhouse/inmates/demos/arm/gic-demo.c:inmate_main() by calling
> gic_enable_irq(IVSHMEM_IRQ); in the same manner as
> gic_enable_irq(TIMER_IRQ);.

You would have to register a handler first, via gic_setup(), but yes.
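The order Henning describes might look like this in the inmate. Stubs stand in for the inmate library here so the sketch is self-contained; IVSHMEM_IRQ = 155 is the value derived elsewhere in this thread for the Banana Pi configuration:

```c
#include <assert.h>

/* Sketch: register one handler for all IRQs, then unmask the ones
 * we care about. The gic_* functions below are stubs modeling the
 * inmate library's behavior, not the real implementation. */

#define TIMER_IRQ   27
#define IVSHMEM_IRQ 155   /* vpci_irq_base 123 + 32, per this thread */

typedef void (*irq_handler_t)(unsigned int);

static irq_handler_t registered_handler;
static int irq_enabled[256];
static unsigned int last_irqn;

static void gic_setup(irq_handler_t handler) { registered_handler = handler; }
static void gic_enable_irq(unsigned int irqn) { irq_enabled[irqn] = 1; }

static void handle_IRQ(unsigned int irqn)
{
    last_irqn = irqn;  /* real code dispatches: timer vs. ivshmem */
}

static unsigned int inmate_main_setup(void)
{
    gic_setup(handle_IRQ);          /* register handler first... */
    gic_enable_irq(TIMER_IRQ);      /* ...then unmask interrupts */
    gic_enable_irq(IVSHMEM_IRQ);

    /* simulate delivery of an ivshmem interrupt */
    if (irq_enabled[IVSHMEM_IRQ])
        registered_handler(IVSHMEM_IRQ);
    return last_irqn;
}
```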

> I would also have to check what irqn is passed in
> jailhouse/inmates/demos/arm/gic-demo.c:handle_IRQ(unsigned int irqn),
> in order to distinguish between TIMER_IRQ and IVSHMEM_IRQ, right?

I am not sure, but it looks like gic_setup() might actually redirect all
interrupts to that one handler, since you do not need to specify the
number. That check is there so it does not react to other interrupts;
there are probably no others.

> TIMER_IRQ is defined (to 27) in
> jailhouse/inmates/lib/arm/include/mach.h. Where does this value come
> from?

Probably from some ARM manual describing the GIC interrupt controller,
or maybe from the device tree; I do not know too much about ARM.
But it is basically a constant for your target.

> How do I know what value to set IVSHMEM_IRQ to?

Have a look at the Linux inmate config for the Banana Pi. You will have
to take some pieces from it for your inmate config.

Get
.vpci_irq_base = 123,
and the irqchips section. Make sure you adjust the array size for
irqchips.
From the pin_bitmap you just need the second value
0, 0, 0, 1 << (155-128),
And now your IVSHMEM_IRQ is 155. That should work, but I also can't
fully explain where the numbers come from.
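The arithmetic behind those numbers, as far as this thread reconstructs it: the config reserves SPI 123 (.vpci_irq_base = 123) for the virtual PCI device, GIC interrupt IDs offset SPIs by 32 (IDs 0-15 are SGIs, 16-31 PPIs, 32+ SPIs), giving 123 + 32 = 155; and the irqchips pin_bitmap is four 32-bit words starting at pin_base 32, so ID 155 lands in word 3 as bit 27 — exactly the `1 << (155-128)` above. A sketch of the computation:

```c
#include <assert.h>
#include <stdint.h>

#define SPI_BASE 32  /* GIC: SPIs start at interrupt ID 32 */

/* GIC interrupt ID of the first virtual PCI interrupt */
static unsigned int ivshmem_irq(unsigned int vpci_irq_base)
{
    return vpci_irq_base + SPI_BASE;
}

/* which 32-bit pin_bitmap word covers this interrupt ID */
static unsigned int pin_bitmap_word(unsigned int irq, unsigned int pin_base)
{
    return (irq - pin_base) / 32;
}

/* which bit inside that word */
static uint32_t pin_bitmap_bit(unsigned int irq, unsigned int pin_base)
{
    return UINT32_C(1) << ((irq - pin_base) % 32);
}
```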

Henning

jonas

unread,
May 18, 2017, 5:42:21 PM5/18/17
to Jailhouse, jo...@retotech.se, jan.k...@siemens.com, henning...@siemens.com

Thanks Henning,

I tried the suggested additions to the bare-metal cell configuration and inmate, but no success yet. I use 'uio_send /dev/uio0 10 0 0' to fire 10 interrupts from the root-cell.

Any suggestions on how to proceed?

Should I verify that accesses to the virtual PCI device configuration area are actually made and intercepted by the hypervisor, and that interrupts are generated to the bare-metal cell, when running 'uio_send'?

/Jonas

Henning Schild

unread,
May 19, 2017, 5:22:06 AM5/19/17
to jonas, Jailhouse, jan.k...@siemens.com
Am Thu, 18 May 2017 14:42:20 -0700
schrieb jonas <jo...@retotech.se>:
You could instrument ivshmem_remote_interrupt,
arch_ivshmem_trigger_interrupt, and other functions on the way with
printks.
I guess the interrupt is leaving the hypervisor, but your inmate does
not receive it. You could integrate the timer code from gic-demo into
your inmate to verify that the cell is able to receive interrupts at
all.
And maybe the 155 is wrong after all, but you could see that with the
instrumentation of the hypervisor.

Henning

jonas

unread,
May 19, 2017, 6:22:05 AM5/19/17
to Jailhouse, jo...@retotech.se, jan.k...@siemens.com, henning...@siemens.com

Good suggestion! Actually, that is exactly what I did this morning.

In hypervisor/arch/arm-common/ivshmem.c:arch_ivshmem_trigger_interrupt(), I added:
```
printk("%s(ive:%p), irq_id:%d\n", __func__, ive, irq_id);
```

When running `uio_send /dev/uio0 10 0 0` I get:
```
[UIO] ping #0

[UIO] ping #1arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #2arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #3arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #4arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #5arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #6arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #7arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #8arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #9arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] Exiting...
```

Hence, since irq_id is 0, no interrupt is set pending in the irqchip...

I also added a printout in `hypervisor/arch/arm-common/irqchip.c`:
```
if ((irq_id != 30) && (irq_id != 33)) {
    printk("%s(cpu_data:%p, irq_id:%d)\n", __func__, cpu_data, irq_id);
}
```
which shows a lot of interrupts being handled (I had to filter out 30 (PPI 14) and 33 (UART 0) in order not to drown in printouts at startup).

I can then see printouts for IRQ 2 (SGI 2), 4 (SGI 4), 64 (SD/MMC 0), 140 (IVSHMEM_IRQ for the root cell? 108+32, as my root-cell config contains `config.cell.vpci_irq_base = 108`), and eventually, when reaching steady state, only 117 (GMAC).

I'm guessing that I should have seen corresponding printouts for IRQ 155 (123+32, as my bare-metal cell contains:`config.cell.vpci_irq_base = 123`) if all would have been OK?

I've used the A20 User Manual Revision 1.0 Feb. 18, 2013 from Allwinner as reference for interrupt source numbers mentioned above.

According to this, it seems like 108 coincides with GPU-RSV0.

/Jonas

Henning Schild

unread,
May 19, 2017, 7:13:15 AM5/19/17
to jonas, Jailhouse, jan.k...@siemens.com
Am Fri, 19 May 2017 03:22:05 -0700
schrieb jonas <jo...@retotech.se>:
If you look at where the irq_id comes from, you will find
(ive->intx_ctrl_reg & IVSHMEM_INTX_ENABLE).

Have a look at what uio_ivshmem.c is doing in line 207; that is missing
in your inmate.
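Line 207 itself is not quoted in this thread, but the mechanism Henning names suggests what is missing: the hypervisor only delivers the SPI if the inmate has set the enable bit in the INTx control register, and the bare-metal demo never set it, so irq_id stayed 0. A toy model (the register name, single-bit layout, and helper below are assumptions for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define IVSHMEM_INTX_ENABLE 0x1

struct ive {                 /* stand-in for the per-endpoint state */
    uint32_t intx_ctrl_reg;  /* mirrors BAR0 offset 0x0 */
    unsigned int spi;        /* e.g. 155 for the bare-metal cell */
};

/* mirrors the check behind (ive->intx_ctrl_reg & IVSHMEM_INTX_ENABLE):
 * with the bit clear, irq_id is 0 and no interrupt is made pending */
static unsigned int irq_to_trigger(const struct ive *ive)
{
    return (ive->intx_ctrl_reg & IVSHMEM_INTX_ENABLE) ? ive->spi : 0;
}

/* what the inmate must do once during setup */
static void ivshmem_enable_intx(struct ive *ive)
{
    ive->intx_ctrl_reg |= IVSHMEM_INTX_ENABLE;
}

static unsigned int demo_intx_enable(void)
{
    struct ive e = { 0, 155 };
    if (irq_to_trigger(&e) != 0)
        return 0;            /* must be masked before enabling */
    ivshmem_enable_intx(&e);
    return irq_to_trigger(&e);
}
```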

> I also added a printout in `hypervisor/arch/arm-common/irqchip.c`:
> ``` if ((irq_id != 30) && (irq_id != 33)) {
> printk("%s(cpu_data:%p, irq_id:%d)\n", __func__,
> cpu_data, irq_id); }
> ```
> which shows a lot of interrupts being handled (I had to filter out 30
> (PPI 14) and 33 (UART 0) in order not to drown in printouts at
> startup).
>
> I can then see printouts for IRQ 2, (SGI 2), 4 (SGI 4), 64 (SD/MMC
> 0), 140 (IVSHMEM_IRQ for root-cell???, 108+32, as my root-cell config
> contains:`config.cell.vpci_irq_base = 108`?), and eventually when
> reaching steady state, only 117 (GMAC).
>
> I'm guessing that I should have seen corresponding printouts for IRQ
> 155 (123+32, as my bare-metal cell
> contains:`config.cell.vpci_irq_base = 123`) if all would have been OK?
>
> I've used the A20 User Manual Revision 1.0 Feb. 18, 2013 from
> Allwinner as reference for interrupt source numbers mentioned above.
>
> According to this, it seems like 108 coincides with GPU-RSV0.

I guess the vpci_irq_base for a SoC would be the last IRQ the thing
uses, plus some alignment. Jailhouse is using 140, not 108.

Henning

> /Jonas

Hari Krishnan

unread,
May 19, 2017, 9:40:32 AM5/19/17
to Jailhouse
Hi there,

Sorry for asking a question regarding the porting of ivshmem-demo.c.
As mentioned, I was trying to replicate pci.c for ARM, and I am facing some issues.

I have converted pci_read_config() and pci_write_config() to use MMIO.

I have used code similar to what can be found in the hypervisor: hypervisor/pci.c and include/jailhouse/mmio.h.

The mmcfg_address is pci_get_device_mmcfg_base(bdf) + address, where I replaced the function call with the value from the root-cell configuration, 0x48000000. But then does pci_read/write_config() no longer require bdf to be passed as a parameter?
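For reference, a sketch of that address computation (the base address and the 256-byte-per-BDF layout are inferred from the config and the reg_addr values logged later in this thread, e.g. bdf 0x780, addr 0x8 -> 0x48078008 — so bdf is still needed, it just gets encoded into the address instead of written to an x86 I/O port):

```c
#include <assert.h>
#include <stdint.h>

/* ECAM-style MMIO config access: each bus/device/function gets a
 * 256-byte config window below pci_mmconfig_base. */
#define PCI_CFG_BASE UINT64_C(0x48000000)  /* from the root-cell config */

static uint64_t pci_cfg_addr(uint16_t bdf, unsigned int address)
{
    return PCI_CFG_BASE + ((uint64_t)bdf << 8) + address;
}
```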

Please advise regarding the porting of ivshmem-demo.c to ARM.

Henning Schild

unread,
May 19, 2017, 10:12:14 AM5/19/17
to Hari Krishnan, Jailhouse, jonas
Hey,

I think you should talk to Jonas, because he is almost there. Maybe you
guys can exchange code. If you are not subscribed, check out the archive
to read what happened:
http://jailhouse-dev.narkive.com/

Henning

Am Fri, 19 May 2017 06:40:32 -0700
schrieb Hari Krishnan <hari....@gmail.com>:

jonas

unread,
May 19, 2017, 10:15:17 AM5/19/17
to Jailhouse, jo...@retotech.se, jan.k...@siemens.com, henning...@siemens.com

Yay! It works!

> > I also added a printout in `hypervisor/arch/arm-common/irqchip.c`:
> > ``` if ((irq_id != 30) && (irq_id != 33)) {
> > printk("%s(cpu_data:%p, irq_id:%d)\n", __func__,
> > cpu_data, irq_id); }
> > ```
> > which shows a lot of interrupts being handled (I had to filter out 30
> > (PPI 14) and 33 (UART 0) in order not to drown in printouts at
> > startup).
> >
> > I can then see printouts for IRQ 2, (SGI 2), 4 (SGI 4), 64 (SD/MMC
> > 0), 140 (IVSHMEM_IRQ for root-cell???, 108+32, as my root-cell config
> > contains:`config.cell.vpci_irq_base = 108`?), and eventually when
> > reaching steady state, only 117 (GMAC).
> >
> > I'm guessing that I should have seen corresponding printouts for IRQ
> > 155 (123+32, as my bare-metal cell
> > contains:`config.cell.vpci_irq_base = 123`) if all would have been OK?
> >
> > I've used the A20 User Manual Revision 1.0 Feb. 18, 2013 from
> > Allwinner as reference for interrupt source numbers mentioned above.
> >
> > According to this, it seems like 108 coincides with GPU-RSV0.
>
> I guess the vpci_irq_base for a SoC would be the last irq the thing
> uses plus some alignment. Jailhouse is using 140 not 108.

The last interrupt source number used by A20 is 122 (IIS 2), so the first unused is 123.

>
> Henning
>
> > /Jonas

Henning Schild

unread,
May 19, 2017, 1:10:38 PM5/19/17
to jonas, Jailhouse, jan.k...@siemens.com
Am Fri, 19 May 2017 07:15:17 -0700
It would be nice if you could wrap up your changes into a patch
and send it here for reference.
If you would like to get your changes merged, they would have to
be generic: no hardcoded interrupt numbers, PCI bus locations, etc.
Have a look at inmates/lib/cmdline.c to pass such values in as
arguments for your target. That mechanism is currently only used for
debug output and is also described in Documentation/debug-output.md.

Henning

Constantin Petra

unread,
Aug 8, 2017, 3:27:31 AM8/8/17
to Jailhouse, jo...@retotech.se, jan.k...@siemens.com, henning...@siemens.com
Hi,

Sorry to pick this up after all this time, but I would be interested in the pci.c modifications related to ARM for inmates (using MMIO instead of PIO).
Was there a follow-up to the discussions above? (I checked the discussion archives but can't find any.) I would like to avoid reinventing the wheel if there's one already rolling.

Thanks,
Constantin

Jan Kiszka

unread,
Aug 8, 2017, 7:48:07 AM8/8/17
to Constantin Petra, Jailhouse, jo...@retotech.se, henning...@siemens.com
To my knowledge, there is no risk of duplicate effort at the moment:
this task is still seeking contributions.

Jonas, are there any first changes you made back then that may help
Constantin to pick up?

Henning Schild

unread,
Aug 9, 2017, 9:21:34 AM8/9/17
to Constantin Petra, Jailhouse, jo...@retotech.se, jan.k...@siemens.com
Hey,

Unfortunately, Jonas never published his overall changes; maybe now he
understands why I kindly asked him to do so.
I think Jonas ran into every single problem one could encounter
on the way, so if you read the thread you will probably be able to come
up with a similar patch at some point. That would be the duplication of
effort; getting a first working patch into a mergeable form is another
story.

If there are legal reasons not to publish code on the list, I suggest
you exchange patches with each other. But of course I would like to
see contributions eventually ;).

regards,
Henning

Am Tue, 8 Aug 2017 00:27:31 -0700
schrieb Constantin Petra <constant...@gmail.com>:

Constantin Petra

unread,
Aug 10, 2017, 8:33:25 AM8/10/17
to Henning Schild, Jailhouse, jo...@retotech.se, Jan Kiszka
Hi,

Thanks for the information, I have been looking more into this thread.
For my understanding:
Is the pci_(read/write)_config() access using MMIO from guest ARM side supposed to access .pci_mmconfig_base address (0xfc000000 for zcu102, which I see it being "reserved memory" in the Ultrascale+ TRM) and thus trigger arch_handle_dabt()->mmio_handle_access() on hypervisor side, or am I off track?

Thanks,
Constantin

Jan Kiszka

unread,
Aug 10, 2017, 12:06:27 PM8/10/17
to Constantin Petra, Henning Schild, Jailhouse, jo...@retotech.se
On 2017-08-10 08:33, Constantin Petra wrote:
> Hi,
>
> Thanks for the information, I have been looking more into this thread.
> For my understanding:
> Is the pci_(read/write)_config() access using MMIO from guest ARM side
> supposed to access .pci_mmconfig_base address (0xfc000000 for zcu102,
> which I see it being "reserved memory" in the Ultrascale+ TRM) and thus
> trigger arch_handle_dabt()->mmio_handle_access() on hypervisor side, or
> am I off track?

Nope, that's how things are supposed to work on that board. The MMIO
config space is fully virtualized, so we picked an unused address range
for it.

Claudio Scordino

unread,
Dec 7, 2017, 11:29:47 AM12/7/17
to Henning Schild, jo...@retotech.se, Constantin Petra, Jailhouse, Jan Kiszka, Luca Cuomo
Hi guys,

2017-08-09 15:23 GMT+02:00 Henning Schild <henning...@siemens.com>:
Hey,

Unfortunately, Jonas never published his overall changes; maybe now he
understands why I kindly asked him to do so.
I think Jonas ran into every single problem one could encounter
on the way, so if you read the thread you will probably be able to come
up with a similar patch at some point. That would be the duplication of
effort; getting a first working patch into a mergeable form is another
story.

If there are legal reasons not to publish code on the list, I suggest
you exchange patches with each other. But of course I would like to
see contributions eventually ;).


We need to run IVSHMEM on the TX1.
Any chance of upstreaming those patches, to avoid wasting time reinventing the wheel?
If that's not possible, please send me a copy privately.

Many thanks and best regards,

             Claudio

Henning Schild

unread,
Dec 7, 2017, 3:08:16 PM12/7/17
to Claudio Scordino, jo...@retotech.se, Constantin Petra, Jailhouse, Jan Kiszka, Luca Cuomo
Hi Claudio,

Am Thu, 7 Dec 2017 17:29:45 +0100
schrieb Claudio Scordino <cla...@evidence.eu.com>:
Unfortunately, I do not have those patches either. I am afraid someone
will have to do that work over again.

But the whole thread is basically about enabling the demo, which is
interesting for people just getting started with ivshmem, and for
people who want to implement their own protocol on top of it.
If you are just looking at running ivshmem-net, you are good to go; that
code is in a working state.

Henning

Constantin Petra

unread,
Dec 8, 2017, 12:47:33 AM12/8/17
to Henning Schild, Claudio Scordino, jo...@retotech.se, Jailhouse, Jan Kiszka, Luca Cuomo
Hi,

I'm resending the patch(es) that were shared by Jonas a while ago.

Best Regards,
Constantin
jailhouse-bpi-ivshmem-demo.patch

jonas

unread,
Dec 10, 2017, 11:34:24 AM12/10/17
to Jailhouse
Hi,

I'll be making an effort to contribute my work to the master branch of Jailhouse within the next couple of weeks.

/Jonas

Luca Cuomo

unread,
Dec 21, 2017, 4:05:30 AM12/21/17
to Jailhouse
Hi all,

I've applied the provided patch and I'm trying to connect the Linux root cell with the bare-metal cell running inmates/demos/arm/ivshmem-demo.c. I've attached the configurations used (jetson-tx1-ivshmem for the root cell and the other one for the bare-metal cell).
When I create the bare-metal cell, the connection between the PCI devices comes up correctly.
-----------------------------------------------------------------
Initializing Jailhouse hypervisor on CPU 2
Code location: 0x0000ffffc0200050
Page pool usage after early setup: mem 63/16358, remap 64/131072
Initializing processors:
CPU 2... OK
CPU 1... OK
CPU 3... OK
CPU 0... OK
Adding virtual PCI device 00:0f.0 to cell "Jetson-TX1-ivshmem"
Page pool usage after late setup: mem 74/16358, remap 69/131072
Activating hypervisor
Adding virtual PCI device 00:0f.0 to cell "jetson-tx1-demo-shmem"
Shared memory connection established: "jetson-tx1-demo-shmem" <--> "Jetson-TX1-ivshmem"
-------------------------------------------------------------------

1st problem: no PCI device appears in Linux (lspci does not return anything).

Then I launch ivshmem-demo.bin. I've made some modifications:
* in inmates/lib/arm-common/pci.c, I set #define PCI_CFG_BASE (0x48000000),
matching jetson-tx1-ivshmem.c's
config.header.platform_info.pci_mmconfig_base.
In the same file, I've enabled the printouts in pci_read/write_config.
* in inmates/demos/arm/ivshmem-demo.c, I've removed the filter on
class/revision in order to get a suitable PCI device with the proper
deviceId:vendorId.

When the bare metal starts, it iterates over the config space with a lot of reads:

pci_read_config(bdf:0x0, addr:0x0000000000000000, size:0x2), reg_addr0x48000000
pci_read_config(bdf:0x1, addr:0x0000000000000000, size:0x2), reg_addr0x48000100
pci_read_config(bdf:0x2, addr:0x0000000000000000, size:0x2), reg_addr0x48000200
pci_read_config(bdf:0x3, addr:0x0000000000000000, size:0x2), reg_addr0x48000300

... after a while, something happens (follow the <--- markers):

IVSHMEM: Found 1af4:1110 at 07:10.0 <---
pci_read_config(bdf:0x780, addr:0x0000000000000008, size:0x4), reg_addr0x48078008
IVSHMEM: class/revision ff010000, not supported skipping device <--- //IGNORED
pci_read_config(bdf:0x780, addr:0x0000000000000006, size:0x2), reg_addr0x48078004
pci_read_config(bdf:0x780, addr:0x0000000000000034, size:0x1), reg_addr0x48078034
IVSHMEM ERROR: device is not MSI-X capable <---
pci_read_config(bdf:0x780, addr:0x000000000000004c, size:0x4), reg_addr0x4807804c
pci_read_config(bdf:0x780, addr:0x0000000000000048, size:0x4), reg_addr0x48078048
pci_read_config(bdf:0x780, addr:0x0000000000000044, size:0x4), reg_addr0x48078044
pci_read_config(bdf:0x780, addr:0x0000000000000040, size:0x4), reg_addr0x48078040
IVSHMEM: shmem is at 0x000000007bf00000 <---
pci_write_config(bdf:0x780, addr:0x0000000000000014, value:0x0, size:0x4), reg_addr0x48078014
pci_write_config(bdf:0x780, addr:0x0000000000000010, value:0x7c000000, size:0x4), reg_addr0x48078010
IVSHMEM: bar0 is at 0x000000007c000000 <---
....
IVSHMEM: mapped shmem and bars, got position 0x0000000000000001 <---
IVSHMEM: Enabled IRQ:0x9b
IVSHMEM: Enabling IVSHMEM_IRQs
...

2nd problem:

At the end of the device scan, here is the error:


Unhandled data read at 0x10000(1)

FATAL: unhandled trap (exception class 0x24)
Cell state before exception:
pc: 0000000000001a80 lr: 0000000000001ac0 spsr: 20000005 EL1
sp: 0000000000003da0 esr: 24 1 1130007
x0: 0000000070006000 x1: 00000000000000ff x2: 0000000000002238
x3: ffffffffffffffff x4: 0000000000003de0 x5: 0000000000000002
x6: 0000000000000000 x7: 0000000000000008 x8: 0000000000003de8
x9: 0000000000000000 x10: 0000000000000000 x11: 0000000000002690
x12: 0000000000000000 x13: 0000000000000000 x14: 0000000000000020
x15: 0000000000000000 x16: 0000000000000000 x17: 0000000000000000
x18: 0000000000000000 x19: 00000000000000ff x20: 0000000000010000
x21: 0000000000001000 x22: 00000000000011b8 x23: 00000000ffffffd0
x24: 0000000000003f70 x25: 000000000000288b x26: 000000000000272c
x27: 0000000000000001 x28: 0000000000000780 x29: 0000000000000000

Parking CPU 3 (Cell: "jetson-tx1-demo-shmem")


Any suggestion?

Thanks in advance,

--Luca
jetson-tx1-demo-ivshmem.c
jetson-tx1-ivshmem.c

Jan Kiszka

unread,
Dec 21, 2017, 8:19:48 AM12/21/17
to Luca Cuomo, Jailhouse
Are you using a Linux kernel with the Jailhouse-related patches? Did you
enable CONFIG_PCI_HOST_GENERIC and CONFIG_PCI_DOMAINS? On most ARM
systems, Jailhouse exposes the ivshmem devices via a virtual host bridge.
That indicates a mismatch between the cell configuration and the
hardware information that is either hard-coded into the inmate code or
otherwise provided to it. I didn't check the ivshmem-demo code but I
suspect the location of the virtual PCI host controller is hard-coded. A
Linux inmate would get it from a device tree provided to its boot.

Jan

Luca Cuomo

unread,
Dec 21, 2017, 10:32:12 AM12/21/17
to Jailhouse
Yes, I'm using a kernel for Jailhouse on the Jetson TX1. The kernel has the above config options enabled. In the root-cell configuration, if I put ".pci_is_virtual = 1," dmesg shows the message:
[ 102.100405] jailhouse: CONFIG_OF_OVERLAY disabled
[ 102.100417] jailhouse: failed to add virtual host controller
[ 102.100422] The Jailhouse is opening.

If I set it to 0, there is no error message, but no PCI device shows up, as before.
I've checked that .pci_mmconfig_base is equal to inmates/lib/arm-common/pci.c's #define PCI_CFG_BASE.
The previous error was a read of a non-terminated string. I've fixed that problem, and the new output looks like this:

IVSHMEM: Found 1af4:1110 at 07:10.0
IVSHMEM: class/revision ff010000, not supported skipping device //forced to be true
IVSHMEM ERROR: device is not MSI-X capable
IVSHMEM: shmem is at 0x000000007bf00000
IVSHMEM: bar0 is at 0x000000007c000000
IVSHMEM: mapped shmem and bars, got position 0x0000000000000001
IVSHMEM: Enabled IRQ:0x9b
IVSHMEM: Enabling IVSHMEM_IRQs
IVSHMEM: Done setting up...
IVSHMEM: 07:10.0 sending IRQ (by writing 1 to 0x7c00000c)
IVSHMEM: waiting for interrupt.

It looks like a misconfigured area (bdf, class/revision are not as expected).

--Luca



Jan Kiszka

unread,
Dec 21, 2017, 11:24:30 AM12/21/17
to Luca Cuomo, Jailhouse
This device was configured in the cell config to be used as virtual
Ethernet (over ivshmem). You should set shmem_protocol on both ends to 0
in order to define the right type.
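A hedged sketch of the relevant fragment in both cell configs (surrounding fields elided; field names as used in Jailhouse cell configs of this era, with the protocol constant confirmed later in this thread):

```c
.pci_devices = {
	{
		.type = JAILHOUSE_PCI_TYPE_IVSHMEM,
		/* ... bdf, bar_mask, shmem_region etc. as before ... */
		.shmem_protocol = JAILHOUSE_SHMEM_PROTO_UNDEFINED,
	},
},
```

Both ends of the link must use the same value; the virtual-Ethernet protocol is what makes ivshmem-net claim the device.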

> IVSHMEM ERROR: device is not MSI-X capable

Not sure if this is now only a warning, but if the demo code still does
not support INTx, event signaling via interrupts will not work on this
host platform.

Jan

jonas

unread,
Dec 21, 2017, 5:47:39 PM12/21/17
to Jailhouse
That's right. In the patch I added some printouts in the hypervisor for this case, indicating a protocol mismatch between the root-cell and bare-metal cell configs.

> > IVSHMEM ERROR: device is not MSI-X capable
>
> Not sure is this is now only a warning, but if the demo code still does
> not support INTx, event signaling via interrupts will not work on this
> host platform.
>
> Jan
>

Yes, the printout originates from the ivshmem-demo for x86, which requires support for MSI-X (which is available and supported/used/required by Jailhouse on x86, but not on ARM, if I've understood it correctly). The printout should have been removed in the patch...

/Jonas

jonas

unread,
Dec 22, 2017, 4:29:38 AM12/22/17
to Jailhouse
You also need to enable INTx for the IVSHMEM virtual PCI device in the cell configuration (of the bare-metal inmate) in order for interrupts to be forwarded by Jailhouse to the inmate (as a result of writing to either the 'Doorbell' or the 'Local state' register, located at offsets 12 and 16, respectively, in the BAR0 area of the IVSHMEM virtual PCI device).
Adding something like this to the cell configuration (of the bare-metal inmate) should do the trick:
8<--- Start of snippet --->8
.irqchips = {
	/* GIC */ {
		.address = 0x01c81000,
		.pin_base = 32,
		/* Interrupts:
		 *   52 of UART 7,
		 *   155 for IVSHMEM,
		 * belong to the client */
		.pin_bitmap = {
			1ULL << (52 - 32),
			0,
			0,
			1 << (155 - 128),
		},
	},
},
8<--- End of snippet --->8

Luca Cuomo

unread,
Dec 22, 2017, 4:32:12 AM12/22/17
to Jailhouse
OK, JAILHOUSE_SHMEM_PROTO_UNDEFINED is the right value. But I still can't see any PCI device in Linux.

>
> > > IVSHMEM ERROR: device is not MSI-X capable
> >
> > Not sure is this is now only a warning, but if the demo code still does
> > not support INTx, event signaling via interrupts will not work on this
> > host platform.
> >
> > Jan
> >
>
> Yes, the printout originates from the ivshmem-demo for x86, which requires support for MSI-X (which is available and supported/used/required by Jailhouse on x86, but not on ARM, if I've understood it correctly). The printout should have been removed in the patch...
>
> /Jonas

In the ivshmem-demo I see that interrupt sending is still done by writing to the doorbell register.
Now that I have no access via the uio driver (no PCI device in Linux), I'm trying to send interrupts to the bare metal by mmapping /dev/mem at the proper offset and then writing to the doorbell register. Is that the right way?

Thanks,
--Luca

jonas

unread,
Dec 22, 2017, 6:54:52 AM12/22/17
to Jailhouse
Yes, that's how I did it as well.

Take a look at:
ivshmem-guest-code/uio/tests/Interrupts/VM/uio_send.c
for an idea of how it is supposed to work.
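The core of uio_send, reduced to a sketch: once BAR0 is mapped (via /dev/uio0, or /dev/mem plus the BAR0 physical address when no uio device is bound), ringing the doorbell is a single 32-bit write to offset 0xC. The mapping itself is elided here; writing 1 matches what the bare-metal demo does, and the helper names are invented for the example:

```c
#include <assert.h>
#include <stdint.h>

#define IVSHMEM_REG_DBELL (0xC / 4)  /* 32-bit register index 3 */

/* bar0 must point at the already-mmapped BAR0 region */
static void ivshmem_ring_doorbell(volatile uint32_t *bar0)
{
    bar0[IVSHMEM_REG_DBELL] = 1;  /* trapped by the hypervisor */
}

/* exercise the write against a plain buffer standing in for BAR0 */
static uint32_t demo_ring(void)
{
    uint32_t bar0[8] = { 0 };
    ivshmem_ring_doorbell(bar0);
    return bar0[IVSHMEM_REG_DBELL];
}
```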

I did a user-space inter-cell communication driver based on this, which basically first mmaps 256 B from /dev/uio0, then mmaps the area corresponding to the shmem area provided by the Jailhouse IVSHMEM virtual PCI device.
I then separated the shmem area into one tx- and one rx-part and used the (MMIO) LSTATE register provided in BAR0 between two cells inter-connected via IVSHMEM. Writing to the LSTATE register in one cell is intercepted by Jailhouse, which updates the RSTATE register of the peer cell and triggers an IVSHMEM interrupt in the peer. Once the interrupt is registered by the peer, it reads its RSTATE register to determine at what offset in the rx-part to look for the message from the transmitting cell. The tx-part of the shmem in the transmitting cell is set up to be the rx-part in the receiving cell, and vice versa.
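The handshake described above can be modeled in a few lines. The LSTATE offset (16) is as quoted in this thread; the RSTATE offset (20) and everything else below are stand-ins for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Per-cell view of the ivshmem state registers (modeled, not real). */
struct cell_regs {
    uint32_t lstate;   /* written by this cell (BAR0 offset 16) */
    uint32_t rstate;   /* mirrors the peer's lstate (assumed offset 20) */
    int irq_pending;
};

/* The hypervisor's side of an intercepted LSTATE write: copy the
 * value into the peer's RSTATE and inject the peer's interrupt. */
static void lstate_write(struct cell_regs *self, struct cell_regs *peer,
                         uint32_t val)
{
    self->lstate = val;
    peer->rstate = val;     /* peer reads the tx offset from here */
    peer->irq_pending = 1;  /* trigger the peer's SPI */
}

/* one transmit: cell A announces an offset, cell B picks it up */
static uint32_t demo_handshake(uint32_t tx_offset)
{
    struct cell_regs a = { 0 }, b = { 0 };
    lstate_write(&a, &b, tx_offset);
    return b.irq_pending ? b.rstate : 0;
}
```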

/Jonas

Luca Cuomo

unread,
Dec 22, 2017, 8:35:08 AM12/22/17
to Jailhouse
Yes, I've successfully used it on the x86 platform, mmapping /dev/uio0's maps[0] as BAR0 and maps[1] as shmem. But unfortunately uio_ivshmem cannot detect any PCI device.
Furthermore, I've added the irqchips section you suggested to the bare-metal inmate configuration. The address is different (for Jetson TX1, gicd_base = 0x50041000). But when I run the bare-metal ivshmem-demo, immediately after the call to gic_enable_irq, interrupts arrive (continuously) and serving them blocks the inmate's execution.

IVSHMEM: handle_irq(irqn:155) - interrupt #0
IVSHMEM: handle_irq(irqn:155) - interrupt #1
IVSHMEM: handle_irq(irqn:155) - interrupt #2
IVSHMEM: handle_irq(irqn:155) - interrupt #3
IVSHMEM: handle_irq(irqn:155) - interrupt #4
IVSHMEM: handle_irq(irqn:155) - interrupt #5
IVSHMEM: handle_irq(irqn:155) - interrupt #6
IVSHMEM: handle_irq(irqn:155) - interrupt #7
... and so on
And later the machine reboots by itself.

Claudio Scordino

unread,
Mar 9, 2018, 3:10:12 AM3/9/18
to jonas, Jailhouse
Hi Jonas,

2017-12-10 17:34 GMT+01:00 jonas <jo...@retotech.se>:
Hi,

I'll be making an effort to contribute my work to the master branch of Jailhouse within the next couple of weeks.

If I'm not wrong, those patches were never upstreamed in the end.
Do you still plan to upstream them?

Many thanks,

              Claudio

jonas

unread,
Mar 12, 2018, 8:08:41 AM3/12/18
to Jailhouse
Hi,

I upstreamed a patch-set, see:
https://groups.google.com/forum/#!topic/jailhouse-dev/IqwQsQ9JEno

Henning said he would take care of introducing this into Jailhouse. I don't know the status or plans for this work though.

/Jonas