Problems with ivshmem-net on root cell

Pontes, Otavio

Aug 4, 2017, 9:00:11 PM
to jailho...@googlegroups.com
Hi,

I am using Jailhouse on Intel x86 hardware and I am having some
problems starting the ivshmem-net driver in the root cell in order to
communicate with a Linux inmate cell.
I have successfully used this driver to establish communication
between two Linux inmate cells, but the driver fails when I load it in
the root cell.
Can anyone help me understand and debug this issue?

In the root cell configuration file I have added the following:

mem_regions:
/* IVSHMEM shared memory region (networking) */
{
    .phys_start = 0x3f100000,
    .virt_start = 0x3f100000,
    .size = 0xff000,
    .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
        JAILHOUSE_MEM_ROOTSHARED,
},

pci_devices:
/* IVSHMEM (networking) */
{
    .type = JAILHOUSE_PCI_TYPE_IVSHMEM,
    .domain = 0x0000,
    .bdf = 0x0f << 3, /* does not conflict with any other BDF */
    .bar_mask = {
        0xffffff00, 0xffffffff, 0x00000000,
        0x00000000, 0xffffffe0, 0xffffffff,
    },
    .num_msix_vectors = 1,
    .shmem_region = 0, /* index of the mem_region above */
    .shmem_protocol = JAILHOUSE_SHMEM_PROTO_VETH,
},

And when I run:
$ ./tools/jailhouse enable configs/root.cell

The ivshmem-net driver is loaded and the following message is printed
in the kernel log:
[ 20.956367] ivshmem-net 0000:00:0f.0: invalid IVPosition -1

I noticed while debugging this issue that the function
ivshmem_register_mmio is not called when the ivshmem-net driver tries
to read the ivpos value. I then suspected that the values of
bar0_address and bar4_address may be wrong. Is there a way to know what
the expected values for these addresses would be? Or is there anything
else I should check to debug this?

Best regards,
Otavio

Jan Kiszka

Aug 6, 2017, 7:58:48 AM
to Pontes, Otavio, jailho...@googlegroups.com
If that function isn't called, the guest may not have enabled the
virtual PCI device properly (that is trapped by ivshmem_write_command),
or the addresses it programmed into the BARs are not trapped by the
Jailhouse configuration for that cell - IOW, the config maps them to
something real.
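
As an illustration of the second case (a sketch only; the addresses
below are made up): if the cell config already contains a physical
mem_region that covers the page the guest programmed into the ivshmem
BAR, accesses to that BAR are passed through instead of being trapped,
so ivshmem_register_mmio is never reached.

/* hypothetical physical device region, rounded up to a full page */
{
    .phys_start = 0xfebf0000,
    .virt_start = 0xfebf0000,
    .size = 0x1000, /* the real resource is smaller than a page */
    .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE,
},
/* if BAR0 of the virtual ivshmem device is programmed to e.g.
   0xfebf0800, it falls inside the mapping above and is not trapped */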

Jan

Henning Schild

Aug 7, 2017, 4:23:04 AM
to Pontes, Otavio, jailho...@googlegroups.com
On Sat, 5 Aug 2017 00:59:58 +0000,
"Pontes, Otavio" <otavio...@intel.com> wrote:
If your machine has multiple iommu_units, try .iommu = 1 and higher in
the above struct.
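
In the pci_devices entry quoted earlier, that would look roughly like
this (a sketch; whether index 1 is the right one depends on how the
iommu_units array in your system config is laid out):

/* IVSHMEM (networking) */
{
    .type = JAILHOUSE_PCI_TYPE_IVSHMEM,
    .domain = 0x0000,
    .bdf = 0x0f << 3,
    .iommu = 1, /* try 1, 2, ... if unit 0 does not cover this device */
    .bar_mask = {
        0xffffff00, 0xffffffff, 0x00000000,
        0x00000000, 0xffffffe0, 0xffffffff,
    },
    .num_msix_vectors = 1,
    .shmem_region = 0,
    .shmem_protocol = JAILHOUSE_SHMEM_PROTO_VETH,
},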

> And when I run:
> $ ./tools/jailhouse enable configs/root.cell
>
> The driver ivshmem-net is loaded and the following message is printed
> in the kernel log
> [ 20.956367] ivshmem-net 0000:00:0f.0: invalid IVPosition -1

Please also provide the hypervisor output, either from the serial
console or from "jailhouse console".

Henning

Pontes, Otavio

Aug 15, 2017, 8:14:00 PM
to jan.k...@web.de, jailho...@googlegroups.com
Hi,

Thank you for your answer. I found the problem in my tests and now
have ivshmem-net working fine. The address programmed into the BAR was
not being trapped by Jailhouse because of a problem in the generated
configuration file for the root cell.

The jailhouse-config-create script always assumes that mem_regions are
page aligned, so it rounds the mem_region size up to a multiple of
0x1000. The problem I was facing is that I have one PCI device whose
resource is only 0x800 bytes, and the ivshmem PCI device was mapped to
memory just after the end of that resource. Because of the rounding,
Jailhouse assumed that the memory region used by ivshmem belonged to a
physical PCI device and did not trap the I/O. Once I changed the root
cell config file to use the correct size for the physical PCI device,
everything worked perfectly.
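
For the record, the change was roughly the following (the addresses
are made up here, only the sizes match my setup):

/* before: generated by jailhouse-config-create, rounded up to a page;
   the upper half of the page also covers the ivshmem BAR, so accesses
   to it were passed through instead of trapped */
{
    .phys_start = 0xfebf0000,
    .virt_start = 0xfebf0000,
    .size = 0x1000,
    .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE,
},

/* after: use the real resource size of the physical device, so the
   rest of the page is no longer claimed and the ivshmem accesses are
   trapped by the hypervisor again */
{
    .phys_start = 0xfebf0000,
    .virt_start = 0xfebf0000,
    .size = 0x800,
    .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE,
},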

I took a look at the kernel code, and it does not necessarily place
the PCI resources of two different devices on the same bus into
different pages. So rounding up the mem_region size in config-create
does not look correct to me. But I imagine that not doing it would
cause performance issues.

Isn't this a possible problem for other PCI devices that have small
resources that are not page aligned? Do you have any suggestion on how
to fix this? I could not find a way to force the ivshmem PCI device to
have its resources page aligned, but if there is a way, that would be
a possible solution.

Thanks,

Otavio

Henning Schild

Aug 21, 2017, 4:36:39 AM
to Pontes, Otavio, jan.k...@web.de, jailho...@googlegroups.com
On Wed, 16 Aug 2017 00:13:54 +0000,
"Pontes, Otavio" <otavio...@intel.com> wrote:
This rounding is from a time when Jailhouse did not support sub-page
mappings. In your case I would suggest you drop those 0x200 bytes and
use aligned memory regions. Using split pages requires trapping every
single access.

> I took a look at the kernel code, and it does not necessarily place
> the PCI resources of two different devices on the same bus into
> different pages. So rounding up the mem_region size in config-create
> does not look correct to me. But I imagine that not doing it would
> cause performance issues.

Now that we have split pages, that rounding could be reconsidered. For
the root cell it does not really matter which device brings the memory:
if the region is there, it will be made available to the cell and both
devices will work.
But when assigning the two devices to different cells, you would have
to correct that. So you cannot just copy and paste parts of the config.

> Isn't this a possible problem for other PCI devices that have small
> resources that are not page aligned? Do you have any suggestion on
> how to fix this? I could not find a way to force the ivshmem PCI
> device to have its resources page aligned, but if there is a way,
> that would be a possible solution.

As I said, it becomes a problem if two or more devices actually share
a page and those devices should be assigned to different cells. The
config generator could detect the shared page and add a comment to the
affected device sections, maybe including the values one would have to
set when actually splitting. Splitting by default does not seem to be
the best idea; it would slow things down even if the collision stays
within one cell.

regards,
Henning

> Thanks,
>
> Otavio
>
