[RFC][PATCH v2 0/3] IVSHMEM version 2 device for QEMU


Jan Kiszka

Jan 7, 2020, 9:36:47 AM
to qemu-devel, Markus Armbruster, Claudio Fontana, liang yan, Stefan Hajnoczi, Michael S . Tsirkin, Hannes Reinecke, Jailhouse
Overdue update of the ivshmem 2.0 device model as presented at [1].

Changes in v2:
- changed PCI device ID to Siemens-granted one,
adjusted PCI device revision to 0
- removed unused feature register from device
- addressed feedback on specification document
- rebased over master

This version is now fully in sync with the implementation for Jailhouse
that is currently under review [2][3], UIO and virtio-ivshmem drivers
are shared. Jailhouse will very likely pick up this revision of the
device in order to move forward with stressing it.

More details on the usage with QEMU were in the original cover letter
(with adjustments to the new device ID):

If you want to play with this, the basic setup of the shared memory
device is described in patches 1 and 3. The UIO driver and the
virtio-ivshmem prototype can be found at

http://git.kiszka.org/?p=linux.git;a=shortlog;h=refs/heads/queues/ivshmem2

Accessing the device via UIO is trivial enough. If you want to use it
for virtio, the following is needed on the virtio console backend side,
in addition to the description in patch 3:

modprobe uio_ivshmem
echo "110a 4106 1af4 1100 ffc003 ffffff" > /sys/bus/pci/drivers/uio_ivshmem/new_id
linux/tools/virtio/virtio-ivshmem-console /dev/uio0

And for virtio block:

echo "110a 4106 1af4 1100 ffc002 ffffff" > /sys/bus/pci/drivers/uio_ivshmem/new_id
linux/tools/virtio/virtio-ivshmem-console /dev/uio0 /path/to/disk.img
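For reference, the values written to new_id above follow the generic sysfs dynamic-ID format of the kernel's PCI core (vendor, device, subvendor, subdevice, class, class_mask, optionally driver_data, all hexadecimal) - this field layout is standard kernel ABI, not something specific to this patch series. A small sketch decoding the binding string:

```python
# Decode a sysfs new_id string as written to
# /sys/bus/pci/drivers/<driver>/new_id. Field order is the kernel's
# generic PCI dynamic-ID format:
#   vendor device subvendor subdevice class class_mask [driver_data]
def parse_new_id(line):
    names = ("vendor", "device", "subvendor", "subdevice",
             "class", "class_mask", "driver_data")
    # zip() stops at the shorter sequence, so trailing optional
    # fields are simply absent from the result.
    return {n: int(v, 16) for n, v in zip(names, line.split())}

ids = parse_new_id("110a 4106 1af4 1100 ffc003 ffffff")
print(hex(ids["vendor"]))  # 0x110a - Siemens PCI vendor ID
print(hex(ids["class"]))   # 0xffc003 - class code matched against class_mask
```

The class/class_mask pair is what distinguishes the two protocols in the examples above (ffc003 vs. ffc002); vendor and device IDs are identical.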

After that, you can start the QEMU frontend instance with the
virtio-ivshmem driver installed; it can use the new /dev/hvc* or
/dev/vda* devices as usual.

Any feedback welcome!

Jan

PS: Let me know if I missed someone potentially interested in this topic
on CC - or if you would like to be dropped from the list.

[1] https://kvmforum2019.sched.com/event/TmxI
[2] https://groups.google.com/forum/#!topic/jailhouse-dev/ffnCcRh8LOs
[3] https://groups.google.com/forum/#!topic/jailhouse-dev/HX-0AGF1cjg

Jan Kiszka (3):
hw/misc: Add implementation of ivshmem revision 2 device
docs/specs: Add specification of ivshmem device revision 2
contrib: Add server for ivshmem revision 2

Makefile | 3 +
Makefile.objs | 1 +
configure | 1 +
contrib/ivshmem2-server/Makefile.objs | 1 +
contrib/ivshmem2-server/ivshmem2-server.c | 462 ++++++++++++
contrib/ivshmem2-server/ivshmem2-server.h | 158 +++++
contrib/ivshmem2-server/main.c | 313 +++++++++
docs/specs/ivshmem-2-device-spec.md | 376 ++++++++++
hw/misc/Makefile.objs | 2 +-
hw/misc/ivshmem2.c | 1085 +++++++++++++++++++++++++++++
include/hw/misc/ivshmem2.h | 48 ++
include/hw/pci/pci_ids.h | 2 +
12 files changed, 2451 insertions(+), 1 deletion(-)
create mode 100644 contrib/ivshmem2-server/Makefile.objs
create mode 100644 contrib/ivshmem2-server/ivshmem2-server.c
create mode 100644 contrib/ivshmem2-server/ivshmem2-server.h
create mode 100644 contrib/ivshmem2-server/main.c
create mode 100644 docs/specs/ivshmem-2-device-spec.md
create mode 100644 hw/misc/ivshmem2.c
create mode 100644 include/hw/misc/ivshmem2.h

--
2.16.4

Liang Yan

Apr 9, 2020, 9:57:42 AM
to Jan Kiszka, qemu-devel, Markus Armbruster, Claudio Fontana, Stefan Hajnoczi, Michael S . Tsirkin, Hannes Reinecke, Jailhouse
Hi, Jan,

Nice work.

I did a full test of this new version. The QEMU device part looks
good, and the virtio console worked as expected. I just had some issues
with the virtio-ivshmem-block tests here.

I suppose you mean "linux/tools/virtio/virtio-ivshmem-block"?

I noticed "ffc002" is the main difference; however, I saw no response
when running the echo command here. Is there anything I need to prepare?

I have already built the driver into the guest kernel.

Do I need a new protocol or anything for the command line below?
ivshmem2-server -F -l 64K -n 2 -V 3 -P 0x8003

Best,
Liang

Jan Kiszka

Apr 9, 2020, 10:11:25 AM
to Liang Yan, qemu-devel, Markus Armbruster, Claudio Fontana, Stefan Hajnoczi, Michael S . Tsirkin, Hannes Reinecke, Jailhouse
Yes, copy&paste mistake, had the same issue over in
https://github.com/siemens/jailhouse/blob/master/Documentation/inter-cell-communication.md

>
> I noticed "ffc002" is the main difference; however, I saw no response
> when running the echo command here. Is there anything I need to prepare?
>
> I have already built the driver into the guest kernel.
>
> Do I need a new protocol or anything for the command line below?
> ivshmem2-server -F -l 64K -n 2 -V 3 -P 0x8003

Yes, you need to adjust that command line - didn't I document that
somewhere? Looks like I didn't:

ivshmem2-server -F -l 1M -n 2 -V 2 -P 0x8002

i.e. a bit more memory is good (but this isn't speed-optimized anyway),
you only need 2 vectors here (but more do not harm), and the protocol
indeed needs adjustment (that is the key).

Jan

--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux

Liang Yan

Apr 9, 2020, 9:02:47 PM
to Jan Kiszka, qemu-devel, Markus Armbruster, Claudio Fontana, Stefan Hajnoczi, Michael S . Tsirkin, Hannes Reinecke, Jailhouse
Thanks for the reply. I just confirmed that virtio-ivshmem-block works
with the new configuration; a "vdb" disk is attached to the frontend VM.
I will send out a full test summary later.

Best,
Liang




Liang Yan

Apr 29, 2020, 12:17:23 AM
to Jan Kiszka, qemu-devel, Markus Armbruster, Claudio Fontana, Stefan Hajnoczi, Michael S . Tsirkin, Hannes Reinecke, Jailhouse
Hi, All,

I tested these patches, and everything looked fine.

Test environment:
Host: opensuse tumbleweed + latest upstream qemu + these three patches
Guest: opensuse tumbleweed root fs + custom kernel(5.5) + related
uio-ivshmem driver + ivshmem-console/ivshmem-block tools


1. lspci show

00:04.0 Unassigned class [ff80]: Siemens AG Device 4106 (prog-if 02)
Subsystem: Red Hat, Inc. Device 1100
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Region 0: Memory at fea56000 (32-bit, non-prefetchable) [size=4K]
Region 1: Memory at fea57000 (32-bit, non-prefetchable) [size=4K]
Region 2: Memory at fd800000 (64-bit, prefetchable) [size=1M]
Capabilities: [4c] Vendor Specific Information: Len=18 <?>
Capabilities: [40] MSI-X: Enable+ Count=2 Masked-
Vector table: BAR=1 offset=00000000
PBA: BAR=1 offset=00000800
Kernel driver in use: virtio-ivshmem


2. virtio-ivshmem-console test
2.1 ivshmem2-server(host)

airey:~/ivshmem/qemu/:[0]# ./ivshmem2-server -F -l 64K -n 2 -V 3 -P 0x8003
*** Example code, do not use in production ***

2.2 guest vm backend(test-01)
localhost:~ # echo "110a 4106 1af4 1100 ffc003 ffffff" >
/sys/bus/pci/drivers/uio_ivshmem/new_id
[ 185.831277] uio_ivshmem 0000:00:04.0: state_table at
0x00000000fd800000, size 0x0000000000001000
[ 185.835129] uio_ivshmem 0000:00:04.0: rw_section at
0x00000000fd801000, size 0x0000000000007000

localhost:~ # virtio/virtio-ivshmem-console /dev/uio0
Waiting for peer to be ready...

2.3 guest vm frontend(test-02)
Boot or reboot the frontend after the backend is ready.

2.4 backend shows the serial output of the frontend

localhost:~/virtio # ./virtio-ivshmem-console /dev/uio0
Waiting for peer to be ready...
Starting virtio device
device_status: 0x0
device_status: 0x1
device_status: 0x3
device_features_sel: 1
device_features_sel: 0
driver_features_sel: 1
driver_features[1]: 0x13
driver_features_sel: 0
driver_features[0]: 0x1
device_status: 0xb
queue_sel: 0
queue size: 8
queue driver vector: 1
queue desc: 0x200
queue driver: 0x280
queue device: 0x2c0
queue enable: 1
queue_sel: 1
queue size: 8
queue driver vector: 2
queue desc: 0x400
queue driver: 0x480
queue device: 0x4c0
queue enable: 1
device_status: 0xf

Welcome to openSUSE Tumbleweed 20200326 - Kernel 5.5.0-rc5-1-default+
(hvc0).

enp0s3:


localhost login:

2.5 closing the backend makes the frontend show
localhost:~ # [ 185.685041] virtio-ivshmem 0000:00:04.0: backend failed!

3. virtio-ivshmem-block test

3.1 ivshmem2-server(host)
airey:~/ivshmem/qemu/:[0]# ./ivshmem2-server -F -l 1M -n 2 -V 2 -P 0x8002
*** Example code, do not use in production ***

3.2 guest vm backend(test-01)

localhost:~ # echo "110a 4106 1af4 1100 ffc002 ffffff" >
/sys/bus/pci/drivers/uio_ivshmem/new_id
[ 77.701462] uio_ivshmem 0000:00:04.0: state_table at
0x00000000fd800000, size 0x0000000000001000
[ 77.705231] uio_ivshmem 0000:00:04.0: rw_section at
0x00000000fd801000, size 0x00000000000ff000
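As a sanity check on the log above (inferred from these numbers, not stated in the spec): with -l 1M, the 4 KiB state table sits at the start of the shared-memory BAR and the rw_section occupies the remainder.

```python
# Assumption: layout inferred from the uio_ivshmem log above - a 4 KiB
# state table at the start of the 1 MiB shared-memory region, with the
# rw_section placed directly behind it.
KiB = 1 << 10
MiB = 1 << 20
bar2_base = 0xfd800000          # BAR 2 (Region 2) from the lspci output
state_table_size = 4 * KiB      # "size 0x0000000000001000" in the log
rw_section_addr = bar2_base + state_table_size
rw_section_size = MiB - state_table_size
print(hex(rw_section_addr))     # 0xfd801000, matching the log
print(hex(rw_section_size))     # 0xff000, matching the log
```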

localhost:~ # virtio/virtio-ivshmem-block /dev/uio0 /root/disk.img
Waiting for peer to be ready...

3.3 guest vm frontend(test-02)
Boot or reboot the frontend after the backend is ready.

3.4 guest vm backend(test-01)
localhost:~ # virtio/virtio-ivshmem-block /dev/uio0 /root/disk.img
Waiting for peer to be ready...
Starting virtio device
device_status: 0x0
device_status: 0x1
device_status: 0x3
device_features_sel: 1
device_features_sel: 0
driver_features_sel: 1
driver_features[1]: 0x13
driver_features_sel: 0
driver_features[0]: 0x206
device_status: 0xb
queue_sel: 0
queue size: 8
queue driver vector: 1
queue desc: 0x200
queue driver: 0x280
queue device: 0x2c0
queue enable: 1
device_status: 0xf

3.5 guest vm frontend(test-02), a new disk is attached:

fdisk /dev/vdb

Disk /dev/vdb: 192 KiB, 196608 bytes, 384 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
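The fdisk numbers are internally consistent; a quick arithmetic check:

```python
# Verify the fdisk report above: 384 sectors of 512 bytes each.
sectors = 384
sector_size = 512
size_bytes = sectors * sector_size
print(size_bytes)            # 196608 bytes, as fdisk reports
print(size_bytes // 1024)    # 192 KiB
```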

3.6 closing the backend makes the frontend show
localhost:~ # [ 1312.284301] virtio-ivshmem 0000:00:04.0: backend failed!



Tested-by: Liang Yan <ly...@suse.com>


Jan Kiszka

Apr 29, 2020, 7:50:17 AM
to Liang Yan, qemu-devel, Markus Armbruster, Claudio Fontana, Stefan Hajnoczi, Michael S . Tsirkin, Hannes Reinecke, Jailhouse
Thanks for testing this! I'll look into your patch findings.

Jan