
KVM: GPU passthrough


Gokan Atmaca

Apr 8, 2021, 6:40:04 PM
Hello

I want to pass the graphics card directly through to a virtual machine. IOMMU
appears to be enabled, but unfortunately the virtual machine fails to
start.


dmesg:

[ 0.010066] ACPI: DMAR 0x000000009D8B7000 000070 (v01 INTEL EDK2
00000002 01000013)
[ 0.121392] DMAR: IOMMU enabled
[ 0.202324] DMAR: Host address width 39
[ 0.202325] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[ 0.202331] DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap
d2008c40660462 ecap f050da
[ 0.202333] DMAR: RMRR base: 0x0000009e543000 end: 0x0000009e78cfff
[ 0.202336] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
[ 0.202338] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[ 0.202339] DMAR-IR: Queued invalidation will be enabled to support
x2apic and Intr-remapping.
[ 0.203666] DMAR-IR: Enabled IRQ remapping in x2apic mode
[ 0.391676] iommu: Default domain type: Translated
[ 0.591706] DMAR: No ATSR found
[ 0.591762] DMAR: dmar0: Using Queued invalidation
[ 0.591942] pci 0000:00:00.0: Adding to iommu group 0
[ 0.592011] pci 0000:00:01.0: Adding to iommu group 1
[ 0.592090] pci 0000:00:08.0: Adding to iommu group 2
[ 0.592367] pci 0000:00:14.0: Adding to iommu group 3
[ 0.592378] pci 0000:00:14.2: Adding to iommu group 3
[ 0.592438] pci 0000:00:16.0: Adding to iommu group 4
[ 0.592519] pci 0000:00:17.0: Adding to iommu group 5
[ 0.592583] pci 0000:00:1b.0: Adding to iommu group 6
[ 0.592674] pci 0000:00:1c.0: Adding to iommu group 7
[ 0.592687] pci 0000:00:1c.3: Adding to iommu group 7
[ 0.594066] pci 0000:00:1f.0: Adding to iommu group 8
[ 0.594075] pci 0000:00:1f.2: Adding to iommu group 8
[ 0.594084] pci 0000:00:1f.4: Adding to iommu group 8
[ 0.594091] pci 0000:01:00.0: Adding to iommu group 1
[ 0.594096] pci 0000:01:00.1: Adding to iommu group 1
[ 0.594104] pci 0000:02:00.0: Adding to iommu group 6
[ 0.594112] pci 0000:03:00.0: Adding to iommu group 7
[ 0.594119] pci 0000:04:00.0: Adding to iommu group 7
[ 0.594122] DMAR: Intel(R) Virtualization Technology for Directed I/O


error:
pci,host=0000:01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio
0000:01:00.0: group 1 is not viable
Please ensure all devices within the iommu_group are bound to their
vfio bus driver.
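For anyone hitting the same "group 1 is not viable" message: the kernel exposes group membership under sysfs, so you can list exactly which devices share a group with the GPU. A minimal sketch (the IOMMU_BASE variable is only a test seam here, not a real kernel parameter):

```shell
#!/bin/sh
# List every IOMMU group and the PCI devices in it. On the system
# above this would show 0000:00:01.0, 0000:01:00.0 and 0000:01:00.1
# all sitting in group 1.
IOMMU_BASE="${IOMMU_BASE:-/sys/kernel/iommu_groups}"

list_iommu_groups() {
    for dev in "$IOMMU_BASE"/*/devices/*; do
        [ -e "$dev" ] || continue           # no IOMMU groups at all
        group=${dev#"$IOMMU_BASE"/}         # strip the base prefix
        group=${group%%/*}                  # keep only the group number
        printf 'group %s: %s\n' "$group" "$(basename "$dev")"
    done
}

list_iommu_groups
```

Every device printed for the GPU's group must be bound to vfio-pci (or be a PCIe root port) before QEMU will accept the assignment.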

Dan Ritter

Apr 8, 2021, 8:00:04 PM
Gokan Atmaca wrote:
> Hello
>
> I want to use the graphics card directly in the virtual machine. IOMMU
> seems to be running, but unfortunately it doesn't work when I want to
> start the virtual machine.
>
>
> error:
> pci,host=0000:01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio
> 0000:01:00.0: group 1 is not viable
> Please ensure all devices within the iommu_group are bound to their
> vfio bus driver.

Just to confirm: you have at least two graphics cards? One for
the host to boot with, one for your guest to take over?

And you loaded the vfio mod and configured it with the PCI ids
for your second card? There could be several.
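To make that second question concrete: the usual approach is to hand the card's vendor:device IDs to vfio-pci via modprobe configuration. A sketch only; the IDs below are the GT218 GPU and its audio function that appear later in this thread, so substitute the output of `lspci -nn` for your own card:

```
# /etc/modprobe.d/vfio.conf -- example only; replace the IDs with the
# vendor:device pairs of the card you want to pass through.
options vfio-pci ids=10de:0a65,10de:0be3
# Make sure vfio-pci claims the card before the host driver does:
softdep nvidia pre: vfio-pci
```

Depending on the distribution you may also need to regenerate the initramfs (e.g. `update-initramfs -u` on Debian) so vfio-pci binds at boot, before the host GPU driver loads.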

-dsr-

Gokan Atmaca

Apr 15, 2021, 9:30:04 AM
Hello

> Just to confirm: you have at least two graphics cards? One for
> the host to boot with, one for your guest to take over?

It did work in my tests. But of course, since there is only one
graphics card, the host system's display goes away. :) I am looking
for a motherboard that can take two graphics cards.

Gokan Atmaca

Apr 27, 2021, 8:00:04 AM
Hello

I have two GPUs now; my other video card has arrived. The error has changed.
What could be the problem?


error:
Error starting domain: internal error: qemu unexpectedly closed the
monitor: 2021-04-27T11:26:00.638521Z qemu-system-x86_64:
-device vfio-pci,host=0000:06:00.0,id=hostdev0,bus=pci.0,addr=0xa:
vfio 0000:06:00.0: failed to setup container for group
18: Failed to set iommu for container: Operation not permitted


-% modules:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

-% log:
dmesg | grep -E "DMAR|IOMMU"
[ 0.020358] ACPI: DMAR 0x00000000BFE880C0 000090 (v01 AMI
OEMDMAR 00000001 MSFT 00000097)
[ 0.052689] DMAR: IOMMU enabled
[ 0.124828] DMAR: Host address width 36
[ 0.124829] DMAR: DRHD base: 0x000000fed90000 flags: 0x1
[ 0.124834] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap
c90780106f0462 ecap f020e3
[ 0.124835] DMAR: RMRR base: 0x000000000e4000 end: 0x000000000e7fff
[ 0.124836] DMAR: RMRR base: 0x000000bfeec000 end: 0x000000bfefffff
[ 0.564105] DMAR: No ATSR found
[ 0.564226] DMAR: dmar0: Using Queued invalidation
[ 0.569521] DMAR: Intel(R) Virtualization Technology for Directed I/O
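One hedged observation on the log above: unlike the first machine's dmesg, this one has no "DMAR-IR ... Enabled IRQ remapping" lines at all, and "Failed to set iommu for container: Operation not permitted" from the type1 VFIO backend is the classic symptom of missing interrupt remapping. If that is the cause (an assumption, not a certainty), the documented workaround is:

```
# /etc/modprobe.d/vfio_iommu.conf -- only if interrupt remapping is
# genuinely unsupported; this weakens isolation between guest and host,
# since the guest can then inject arbitrary MSIs.
options vfio_iommu_type1 allow_unsafe_interrupts=1
```

Reload the vfio_iommu_type1 module (or reboot) after adding it.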

-% gpus:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GT218
[GeForce 210] [10de:0a65] (rev a2)
01:00.1 Audio device [0403]: NVIDIA Corporation High Definition Audio
Controller [10de:0be3] (rev a1)

nvidia_uvm 36864 0
nvidia 10592256 77 nvidia_uvm
drm 552960 11 drm_kms_helper,nvidia,radeon,ttm

-% gpus:
06:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc.
[AMD/ATI] Caicos [Radeon HD 6450/7450/8450 / R5 230 OEM] [1002:6779]
06:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI]
Caicos HDMI Audio [Radeon HD 6450 / 7450/8450/8490 OEM / R5
230/235/235X OEM] [1002:a..

radeon 1466368 2
ttm 102400 1 radeon
drm_kms_helper 217088 1 radeon
i2c_algo_bit 16384 1 radeon
drm 552960 11 drm_kms_helper,nvidia,radeon,ttm

Gokan Atmaca

Apr 30, 2021, 1:10:05 PM
OK, it works now. I reduced the amount of RAM I had assigned for the
GPU. But I saw errors like the following.


---% kernel_err:

[ 9.487622] r8169 0000:02:00.0: firmware: failed to load
rtl_nic/rtl8168d-2.fw (-2)
[ 9.487697] firmware_class: See https://wiki.debian.org/Firmware
for information about missing firmware
[ 1159.047398] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 1159.047534] handlers:
[ 1159.047572] [<000000007029899b>] usb_hcd_irq [usbcore]
[ 1164.024714] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 1164.024846] handlers:
[ 1164.024883] [<000000007029899b>] usb_hcd_irq [usbcore]
[ 1268.843310] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 1268.843448] handlers:
[ 1268.843487] [<000000007029899b>] usb_hcd_irq [usbcore]
[ 1323.645066] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 1323.645198] handlers:
[ 1323.645236] [<000000007029899b>] usb_hcd_irq [usbcore]
root@homeKvm:~#
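On the "irq 16: nobody cared" spew: the kernel log itself suggests the irqpoll option. A sketch of applying it, assuming a GRUB-based boot (keep whatever options you already have; irqpoll adds overhead, so treat it as a diagnostic rather than a fix):

```
# /etc/default/grub (excerpt) -- append irqpoll to the existing line,
# then run update-grub and reboot. The other options shown are just a
# typical passthrough setup, not taken from this system.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on irqpoll"
```

Since the only registered handler is usb_hcd_irq, it may also be worth checking whether a passed-through device shares legacy IRQ 16 with a USB controller.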

On Fri, Apr 30, 2021 at 7:36 PM Gokan Atmaca <linux...@gmail.com> wrote:
>
> system boots up but freezes. It just stays like that. I guess the
> problem is with the hardware.
>
>
>
> On Tue, Apr 27, 2021 at 6:14 PM Christian Seiler <chri...@iwakd.de> wrote:
> >
> > Hi there,
> >
> > On 2021-04-09 00:37, Gokan Atmaca wrote:
> > > error:
> > > pci,host=0000:01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio
> > > 0000:01:00.0: group 1 is not viable
> > > Please ensure all devices within the iommu_group are bound to their
> > > vfio bus driver.
> >
> > This is a known issue with PCIe passthrough: depending on your
> > mainboard and CPU, some PCIe devices will be grouped together,
> > and you will either be able to forward _all_ devices in the
> > group to the VM or none at all.
> >
> > (If you have a "server" GPU that supports SR-IOV you'd have
> > additional options, but that doesn't appear to be the case.)
> >
> > This will highly depend on the PCIe slot the card is in, as well
> > as potentially some BIOS/UEFI settings on PCIe lane distribution.
> >
> > First let's find out what devices are in the same IOMMU group.
> > From your kernel log:
> >
> > [ 0.592011] pci 0000:00:01.0: Adding to iommu group 1
> > [ 0.594091] pci 0000:01:00.0: Adding to iommu group 1
> > [ 0.594096] pci 0000:01:00.1: Adding to iommu group 1
> >
> > Could you check with "lspci" what these devices are in your case?
> >
> > If you are comfortable forwarding the other two devices into the
> > VM as well, just add that to the list of passthrough devices,
> > then this should work.
> >
> > If you need the other two devices on the host, then you need to
> > either put the GPU into a different PCIe slot, put the other
> > devices into a different PCIe slot, or find some BIOS/UEFI setting
> > for PCIe lane management that separates the devices in question
> > into different IOMMU groups implicitly. (BIOS/UEFI settings will
> > typically not mention IOMMU groups at all, so look for "lane
> > management" or "lane distribution" or something along those
> > lines. You might need to drop some PCIe lanes from other devices
> > and give them directly to the GPU you want to pass through in
> > order for this to work, or vice-versa, depending on the specific
> > situation.)
> >
> > Note: the GUI tool "lstopo" from the package "hwloc" is _very_
> > useful to identify how the PCIe devices are organized in your
> > system and may give you a clue as to why your system is grouped
> > together in the way it is.
> >
> > Hope that helps.
> >
> > Regards,
> > Christian
> >
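As a footnote to Christian's suggestion: the group a single PCI address belongs to can also be read straight from sysfs, which pairs nicely with `lspci`. A small sketch (the PCI_SYSFS variable is only a test seam, not a real kernel parameter):

```shell
#!/bin/sh
# Print the IOMMU group number of one PCI address by resolving the
# sysfs iommu_group symlink, e.g. iommu_group_of 0000:01:00.0
PCI_SYSFS="${PCI_SYSFS:-/sys/bus/pci/devices}"

iommu_group_of() {
    link="$PCI_SYSFS/$1/iommu_group"
    [ -e "$link" ] || { echo "no IOMMU group for $1" >&2; return 1; }
    # The symlink points at /sys/kernel/iommu_groups/<N>
    basename "$(readlink -f "$link")"
}
```

Combined with `lspci -nnk -s <address>` (which also shows the bound driver), this answers both of Christian's questions in one pass.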