
Bug#1016359: ovmf: does not recognise SCSI controllers


Thorsten Glaser

Jul 29, 2022, 7:40:04 PM
Package: ovmf
Version: 2020.11-2+deb11u1
Severity: important
X-Debbugs-Cc: t...@mirbsd.de

Configuring a qemu/kvm amd64 VM in virt-manager with a SCSI controller
(lsilogic in my attempt; other people report¹ virtio-scsi to be affected
as well) prevents the system from booting with EFI (BIOS boot works).

This is *extremely* annoying because SCSI is more reliable than SATA,
and many pre-configured VMs use lsilogic SCSI controllers (e.g. some
dev images from Redmond, which *do* require Restricted Boot).

¹ https://bugzilla.redhat.com/show_bug.cgi?id=1754704 (which RH managed
to ignore for years, then auto-close)


-- System Information:
Debian Release: 11.4
APT prefers stable-updates
APT policy: (500, 'stable-updates'), (500, 'stable-security'), (500, 'oldstable-updates'), (500, 'oldoldstable'), (500, 'stable'), (500, 'oldstable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 5.10.0-10-amd64 (SMP w/8 CPU threads)
Kernel taint flags: TAINT_FIRMWARE_WORKAROUND
Locale: LANG=C, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /bin/lksh
Init: sysvinit (via /sbin/init)

-- no debconf information

Vincent Danjean

Nov 29, 2022, 11:00:03 AM
Hi,

I can confirm that virtio-scsi does not work anymore
(and that this is indeed really annoying).

A simple workaround: just downgrade ovmf to the bullseye version

Regards
Vincent
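The downgrade Vincent describes might look roughly like this on a bullseye system. This is a sketch only: the version string is taken from the original report, and whether that version is still installable directly or has to be fetched from snapshot.debian.org is an assumption.

```shell
# Sketch: pin ovmf back to the version from the original report.
# If apt no longer offers it, the .deb may need to come from
# snapshot.debian.org instead.
sudo apt install ovmf=2020.11-2+deb11u1
sudo apt-mark hold ovmf   # keep it from being upgraded again
```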

dann frazier

Nov 29, 2022, 4:20:04 PM
On Tue, Nov 29, 2022 at 04:47:58PM +0100, Vincent Danjean wrote:
> Hi,
>
> I can confirm that virtio-scsi does not work anymore
> (and that this is indeed really annoying).

It looks like upstream changed the default to disabled ahead of stable202208:

commit 57783adfb579da32b1eeda77b2bec028a5e0b7b3
Author: Michael D Kinney <michael....@intel.com>
Date: Tue Jul 26 12:40:00 2022 -0700

OvmfPkg: Change default to disable MptScsi and PvScsi

I suppose we could override that and turn them back on - but that
implies doing so without upstream support.
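If Debian did carry that delta, re-enabling the drivers would presumably mean passing the DSC defines from the commit above on the edk2 build command line. A sketch, with illustrative (not Debian's actual) architecture/toolchain flags:

```shell
# Sketch: rebuild OVMF with the disabled SCSI drivers switched back on.
# The -D flags mirror the DSC defines flipped by commit 57783adfb579.
build -a X64 -t GCC5 -p OvmfPkg/OvmfPkgX64.dsc \
      -D MPT_SCSI_ENABLE=TRUE \
      -D PVSCSI_ENABLE=TRUE \
      -D LSI_SCSI_ENABLE=TRUE
```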

Gerd Hoffmann

Dec 6, 2022, 1:10:04 AM
On Mon, Dec 05, 2022 at 04:36:15PM -0700, dann frazier wrote:
> On Tue, Jul 26, 2022 at 12:46:39PM -0700, Michael D Kinney wrote:
> > The email addresses for the reviewers of the MptScsi and
> > PvScsi are no longer valid. Disable the MptScsi and PvScsi
> > drivers in all DSC files until new maintainers/reviewers can
> > be identified.
>
> Hi Michael,
>
> This seems likely to be the reason for the following regression
> report in Debian:
>
> https://bugs.debian.org/1016359

I'm not so sure about that.

> > - DEFINE PVSCSI_ENABLE = TRUE
> > - DEFINE MPT_SCSI_ENABLE = TRUE
> > + DEFINE PVSCSI_ENABLE = FALSE
> > + DEFINE MPT_SCSI_ENABLE = FALSE
> > DEFINE LSI_SCSI_ENABLE = FALSE

The bug report talks about lsilogic and virtio-scsi.

lsilogic was already disabled by default before this patch.

virtio-scsi support is included and there are no plans to change
that because it is a rather essential driver. It works just fine
upstream, and there isn't even a config switch to disable it.

take care,
Gerd

Mike Maslenkin

Dec 6, 2022, 10:10:05 AM
Greetings All!

As far as I can see, LSI_SCSI_ENABLE relates to the LSI 53C895A
(LSI_53C895A_PCI_DEVICE_ID, VID/DID 0x1000:0x0012).
I guess it is some old MegaRAID adapter.

The patch mentioned above set MPT_SCSI_ENABLE=FALSE, which removed
support for the LSI 53C1030 and SAS1068.
These SCSI controllers were emulated by VMware, Parallels and, I guess,
VirtualBox.
This is the generic setup for VMware VMs, as far as I remember.
So booting such VMs (probably migrated from VMware and others)
was definitely broken.

Regards,
Mike.


On Tue, Dec 6, 2022 at 5:38 PM dann frazier <dann.f...@canonical.com> wrote:
> Thanks Gerd - I'll work with the users to clarify via the bug (thanks
> for responding there as well btw).
>
> -dann

dann frazier

Dec 6, 2022, 2:20:04 PM
tag 1016359 + moreinfo
thanks

Hi Thorsten,

Could you confirm the last version that worked for you - perhaps
testing some builds from snapshot.debian.org if you're unsure? If you
can provide the libvirt XML from your VM, that may be useful as well.

-dann

Thorsten Glaser

Dec 6, 2022, 2:30:03 PM
Hi dann,

> Could you confirm the last version that worked for you - perhaps

I have never booted an EFI system before; this VM was the first
time for me, so I do not have a “last version” either way.

As I said, this does work with BIOS (or at least used to).

bye,
//mirabilos
--
[...] if maybe ext3fs wasn't a better pick, or jfs, or maybe reiserfs, oh but
what about xfs, and if only i had waited until reiser4 was ready... in the be-
ginning, there was ffs, and in the middle, there was ffs, and at the end, there
was still ffs, and the sys admins knew it was good. :) -- Ted Unangst über *fs

dann frazier

Dec 6, 2022, 3:40:03 PM
On Tue, Dec 06, 2022 at 07:25:27PM +0000, Thorsten Glaser wrote:
> Hi dann,
>
> > Could you confirm the last version that worked for you - perhaps
>
> I have never booted an EFI system before; this VM was the first
> time for me, so I do not have a “last version” either way.
>
> As I said, this does work with BIOS (or at least used to).

OK, then I think there may be multiple conflated issues here. Let's
focus on the original use case you described - a VM created with
virt-manager using a SCSI controller doesn't work. You tried
"lsilogic", but you assume based on a RH report that "virtio-scsi"
also does not work.

I just tried this on latest sid, and was able to reproduce. "lsilogic"
does indeed not work. I also didn't find a way to tell virt-manager to
use anything other than "lsilogic". But, when I edited the XML and
changed "lsilogic" to "virtio-scsi" (and ran virsh define), the system
booted fine.
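The edit described above can also be scripted. A minimal sketch, using a
stand-in file for `virsh dumpxml` output (the domain XML is reduced to the
one relevant controller element, and the domain name is hypothetical):

```shell
# Stand-in for: virsh dumpxml <domain> > demo.xml
cat > demo.xml <<'EOF'
<domain type='kvm'>
  <devices>
    <controller type='scsi' index='0' model='lsilogic'/>
  </devices>
</domain>
EOF

# Switch the SCSI controller model, as done by hand in the test above
sed -i "s/model='lsilogic'/model='virtio-scsi'/" demo.xml

# With a real domain, this would be followed by: virsh define demo.xml
grep "controller" demo.xml
```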

Thorsten - do you have a reason to prefer lsilogic over virtio-scsi?
If not, I suggest just using virtio-scsi. I don't know why
virt-manager defaults to lsilogic - even virt-install seems to default
to virtio-scsi. I suggest someone take that up w/ the virt-manager
maintainer(s).

Now, Vincent me too'd this bug to say that virtio-scsi wasn't working
for them, but the version in bullseye did work. Thorsten reported
using the version of ovmf that *is* in bullseye and wasn't using
virtio-scsi. So whatever Vincent is/was seeing seems like a separate
issue. If you are still having a problem, Vincent, please report a
separate issue.

-dann

Thorsten Glaser

Dec 6, 2022, 4:30:04 PM
Hi dann,

>OK, then I think there may be multiple conflated issues here. Let's
>focus on the original use case you described - a VM created with
>virt-manager using a SCSI controller doesn't work. You tried

Nonono, not quite.

The original use case: an *EFI* VM with a SCSI controller does not
work. This is what I reported.

>I just tried this on latest sid, and was able to reproduce. "lsilogic"
>does indeed not work.

Oh, interesting. That’s got to be a new/different bug. We were on
bullseye, not sid, in case that helps.

>I also didn't find a way to tell virt-manager to use anything other
>that "lsilogic". But, when I edited the XML and changed "lsilogic" to
>"virtio-scsi" (and ran virsh define), the system booted fine.

Yeah well virt-manager… it does have an XML editor built in, but
sometimes…

I did just test a BIOS case: I took an existing VM I had but was
not using (an OpenBSD VM), dropped the IDE disc, added an lsilogic
SCSI controller, added a SCSI disc with the same backing LV that
the IDE disc had, booted it, and it works. Also on bullseye.

>Thorsten - do you have reason that you prefer lsilogic to virtio-scsi?

Yes: I mostly run operating systems with no or insufficient support
for virtio over the hypervisor interface. (There’s also virtio over
PCI, but my inquiries to the qemu developers how to even access this
led to them eventually agreeing it probably isn’t even implemented
fully yet.)

In the specific case, it was a VM “appliance” imported from some
other virtualisation tools that had a preconfigured Windows, and
the other VM hosts all use lsilogic for that.

>Now, Vincent me too'd this bug to say that virtio-scsi wasn't working
>for them, but the version in bullseye did work. Thorsten reported
>using the version of ovmf that *is* in bullseye and wasn't using
>virtio-scsi. So whatever Vincent is/was seeing seems like a separate
>issue. If you are still having a problem Vincent, please report a
>separate issue.

I’m not too sure about this either.

I also had a grml-efi VM lying around, which incidentally already
had a virtio-scsi configured, so I did the same thing: drop the
SATA CD, re-add it as an SCSI HDD, change the boot order, start.
It switches from “the guest has not initialised the display yet”
to “viewer was disconnected” very quickly. (I also did a test
with the NIC in the boot order enabled, and it does netboot, so
the problem is with, again, SCSI.)

So I can state, with reasonable confidence, that EFI booting in
bullseye works with neither lsilogic nor virtio-scsi. This makes
it mostly unsuitable for running most Windows VMs.

bye,
//mirabilos
--
21:12⎜<Vutral> even with opensolaris the community built in pretty
much every kind of junk │ unices shouldn't be made in a way that gets
desktop users too interested │ that ruins the code base
21:13⎜<Vutral:#MirBSD> linux used to be better, too :D

Gerd Hoffmann

Dec 7, 2022, 2:50:04 AM
Hi,

> A patch mentioned above set MPT_SCSI_ENABLE=FALSE, that removed
> support for LSI 53C1030 and SAS1068.
> These SCSI controllers were emulated by VMware, Parallels and I guess
> VitualBox.
> This is generic setup for VMware VMs, as far as I remember.
> So the booting of such VMs (probably migrated from VMware and others)
> was definitely broken.

Yes. The problem is that there is no maintainer for the driver. There
used to be one, but the email address started bouncing. So we updated
Maintainers.txt and flipped the switch so the unmaintained drivers are
not built by default.

If Debian is fine with shipping unmaintained software to its users, you
can of course flip the config switches, at least as long as the drivers
are still in the tree. The drivers are at risk of being removed, though,
if we don't find a new maintainer within a year or two.

take care,
Gerd

Ard Biesheuvel

Dec 7, 2022, 9:20:04 AM
Indeed. These options can be set from the command line when building
the image, so the distro wrapper scripts can just en/disable the
features they desire.

As for maintenance: indeed, lack of maintainership generally also
means lack of testing coverage, and if something breaks, we won't
notice, and if we do, we may not be able to fix it without running the
risk of breaking something else.

So at some point, these drivers will be removed rather than kept alive
by the core team unless someone steps up.

dann frazier

Dec 7, 2022, 9:50:03 AM
On Tue, Dec 06, 2022 at 09:21:21PM +0000, Thorsten Glaser wrote:
> Hi dann,
>
> >OK, then I think there may be multiple conflated issues here. Let's
> >focus on the original use case you described - a VM created with
> >virt-manager using a SCSI controller doesn't work. You tried
>
> Nonono, not quite.
>
> The original use case: an *EFI* VM with a SCSI controller does not
> work. This is what I reported.

Sure, I thought that was implied since this is a UEFI firmware bug. To
be clear, my testing was with UEFI boot enabled.

> >I just tried this on latest sid, and was able to reproduce. "lsilogic"
> >does indeed not work.
>
> Oh, interesting. That’s got to be a new/different bug. We were on
> bullseye, not sid, in case that helps.

Do you believe it is new/different because you assumed I was not using
a UEFI VM, or some other reason(s)? Note that I tested both bullseye
and sid - neither supports it, and there's no evidence we ever did.

> >Thorsten - do you have reason that you prefer lsilogic to virtio-scsi?
>
> Yes: I mostly run operating systems with no or insufficient support
> for virtio over the hypervisor interface. (There’s also virtio over
> PCI, but my inquiries to the qemu developers how to even access this
> led to them eventually agreeing it probably isn’t even implemented
> fully yet.)

OK, so it sounds like this bug is really a "please enable lsilogic
support in OVMF" - as that is the only way to support the guests you
mostly run w/ SCSI. Is that accurate?

> In the specific case, it was a VM “appliance” imported from some
> other virtualisation tools that had a preconfigured Windows, and
> the other VM hosts all use lsilogic for that.

I trimmed the content about virtio-scsi. Please report any virtio-scsi
issues in a new bug, since I'm not convinced they are related to the
issue you are having with lsilogic-dependent VMs. Even if you think
they are, I'd much rather treat them as separate and merge them
later if necessary than try to triage both in the same bug.

-dann

Ard Biesheuvel

Dec 7, 2022, 11:10:04 AM
On Wed, 7 Dec 2022 at 17:02, Gerd Hoffmann <kra...@redhat.com> wrote:
>
> On Wed, Dec 07, 2022 at 09:14:39AM -0500, James Bottomley wrote:
> > On Wed, 2022-12-07 at 15:09 +0100, Ard Biesheuvel wrote:
> > > So at some point, these drivers will be removed rather than kept
> > > alive by the core team unless someone steps up.
> >
> > How important is keeping them alive?
>
> Most common use case is probably booting images created on other
> hypervisors on qemu. Otherwise there is little reason to use
> something which is not virtio-scsi.
>
> > I can volunteer to "maintain"
> > them which I anticipate won't be much effort (plus I'm used to looking
> > after obsolete SCSI equipment). The hardware is obsolete, so the
> > mechanics of their emulation isn't going to change, the only potential
> > risk is changes in the guest to host transmission layer that breaks
> > something.
>

Thanks James, that would be very helpful.

> Yes, I don't expect it to be much effort, but knowing oldish scsi stuff
> certainly helps understanding the driver code if needed. If you want to
> step up, send a patch updating Maintainers.txt accordingly.
>

Having the informed opinion of a domain expert should allow us to
diagnose issues related to these drivers with more confidence, and
also give us insight into how obsolete those drivers actually are.

I can send the patch if you prefer.


> > On the other hand, I've got to say I use virtio-scsi in all
> > my VM testing environments,
>
> Same here ;)
>
> take care,
> Gerd
>

Gerd Hoffmann

Dec 7, 2022, 11:10:04 AM
On Wed, Dec 07, 2022 at 09:14:39AM -0500, James Bottomley wrote:
> On Wed, 2022-12-07 at 15:09 +0100, Ard Biesheuvel wrote:
> > So at some point, these drivers will be removed rather than kept
> > alive by the core team unless someone steps up.
>
> How important is keeping them alive?

Most common use case is probably booting images created on other
hypervisors on qemu. Otherwise there is little reason to use
something which is not virtio-scsi.

> I can volunteer to "maintain"
> them which I anticipate won't be much effort (plus I'm used to looking
> after obsolete SCSI equipment). The hardware is obsolete, so the
> mechanics of their emulation isn't going to change, the only potential
> risk is changes in the guest to host transmission layer that breaks
> something.

Yes, I don't expect it to be much effort, but knowing oldish scsi stuff
certainly helps understanding the driver code if needed. If you want to
step up, send a patch updating Maintainers.txt accordingly.

Thorsten Glaser

Dec 7, 2022, 11:20:04 AM
dann frazier dixit:

>Sure, I thought that was implied since this is a UEFI firmware bug. To
>be clear, my testing was with UEFI boot enabled.

Ah okay. That was not clear to me, because you only referred to
the SCSI part.

>Do you believe it is new/different because you assumed I was not using
>a UEFI VM

Yes.

>Note that I tested both bullseye
>and sid - neither supports it, and there's no evidence we ever did.

Ouch!

>OK, so it sounds like this bug is really a "please enable lsilogic
>support in OVMF" - as that is the only way to support the guests you
>mostly run w/ SCSI. Is that accurate?

From the above, I think so. For OVMF, it’s probably equal to a
new feature request, but for the whole virtualisation/emulation
setup it’s rather a missing-feature bug: for feature parity with
BIOS (and nōn-x86) firmwares, and as the “missing half” of qemu’s
built-in support for this SCSI controller.

>I trimmed the content about virtio-scsi. Please report any virtio-scsi
>issues in a new bug since I'm not convinced they are related to the
>issue you are having with lsilogic-dependendent VMs.

OK. I’ll leave the lead for that to Vincent, but feel free to
keep me in Cc on that one; I probably can dig a little deeper
in the failing build with a bit more time.

bye,
//mirabilos
--
<cnuke> filing down the AGP connector so it fits the slot on the 440BX board…
or power supplies that the monitor was also plugged into, which then made for
an electrically charged case […] good for a laugh at any LAN party │ <nvb>
back then, when the pizza dough still "rose" on the monitor

Vincent Danjean

Dec 7, 2022, 4:10:04 PM
Hi,

On 07/12/2022 at 17:13, Thorsten Glaser wrote:
> OK. I’ll leave the lead for that to Vincent, but feel free to
> keep me in Cc on that one; I probably can dig a little deeper
> in the failing build with a bit more time.

I filed a new bug: #1025701

Regards
Vincent

Thorsten Glaser

Dec 7, 2022, 5:00:04 PM
Dixi quod…

>I also had a grml-efi VM lying around, which incidentally already
>had a virtio-scsi configured, so I did the same thing: drop the
>SATA CD, re-add it as an SCSI HDD, change the boot order, start.
>It switches from “the guest has not initialised the display yet”
>to “viewer was disconnected” very quickly. (I also did a test
>with the NIC in the boot order enabled, and it does netboot, so
>the problem is with, again, SCSI.)


Vincent Danjean dixit:

>Downgrading to 2020.11-2+deb11u1 fixes the issue.


This led me to reinvestigate. Turns out that OVMF cannot boot
when the ISO image is added as a read-only SCSI disc to the
system, as opposed to a read-write one.

So I have to change my earlier statement to:

>So I can state, with reasonable confidence, that EFI booting in
->bullseye works with neither lsilogic nor virtio-scsi. This makes
+ bullseye does not work with lsilogic, but works in bullseye
+ with virtio-scsi. This makes
>it mostly unsuitable for running most Windows VMs.

I agree that the virtio-scsi bug should be split from the lsilogic
bug, especially as the former seems to be a regression against
bullseye while the latter is a missing functionality.

Thanks,
//mirabilos
--
“ah that reminds me, thanks for the stellar entertainment that you and certain
other people provide on the Debian mailing lists │ sole reason I subscribed to
them (I'm not using Debian anywhere) is the entertainment factor │ Debian does
not strike me as a place for good humour, much less German admin-style humour”