
Bug#988174: /usr/bin/qemu-aarch64-static: Segfaults sometimes on python3-minimal on arm64


Diederik de Haas

May 6, 2021, 6:20:03 PM
Package: qemu-user-static
Version: 1:5.2+dfsg-10
Severity: normal
File: /usr/bin/qemu-aarch64-static
X-Debbugs-Cc: pkg-raspi-...@lists.alioth.debian.org

When trying to build an arm64 image with the Raspberry Pi image specs
(https://salsa.debian.org/raspi-team/image-specs), I and others have
regularly, but not always, gotten a segfault while installing
python3[-minimal].
This is essentially a 'forwarded' version of an issue reported in that
repo's issue tracker: https://salsa.debian.org/raspi-team/image-specs/-/issues/42

This time I had to try 3 times to trigger the issue, but I did 'succeed'.

You can reproduce it as follows:
- clone the 'image-specs' repo
- add 'python3' or 'python3-minimal' to the "apt: install" step
- run as root or via sudo: make raspi_3_bullseye.img

If it succeeds (you'll see "All went fine."), try again (and again) till
it fails.
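
To automate the retries, a loop along these lines should work (a sketch,
not from the original report; it assumes a failing build exits non-zero
and that the image file must be removed before make will rebuild it):

git clone https://salsa.debian.org/raspi-team/image-specs.git
cd image-specs
# first add 'python3-minimal' to the "apt: install" step (see step 2 above)
n=0
while sudo make raspi_3_bullseye.img; do
    n=$((n+1)); echo "run $n succeeded, retrying"
    sudo rm -f raspi_3_bullseye.img
done
echo "failed after $n successful runs"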

Relevant part of the log file created during the image build:
===================================================
Setting up linux-image-5.10.0-6-arm64 (5.10.28-1) ...
I: /vmlinuz.old is now a symlink to boot/vmlinuz-5.10.0-6-arm64
I: /initrd.img.old is now a symlink to boot/initrd.img-5.10.0-6-arm64
I: /vmlinuz is now a symlink to boot/vmlinuz-5.10.0-6-arm64
I: /initrd.img is now a symlink to boot/initrd.img-5.10.0-6-arm64
Setting up linux-image-arm64 (5.10.28-1) ...
Setting up ssh (1:8.4p1-5) ...
Processing triggers for libc-bin (2.31-11) ...
Processing triggers for ca-certificates (20210119) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Processing triggers for initramfs-tools (0.140) ...
update-initramfs: Generating /boot/initrd.img-5.10.0-6-arm64
Processing triggers for dbus (1.12.20-2) ...

2021-05-06 23:44:24 DEBUG STDERR: E: Can not write log (Is /dev/pts mounted?) - posix_openpt (19: No such device)
Moving old data out of the way
Created symlink /etc/systemd/system/sysinit.target.wants/apparmor.service -> /lib/systemd/system/apparmor.service.
invoke-rc.d: could not determine current runlevel
ls: cannot access '/boot/initrd.img-*': No such file or directory
raspi-firmware: no initrd found in /boot/initrd.img-*, cannot populate /boot/firmware
Updating certificates in /etc/ssl/certs...
129 added, 0 removed; done.
Segmentation fault
dpkg: error processing package python3.9 (--configure):
installed python3.9 package post-installation script subprocess returned error exit status 139

===================================================

And the following gets written to 'dmesg':

===================================================
[38867.808238] EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts: (null)
[38867.849329] show_signal_msg: 38 callbacks suppressed
[38867.849331] arm64[112046]: segfault at 1e54310 ip 00000000005637c0 sp 00007ffc88256398 error 4 in qemu-aarch64-static[401000+3e3000]
[38867.849339] Code: 00 e9 94 78 1c 00 0f 1f 40 00 64 83 2c 25 50 ff ff ff 01 74 05 c3 0f 1f 40 00 48 8d 3d e9 d0 7f 00 e9 e4 85 1c 00 0f 1f 40 00 <64> 8b 04 25 50 ff ff ff 85 c0 0f 9f c0 c3 66 90 48 83 ec 08 64 8b
[38989.616760] EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts: (null)
[39277.809928] python3.9[134304]: segfault at 24de310 ip 00000000005637c0 sp 00007ffc4cc1e6f8 error 4 in qemu-aarch64-static[401000+3e3000]
[39277.809936] Code: 00 e9 94 78 1c 00 0f 1f 40 00 64 83 2c 25 50 ff ff ff 01 74 05 c3 0f 1f 40 00 48 8d 3d e9 d0 7f 00 e9 e4 85 1c 00 0f 1f 40 00 <64> 8b 04 25 50 ff ff ff 85 c0 0f 9f c0 c3 66 90 48 83 ec 08 64 8b

===================================================

I noticed there's now also a segfault on 'arm64', but I don't think I've
seen that before. The one that *consistently* pops up *when* it fails
is "dpkg: error processing package python3.9 (--configure)"; although
thus far it had actually been python3-minimal failing during the
'--configure' step.

Normally the logical candidate to file this against would be python3, but
'we' (various people on #debian-raspberrypi and in the previously
mentioned issue) have not seen this issue on a native arm64 device or
under systemd-nspawn.

https://salsa.debian.org/raspi-team/image-specs/-/issues/40#note_216315
(and following) also contains several 'segfault' incidents.



-- System Information:
Debian Release: 11.0
APT prefers unstable-debug
APT policy: (500, 'unstable-debug'), (500, 'testing-debug'), (500, 'unstable'), (500, 'testing'), (101, 'experimental')
Architecture: amd64 (x86_64)

Kernel: Linux 5.10.0-6-amd64 (SMP w/16 CPU threads)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE=en_US
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

qemu-user-static depends on no packages.

Versions of packages qemu-user-static recommends:
ii binfmt-support 2.2.1-1

Versions of packages qemu-user-static suggests:
ii sudo 1.9.5p2-3

-- no debconf information

Diederik de Haas

May 6, 2021, 8:20:03 PM
I have now 'finally' gotten the error 'I was looking for':

Traceback (most recent call last):
  File "/usr/bin/py3compile", line 319, in <module>
    main()
  File "/usr/bin/py3compile", line 290, in main
    compile(files, compile_versions, options.force,
  File "/usr/bin/py3compile", line 201, in compile
    interpreter.magic_number(version), mtime)
  File "/usr/share/python3/debpython/interpreter.py", line 233, in magic_number
    result = self._execute('import imp; print(imp.get_magic())', version)
  File "/usr/share/python3/debpython/interpreter.py", line 359, in _execute
    raise Exception('{} failed with status code {}'.format(command, output['returncode']))
Exception: python3.9 -c 'import imp; print(imp.get_magic())' failed with status code 139
dpkg: error processing package python3-minimal (--configure):
installed python3-minimal package post-installation script subprocess returned error exit status 1

And this in 'dmesg':
[44932.698657] python3.9[313800]: segfault at 2524310 ip 00000000005637c0 sp 00007ffdeefd1098 error 4 in qemu-aarch64-static[401000+3e3000]
[44932.698664] Code: 00 e9 94 78 1c 00 0f 1f 40 00 64 83 2c 25 50 ff ff ff 01 74 05 c3 0f 1f 40 00 48 8d 3d e9 d0 7f 00 e9 e4 85 1c 00 0f 1f 40 00 <64> 8b 04 25 50 ff ff ff 85 c0 0f 9f c0 c3 66 90 48 83 ec 08 64 8b

HTH

Bernhard Übelacker

May 21, 2021, 6:20:04 PM
Hello Diederik,
I am not involved in packaging, just
trying to collect some information.


> Architecture: amd64 (x86_64)

The subject on the email mentions "on arm64".
From the Architecture line I assume this should read "on amd64"?



> [44932.698657] python3.9[313800]: segfault at 2524310 ip 00000000005637c0 sp 00007ffdeefd1098 error 4 in qemu-aarch64-static[401000+3e3000]
> [44932.698664] Code: 00 e9 94 78 1c 00 0f 1f 40 00 64 83 2c 25 50 ff ff ff 01 74 05 c3 0f 1f 40 00 48 8d 3d e9 d0 7f 00 e9 e4 85 1c 00 0f 1f 40 00 <64> 8b 04 25 50 ff ff ff 85 c0 0f 9f c0 c3 66 90 48 83 ec 08 64 8b

The breaking instruction seems to be here:

0x5637c0: file ../../linux-user/mmap.c, line 43.

0x00000000005637c0 <have_mmap_lock+0>: 64 8b 04 25 50 ff ff ff mov %fs:0xffffffffffffff50,%eax


https://sources.debian.org/src/qemu/1:5.2+dfsg-10/linux-user/mmap.c/#L43

25 static __thread int mmap_lock_count;
...
41 bool have_mmap_lock(void)
42 {
43 return mmap_lock_count > 0 ? true : false;
44 }
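
For reference, the address-to-line mapping above can be reproduced with
gdb on the matching binary (a sketch; it assumes the corresponding
qemu-user-static dbgsym package is installed, and relies on
qemu-aarch64-static being a non-PIE static binary, so the runtime ip
from dmesg equals the file address):

gdb -q /usr/bin/qemu-aarch64-static
(gdb) info line *0x5637c0
(gdb) x/1i 0x5637c0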


I had hoped it would be clearer, but this is probably related to the
thread-local storage of mmap_lock_count.
Maybe systemd-coredump would collect a core of such a crash?
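
Concretely, something like this on the build host (a sketch; since
qemu-aarch64-static runs as a normal host process, the host's
systemd-coredump should catch the crash):

apt install systemd-coredump
# reproduce the crash, then:
coredumpctl list qemu-aarch64-static
coredumpctl gdb qemu-aarch64-static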


Kind regards,
Bernhard
[attachment: debugging.txt]

Bernhard Übelacker

May 22, 2021, 8:00:04 AM
On 22.05.21 at 00:11, Bernhard Übelacker wrote:
> Maybe systemd-coredump would collect a core of such a crash?

And I did a debootstrap in a loop and got three crashes out of 20 tries.
A core was collected and shows the stack below.

It is strange that exec_path shows just "/arm64", and that trying to
print the variable mmap_lock_count in gdb produces a warning about a
corrupted shared library list.

Kind regards,
Bernhard




(gdb) bt
#0 have_mmap_lock () at ../../linux-user/mmap.c:43
#1 0x00000000005863ac in page_set_flags (start=start@entry=4194304, end=end@entry=21041152, flags=flags@entry=8) at ../../accel/tcg/translate-all.c:2568
#2 0x000000000056416d in target_mmap (start=start@entry=4194304, len=<optimized out>, len@entry=16842963, target_prot=target_prot@entry=0, flags=16434, fd=fd@entry=-1, offset=offset@entry=0) at ../../linux-user/mmap.c:602
#3 0x000000000057be4d in load_elf_image (image_name=0x7ffe12b44e4f "/arm64", image_fd=3, info=info@entry=0x7ffe12b43b20, pinterp_name=pinterp_name@entry=0x7ffe12b43880, bprm_buf=bprm_buf@entry=0x7ffe12b43d30 "\177ELF\002\001\001") at ../../linux-user/elfload.c:2700
#4 0x000000000057c5bc in load_elf_binary (bprm=bprm@entry=0x7ffe12b43d30, info=info@entry=0x7ffe12b43b20) at ../../linux-user/elfload.c:3104
#5 0x0000000000571a4b in loader_exec (fdexec=fdexec@entry=3, filename=<optimized out>, argv=argv@entry=0x20b8d20, envp=envp@entry=0x210db50, regs=regs@entry=0x7ffe12b43c20, infop=infop@entry=0x7ffe12b43b20, bprm=<optimized out>) at ../../linux-user/linuxload.c:147
#6 0x0000000000402831 in main (argc=<optimized out>, argv=0x7ffe12b442e8, envp=<optimized out>) at ../../linux-user/main.c:831

(gdb) display/i $pc
1: x/i $pc
=> 0x5637c0 <have_mmap_lock>: mov %fs:0xffffffffffffff50,%eax

(gdb) frame 6
#6 0x0000000000402831 in main (argc=<optimized out>, argv=0x7ffe12b442e8, envp=<optimized out>) at ../../linux-user/main.c:831
831 ../../linux-user/main.c: No such file or directory.
(gdb) print argv[0]
$6 = 0x7ffe12b44e25 "/usr/libexec/qemu-binfmt/aarch64-binfmt-P"
(gdb) print argv[1]
$7 = 0x7ffe12b44e4f "/arm64"
(gdb) print argv[2]
$8 = 0x7ffe12b44e56 "/arm64"
(gdb) print argv[3]
$9 = 0x0

(gdb) print &mmap_lock_count
warning: Corrupted shared library list: 0xd5f120 != 0x0
Cannot find thread-local storage for LWP 148246, executable file /usr/lib/debug/.build-id/2e/c1a124ce847ca347222b5ddcdb8639aadff4e0.debug:
Cannot find thread-local variables on this target

(gdb) print exec_path
$32 = 0x7ffe12b44e4f "/arm64"
[attachment: debugging.txt]

Diederik de Haas

May 22, 2021, 8:40:03 AM
Hi Bernhard,

On Saturday 22 May 2021 00:11:22 CEST Bernhard Übelacker wrote:
> > Architecture: amd64 (x86_64)
>
> The subject on the email mentions "on arm64".
> From the Architecture line I assume this should read "on amd64"?

No.
While I build my images on an amd64 machine, the problem (so far) only
occurs when building arm64 images; we haven't seen the problem with
e.g. armhf images.

> The breaking instruction seems to be here:
> 0x5637c0: file ../../linux-user/mmap.c, line 43.
> 0x00000000005637c0 <have_mmap_lock+0>: 64 8b 04 25 50 ff ff ff mov %fs:0xffffffffffffff50,%eax
>
> I have hoped it might be more clear, but this might probably
> be related to the thread local storage of mmap_lock_count.
> Maybe systemd-coredump would collect a core of such a crash?

I'll take your word for it.
I'm assuming that the contents of debugging.txt are how you arrived at
mmap.c line 43, but it's mostly abracadabra to me.
If someone provides updated qemu debs with a potential fix,
I can install them on my system to test them.
But it's very unlikely that I could (meaningfully) assist in tracking
down the core problem or coming up with a fix. Installing systemd-coredump
would also be pretty useless if I did it.

OTOH, it *should* be rather easy to reproduce by following the steps from
https://salsa.debian.org/raspi-team/image-specs#option-2-building-your-own-image

as that's how I and several others noticed the issue. The problem doesn't
happen in 100% of the builds, but it is reproducible.
One can speed it up by using e.g. apt-cacher-ng and replacing
'deb.debian.org' with 'localhost:3142/deb.debian.org' in raspi-master.yaml.
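
Concretely (a sketch; 3142 is apt-cacher-ng's default port, as used above):

sed -i 's#deb.debian.org#localhost:3142/deb.debian.org#g' raspi-master.yaml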

FTR: it's not unwillingness to help, but apart from reporting the issue
I'm not able to further help in any meaningful way to diagnose it.
But as stated before, if someone provides a .deb file with a/the fix,
I'm happy to test it.

> Kind regards,
> Bernhard

Cheers,
Diederik

Christopher Obbard

May 22, 2021, 11:20:03 AM
Hi Diederik,

> FTR: it's not unwillingness to help, but apart from reporting the issue
> I'm not able to further help in any meaningful way to diagnose it.
> But as stated before, if someone provides a .deb file with a/the fix,
> I'm happy to test it.

We're facing the same (similar?) problem with Debos - in some cases
the qemu-user process seems to segfault.

It looks like a new version of qemu is in experimental, which has some
changes to linux-user/mmap.c.

Perhaps it would be a good idea to see if the bug is still present in 6.0?

Thanks,
Chris

Diederik de Haas

May 22, 2021, 3:10:03 PM
Hi Chris,

On Saturday 22 May 2021 17:14:34 CEST Christopher Obbard wrote:
> We're facing the same (similar?) problem with Debos - in some cases
> the qemu-user process seems to segfault.
>
> Looks like a new version of qemu is in experimental - which has some
> changes to linux-user/mmap.c
>
> Perhaps it would be a good idea to see if the bug is still present in 6.0?

Thanks for that excellent suggestion; I upgraded immediately.
First run was successful, but I'll run (some) more and report back whether
the issue is indeed solved with 6.0.

Cheers,
Diederik

Diederik de Haas

May 22, 2021, 8:10:03 PM
I've now run it 8 times and it succeeded all 8 times.
Another person on #debian-raspberrypi did 3 runs and those succeeded as well.

So afaic this is a strong indication that the problem is indeed resolved with
version 6.0. I'll do some more runs later in the week and if those (all)
succeed as well, I'll close the bug.

I think it's worth the maintainers investigating whether a targeted fix can
be backported to Bullseye, as it's likely more people will run into
this problem once Bullseye becomes Stable.

Thank you all for participating :)

Diederik

Bernhard Übelacker

May 23, 2021, 12:00:03 PM
Dear Maintainer,
I did a little further investigation and found that it could be
reproduced with just the following line, inside the arm64 chroot:

for i in {1..100}; do echo $i; python3.9 -c "exit()"; done

This produced 13 crashes in 100 runs.

But the crashes stop appearing when /proc is mounted inside the chroot.
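
In other words, the failing runs go away with something like this
(a sketch; /srv/arm64-chroot stands in for the actual chroot path):

mount -t proc proc /srv/arm64-chroot/proc
chroot /srv/arm64-chroot /bin/bash -c \
    'for i in {1..100}; do echo $i; python3.9 -c "exit()"; done'
umount /srv/arm64-chroot/proc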

With the help of strace:amd64, rr:amd64 and a self-built qemu-aarch64-static,
I could locate the access [2] to /proc that, when it fails,
seems to cause the segfault.

And the backtrace leads to this upstream change [1], which matches
this bug; a qemu-aarch64-static built with this patch no longer
shows the segfault when /proc is not available.

Kind regards,
Bernhard


[1] https://git.qemu.org/?p=qemu.git;a=commitdiff;h=0266e8e3b3981b492e82be20bb97e8ed9792ed00


[2]
(rr) bt
#0 0x0000000000607402 in read_self_maps () at ../../util/selfmap.c:60
#1 0x00000000005b5124 in pgb_find_hole (guest_loaddr=guest_loaddr@entry=4194304, guest_size=guest_size@entry=22269416, align=align@entry=4096, offset=0) at ../../linux-user/elfload.c:2211
#2 0x00000000005b69bf in pgb_static (align=4096, orig_hiaddr=<optimized out>, orig_loaddr=4194304, image_name=0x7ffc7ef22cf3 "/usr/bin/python3.9") at ../../linux-user/elfload.c:2305
#3 probe_guest_base (image_name=image_name@entry=0x7ffc7ef22cf3 "/usr/bin/python3.9", guest_loaddr=guest_loaddr@entry=4194304, guest_hiaddr=<optimized out>) at ../../linux-user/elfload.c:2389
#4 0x00000000005b71e7 in load_elf_image (image_name=0x7ffc7ef22cf3 "/usr/bin/python3.9", image_fd=3, info=info@entry=0x7ffc7ef20bc0, pinterp_name=pinterp_name@entry=0x7ffc7ef20920, bprm_buf=bprm_buf@entry=0x7ffc7ef20dd0 "\177ELF\002\001\001") at ../../linux-user/elfload.c:2676
#5 0x00000000005b754c in load_elf_binary (bprm=bprm@entry=0x7ffc7ef20dd0, info=info@entry=0x7ffc7ef20bc0) at ../../linux-user/elfload.c:3104
#6 0x00000000005b49db in loader_exec (fdexec=fdexec@entry=3, filename=<optimized out>, argv=argv@entry=0x23df520, envp=envp@entry=0x23eee00, regs=regs@entry=0x7ffc7ef20cc0, infop=infop@entry=0x7ffc7ef20bc0, bprm=<optimized out>) at ../../linux-user/linuxload.c:147
#7 0x0000000000402801 in main (argc=<optimized out>, argv=0x7ffc7ef21388, envp=<optimized out>) at ../../linux-user/main.c:832

Diederik de Haas

May 26, 2021, 8:20:03 AM
Control: fixed -1 1:6.0+dfsg-1~exp0
Control: tag -1 patch
Control: tag -1 upstream
Control: tag -1 bullseye

I've now done several runs spread over several days and they all succeeded, so
that alone would indicate that the issue is resolved with the 6.0 version.
On top of that, Bernhard Übelacker has identified the exact commit which fixed
the issue, and when that commit was backported to 5.2, the issue was resolved
there too.

Normally I'd close the bug by sending a msg to 98817...@b.d.o, but I'm
explicitly not doing that here, as I think this bug should be fixed for Bullseye
as well. AFAIK that usually requires an RC bug, and I don't think this bug
qualifies as such.
But I (/we?) do think that a/the maintainer should evaluate this issue, also
because the fix is tiny and targeted, and take the appropriate action.
As the maintainers' email address/ML doesn't (seem to) exist, I've explicitly
CC'ed the uploader of 1:6.0+dfsg-1~exp0 in this response.

Cheers,
Diederik

Cyril Brulebois

Aug 25, 2021, 6:30:04 AM
Hallo Bernhard,

Bernhard Übelacker <bern...@mailbox.org> (2021-05-23):
> I did a little further investigation and found that it could be
> reproduced with just the following line, inside the arm64 chroot:
>
> for i in {1..100}; do echo $i; python3.9 -c "exit()"; done
>
> This produced 13 crashes for the 100 runs.
>
> But the crashes stop to appear when /proc is mounted inside the chroot.
>
> With the help of strace:amd64, rr:amd64 and a self-built
> qemu-aarch64-static I could locate the access [2] to /proc that, if
> failing, seem to cause the segfault.
>
> And the backtrace leads to this upstream change [1], which matches
> this bug and a qemu-aarch64-static built with this patch does not show
> the segfault anymore, when /proc is not available.
[…]
That's terrific investigation work, thanks!

Before possibly proposing a fix via bullseye-proposed-updates, to be
extra sure, I've built a minimal version of the raspi_3_buster.yaml
recipe that had been failing for me on a regular basis (gut feeling was
that it was basically a coin toss: a 50% chance of a broken build). It's
attached to this mail in case someone's curious. The first run will do
the qemu-debootstrap dance and then cache the results, so that the
following runs only untar that cache and attempt to install the
python3-minimal package.

Looping over this test case, I'm hitting 46 errors in 100 attempts.
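
For the curious, counting failures over such a cached setup can be as
simple as (a sketch; build-one.sh is a hypothetical wrapper around the
untar-and-install step described above):

fail=0
for i in $(seq 1 100); do
    sudo ./build-one.sh >/dev/null 2>&1 || fail=$((fail+1))
done
echo "$fail/100 attempts failed"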

I've imported the upstream commit via debian/patches, on top of the
existing debian-bullseye branch, added a debian/changelog entry for the
existing CVE fix plus the new patch, trying to mimic existing
practices, and pushed the result to this branch:
https://salsa.debian.org/kibi/qemu/-/tree/pu/debian-bullseye-bug-988174
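
For the record, importing the upstream commit as a patch can look roughly
like this (a sketch, not necessarily the exact commands used; the hash is
the upstream fix identified earlier in this bug, and the repo is assumed
to follow the usual debian/patches + series convention):

git checkout debian-bullseye
git format-patch -1 0266e8e3b3981b492e82be20bb97e8ed9792ed00 -o debian/patches/
# then add the generated file name to debian/patches/series and describe
# the change in debian/changelog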

Test-build done in cowbuilder, and runtime is now perfect: 100/100
builds are successful.


Dear maintainers,

I'm happy to check with the security team to see if they'd like to go
through security for the CVE fix, and to check with the stable release
managers to see if they're OK with a bullseye-proposed-updates upload in
case an upload via security isn't warranted.


I'll open a merge request as well, in case this makes tracking easier.


Cheers,
--
Cyril Brulebois (ki...@debian.org) <https://debamax.com/>
D-I release manager -- Release team member -- Freelance Consultant
[attachment: qemu_bug_988174.yaml]

Michael Tokarev

Aug 25, 2021, 6:50:03 AM
25.08.2021 13:18, Cyril Brulebois wrote:
...
> Test-build done in cowbuilder, and runtime is now perfect: 100/100
> builds are successful.

Cyril, thank you very much for doing all this work!
Bernhard, thank you too - your investigation was the most important part.

Somehow I missed this bug report at the time (I was travelling and had
other stuff to do as well). Yes, this change definitely should be pushed
to bullseye.

> Dear maintainers,
>
> I'm happy to check with the security team to see if they'd like to go
> through security for the CVE fix, and to check with the stable release
> managers to see if they're OK with a bullseye-proposed-updates upload in
> case an upload via security isn't warranted.
>
> I'll open a merge request as well, in case this makes tracking easier.

Yes I want to push this to -stable. However I also want to include
fixes for one or two other security issues found recently.
Hopefully I'll manage to do this today.

I'll cherry-pick just one commit from your pull request :)

Thanks!

/mjt

Cyril Brulebois

Aug 25, 2021, 7:20:04 AM
Hi,

Michael Tokarev <m...@tls.msk.ru> (2021-08-25):
> Yes I want to push this to -stable. However I also want to include
> fixes for one or two other security issues found recently. Hopefully
> I'll manage to do this today.
>
> I'll cherry-pick just one commit from your pull request :)

Great! For the avoidance of doubt (I was just discussing this with
Diederik): the final commit wasn't meant to suggest it should be uploaded
as-is; I was trying to match what I noticed in the git history (all
patches get documented at once, in the end), so that there would be
some ready-to-use documentation for the change we're interested in.
Feel free to use/improve/rework it entirely as you see fit. :)

Diederik de Haas

Sep 20, 2021, 8:20:03 AM
On Wed, 25 Aug 2021 13:45:03 +0300 Michael Tokarev <m...@tls.msk.ru> wrote:
> Yes I want to push this to -stable. However I also want to include
> fixes for one or two other security issues found recently.
> Hopefully I'll manage to do this today.

On Thursday 16 September 2021 20:27:18 CEST Adam D. Barratt wrote:
> The first point release for "bullseye" (11.1) is scheduled for
> Saturday, October 9th. Processing of new uploads into bullseye-
> proposed-updates will be frozen during the preceding weekend.

It would be great if the fix for this issue (and the others you wanted to
include) could be uploaded in time for the 11.1 point release.

Cheers,
Diederik

Michael Tokarev

Sep 20, 2021, 8:40:02 AM
20.09.2021 15:12, Diederik de Haas wrote:
...
> It would be great if the fix for this issue (and the others you wanted to
> include) will be uploaded before in time for the 11.1 point release.

Hmm... Thank you for the reminder. I completely forgot about updating qemu in bullseye.
The fixes have been in git for a long time, together with a few other things
including the mentioned security fix, but it's all just sitting there...

/mjt