QSB #37: Information leaks due to processor speculative execution bugs (XSA-254, Meltdown & Spectre)


Andrew David Wong

Jan 11, 2018, 9:57:50 AM
to qubes-a...@googlegroups.com, qubes...@googlegroups.com, qubes...@googlegroups.com
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Dear Qubes Community,

We have just published Qubes Security Bulletin (QSB) #37:
Information leaks due to processor speculative execution bugs.
The text of this QSB is reproduced below. This QSB and its accompanying
signatures will always be available in the Qubes Security Pack
(qubes-secpack).

View QSB #37 in the qubes-secpack:

<https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-037-2018.txt>

Learn about the qubes-secpack, including how to obtain, verify, and
read it:

<https://www.qubes-os.org/security/pack/>

View all past QSBs:

<https://www.qubes-os.org/security/bulletins/>

View XSA-254 in the XSA Tracker:

<https://www.qubes-os.org/security/xsa/#254>

```
---===[ Qubes Security Bulletin #37 ]===---

January 11, 2018


Information leaks due to processor speculative execution bugs

Summary
========

On the night of January 3, two independent groups of researchers
announced the results of their months-long work into abusing modern
processors' so-called speculative mode to leak secrets from the system's
privileged memory [1][2][3][4]. As a response, the Xen Security Team
published Xen Security Advisory 254 [5]. The Xen Security Team did _not_
previously share information about these problems via their (non-public)
security pre-disclosure list, of which the Qubes Security Team is a
member.

In the limited time we've had to analyze the issue, we've come to the
following conclusions about the practical impact on Qubes OS users and
possible remedies. We'll also share a plan to address the issues in a
more systematic way in the coming weeks.

Practical impact and limiting factors for Qubes users
======================================================

## Fully virtualized VMs offer significant protection against Meltdown

Meltdown, the most reliable attack of the three discussed, cannot be
exploited _from_ a fully-virtualized (i.e. HVM or PVH) VM. It does not
matter whether the _target_ VM (i.e. the one from which the attacker
wants to steal secrets) is fully-virtualized. In Qubes 3.x, all VMs are
para-virtualized (PV) by default, though users can choose to create
fully-virtualized VMs. PV VMs do not protect against the Meltdown
attack. In Qubes 4.0, almost all VMs are fully-virtualized by default
and thus offer protection. However, the fully-virtualized VMs in Qubes
3.2 and in release candidates 1-3 of Qubes 4.0 still rely on PV-based
"stub domains", making it possible for an attacker who can chain another
exploit for qemu to attempt the Meltdown attack.

## Virtualization makes at least one variant of Spectre seem difficult

Of the two Spectre variants, it _seems_ that at least one of them might
be significantly harder to exploit under Xen than under monolithic
systems because there are significantly fewer options for the attacker
to interact with the hypervisor.

## All attacks are read-only

It's important to stress that these attacks allow only _reading_ memory,
not modifying it. This means that an attacker cannot use Spectre or
Meltdown to plant any backdoors or otherwise compromise the system in
any persistent way. Thanks to the Qubes OS template mechanism, which is
used by default for all user and system qubes (AppVMs and ServiceVMs),
simply restarting a VM should bring it back to a known good state for
most attacks, wiping out the potential attacking code in the
TemplateBasedVM (unless an attacker found a way to put triggers within
the user's home directory; please see [8] for more discussion).

## Only running VMs are vulnerable

Since Qubes OS is a memory-hungry system, it seems that an attacker
would only be able to steal secrets from VMs running concurrently with
the attacking VM. This is because any pages from shutdown VMs will
typically very quickly get allocated to other, running VMs and get wiped
as part of this procedure.

## PGP and other cryptographic keys are at risk

For VMs that happen to be running concurrently with the attacking VM, it
seems possible that these attacks might allow the attacker to steal
cryptographic keys, including private PGP keys.

## Disk encryption and screenlocker passwords are at risk

There is one VM that is always running concurrently with other VMs: the
AdminVM (dom0). This VM contains at least two important user secrets:

- The disk (LUKS) encryption key (and likely the passphrase)
- The screenlocker passphrase

In order to make use of these secrets, however, the attacker would have
to conduct a physical attack on the user's computer (e.g. steal the
laptop physically). Users who use the same passphrase to encrypt their
backups may also be affected.

Additional remedies available to Qubes users
=============================================

Thanks to the explicit Qubes partitioning model, it should be
straightforward for users to implement additional hygiene by ensuring
that, whenever less trusted VMs are running, highly sensitive VMs are
shut down.
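As a sketch, this hygiene could be scripted in dom0 with the standard qvm-* tools (the qube names below are hypothetical examples, not defaults):

```shell
# Hypothetical dom0 hygiene script: make sure sensitive qubes are
# halted before starting a less trusted one. Names are examples.
SENSITIVE_QUBES="vault work-gpg"

for vm in $SENSITIVE_QUBES; do
    # --wait blocks until the qube has actually shut down
    qvm-shutdown --wait "$vm"
done

# Only start the less trusted qube once the sensitive ones are down
qvm-start untrusted
```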

Additionally, for some of the VMs that must run anyway (e.g. networking
and USB qubes), it is possible to recreate the VM each time the user
suspects it may have been compromised, e.g. after disconnecting from a
less trusted Wi-Fi network, or unplugging an untrusted USB device. In
Qubes 4.0, this is even easier, since Disposable VMs can now be used for
the networking and USB VMs (see [10]).
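For illustration, setting up a Disposable-VM-based networking qube in Qubes 4.0 might look roughly like this (a sketch only; the qube name, PCI address, and exact flags are assumptions that may differ by release -- see [10] for the announced approach):

```shell
# Sketch for Qubes 4.0 dom0: a disposable networking qube.
# "sys-net2" and the PCI address are placeholders.
qvm-create --class DispVM --label red sys-net2
qvm-prefs sys-net2 provides_network True

# Attach the network controller (find yours with: qvm-pci)
qvm-pci attach --persistent sys-net2 dom0:02_00.0

# Point the firewall qube at the new disposable NetVM
qvm-prefs sys-firewall netvm sys-net2
```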

The Qubes firewalling and networking systems also make it easy to limit
the networking resources VMs can reach, including making VMs completely
offline. While firewalling in Qubes is not intended to be a
leak-prevention mechanism, it likely has this effect in a broad class
of attack scenarios. Moreover, making a VM completely offline
(i.e. setting its NetVM to "none") is a more robust way to limit the
ability of an attacker to leak secrets stolen from memory to the outside
world. While this mechanism should not be considered bullet-proof -- it
is still possible to mount a specialized attack that exploits a covert
channel to leak the data -- it could be considered as an additional
layer of defense.
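Concretely, a qube can be taken offline from dom0 by clearing its NetVM (syntax differs between releases; the qube name "work" is an example):

```shell
# Qubes 4.0: clear the netvm property to make the qube offline
qvm-prefs work netvm ''

# Qubes 3.2 (assumed syntax, using the -s flag):
# qvm-prefs -s work netvm none
```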

Finally, Qubes offers mechanisms to allow for additional protection of
user secrets, especially cryptographic keys, such as PGP keys used for
encryption and signing. Qubes Split GPG [6] allows the user to keep
these keys in an isolated VM. So, for example, the user might be running
her "development" qube in parallel with a compromised qube, while
keeping the GPG backend VM (where she keeps the signing key that she
uses to sign her software releases) shut down most of the time (because
it's only needed when a release is being made). This way, the software
signing keys will be protected from the attack.

The user could take this further by using Qubes Split GPG with a backend
qube running on a physically separate computer, as has been demonstrated
with the Qubes USB Armory project [7].
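As an illustration of the Split GPG workflow described above (following the Split GPG documentation [6]; the backend qube name "work-gpg" is an example):

```shell
# Run in the client qube. The private key material stays in the
# backend qube; only signing/encryption requests cross the boundary.
export QUBES_GPG_DOMAIN=work-gpg

# List the secret keys held by the backend qube
qubes-gpg-client --list-secret-keys

# Produce a detached signature for a release tarball; the actual
# signing operation happens inside work-gpg
qubes-gpg-client --armor --detach-sign release.tar.gz > release.tar.gz.asc
```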

(Proper) patching
==================

Mitigations against the CPU bugs discussed here are in development but
have not yet been released. The Xen Project is working on a set of
patches (see XSA 254 [5] for updates). At the same time, we are working
on similar mitigations where feasible.

## Qubes 4.0

As explained above, almost all the VMs in Qubes 4.0 are
fully-virtualized by default (specifically, they are HVMs), which
mitigates the most severe issue, Meltdown. The only PV domains in
Qubes 4.0 are stub domains, which we plan to eliminate by switching to
PVH where possible. This will be done in Qubes 4.0-rc4 and also
released as a normal update for existing Qubes 4.0 installations. The
only remaining PV stub domains will be those used for VMs with PCI
devices. (In the default configuration, these are sys-net and
sys-usb.) The Xen Project has not yet provided any solution for this
[9].

## Qubes 3.2

For Qubes 3.2, we plan to release an update that will make almost all
VMs run in a fully-virtualized mode. Specifically, we plan to backport
PVH support from Qubes 4.0 and enable it for all VMs without PCI
devices. After this update, all VMs that previously ran in PV mode (and
that do not have PCI devices) will subsequently run in PVH mode, with
the exception of stub domains. Any HVMs will continue to run in HVM
mode.

There are two important points regarding the Qubes 3.2 update. First,
this update will work only when the hardware supports VT-x or equivalent
technology. Qubes 3.2 will continue to work on systems without VT-x, but
there will be no mitigation against Meltdown on such systems. Users on
systems that do not support VT-x are advised to take this into
consideration when assessing the trustworthiness of their systems.

Second, the Qubes 3.2 update will also switch any VMs that use a custom
kernel to PVH mode, which will temporarily prevent them from working.
This is a deliberate security choice to protect the system as a whole
(rather than leaving VMs with custom kernels in PV mode, which would
allow attackers to use them to mount Meltdown attacks). In order to use
a VM with a custom kernel after the update (whether the custom kernel
was installed in dom0 or inside the VM), users must either manually
change the VM back to PV or change the kernel that the VM uses. (Kernel
>=4.11 is required, and booting an in-VM kernel is not supported in PVH
mode.)
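For example, pointing such a qube back at a dom0-provided kernel in Qubes 3.2 might look like this (a sketch; the qube name and kernel version string are assumptions, so check what is actually installed):

```shell
# In dom0: list the kernels available to qubes
ls /var/lib/qubes/vm-kernels/

# Switch the affected qube (name is an example) to a >=4.11 kernel
# so that it can boot in PVH mode
qvm-prefs -s myvm kernel 4.14.13-1
```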

We'll update this bulletin and issue a separate announcement once
patches are available.

Suggested actions after patching
=================================

While the potential attacks discussed in this bulletin are severe,
recovering from these potential attacks should be easier than in the
case of an exploit that allows the attacker to perform arbitrary code
execution, resulting in a full system compromise. Specifically, we don't
believe it is necessary to use Qubes Paranoid Backup Restore Mode to
address these vulnerabilities because of the strict read-only character
of the attacks discussed. Instead, users who believe they are affected
should consider taking the following actions:

1. Changing the screenlocker passphrase.

2. Changing the disk encryption (LUKS) passphrase.

3. Re-encrypting the disk to force a change of the disk encryption
_key_. (In practice, this can be done by reinstalling Qubes and
restoring from a backup.)

4. Evaluating the odds that other secrets have been compromised,
such as other passwords and cryptographic keys (e.g. private
PGP, SSH, or TLS keys), and generating new secrets. It is unclear
how easy it might be for attackers to steal such data in a
real-world Qubes environment.
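Actions 1-3 can be sketched as dom0 commands (the partition path is an assumption; identify yours with lsblk):

```shell
# 1. Change the screenlocker passphrase (in dom0 this is the user
#    account password)
passwd

# 2. Change the disk encryption (LUKS) passphrase; you will be
#    prompted for the old passphrase, then the new one
sudo cryptsetup luksChangeKey /dev/sda2

# 3. Changing the underlying encryption *key* itself requires
#    re-encryption, e.g. reinstalling and restoring from backup.
```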

Technical discussion
=====================

From a (high-level) architecture point of view, the attacks discussed in
this bulletin should not concern Qubes OS much. This is because,
architecture-wise, there should be no secrets or other sensitive data in
the hypervisor memory. This is in stark contrast to traditional
monolithic systems, where there is an abundance of sensitive information
living in the kernel (supervisor).

Unfortunately, for rather accidental reasons, the implementation of the
particular hypervisor we happen to be using to implement isolation for
Qubes, i.e. the Xen hypervisor, undermines this clean architecture by
internally mapping all physical memory pages into its address space. Of
course, under normal circumstances, this isn't a security problem,
because no one is able to read the hypervisor memory. However, the bugs
we're discussing today might allow an attacker to do just that. This is
a great example of how difficult it can be to analyze the security
impact of a feature when limiting oneself to only one layer of
abstraction, especially a high-level one (also known as the "PowerPoint
level").

At the same time, we should point out that the use of full
virtualization prevents at least one of the attacks, and incidentally
the most powerful one, i.e. the Meltdown attack.

However, we should also point out that, in Qubes 3.2, even HVMs still
rely on PV stub domains to provide I/O emulation (qemu). In the case of
an additional vulnerability within qemu, an attacker might compromise
the PV stub domain and attempt to perform the Meltdown attack from
there.

This limitation also applies to HVMs in release candidates 1-3 of Qubes
4.0. Qubes 4.0-rc4, which we plan to release next week, should be using
PVH instead of HVM for almost all VMs without PCI devices by default,
thus eliminating this avenue of attack. As discussed in the Patching
section, VMs with PCI devices will be the exception, which means that
the Meltdown attack could in theory still be conducted if the attacker
compromises a VM with PCI devices and afterward compromises the
corresponding stub domain via a hypothetical qemu exploit.
Unfortunately, there is not much we can do about this without
cooperation from the Xen project [9][11].

Here is an overview of the VM modes that correspond to each Qubes OS
version:

VM type \ Qubes OS version          | 3.2 | 3.2+ | 4.0-rc1-3 | 4.0-rc4 |
----------------------------------- | --- | ---- | --------- | ------- |
Default VMs without PCI devices     | PV  | PVH  | HVM       | PVH     |
Default VMs with PCI devices        | PV  | PV   | HVM       | HVM     |
Stub domains - VMs w/o PCI devices  | PV  | N/A  | PV        | N/A     |
Stub domains - VMs w/ PCI devices   | PV  | PV   | PV        | PV      |

("3.2+" denotes Qubes 3.2 after applying the update discussed above,
which will result in most VMs running in PVH mode. "N/A" means "not
applicable," since PVH VMs do not require stub domains.)

Credits
========

See the original Xen Security Advisory.

References
===========

[1] https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
[2] https://meltdownattack.com/
[3] https://meltdownattack.com/meltdown.pdf
[4] https://spectreattack.com/spectre.pdf
[5] https://xenbits.xen.org/xsa/advisory-254.html
[6] https://www.qubes-os.org/doc/split-gpg/
[7] https://github.com/inversepath/qubes-qrexec-to-tcp
[8] https://www.qubes-os.org/news/2017/04/26/qubes-compromise-recovery/
[9] https://lists.xenproject.org/archives/html/xen-devel/2018-01/msg00403.html
[10] https://www.qubes-os.org/news/2017/10/03/core3/
[11] https://blog.xenproject.org/2018/01/04/xen-project-spectremeltdown-faq/

- --
The Qubes Security Team
https://www.qubes-os.org/security/
```

- --
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEZQ7rCYX0j3henGH1203TvDlQMDAFAlpXe0cACgkQ203TvDlQ
MDDtmxAAwyTkRWQm2HWcQtcNog4ni/KcR1fVUS/iI/GTe9NE3ikbljQ2HKoGhMWF
C9/R14FyIYdSnRXWAP4FGb8tH0Pf4WFyQNegD3sDLPOxUIaSoaclLKWLLcWWpkJS
+zyomfsalpuHQ4AZZtd2PDTqpeslHy9GHvmTDDw8Pqq1Ih1d0ij4LtnRsyHWDt1B
kP6A9dC0zAsXFQnu2dSJNQcCltAdKTdD1myJ+08ot7+f1hiaWU2sllqJEO+QM/Jh
68TXEBB82XeBB4ad2nuKmTCyaYKJQB9oWi6yHVFknOM/QcNdhmAEB2YkQCNplGyD
QLfoQWJGidhu7wLzsqhtoZJC+vVg+wN1+i8h54jPwNMGnqnhhgiy4gf2QghOsZ7q
5/McepdncZ0tRuXzE4FkDhyl2h5v2rZhrPDQxcyfWLon22uW0xws5vJsyJy7xMRY
Fp4J4j+jSJjq61Hd9oCiCvFzs08y/p6vVHThcV6iy+MYJTS4QiTqf7w1JqR/GYxh
jMzTJyEUhUUVKUV/rlJTPg6CxUiC5V441iEShfRqS/LSHXNUvj2l+7TPhe8U93yU
L74qJysYcZsPJxEoAois6j8AKjcP1WCqhoDaxdFjkEBHjEm40JMzJJDCJk9eaFSL
oz9uCLCvB1IGRvYKYfbiU16NwIZ4vRFax/HRtOBicq/p1wad2X4=
=pAsX
-----END PGP SIGNATURE-----

cooloutac

Jan 11, 2018, 10:26:53 PM
to qubes-users

So people are saying the Intel Meltdown BIOS patch slows performance. I got an increase in performance, lmao. It probably depends on the OS, though.

cooloutac

Jan 11, 2018, 10:27:53 PM
to qubes-users

But in my particular case they also addressed other bugs, and Intel pushed the BIOS patch for Meltdown, so it's worth a check of your board manufacturer's site.

haaber

Jan 12, 2018, 5:24:25 AM
to qubes...@googlegroups.com
>>
>> so people saying the intel meltdown bios patch slows performance. I got an increase in performance lmao. probably depends on os though.
>
> but also in my particular case they also addressed other bugs, but intel pushed the bios patch for meltdown, so worth a check from your boards manufacturer site.
>
When I download the BIOS update file (in my case with HP, a Windows
exe), it contains the "real" BIOS update (the .BIN file) and some other
cruft. The only way to avoid tampered downloads seems to be to download
it several times, via Tor and some other independent sources, and to
compare them. I guess you all do that?

HP does not seem to deliver PGP signatures, AFAIK. But they do ship some
signature files. Is anyone aware of how to check these manually? Bernhard

Vít Šesták

Jan 12, 2018, 5:41:02 PM
to qubes-users
The XSA mentions a PV-in-PVH workaround, but the QSB does not. Why doesn't Qubes go this way? Is it due to the timeline of releasing the patch? At first sight, it looks like a more general solution – it might be applicable even to VMs with PCI devices. (At least, the XSA does not mention such limitations, and the limitation in Qubes AFAIK arises just from a limitation of the Linux kernel, not of Xen.)

BTW, the table seems to be incorrect about stub domains in Qubes 3.2. It looks like some stub domains were removed (“Stub domains - VMs w/o PCI devices” is PV in 3.2 and N/A in 3.2+). In 3.2, a stub domain is not used unless the user explicitly requires full virtualization, and that is going to stay the same.

Regards,
Vít Šesták 'v6ak'

Marek Marczykowski-Górecki

Jan 12, 2018, 9:56:34 PM
to Vít Šesták, qubes-users
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On Fri, Jan 12, 2018 at 02:41:02PM -0800, Vít Šesták wrote:
> The XSA mentions PV-in-PVH workaround, the QSB does not. Why Qubes does not go this way? Is it due to the timeline of releasing the patch? At first sight, it looks like a more general solution – it might be applicable even for VMs with PCI devices. (At least, the XSA does not mention such limitations and the limitation in Qubes AFAIK arises just from limitation on Linux kernel, not from Xen.)

There are two shims: PV-in-HVM aka Vixen and PV-in-PVH aka Comet. Both
have limitations making them incompatible (or at least suboptimal) in
Qubes:

Vixen:
- memory ballooning not supported
- qemu running in dom0

Comet:
- PCI passthrough not supported (as this is not supported by PVH)
- requires more extensive changes to Xen and the toolstack, done for
4.10 only (so far)

> BTW, the table seems to be incorrect about stubdomains in Qubes 3.2. It looks like some stubdomains are removed (“Stub domains - VMs w/o PCI devices” is PV in 3.2 and N/A in 3.2+.). In 3.2, the stubdomain is not used unless user explicitly requires full virtualization, and it is going to be the same.

Indeed, the table is about generic/default VMs. If one chooses HVM, it
will have a PV stub domain, regardless of Qubes version. We'll clarify
this.

- --
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAlpZdRkACgkQ24/THMrX
1yyg7gf/UpyKcKdvyZOewnauQ4F+l2Q5L6+MG1Nkti3XepitKDG16pjaHYC9Uvbj
Wpc6GyA9osG7rFLLaF1dfP4FljhphEu7BxFfSTVzQBxuCRZurqEhT+HxO+WdQmrH
RdFehdn748XKWA6OGRQcT2YVCCIXJ6GIrk2LWIZzeMrBX66pBAKmNNDLlo/1uYOq
C4ArUjkVq/jdBbfssnVcObjQOWQNpL9r8K390DJQKPM8gAA9n+X+wrzOPjuSaV4I
Dlj5+KX50pZLa5fOtksq0UiWoyQYC7ebBv/5kBUddbUdm1ToWYoihw26sjRD9jmF
VuXKXNJuJCk3jBMBadHDpiH0hxg8Dw==
=zEhr
-----END PGP SIGNATURE-----

Andrew David Wong

Jan 12, 2018, 10:28:03 PM
to haaber, qubes...@googlegroups.com
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On 2018-01-12 04:24, haaber wrote:
>>>
>>> so people saying the intel meltdown bios patch slows
>>> performance. I got an increase in performance lmao. probably
>>> depends on os though.
>>
>> but also in my particular case they also addressed other bugs,
>> but intel pushed the bios patch for meltdown, so worth a check
>> from your boards manufacturer site.
>>
> When I download the (in my case with HP a win exe) BIOS update
> file, it contains the "real" bios update (the .BIN file) and some
> other crap. The only way to avoid tampered downloads seems to
> download it several times, via tor and some other independent
> sources & to compare them. I guess you all do that?
>

Yes, that's what I do. You can also upload the file to Virus Total
(or similar).

- --
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEZQ7rCYX0j3henGH1203TvDlQMDAFAlpZfKYACgkQ203TvDlQ
MDBwdw/+L3xxP48xEHGI+cd7mw2lw/twXnCz/G0zOcfrUCtdayPLicn921rAcqss
WwZeQGHBSn+ff+CWr7HRtnqn9DLWgVu4kYR+JE9tldf33wSKDJJFwRb0XXoeos2m
oOjOKmMRwYWS2d5VpKLjPmTi8If9dCQvUiH3Ru7XwrarINivZYrLPYTruEZ9joNg
V8cXDuSxgsz853/J9srroFRH/52q7EzW5fW/AGutkh7XrV4Ru4KaEpP2EEKLM0//
VUDBmsO/MrMu2Fsah8b6ypK65hpug5ewG/sx8bSW/O0kFjHzc8zCaffYDtyjt7z5
ezQpFj2XW08Z32nSkTgoXHi/PF07Tmkb5soYYpFfwe8DgaOdOdBfSnTja18d8E65
fjeVrnJANIIy/41oDK6oSlugXWrG50CDflznVCz48W01cX2XAXHifl3ScxT0KTr+
pUqNHWPpdVudp9KP25AFPYbQ5XxBpw07ak4xZC4XbCuNmG6KzPriRs8ncT704pDN
ethh1DViRyeF53xKW/FZS+zktRH2dmkbwVZvdJjtPAo2KnznVgpnMskW/2H6Y0VS
+Pt5f0YYihEM4Gb0+EB8rGMKLP6s1xMaQilki19VLkY0xVlQ6MT7cSEYP3Y5BIjG
tN3Nl2Rv458u4PnACCZnJtYFNxhQxfQUhCAC7JtXIbfh1T1+uJU=
=7yAc
-----END PGP SIGNATURE-----

Vít Šesták

Jan 13, 2018, 4:56:47 AM
to qubes-users
> There are two shims: PV-in-HVM aka Vixen and PV-in-PVH aka Comet. Both
have limitations making them incompatible (or at least suboptimal) in
Qubes

Marek, thanks for the clarification. So, IIUC, Vixen's shim is a no-go, and Comet's shim would do the same (but at a higher cost) as migrating to PVH where possible. Your solution now looks like a reasonable tradeoff.

> Indeed, the table is about generic/default VMs. If one choose HVM, it
will have PV stubdomain, regardless of Qubes version. We'll clarify
this.

The problem is not just when explicitly requesting HVM (while not explicitly stated, I can understand it is not about that); it seems to be inaccurate even for the default VM types. For Qubes 4, it seems OK: we got rid of some stubdoms. In Qubes 3.2, currently no stubdoms are used for such VMs, but you have PV in the table. For 3.2+ with VMs with PCI devices, you have also noted a PV stubdom, but AFAIU there is no stubdom.

> ## Only running VMs are vulnerable
>
> Since Qubes OS is a memory-hungry system, it seems that an attacker
> would only be able to steal secrets from VMs running concurrently with
> the attacking VM. This is because any pages from shutdown VMs will
> typically very quickly get allocated to other, running VMs and get wiped
> as part of this procedure.

It depends. In fact, not letting more-trusted VMs and less-trusted VMs run together (as advised) makes Qubes less memory-hungry. On a system with something like 32 GiB of RAM, this can leave much spare memory. I upgraded to 32 GiB after realizing that I'd like slightly more than 16 GiB, and that it might be better to have two identical modules. As a result, I now have much spare memory. IIUC, memory is usually not overwritten until it is assigned to another VM, so the data are at risk even after shutdown.

For this reason, I've created a BrickVM whose only purpose is to allocate unneeded memory. Unfortunately, the VM does not take much memory even when it could, so I have decided to run about twenty or thirty DVMs for this purpose instead. (OK, I could run something memory-intensive in the BrickVM, but running many DVMs, and closing them if needed, seems easier.)

Vincent Adultman

Jan 13, 2018, 7:19:18 AM
to qubes...@googlegroups.com

Only running VMs are vulnerable

Since Qubes OS is a memory-hungry system, it seems that an attacker
would only be able to steal secrets from VMs running concurrently with
the attacking VM. This is because any pages from shutdown VMs will
typically very quickly get allocated to other, running VMs and get wiped
as part of this procedure.

IIUC, this still seems fairly awful from a usability perspective if we think of the added cognitive load of watching what is running when, and remembering or making choices on what to close or restart when. (I'm reading between the lines and guessing this has had something to do with the decision to reintroduce the Qubes Manager?)

sys-net is considered likely to be easily compromised (such that there seems to be some real utility in making it a DispVM under 4.0). However, it will also be running for most users in most everyday cases for long periods.

A common use case for me, with everything open at one time for internet banking, might be at minimum sys-net, sys-firewall, sys-usb, vault, and a DispVM (as shitty banks here often load things off marketing or even advertising network domains that change fairly regularly). If we're saying that, in an ideal situation, sys-net and sys-usb (if it has had any untrusted devices attached to it) must be started clean or else the secrets vault is at risk, that seems to remain a very serious problem. The other approach seems to be to store the banking secrets in a banking VM and do the browsing from there as well. Some sensitive tasks can no doubt be done with sys-net shut down, but by no means all.

If we're considering that this will be the case for quite some time(?) due to the Xen approach, do we need to offer some sort of recipe for VM start (where I can ensure my "red" VMs are shut down or cycled before my vault is started, for example)?

I try to pay my Qubes dues by offering assistance on IRC, and I'm anticipating here the sort of user willing to put effort into thinking about how they need to partition their domains, and maybe even to write some custom rules or scripts, but who after that needs the system not to get overly in the way of day-to-day tasks or require constant tinkering.

Vince

cooloutac

Jan 13, 2018, 1:03:20 PM
to qubes-users

Comparing via Tor is all I do too.

You can also update the HP BIOS from a USB stick without using Windows, if you prefer: https://support.hp.com/us-en/document/c00042629

Vít Šesták

Jan 13, 2018, 1:50:11 PM
to qubes-users
I have one more idea: the Vixen patch could be useful for VMs with PCI devices. Memory ballooning is not supported there anyway. QEMU in dom0 looks ugly, but this case is a bit different: AFAIU, the attacker can directly talk to QEMU if and only if she has escaped from PV. Maybe it is not nice, but it is not that bad either.

With Qubes 3.2, I believe this can be a clean win. Compared to the proposal (focusing on VMs with PCI devices only):

* It fixes Meltdown. The proposal does not address it for those VMs.
* The attacker has to break out from both PV and then from HVM, or (more likely) escape from PV and then pwn QEMU. This is arguably harder than breaking out directly from PV.

With Qubes 4.0 (still focusing on VMs with PCI devices only), it is still probably an improvement:

* If the attacker can pwn QEMU (but not escape PV), with the current proposal she can read the whole memory using Meltdown. With Vixen, a QEMU vulnerability is probably not enough for Meltdown.
* If the attacker can escape from PV (but not pwn QEMU), she can do pretty much nothing. Well, with Vixen, she can read the contents of the container, but I don't think this is a serious issue.
* If the attacker can both escape from PV and attack QEMU, you are doomed in either case.
* Theoretically: if the attacker can escape from HVM, you are better protected with Vixen (because the attacker needs to escape from PV first).
* If there are some vulnerabilities that do not allow a full VM escape, you are probably still better protected with Vixen. QEMU in dom0 runs as an ordinary process (so attacks like buffer overreads have quite limited impact), and the same goes for PV.

Have I missed something?

I don't say that Qubes should go this way. Maybe there are better ways to achieve some of these goals (especially for 4.0+). I am just saying that QEMU in dom0 – however horrible it looks – might be acceptable in this special case.

Regards,
Vít Šesták 'v6ak'

Vít Šesták

Jan 13, 2018, 2:05:23 PM
to qubes-users
On Saturday, January 13, 2018 at 1:19:18 PM UTC+1, Vincent Adultman wrote:
> IIUC this still seems fairly awful from a usability perspective if we think of the added cognitive load of watching what is running when and remembering or making choices on what to close / restart when (I'm reading between the lines and guessing this has had something to do with decision on reintroduction of Qubes manager?).

It is just a temporary countermeasure. Qubes was not designed with Meltdown in mind; in fact, no OS was. The countermeasures are the best we can do until it gets fixed, rather than something that should be smooth and user-friendly.

IMHO it is just a coincidence that the Qubes Manager was reintroduced around the same time.

Regards,
Vít Šesták 'v6ak'

Doug M

Jan 13, 2018, 8:21:26 PM
to qubes-users
Would using a 32-bit PV provide any additional protection for Xen?

Vít Šesták

Jan 14, 2018, 6:24:04 AM
to qubes-users
Good point, I forgot this one. Of course it would, but I am not sure whether Qubes is ready for that.

But it could be useful to use 32-bit stubdoms for this reason. They do rather I/O-bound work (so the performance penalty would be minimal), and they don't need enough memory to require more than 32-bit pointers. (Also, using 32-bit pointers can yield a minor performance gain.)

For VMs other than stubdoms, it is not so easily deployable, because the user might have some 64-bit-only software there. At the very least, it is impossible to deploy automatically (via an update) without breaking anything.

Is it possible to use a 32-bit stubdom on a 64-bit system?

Regards,
Vít Šesták 'v6ak'

pixel fairy

Jan 14, 2018, 6:53:20 AM
to qubes-users
What about the CPU microcode? Can a package be backported for it, or does that have to be done through Xen?

Fedora 26 has some (theoretical?) protection against Meltdown; maybe Qubes 4.0 should update dom0 to it in the RC.

Vít Šesták

Jan 14, 2018, 12:20:48 PM
to qubes-users
As far as I understand it, a microcode update cannot fix this on its own. It just brings some new instructions that can be used for the Spectre fix. (But they don't help by themselves.)

You can try to update your BIOS if it is well supported by your vendor. Mine is.

Alternatively, you can try to update the microcode via Xen. (In fact, the new microcode is loaded on every boot, because the CPU has no persistent storage for it; it should be loaded at an early stage of boot.*) Xen has some documentation; it would probably be enough to use some Linux package and add something like “ucode=scan” to the Xen parameters: https://wiki.xenproject.org/wiki/XenParavirtOps/microcode_update
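A rough sketch of that procedure in dom0 (the package name, file paths, and GRUB variable are assumptions; verify against your release before use):

```shell
# 1. Install a microcode package in dom0 (package name is assumed)
sudo qubes-dom0-update microcode_ctl

# 2. Have Xen scan for and load the microcode early at boot by
#    appending "ucode=scan" to the Xen command line
sudo sed -i 's/^GRUB_CMDLINE_XEN_DEFAULT="/&ucode=scan /' /etc/default/grub

# 3. Regenerate the GRUB configuration and reboot
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```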

Regards,
Vít Šesták 'v6ak'

*) Some μcode updates can even be loaded at runtime, but this is not so general and I don't recommend it. As far as I understand, the result of runtime patching might vary depending on which instructions have been used before the attempt to patch, so you could end up with some race condition.

Nik H

Jan 14, 2018, 8:15:34 PM
to qubes-users
Thanks, this is good info. I found instructions for updating microcode in Linux, and it seems very simple. The Xen instructions seem simple as well, but where do I enter this? In the dom0 terminal? I am a bit unclear as to how dom0 and Xen interact.

I am guessing normal VMs do not have enough privileges to update microcode (well... hopefully, otherwise compromised VMs could install malicious microcode...).

As a side note, Spectre does compromise the entire Qubes architecture. I know nobody was thinking about these things, so no shame in that. But one of the main premises of Qubes is that VMs are isolated from each other, and that is no longer the case as long as Spectre is out there. It's good that Meltdown is not an issue, yes, but that doesn't really matter; weakest link and all that.


awokd

Jan 15, 2018, 6:57:09 AM
to Nik H, qubes-users
On Mon, January 15, 2018 1:15 am, Nik H wrote:
> On Monday, January 15, 2018 at 12:20:48 AM UTC+7, Vít Šesták wrote:
>
>> As far as I understand it, microcode update cannot fix it. It just
>> brings some new instructions that can be used for Spectre fix. (But
>> they don't help on their own.)
>>
>> You can try to update your BIOS if it is well supported by your vendor.
>> Mine is.
>>
>>
>> Alternatively, you can try to update microcode via Xen. (In fact, the
>> new microcode is loaded on every boot, because CPU has no persistent
>> storage for that. It should be loaded in early stage of boot.*) Xen has
>> some documentation, it would be probably enough to use some Linux
>> package and add something like “ucode=scan” to Xen parameters:
>> https://wiki.xenproject.org/wiki/XenParavirtOps/microcode_update

>
> Thanks, this is good info. I found instructions to update microcode in
> linux - seems very simple. Xen instructions seem simple as well but where
> do I enter this? In the Dom0 terminal? I am a bit unclear as to how Dom0
> and Xen interact.

If you're referring to the "ucode=scan" addition to the bootloader; yes,
you'd enter those from dom0.

> As a side-note, spectre does compromise the entire qubes architecture. I
> know, nobody was thinking about these things, so no shame in that. But
> one of the main premises in qubes is that VMs are isolated from each
> other, and that is no longer the case as long as spectre is out there.
> Good that meltdown is not an issue, yes, but doesn't really matter,
> weakest link and all that.

It matters a bit because Spectre is harder to exploit than Meltdown. IIUR,
Qubes' design allowed it to restrict Meltdown to a single VM, versus other
OS designs where it would give access to the entire system. I'm still
somewhat unclear on how Spectre operates under hardware virtualization, but
you're right, it needs to be fixed.

cooloutac

unread,
Jan 15, 2018, 11:49:52 AM1/15/18
to qubes-users
Do you mean you need the BIOS microcode update AND software fixes together to prevent these attacks?

Also, did you notice the "20% increase in CPU utilization" they are talking about? Because I feel I have had a dramatic increase in performance. I'm becoming skeptical about some of the information out there.

Vít Šesták

unread,
Jan 15, 2018, 2:41:57 PM1/15/18
to qubes-users
On Monday, January 15, 2018 at 2:15:34 AM UTC+1, Nik H wrote:
> Thanks, this is good info. I found instructions to update microcode in linux - seems very simple. Xen instructions seem simple as well but where do I enter this? In the Dom0 terminal? I am a bit unclear as to how Dom0 and Xen interact.

Well, dom0 is a privileged domain, and any administration of Xen should be done from it. So the dom0 terminal is probably a good start.

You will probably need to adjust the Xen parameters. How depends on whether you have UEFI or legacy BIOS. You can see both variants (but you need to write something other than „iommu=no-igfx“) in this (otherwise unrelated) article: https://www.qubes-os.org/doc/intel-igfx-troubleshooting/

> I am guessing normal VMs do not have enough privileges to update microcode (well... hopefully, otherwise compromised VMs could install malicious microcode...)

I hope so. They are digitally signed (at least at Intel), but still…

> As a side-note, spectre does compromise the entire qubes architecture.

Not fully.

> Good that meltdown is not an issue, yes

As far as I understand, Meltdown _is_ an issue. It allows reading the memory of the whole system. It will hopefully be fixed soon.

Spectre is harder to exploit, but it will also take longer to fix.

Vít Šesták

unread,
Jan 15, 2018, 2:56:16 PM1/15/18
to qubes-users
On Monday, January 15, 2018 at 12:57:09 PM UTC+1, awokd wrote:
> It matters a bit because Spectre is harder to exploit than Meltdown. IIUR,
> Qubes' design allowed it to constrict Meltdown to a single VM

Not in PV domains, which are the primary type of VM in 3.2.

> I'm still
> somewhat unclear on how Spectre operates under hardware virtualization but
> you're right, it needs to be fixed.

As far as I understand, Spectre can read the memory available to the victim. That is:

* If an application does not mitigate Spectre and the attacker finds a useful entry point, the attacker can read the memory of the application (but nothing more).
* If a VM kernel does not mitigate Spectre and the attacker finds a useful entry point, the attacker can probably read the memory of the whole VM (but other VMs are not affected).
* If Xen does not mitigate Spectre and the attacker finds a useful entry point, the attacker can probably read the memory of the whole system.

Please note that:

* The attacker needs a suitable entry point, which might be difficult to find.
* All code needs to be recompiled in order to mitigate Spectre. Protection is not binary: some parts of the system might be protected while others aren't.
* Low-level components might need additional work because of assembly code.
* A microcode update is needed only for some variants of the patches. Retpoline might be preferred, both for performance reasons and because it does not need a microcode update.

Regards,
Vít Šesták 'v6ak'

Lorenzo Lamas

unread,
Jan 17, 2018, 11:33:43 AM1/17/18
to qubes-users
On Thursday, January 11, 2018 at 3:57:50 PM UTC+1, Andrew David Wong wrote:
> ## Qubes 3.2
>
> For Qubes 3.2, we plan to release an update that will make almost all
> VMs run in a fully-virtualized mode. Specifically, we plan to backport
> PVH support from Qubes 4.0 and enable it for all VMs without PCI
> devices. After this update, all VMs that previously ran in PV mode (and
> that do not have PCI devices) will subsequently run in PVH mode, with
> the exception of stub domains. Any HVMs will continue to run in HVM
> mode.

Is this the shim-based approach from XSA-254?
Then it should be made clear that the VM's will be more vulnerable to Meltdown:
"Note this shim-based approach prevents attacks on the host, but leaves
the guest vulnerable to Meltdown attacks by its own unprivileged
processes; this is true even if the guest OS has KPTI or similar
Meltdown mitigation."
https://xenbits.xen.org/xsa/xsa254/README.which-shim

Ilpo Järvinen

unread,
Jan 17, 2018, 4:29:18 PM1/17/18
to Lorenzo Lamas, qubes-users
On Wed, 17 Jan 2018, Lorenzo Lamas wrote:

> On Thursday, January 11, 2018 at 3:57:50 PM UTC+1, Andrew David Wong wrote:
> > ## Qubes 3.2
> >
> > For Qubes 3.2, we plan to release an update that will make almost all
> > VMs run in a fully-virtualized mode. Specifically, we plan to backport
> > PVH support from Qubes 4.0 and enable it for all VMs without PCI
> > devices. After this update, all VMs that previously ran in PV mode (and
> > that do not have PCI devices) will subsequently run in PVH mode, with
> > the exception of stub domains. Any HVMs will continue to run in HVM
> > mode.
>
> Is this the shim-based approach from XSA-254?

No, it won't be a shim-based approach (see also the Marek's mail in this
thread).

> Then it should be made clear that the VM's will be more vulnerable to
> Meltdown:

Even if shims were used, that "more" claim is false: Meltdown against
the host hypervisor from the PVs currently used in R3.2 exposes both
the host and the guest (through the host hypervisor's memory). With
shims, only the guest is still vulnerable, this time through the
intermediate Xen instance running in the HVM/PVH that encapsulates the
PV guest. Clearly it's "less" vulnerable rather than "more".

Qubes has been trying to migrate away from PV altogether (rather than,
e.g., placing PVs into those shims) due to PV vulnerabilities in general.
In fact, even before these HW vulnerabilities were discovered, the move
towards PVH was already under way, which is why the R4.0 rcs as they are
are much better protected. These vulnerabilities only accelerated the
process. There will, unfortunately, be one remaining limitation to this
migration due to PCI passthrough: VMs with PCI devices need to remain PV
(or their stubdoms do, in R4.0).

> "Note this shim-based approach prevents attacks on the host, but leaves
> the guest vulnerable to Meltdown attacks by its own unprivileged
> processes; this is true even if the guest OS has KPTI or similar
> Meltdown mitigation."
> https://xenbits.xen.org/xsa/xsa254/README.which-shim

Also, note that one of the fundamental assumptions of the Qubes security
model is that VMs _will get compromised_ (regardless of HW exploits).
What Qubes aims to protect against is escalation from a compromised VM
to the host or to another VM.


--
i.

Marek Marczykowski-Górecki

unread,
Jan 17, 2018, 8:06:08 PM1/17/18
to Simon Gaiser, Vít Šesták, qubes-users
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

That's an interesting idea. Simon, what do you think about it?

- --
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAlpf8uUACgkQ24/THMrX
1yxZbgf/SFCNG6DfWSgriXHlfmNR/eOI6Bdzr+6EpI7rA5AjSu+wERZbFMBFDg0I
M5SKJUiKBexC96eUDktrG3wQnEhJ9aJHGFlQ/8jBSCx023yWpFYCtvKgjYlYEUjE
2J32lEvcT0Mv2e/7OouWO9w3oeJD+Qg189naRuKXZKJkF3B9z4iu7eTP9QePFbSg
6QEN3QHpHBZTfkufYJHuqdMyoZ0XiXaYnFBJQ83hsJSSllP1liCA0mKXLHoo+pD2
L/mVT/4Z337EiCdg/zYHoksWALr3I7rxbvRgzEmIyx6c1YbVZR3qfzYWe6VpPXjb
uW5/rt/XqSX5g+MaJLX/S78dOEeT2g==
=GqGk
-----END PGP SIGNATURE-----

cooloutac

unread,
Jan 18, 2018, 10:04:22 AM1/18/18
to qubes-users
So it doesn't look like 4th or 5th generation boards are going to get a BIOS patch. Is the BIOS patch necessary?

Or should we just assume our desktop PCs are about as secure as Android phones now? Are they no good after a year or two? I joke that real security costs a lot of money, because when firmware gets compromised there's nothing you can do but replace the PC. But if you have to buy a new mobo and PC every year or two to stay up to date, that is a sad future for most people.

awokd

unread,
Jan 18, 2018, 10:10:36 AM1/18/18
to cooloutac, qubes-users
On Thu, January 18, 2018 3:04 pm, cooloutac wrote:
> But if you have to buy a new
> mobo and pc every year or two to stay up to date that is a sad future for
> most people.

Most people, but not the Intel board members and stockholders!


David Hobach

unread,
Jan 18, 2018, 12:20:49 PM1/18/18
to cooloutac, qubes-users
On 01/18/2018 04:04 PM, cooloutac wrote:
> So it doesn't look like 4th or 5th generation boards are going to get a BIOS patch. Is the BIOS patch necessary?

Meltdown can be patched at the kernel and/or hypervisor level, with a
performance loss, by doing in software what should be done by the CPU.
It also seems that Qubes 4 isn't affected in certain virtualisation
modes; see the QSB & XSA.

It might be possible to patch Spectre 1 & 2 in limited ways as well, but
there are only ideas out so far, see
https://blog.xenproject.org/2018/01/04/xen-project-spectremeltdown-faq/

So microcode patches would be the proper way to do it, and even there it
seems to be hard, if I recall the Spectre paper correctly, but the
open-source community is attempting to implement (partial) mitigations anyway.

Nik H

unread,
Jan 18, 2018, 1:00:42 PM1/18/18
to Vít Šesták, qubes-users
On Jan 16, 2018, at 2:56 AM, Vít Šesták <groups-no-private-mail--con...@v6ak.com> wrote:
>
> * If an application does not mitigate Spectre and attacker finds useful entry point, attacker can read memory of the application (but nothing more).
> * If VM kernel does not mitigate Spectre and attacker finds useful entry point, attacker can probably read memory of whole VM (but other VMs are not affected).
> * If Xen does not mitigate Spectre and attacker finds useful entry point, attacker can probably read memory of whole system.

Can you explain why you think that Spectre can't escape the container (VM)? It seems that is the main issue, Spectre escapes the container.

I read the whitepaper and what Spectre is doing is, it's accessing memory it should not have access to, and then uses a few simple tricks to extract the data it should not have access to. This happens on a processor level so any bounds checks that are outside the CPU core will not prevent that.

Given the nature of the attack, I do not think that hardware virtualization would stop this attack. Reasoning: If HW Virtualization was doing privilege checks on memory access in speculatively executed code, it would severely impact or completely remove the performance gains from speculative execution. I would be *very* happy to be wrong about that so if you have info to the contrary, please let me know.

Here's how spectre works (conceptual - the existing sample implementations are just that, examples):

- Trick the CPU into doing something it shouldn't do, like in our case accessing another VM's memory.
- This memory access happens in a speculative execution, which is built for speed and doesn't have time to check whether or not I actually have the right to access this memory.
- Speculative execution continues, and I load some of my own data into the processor, but which data depends on the value of the byte I read in the previous step.
- The CPU realizes I didn't have access, and reverts register states
- The CPU does not, however, remove my data from the cache
- I can then use cache timing to figure out *what part* of my own data was cached
- Once I know what part of my data was cached, I know the value of the byte that I read illegally.

If hardware virtualization were to protect against this attack, it would need either to have bounds checks inside the processor core or to flush caches whenever a different VM runs, either of which would severely impact performance. So I don't think they do it.

Reasoning: The entire point of HW virtualization is to have very fast and seamless context switching so that if I have 10 different VMs running, the processor does not lose performance from that. So you keep caches, and you keep speculatively executing what you believe to be the correct branch of an if statement. HW virtualization vs. software seems to have been implemented mainly to improve performance, and not to improve security/isolation.

I found various snippets of information hinting at this as well, but again, I'd be happy to be wrong! But, if I am right, then qubes isolation is compromised.

Sorry this got a bit long.


awokd

unread,
Jan 18, 2018, 1:07:55 PM1/18/18
to Nik H, "Vít Šesták", qubes-users
On Thu, January 18, 2018 6:00 pm, Nik H wrote:

> Reasoning: The entire point of HW virtualization is to have very fast and
> seamless context switching so that if I have 10 different VMs running,
> the processor does not lose performance from that. So you keep caches,
> and you keep speculatively executing what you believe to be the correct
> branch of an if statement. HW virtualization vs. software seems to have
> been implemented mainly to improve performance, and not to improve
> security/isolation.
>
> I found various snippets of information hinting at this as well, but
> again, I'd be happy to be wrong! But, if I am right, then qubes isolation
> is compromised.

This is the feeling I got too wrt Spectre, but it's hard to find facts on
it. Maybe if we could look at what the virtualization opcodes are doing at
a microcode level...

cooloutac

unread,
Jan 18, 2018, 3:44:29 PM1/18/18
to qubes-users


Ohh, so that's why people say there is a performance loss: in other words, if your vendor doesn't patch the BIOS? Because I got a huge increase in performance with my board that got patched. So I'm having a hard time believing all the hype about it.

And so yes, I'm reading that the Qubes team is working to make some changes even to 3.2, which is great news. But I wasn't sure whether they are able to address all the problems.

I guess a performance loss due to lack of vendor support is better than no mitigations at all. If this is even the case, I'm still skeptical.

Vít Šesták

unread,
Jan 18, 2018, 3:49:10 PM1/18/18
to qubes-users
On Thursday, January 18, 2018 at 7:00:42 PM UTC+1, Nik H wrote:
> On Jan 16, 2018, at 2:56 AM, Vít Šesták <…@v6ak.com> wrote:
> >
> > * If an application does not mitigate Spectre and attacker finds useful entry point, attacker can read memory of the application (but nothing more).
> > * If VM kernel does not mitigate Spectre and attacker finds useful entry point, attacker can probably read memory of whole VM (but other VMs are not affected).
> > * If Xen does not mitigate Spectre and attacker finds useful entry point, attacker can probably read memory of whole system.
>
> Can you explain why you think that Spectre can't escape the container (VM)? It seems that is the main issue, Spectre escapes the container.

It depends on what you mean by VM escape. Sure, both Meltdown and Spectre are about reading memory that should not be accessible. From your description below, I believe you have confused the two.

The reason why Spectre is much harder to actually exploit than Meltdown: for Meltdown, you just use your own code to read the memory. With Spectre, you have to use (and find!) the victim's code to perform innocent-looking operations.

Meltdown allows the attacker to read any address in her address space. That's not always the whole physical address space, but in the case of Xen x64 PV domains, it is.

Spectre allows reading the memory in a different way. Imagine the _victim_ has code like this:

if ((i > 0) && (i < a_length)) {
    return a[i];
} else {
    return NULL; // or any other error code
}

This looks like perfect code that prevents overreads and underreads. But an attempt to overread/underread will still affect the cache. Fortunately, such simple code is not very useful to an attacker. The attacker rather needs something like this: foo[bar[index]]. Even with all the proper bounds checks (which will cause the code not to execute in the traditional sense), the attacker might try to perform an overread/underread by using an index out of range. The CPU might execute the branch speculatively (because the condition is usually satisfied), which can cause a read of an arbitrary out-of-bounds bar[index]. The read of that value would probably be benign on its own, but the code then tries to load data from the foo array based on this value, which might cause a cache fetch depending on the value of bar[index]. The attacker has not won yet: she still has to determine which part of memory was loaded into the cache. This can be done using a timing attack.

Another interesting part of Spectre is branch target injection. I remember some double-fetch vulnerability that can cause a bad jump due to a race condition (a TOCTOU issue). With Spectre, an attacker can try to abuse this for a bad speculative jump even if no race condition is possible.

But my main point is that for a Spectre attack, the fact that nobody cared about it when writing the software is not enough for successful exploitation. The attacker actually needs to find suitable code that processes some of the attacker's input in a suitable way. Moreover, the attacker needs precise measurement, so passing malicious input to some queue, to be processed later by code that can trigger the speculative out-of-bounds read, can be impractical.

> I read the whitepaper and what Spectre is doing is, it's accessing memory it should not have access to, and then uses a few simple tricks to extract the data it should not have access to. This happens on a processor level so any bounds checks that are outside the CPU core will not prevent that.

That's true for both Spectre and Meltdown. But the fact that bounds checks aren't enough does not mean that these attacks cannot be mitigated in software by other means.

> Given the nature of the attack, I do not think that hardware virtualization would stop this attack.

If this is about Spectre, you are right: hardware virtualization does not stop it on its own. For Meltdown, the situation is a bit different: with hardware virtualization, the VM simply does not have addresses outside the VM mapped in its address space. Trying to access memory outside the VM is not prevented by a bounds check; it is prevented by the simple fact that the memory has no address there. Note that, AFAIU, this does not prevent attacking the VM's kernel from a process in the VM; it just prevents attacking the hypervisor from the VM.


> I found various snippets of information hinting at this as well, but again, I'd be happy to be wrong! But, if I am right, then qubes isolation is compromised.

Well, you are mostly right. But maybe we should divide it into the base system (e.g., Xen and dom0) and individual VMs.

The base system is unfortunately affected by Meltdown, because it mostly does not use hardware virtualization. (Qubes 4 is quite a bit better there, but still not perfect.) It might also be vulnerable to Spectre attacks, but I am not sure whether they are practical.

Individual VMs might be vulnerable to both (unless patched). A Meltdown vulnerability is usually not much of an issue there, because Qubes does not make much use of user separation within a VM. (But the Chrome sandbox can probably see something it should not. It cannot use Meltdown to see other processes, just some kernel memory.) And Spectre is quite omnipresent.

Regards,
Vít Šesták 'v6ak'

Lorenzo Lamas

unread,
Jan 20, 2018, 2:57:10 PM1/20/18
to qubes-users
Thank you for clarifying this.

Vít Šesták

unread,
Jan 21, 2018, 4:42:12 PM1/21/18
to qubes-users
On Thursday, January 18, 2018 at 2:06:08 AM UTC+1, Marek Marczykowski-Górecki wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> On Sun, Jan 14, 2018 at 03:24:04AM -0800, Vít Šesták wrote:
> > But it could be useful to use 32-bit stubdoms for those reasons. They do mostly I/O-bound work (=> minimal performance penalty), and they don't need enough memory to require more than 32-bit pointers. (Also, using 32-bit pointers can bring a minor performance gain.)
> > …
> > Is it possible to use a 32-bit stubdom on a 64-bit system?
>
> That's an interesting idea. Simon, what do you think about it?
>


I've tried to implement this kind of protection, and I'd like to report my failure to achieve the result. Someone else might be more successful; after all, I am not very experienced with Xen internals (I mostly configure Xen through Qubes). I've found there is a gzipped ELF with the ioemu stubdom. Maybe replacing it with a 32-bit one could do the trick. But I am not sure whether that is enough; maybe it would just result in 32-bit code running in a 64-bit PV domain, which is not what we want. (I am not even sure how to check this.) I haven't found any relevant configuration where I could set the stubdomain mode.

So I've decided to try to compile the stubdom and test it. I've checked out the code and switched to the 4.6.6 tag. The code looks promising; some parts seem to be ready for x86_32, despite the fact that it is no longer a supported platform for Xen itself. However, the compilation itself failed for me, regardless of the target architecture. I tried debian-9. I needed a few additional packages to pass ./configure; that's OK. (I won't name them, because you might be missing different packages, and the error messages are pretty clear.) There seem to be some new warnings in GCC that make the compilation fail, so I had to adjust tools/Rules.mk by adding the line `CFLAGS += -Wno-misleading-indentation -Wno-unused-function`. This leads me to another problem: the command `./configure --enable-stubdom --disable-tools --disable-xen --disable-docs --host=x86_64 && make` results in the following error message, which I am unable to resolve:


make -C seabios-dir all
make[6]: Entering directory '/home/user/xen/tools/firmware/seabios-dir-remote'
Compile checking out/src/stacks.o
src/stacks.c: Assembler messages:
src/stacks.c:635: Error: found '(', expected: ')'
src/stacks.c:635: Error: junk `(%ebp))' after expression
src/stacks.c:636: Warning: indirect call without `*'


Regards,
Vít Šesták 'v6ak'

Jean-Philippe Ouellet

unread,
Jan 22, 2018, 2:04:10 AM1/22/18
to Vít Šesták, qubes-users
On Thu, Jan 18, 2018 at 3:49 PM, Vít Šesták
<groups-no-private-mail--con...@v6ak.com>
wrote:
Err, I'm not sure this is the case. I suspect it is not, due to all
guest-physical memory being mapped in the guest kernel's direct map.

See:
- https://cs.brown.edu/~vpk/papers/ret2dir.sec14.pdf
- https://www.blackhat.com/docs/eu-14/materials/eu-14-Kemerlis-Ret2dir-Deconstructing-Kernel-Isolation.pdf

Andrew David Wong

unread,
Jan 24, 2018, 4:29:27 AM1/24/18
to qubes-a...@googlegroups.com, qubes...@googlegroups.com, qubes...@googlegroups.com
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Dear Qubes Community,

We have just updated Qubes Security Bulletin (QSB) #37:
Information leaks due to processor speculative execution bugs.

The text of the main changes is reproduced below. For the full
text, please see the complete QSB in the qubes-secpack:

<https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-037-2018.txt>

Learn about the qubes-secpack, including how to obtain, verify, and
read it:

<https://www.qubes-os.org/security/pack/>

View all past QSBs:

<https://www.qubes-os.org/security/bulletins/>

View XSA-254 in the XSA Tracker:

<https://www.qubes-os.org/security/xsa/#254>

```
Changelog
==========

2018-01-11: Original QSB published
2018-01-23: Updated mitigation plan to XPTI; added Xen package versions

[...]

(Proper) patching
==================

## Qubes 4.0

As explained above, almost all the VMs in Qubes 4.0 are
fully-virtualized by default (specifically, they are HVMs), which
mitigates the most severe issue, Meltdown. The only PV domains in Qubes
4.0 are stub domains, which we plan to eliminate by switching to PVH
where possible. This will be done in Qubes 4.0-rc4 and also released as
a normal update for existing Qubes 4.0 installations. The only remaining
PV stub domains will be those used for VMs with PCI devices. (In the
default configuration, these are sys-net and sys-usb.) To protect those
domains, we will provide the Xen page-table isolation (XPTI) patch, as
described in the following section on Qubes 3.2.

## Qubes 3.2

Previously, we had planned to release an update for Qubes 3.2 that would
have made almost all VMs run in PVH mode by backporting support for this
mode from Qubes 4.0. However, a much less drastic option has become
available sooner than we and the Xen Security Team anticipated: what the
Xen Security Team refers to as a "stage 1" implementation of the Xen
page-table isolation (XPTI) mitigation strategy [5]. This mitigation
will make the most sensitive memory regions (including all of physical
memory mapped into Xen address space) immune to the Meltdown attack. In
addition, this mitigation will work on systems that lack VT-x support.
(By contrast, our original plan to backport PVH would have worked only
when the hardware supported VT-x or equivalent technology.)

Please note that this mitigation is expected to have a noticeable
performance impact. While there will be an option to disable the
mitigation (and thereby avoid the performance impact), doing so will
return the system to a vulnerable state.

The following packages contain the patches described above:

- Xen packages, version 4.6.6-36

[...]

Here is an overview of the VM modes that correspond to each Qubes OS
version:

VM type \ Qubes OS version | 3.2 | 4.0-rc1-3 | 4.0-rc4 |
- ---------------------------------- | --- | --------- | ------- |
Default VMs without PCI devices | PV | HVM | PVH |
Default VMs with PCI devices | PV | HVM | HVM |
Stub domains - Default VMs w/o PCI | N/A | PV | N/A |
Stub domains - Default VMs w/ PCI | N/A | PV | PV |
Stub domains - HVMs | PV | PV | PV |

```

On 2018-01-11 08:57, Andrew David Wong wrote:
> Dear Qubes Community,
>
> We have just published Qubes Security Bulletin (QSB) #37:
> Information leaks due to processor speculative execution bugs.
> The text of this QSB is reproduced below. This QSB and its accompanying
> signatures will always be available in the Qubes Security Pack
> (qubes-secpack).
>
> View QSB #37 in the qubes-secpack:
>
> <https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-037-2018.txt>
>
> Learn about the qubes-secpack, including how to obtain, verify, and
> read it:
>
> <https://www.qubes-os.org/security/pack/>
>
> View all past QSBs:
>
> <https://www.qubes-os.org/security/bulletins/>
>
> View XSA-254 in the XSA Tracker:
>
> <https://www.qubes-os.org/security/xsa/#254>
>
> ```
> ---===[ Qubes Security Bulletin #37 ]===---
>
> January 11, 2018
>
>
> Information leaks due to processor speculative execution bugs
>
> Summary
> ========
>
> On the night of January 3, two independent groups of researchers
> announced the results of their months-long work into abusing modern
> processors' so-called speculative mode to leak secrets from the system's
> privileged memory [1][2][3][4]. As a response, the Xen Security Team
> published Xen Security Advisory 254 [5]. The Xen Security Team did _not_
> previously share information about these problems via their (non-public)
> security pre-disclosure list, of which the Qubes Security Team is a
> member.
>
> In the limited time we've had to analyze the issue, we've come to the
> following conclusions about the practical impact on Qubes OS users and
> possible remedies. We'll also share a plan to address the issues in a
> more systematic way in the coming weeks.
>
> Practical impact and limiting factors for Qubes users
> ======================================================
>
> ## Fully virtualized VMs offer significant protection against Meltdown
>
> Meltdown, the most reliable attack of the three discussed, cannot be
> exploited _from_ a fully-virtualized (i.e. HVM or PVH) VM. It does not
> matter whether the _target_ VM (i.e. the one from which the attacker
> wants to steal secrets) is fully-virtualized. In Qubes 3.x, all VMs are
> para-virtualized (PV) by default, though users can choose to create
> fully-virtualized VMs. PV VMs do not protect against the Meltdown
> attack. In Qubes 4.0, almost all VMs are fully-virtualized by default
> and thus offer protection. However, the fully-virtualized VMs in Qubes
> 3.2 and in release candidates 1-3 of Qubes 4.0 still rely on PV-based
> "stub domains", making it possible for an attacker who can chain another
> exploit for qemu to attempt the Meltdown attack.
>
> ## Virtualization makes at least one variant of Spectre seem difficult
>
> Of the two Spectre variants, it _seems_ that at least one of them might
> be significantly harder to exploit under Xen than under monolithic
> systems because there are significantly fewer options for the attacker
> to interact with the hypervisor.
>
> ## All attacks are read-only
>
> It's important to stress that these attacks allow only _reading_ memory,
> not modifying it. This means that an attacker cannot use Spectre or
> Meltdown to plant any backdoors or otherwise compromise the system in
> any persistent way. Thanks to the Qubes OS template mechanism, which is
> used by default for all user and system qubes (AppVMs and ServiceVMs),
> simply restarting a VM should bring it back to a good known state for
> most attacks, wiping out the potential attacking code in the
> TemplateBasedVM (unless an attacker found a way to put triggers within
> the user's home directory; please see [8] for more discussion).
>
> ## Only running VMs are vulnerable
>
> Since Qubes OS is a memory-hungry system, it seems that an attacker
> would only be able to steal secrets from VMs running concurrently with
> the attacking VM. This is because any pages from shutdown VMs will
> typically very quickly get allocated to other, running VMs and get wiped
> as part of this procedure.
>
> ## PGP and other cryptographic keys are at risk
>
> For VMs that happen to be running concurrently with the attacking VM, it
> seems possible that these attacks might allow the attacker to steal
> cryptographic keys, including private PGP keys.
>
> ## Disk encryption and screenlocker passwords are at risk
>
> There is one VM that is always running concurrently with other VMs: the
> AdminVM (dom0). This VM contains at least two important user secrets:
>
> - The disk (LUKS) encryption key (and likely the passphrase)
> - The screenlocker passphrase
>
> In order to make use of these secrets, however, the attacker would have
> to conduct a physical attack on the user's computer (e.g. steal the
> laptop physically). Users who use the same passphrase to encrypt their
> backups may also be affected.
>
> Additional remedies available to Qubes users
> =============================================
>
> Thanks to the explicit Qubes partitioning model, it should be
> straightforward for users to implement additional hygiene by ensuring
> that, whenever less trusted VMs are running, highly sensitive VMs are
> shut down.
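>
> For example (an illustrative sketch; the qube names "vault" and
> "untrusted" are hypothetical), a sensitive qube can be shut down from
> dom0 before a less trusted one is started:
>
>   $ qvm-shutdown --wait vault   # ensure the sensitive qube is down
>   $ qvm-start untrusted         # only then start the less trusted qube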
>
> Additionally, for some of the VMs that must run anyway (e.g. networking
> and USB qubes), it is possible to recreate the VM each time the user
> suspects it may have been compromised, e.g. after disconnecting from a
> less trusted Wi-Fi network, or unplugging an untrusted USB device. In
> Qubes 4.0, this is even easier, since Disposable VMs can now be used for
> the networking and USB VMs (see [10]).
>
> The Qubes firewalling and networking systems also make it easy to limit
> the networking resources VMs can reach, including making VMs completely
> offline. While firewalling in Qubes is not intended to be a
> leak-prevention mechanism, it likely has this effect in a broad class
> of attack scenarios. Moreover, making a VM completely offline
> (i.e. setting its NetVM to "none") is a more robust way to limit the
> ability of an attacker to leak secrets stolen from memory to the outside
> world. While this mechanism should not be considered bullet-proof -- it
> is still possible to mount a specialized attack that exploits a covert
> channel to leak the data -- it could be considered as an additional
> layer of defense.
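>
> For instance, a qube can be made completely offline from dom0 (a
> sketch; "work" is a hypothetical qube name; on Qubes 3.2 the
> equivalent is "qvm-prefs -s work netvm none"):
>
>   $ qvm-prefs work netvm ''   # Qubes 4.0: set the qube's NetVM to none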
>
> Finally, Qubes offers mechanisms to allow for additional protection of
> user secrets, especially cryptographic keys, such as PGP keys used for
> encryption and signing. Qubes Split GPG [6] allows the user to keep
> these keys in an isolated VM. So, for example, the user might be running
> her "development" qube in parallel with a compromised qube, while
> keeping the GPG backend VM (where she keeps the signing key that she
> uses to sign her software releases) shut down most of the time (because
> it's only needed when a release is being made). This way, the software
> signing keys will be protected from the attack.
>
> The user could take this further by using Qubes Split GPG with a backend
> qube running on a physically separate computer, as has been demonstrated
> with the Qubes USB Armory project [7].
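>
> As a sketch of how Split GPG access is constrained, the dom0 policy
> file for the qubes.Gpg service might look as follows (the qube names
> are hypothetical; see [6] for the authoritative setup):
>
>   # /etc/qubes-rpc/policy/qubes.Gpg
>   work-email vault allow
>   $anyvm $anyvm deny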
>
> (Proper) patching
> ==================
>
> Mitigations against the CPU bugs discussed here are in development but
> have not yet been released. The Xen Project is working on a set of
> patches (see XSA 254 [5] for updates). At the same time, we are working
> on similar mitigations where feasible.
>
> ## Qubes 4.0
>
> As explained above, almost all the VMs in Qubes 4.0 are
> fully-virtualized by default (specifically, they are HVMs), which
> mitigates the most severe issue, Meltdown. The only PV domains in
> Qubes 4.0 are stub domains, which we plan to eliminate by switching to
> PVH where possible. This will be done in Qubes 4.0-rc4 and also
> released as a normal update for existing Qubes 4.0 installations. The
> only remaining PV stub domains will be those used for VMs with PCI
> devices. (In the default configuration, these are sys-net and
> sys-usb.) The Xen Project has not yet provided any solution for this
> [9].
>
> ## Qubes 3.2
>
> For Qubes 3.2, we plan to release an update that will make almost all
> VMs run in a fully-virtualized mode. Specifically, we plan to backport
> PVH support from Qubes 4.0 and enable it for all VMs without PCI
> devices. After this update, all VMs that previously ran in PV mode (and
> that do not have PCI devices) will subsequently run in PVH mode, with
> the exception of stub domains. Any HVMs will continue to run in HVM
> mode.
>
> There are two important points regarding the Qubes 3.2 update. First,
> this update will work only when the hardware supports VT-x or equivalent
> technology. Qubes 3.2 will continue to work on systems without VT-x, but
> there will be no mitigation against Meltdown on such systems. Users on
> systems that do not support VT-x are advised to take this into
> consideration when assessing the trustworthiness of their systems.
>
> Second, the Qubes 3.2 update will also switch any VMs that use a custom
> kernel to PVH mode, which will temporarily prevent them from working.
> This is a deliberate security choice to protect the system as a whole
> (rather than leaving VMs with custom kernels in PV mode, which would
> allow attackers to use them to mount Meltdown attacks). In order to use
> a VM with a custom kernel after the update (whether the custom kernel
> was installed in dom0 or inside the VM), users must either manually
> change the VM back to PV or change the kernel that the VM uses.
> (Kernel >= 4.11 is required, and booting an in-VM kernel is not
> supported in PVH mode.)
>
> We'll update this bulletin and issue a separate announcement once
> patches are available.
>
> Suggested actions after patching
> =================================
>
> While the potential attacks discussed in this bulletin are severe,
> recovering from these potential attacks should be easier than in the
> case of an exploit that allows the attacker to perform arbitrary code
> execution, resulting in a full system compromise. Specifically, we don't
> believe it is necessary to use Qubes Paranoid Backup Restore Mode to
> address these vulnerabilities because of the strict read-only character
> of the attacks discussed. Instead, users who believe they are affected
> should consider taking the following actions:
>
> 1. Changing the screenlocker passphrase.
>
> 2. Changing the disk encryption (LUKS) passphrase.
>
> 3. Re-encrypting the disk to force a change of the disk encryption
> _key_. (In practice, this can be done by reinstalling Qubes and
> restoring from a backup.)
>
> 4. Evaluating the odds that other secrets have been compromised,
> such as other passwords and cryptographic keys (e.g. private
> PGP, SSH, or TLS keys), and generating new secrets. It is unclear
> how easy it might be for attackers to steal such data in a
> real world Qubes environment.
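>
> As an illustration of action 2, the LUKS passphrase can be changed
> from dom0 with cryptsetup (a sketch; the device path is hypothetical
> and can be found with "lsblk" or in /etc/crypttab). Note that this
> changes only the passphrase, not the underlying encryption key, which
> is why action 3 is listed separately:
>
>   $ sudo cryptsetup luksChangeKey /dev/sda2   # prompts for the old,
>                                               # then the new passphrase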
>
> Technical discussion
> =====================
>
> From a (high-level) architecture point of view, the attacks discussed in
> this bulletin should not concern Qubes OS much. This is because,
> architecture-wise, there should be no secrets or other sensitive data in
> the hypervisor memory. This is in stark contrast to traditional
> monolithic systems, where there is an abundance of sensitive information
> living in the kernel (supervisor).
>
> Unfortunately, for rather accidental reasons, the implementation of the
> particular hypervisor we happen to be using to implement isolation for
> Qubes, i.e. the Xen hypervisor, undermines this clean architecture by
> internally mapping all physical memory pages into its address space. Of
> course, under normal circumstances, this isn't a security problem,
> because no one is able to read the hypervisor memory. However, the bugs
> we're discussing today might allow an attacker to do just that. This is
> a great example of how difficult it can be to analyze the security
> impact of a feature when limiting oneself to only one layer of
> abstraction, especially a high-level one (also known as the "PowerPoint
> level").
>
> At the same time, we should point out that the use of full
> virtualization prevents at least one of the attacks, and incidentally
> the most powerful one, i.e. the Meltdown attack.
>
> However, we should also point out that, in Qubes 3.2, even HVMs still
> rely on PV stub domains to provide I/O emulation (qemu). In the case of
> an additional vulnerability within qemu, an attacker might compromise
> the PV stub domain and attempt to perform the Meltdown attack from
> there.
>
> This limitation also applies to HVMs in release candidates 1-3 of Qubes
> 4.0. Qubes 4.0-rc4, which we plan to release next week, should be using
> PVH instead of HVM for almost all VMs without PCI devices by default,
> thus eliminating this avenue of attack. As discussed in the Patching
> section, VMs with PCI devices will be the exception, which means that
> the Meltdown attack could in theory still be conducted if the attacker
> compromises a VM with PCI devices and afterward compromises the
> corresponding stub domain via a hypothetical qemu exploit.
> Unfortunately, there is not much we can do about this without
> cooperation from the Xen project [9][11].
>
> Here is an overview of the VM modes that correspond to each Qubes OS
> version:
>
> VM type \ Qubes OS version | 3.2 | 3.2+ | 4.0-rc1-3 | 4.0-rc4 |
> ---------------------------------- | --- | ---- | --------- | ------- |
> Default VMs without PCI devices | PV | PVH | HVM | PVH |
> Default VMs with PCI devices | PV | PV | HVM | HVM |
> Stub domains - VMs w/o PCI devices | PV | N/A | PV | N/A |
> Stub domains - VMs w/ PCI devices | PV | PV | PV | PV |
>
> ("3.2+" denotes Qubes 3.2 after applying the update discussed above,
> which will result in most VMs running in PVH mode. "N/A" means "not
> applicable," since PVH VMs do not require stub domains.)
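>
> On Qubes 4.0, the mode a given VM runs in can be checked from dom0 (a
> sketch; "sys-net" is the default NetVM name):
>
>   $ qvm-prefs sys-net virt_mode   # prints "pv", "hvm", or "pvh"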
>
> Credits
> ========
>
> See the original Xen Security Advisory.
>
> References
> ===========
>
> [1] https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
> [2] https://meltdownattack.com/
> [3] https://meltdownattack.com/meltdown.pdf
> [4] https://spectreattack.com/spectre.pdf
> [5] https://xenbits.xen.org/xsa/advisory-254.html
> [6] https://www.qubes-os.org/doc/split-gpg/
> [7] https://github.com/inversepath/qubes-qrexec-to-tcp
> [8] https://www.qubes-os.org/news/2017/04/26/qubes-compromise-recovery/
> [9] https://lists.xenproject.org/archives/html/xen-devel/2018-01/msg00403.html
> [10] https://www.qubes-os.org/news/2017/10/03/core3/
> [11] https://blog.xenproject.org/2018/01/04/xen-project-spectremeltdown-faq/
>
> --
> The Qubes Security Team
> https://www.qubes-os.org/security/
> ```

- --
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEZQ7rCYX0j3henGH1203TvDlQMDAFAlpoUcIACgkQ203TvDlQ
MDBAAxAAn2mtJMs44kCMKVWmuTEunqOy/mbOLnWnSvCN4e1pT8RnujvErUrVk7OO
j7lARLXX9GoNQZcmr3ex+7qWXxN/FWarU8/C8yGDWBEhZYl6kB4B53LtgWB1Ggbn
J+3q1nzoyBtohXmuUECMciHTnTnmGSboawts33yH8+ayxgJcWHF0x+XZl2Cnh2cT
bftJBtX57nVQaNSyWN2tPMe9toceX2kd/M9HGYpib9M8tDatrK/SB6H7hL/ZjaTM
wpmJOvzwLCwRLA7f0jWP7OBMua400bd7xmSgJS+yvOGZLKUF40RrEnSoylT91kHj
3zMTvvjycPH59Qy4NGtrbTKBro1I7uzvxXt01aRstaGRYPebn6IckV99ORx/aWx9
RxFlnzDKOoY9j0DEGzuCe9xHgWGVR6WpmKbofN8Kl9c0DAa29ZVVA3T/OF5uDkuk
SXGT1RRFIGbTKt8NQxXzmbYq07uK05X5yy16yoD1h9nPpXvXR/GmXuEC+xyErhMw
FmpixIYIy596xhKrws64xZpB5563krYe9A7yZVbR118v7dJzG7CdJpQ9erotqEio
xQLnZVPva8LoYDrLvVm33o6VkZW4fi6fpeI3kkQIBmYCfptVx7walbGgREeZOoLa
FIGnKlKpvolgse6f2WdFIySwM8ecNcfh6gHmJWrswpSRwpWExOw=
=JtlX
-----END PGP SIGNATURE-----

Ed

Jan 24, 2018, 9:52:09 AM1/24/18
to qubes...@googlegroups.com, qubes...@googlegroups.com
On 01/24/2018 04:29 AM, Andrew David Wong wrote:

> ## Qubes 3.2
>
> Previously, we had planned to release an update for Qubes 3.2 that would
> have made almost all VMs run in PVH mode by backporting support for this
> mode from Qubes 4.0.

Out of curiosity, is this still going to happen? I would love to see
this if possible, not only helping mitigate Meltdown without the
performance penalty (I believe), but also would give a nice general
security boost to 3.2

Thanks,
Ed

Reg Tiangha

Jan 24, 2018, 2:16:18 PM1/24/18
to qubes...@googlegroups.com, qubes...@googlegroups.com
The thing is, if Qubes intends on sticking with Xen 4.6 on Qubes R3.2,
then the promise of 1 year extended support after R4.0 is officially
released may be hard to meet since Xen will discontinue security support
in Oct 2018 (Source:
https://wiki.xenproject.org/wiki/Xen_Project_Release_Features ). That
means there could be a 3-4+ month period where the Qubes devs would need
to manually backport from newer versions of Xen any security fixes found
in Xen during that time frame (in essence, the Qubes project would need
to take over maintenance of the Xen 4.6 branch for that time period).
That could increase the support/maintenance burden for the Qubes devs by
a lot, depending on how complex the security issues are (the worst case
would be another thing like Meltdown/Spectre happening again during that
time frame after official Xen support ends).

Xen 4.8 will be supported with security fixes by Xen until Dec 2019, so
assuming that Qubes R4.0 comes out this calendar year, then there'd
still be time left over to honor that 1 year extended support promise,
at least when it comes to any Xen fixes. So backporting Xen 4.8 to Qubes
R3.2 might actually be the better move in the long term, if the devs
really intend to honor that 1 year extended support promise. But that's
just my opinion.

yre...@riseup.net

Jan 24, 2018, 5:14:19 PM1/24/18
to qubes...@googlegroups.com
So... there are packages *to be released* at some undefined point in
the near future?
--
The following packages contain the patches described above:

- Xen packages, version 4.6.6-36
--

via the normal dom0 update process? It would be nice to see it in
simple English.

Marek Marczykowski-Górecki

Jan 24, 2018, 6:26:43 PM1/24/18
to Reg Tiangha, qubes...@googlegroups.com, qubes...@googlegroups.com
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On Wed, Jan 24, 2018 at 12:15:25PM -0700, Reg Tiangha wrote:
> On 01/24/2018 07:51 AM, Ed wrote:
> > On 01/24/2018 04:29 AM, Andrew David Wong wrote:
> >
> >> ## Qubes 3.2
> >>
> >> Previously, we had planned to release an update for Qubes 3.2 that would
> >> have made almost all VMs run in PVH mode by backporting support for this
> >> mode from Qubes 4.0.
> >
> > Out of curiosity, is this still going to happen? 

Not unless we are forced to do so.

> > I would love to see
> > this if possible, not only helping mitigate Meltdown without the
> > performance penalty (I believe), but also would give a nice general
> > security boost to 3.2
> >
> > Thanks,
> > Ed
> >
>
> The thing is, if Qubes intends on sticking with Xen 4.6 on Qubes R3.2,
> then the promise of 1 year extended support after R4.0 is officially
> released may be hard to meet since Xen will discontinue security support
> in Oct 2018 (Source:
> https://wiki.xenproject.org/wiki/Xen_Project_Release_Features ). That
> means there could be a 3-4+ month period where the Qubes devs would need
> to manually backport from newer versions of Xen any security fixes found
> in Xen during that time frame (in essence, the Qubes project would need
> to take over maintenance of the Xen 4.6 branch for that time period).
> That could increase the support/maintenance burden for the Qubes devs by
> a lot, depending on how complex the security issues are (worse case
> would be another thing like Meltdown/Spectre happening again during that
> time frame after official Xen support ends).

We've tested backported Xen 4.8 with PVH on various machines well
supported by R3.2 and there are some cases where it breaks badly. The
most extreme is hardware lacking EPT, where PVH is like 16x slower than
PV. I'm sure "just" upgrading Xen (without switching to PVH) will also
bring some compatibility problems, maybe for a small minority of users,
but still. It would be similar to a major kernel upgrade, as we've seen
multiple times. We promised that Qubes 3.2 would be a stable, supported
release.

See "Upgrade instructions for R3.2 and QSB37 patches" thread on
xen-devel for some examples, and also comments here:
https://github.com/QubesOS/qubes-core-admin/pull/178
https://github.com/QubesOS/qubes-vmm-xen/pull/24

So yes, this means we'll need to support Xen 4.6 ourselves for a few
months. It may happen that yet another bug will be found, requiring
changes that are very hard to backport, but I think that is quite an
unlikely event. And even if it happens, we can decide to upgrade Xen
then. We already have part of this work done.

- --
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAlppFeMACgkQ24/THMrX
1yxmGQgAk/LLxD5LywTDpZi+Ii7XDHPRT7rkTvbQv6K79S7WfdXvNoXpyCoBOO0/
iR+L4RMzF6OIu861aBiUZo8WiTfoE6dgDu4X/5MNPcewbtGhaOnq6DOiBCMRTW8+
mXtSepu/XVtMqZnAI7vZyBVijVh7UI2CfUfOJnk1Z3zhU9phAGePh7ywSRskrOVI
qOEVb70f9qaB9BV81MjtOjn+nz4IiTid2CQEL2CFPhEWoXqbd1dtQLnemH8j1f1a
uwlNIf76foiJLr0I8iei/SLjLG5YHOtKWNUtBf1jRtTbpZCbu96o9MCMQSjn5ZG8
SR8McukKMgRCYPOCvAA+GJ1DxVWE6w==
=qfqr
-----END PGP SIGNATURE-----

Andrew David Wong

Jan 24, 2018, 8:12:10 PM1/24/18
to yre...@riseup.net, qubes...@googlegroups.com
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On 2018-01-24 16:14, yre...@riseup.net wrote:
> [...]
>
> So... there are packages *to be released *at some undefined point
> in the near future? -- The following packages contain the patches
> described above:
>
> - Xen packages, version 4.6.6-36 --
>
> via the normal dom0 update process ? would be nice to see it in
> simple English
>

Sorry! We forgot to include our usual patching instructions. I've just
created a pull request [1] to have this added to the QSB:

```
The specific packages that contain the XPTI patches for Qubes 3.2 are
as follows:

- Xen packages, version 4.6.6-36

The packages are to be installed in dom0 via the Qubes VM Manager or via
the qubes-dom0-update command as follows:

For updates from the stable repository (not immediately available):
$ sudo qubes-dom0-update

For updates from the security-testing repository:
$ sudo qubes-dom0-update --enablerepo=qubes-dom0-security-testing

A system restart will be required afterwards.

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new Xen
binaries.
```

[1] https://github.com/QubesOS/qubes-secpack/pull/18

- --
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEZQ7rCYX0j3henGH1203TvDlQMDAFAlppLtoACgkQ203TvDlQ
MDDTmRAAw2QrNmo6JHHrgwa7Y2izZXIA84sjE/3HRVA1PkTZTlef3y6xLzCB9nYn
o3TFnIvgHztkqlNFw67DMIGaXj8lf/KotlknnEIl8uuDufJNtjZHQ36KFkOud4KU
H7nlKGSYELzAllydPhQZy6joqCcgCWAscJDwHNq8wDcMjgXVp0/E42jP6tGvYylL
4cCg66sjAoGH3jN8sJZZhaWKfhS7N+CagATUgHPjSy7d3Zh2hTI7JGOofGJ7V92f
YZ2s7CEu8RAxYld8GHzyRasKL8Ri+gLx2uUa+qlKmBVNvRMuoUeysu0oHJmEHMEG
uZsrrCL/ldL1jYyaXhsJt6lrUxgwYC7MuVp5NlnFsKxmw1fN2enjnOUVMV4/ikdI
iGq02cagbtLowO5vctQz4heNFo583xhFRk0ib9BBeb1vx4qfhMrJkZ3qEmWFlq8j
ZFbE13YNnJflOappGmSIXlB1hx4OqeaZS55ORjIbiIKTM/dZ0wRNDO5LxFFv9b1W
rLF4HFP5AIpNF4AxBp5AeYcPee++8Jqtdgb4nBjlWtiYNIFAwg52xLW6DJLErrOy
YCZ2Ujq+XxtXHFd4Ci131TXTCVkGH3+YzzvYAgErxPMh+wDPT1yhaI8YkhQxExMG
+yBiofdwk10b5k0TVUncW6UgNqo+96cGlIJFxiKjQl4h7X9G0+o=
=W9b/
-----END PGP SIGNATURE-----

yre...@riseup.net

Jan 24, 2018, 9:17:20 PM1/24/18
to Andrew David Wong, qubes...@googlegroups.com
1)
I assume this means the latter (security-testing) packages will migrate?

2)
Where would I find the repositories in dom0 for the track I'm currently
using?

3)
after doing the one-time security-testing repo update, how do I check
which Xen package is now installed? And how do I bring up the GUI
update manager when it doesn't actually need to update, since it
doesn't persist?

cc: thelist

awokd

Jan 25, 2018, 4:20:56 AM1/25/18
to yre...@riseup.net, qubes...@googlegroups.com
On Thu, January 25, 2018 2:17 am, yre...@riseup.net wrote:
> On 2018-01-24 15:12, Andrew David Wong wrote:

>>
>> These packages will migrate from the security-testing repository to the
>> current (stable) repository over the next two weeks after being tested
>> by the community.
>
>
> 1)
> The latter (security) packages will migrate, I'd assume this means ?

Yes, this is the standard model for deploying all updates including
security. They appear in testing first for bleeding edge users, then
stable for everyone. Sometimes bugs are found in the testing phase,
causing the package to be pulled, so unless you are comfortable rolling
back packages yourself, you should leave it on stable.

> 2)
> Where would I find the repositories in dom0 for the track I'm currently
> using?

If you haven't changed it manually, you are on stable.

> 3)
> after doing the 1x securitytesting repo update, how do I check which Xen
> package is now installed?

In dom0, "dnf list installed".

> and/or how do I bring up the GUI
> update manager when it doesn't actually need to update it doesn't persist

No GUI, but in dom0 you can force it to check for updates with "sudo
qubes-dom0-update". I might not be following your question here.
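
For example (a sketch; exact package names and versions will vary by
release):

  $ dnf list installed | grep xen   # all installed Xen-related packages
  $ rpm -q xen                      # just the Xen package version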

Vít Šesták

Jan 25, 2018, 4:42:03 AM1/25/18
to qubes-users
There actually is a GUI for checking dom0 updates. In Qubes VM Manager, select dom0 and click the update button in the top toolbar. Or you can also use the context menu.

OTOH, in this case, the main benefit of the GUI is the notifications. The update process itself is usually friendlier from the command line. And you cannot install from security-testing using the GUI.

yre...@riseup.net

Jan 25, 2018, 5:51:08 PM1/25/18
to aw...@danwin1210.me, qubes...@googlegroups.com
Mostly, got it. Just the one item I'm unsure about. @URL:
https://www.qubes-os.org/doc/software-update-dom0/

it mentions:
--
To temporarily enable any of these repos, use the
--enablerepo=<repo-name> option. Example commands:

sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing
sudo qubes-dom0-update --enablerepo=qubes-dom0-security-testing
sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable

To enable or disable any of these repos permanently, change the
corresponding boolean in /etc/yum.repos.d/qubes-dom0.repo.
--


By this, if I ran "sudo qubes-dom0-update
--enablerepo=qubes-dom0-security-testing" once, I take it that I am
still on the stable track "repo"... so somehow I have the
security-testing Xen version (I checked, and I do), but when the
security Xen goes to stable, they will just be integrated. So currently
I have a combination of the one-time security-testing Xen and the rest
is "current" (not testing)?

awokd

Jan 25, 2018, 6:33:18 PM1/25/18
to yre...@riseup.net, aw...@danwin1210.me, qubes...@googlegroups.com
On Thu, January 25, 2018 10:51 pm, yre...@riseup.net wrote:

> By this, if I ran "sudo qubes-dom0-update
> --enablerepo=qubes-dom0-security-testing" once, I take it that I am
> still on the stable track "repo"... so somehow I have the
> security-testing Xen version (I checked, and I do), but when the
> security Xen goes to stable, they will just be integrated. So
> currently I have a combination of the one-time security-testing Xen
> and the rest is "current" (not testing)?

Exactly!

yre...@riseup.net

Jan 25, 2018, 9:40:45 PM1/25/18
to aw...@danwin1210.me, qubes...@googlegroups.com
FWIW, I am noticing "qrexec not connected" in the AppVM triangle in the
GUI Manager on what appears to be a normally operating AppVM, but I
think I saw it on a frozen HVM before rebooting.


Is this of any particular concern, or possibly related to the new
testing Xen packages?

yre...@riseup.net

Jan 25, 2018, 9:43:59 PM1/25/18
to aw...@danwin1210.me, qubes...@googlegroups.com
On 2018-01-25 13:33, awokd wrote:
Sorry, please just disregard; after restarting the AppVM, it
disappears. Guess I don't need to know. :)

Vít Šesták

Feb 1, 2018, 4:31:14 AM2/1/18
to qubes-users
I have installed the patch from security-testing. On system resume, I sometimes notice effects like:

* Time synced noticeably late. For example, when my laptop wakes up in the morning, Thunderbird considers today's e-mails to be e-mails from a future day (so it displays the date, not only the time).
* Some VMs don't get the time synced at all (or only after a huge delay that feels like forever). I've repeatedly seen this in a VM with a background bot.
* The same applies to Wi-Fi. It sometimes still seems not to be attached even after I type the password (which is not short).

I have also seen one strange change (not sure about the timing, but it might be related to the update) that might affect the security of those who use some pseudo-DVM for sys-usb. When I remove a USB „mouse“* and attach it back, the mouse is automatically allowed. Maybe the connection has not been closed. The strange part is that this does not apply to a USB keyboard, although the input proxy works virtually the same.

So, before adding an untrusted device, it is not enough to disconnect USB keyboard/touchpad. I also have to reboot the sys-usb VM.

Regards,
Vít Šesták 'v6ak'

*) I have two USB „mice“; neither of them is an actual traditional mouse. One of them is a touchpad that uses the USB mouse protocol. The other one is a keyboard that is capable of clicking and looks like multiple input devices on the USB protocol.

Vít Šesták

Feb 8, 2018, 3:58:07 AM2/8/18
to qubes-users
On Thursday, February 1, 2018 at 10:31:14 AM UTC+1, Vít Šesták wrote:
> I have also seen one strange change (not sure about the timing, but it might be related to the update) that might affect security of those who use some pseudo-DVM for sys-usb. When I remove USB „mouse“* and attach it back, the mouse is automatically allowed. Maybe the connection has not been closed. The strange part is that this does not apply for USB keyboard, although the input proxy works virtually the same.

Please ignore this part; the issues with the touchpad/mice look like my own fault.

V6

Andrew David Wong

Mar 15, 2018, 8:44:54 PM3/15/18
to qubes-a...@googlegroups.com, qubes...@googlegroups.com, qubes...@googlegroups.com
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Dear Qubes Community,

We have just updated Qubes Security Bulletin (QSB) #37:
Information leaks due to processor speculative execution bugs.

The text of the main changes are reproduced below. For the full
text, please see the complete QSB in the qubes-secpack:

<https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-037-2018.txt>

Learn about the qubes-secpack, including how to obtain, verify, and
read it:

<https://www.qubes-os.org/security/pack/>

View all past QSBs:

<https://www.qubes-os.org/security/bulletins/>

View XSA-254 in the XSA Tracker:

<https://www.qubes-os.org/security/xsa/#254>

```
Changelog
==========

2018-01-11: Original QSB published
2018-01-23: Updated mitigation plan to XPTI; added Xen package versions
2018-03-14: Updated package versions with Spectre SP2 mitigations

[...]

(Proper) patching
==================

## Qubes 4.0

[...]

Additionally, Xen has provided patches to mitigate Spectre variant 2.
While we don't believe this variant is reliably exploitable to obtain
sensitive information from other domains, it is possible to use it to
aid other attacks inside a domain (like escaping a web browser's
sandbox). For this mitigation to be fully effective, updated microcode
is required; refer to your BIOS vendor for updates.

The specific packages that contain the XPTI and Spectre variant 2
patches for Qubes 4.0 are as follows:

- Xen packages, version 4.8.3-3

The packages are to be installed in dom0 via the Qubes VM Manager or via
the qubes-dom0-update command as follows:

For updates from the stable repository (not immediately available):
$ sudo qubes-dom0-update

For updates from the security-testing repository:
$ sudo qubes-dom0-update --enablerepo=qubes-dom0-security-testing

A system restart will be required afterwards.

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new Xen
binaries.

## Qubes 3.2

[...]

Additionally, Xen has provided patches to mitigate Spectre variant 2.
While we don't believe this variant is reliably exploitable to obtain
sensitive information from other domains, it is possible to use it to
aid other attacks inside a domain (like escaping a web browser's
sandbox). For this mitigation to be fully effective, updated microcode
is required; refer to your BIOS vendor for updates.

The specific packages that contain the XPTI and Spectre variant 2
patches for Qubes 3.2 are as follows:

- Xen packages, version 4.6.6-37

[...]

```

This announcement is also available on the Qubes website:
https://www.qubes-os.org/news/2018/03/15/qsb-37-update/

- --
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEZQ7rCYX0j3henGH1203TvDlQMDAFAlqrE18ACgkQ203TvDlQ
MDApFw/8DZdF/WhE31coNPbW7tbroScuADS08kWhWG7QkiYqtzoERCq1TIxNfEiU
KEMlpw28NJc+/NlesPEM/lB9W21eyR9VcIUea9aO98gwX938iTVT2MTMD0lwinnb
Qg+K/jmAW8LMnJ2kDHZ93+GhAuLU9NOUZVdsmnF5tNsmW7NIKDgk7Fx8pGb32u9c
nVL5HVd0SX1QLEanFZ7Jgapstt+6nVfkayCSZEp4gFpzF+drWRdJL/0Z0Qi6EJYr
x29UKFuU+WPqNutxcL88usCwBthOuOgpdh0D+LxnIMaZfjkT002403Vcgqd3DrAw
Jclwh+VOg+e5S4/fA3fFxeRhPrSJuuSvQ2Ik8WUhaE5p10gS6TAoP+fR0z7zBSZ9
7teiZQMORoTWWj02TmoUuf3sL9sEsec6IC+obTKtGr6qU5ntW2RDhMGiQetQO3zU
jyro7p2cGVc8B6SSEZ//bUOpGTujppTAsrK/KAMZQ8Plu/KWOzuCdgIrnFRcoSsW
NPONF8BASlFLUg/hjPbuO0NQwyWYOnejwhaaEcCP4eU9/dudLAvUWb9oTWGevwq5
o29TalXxx7+ZqJXeYt3MECv0pYv/GzeZtX50vaknJjmBYMtoF5l7s8AjiwtgvJep
85j4sMIH/8R/VmqqdpH/HZUkjB7R1/hRpp144mLqvOelvd8OP5Q=
=Z2TQ
-----END PGP SIGNATURE-----

Lorenzo Lamas

Mar 16, 2018, 10:34:05 AM3/16/18
to qubes-users
After updating to Xen 4.6.6-37, with updated BIOS/microcode, I executed Spectre & Meltdown Checker (https://github.com/speed47/spectre-meltdown-checker) in a PV Fedora 26 AppVM (kernel 4.14.18-1).

Hardware support is now detected:
* Hardware support (CPU microcode) for mitigation techniques
* Indirect Branch Restricted Speculation (IBRS)
* SPEC_CTRL MSR is available: YES
* CPU indicates IBRS capability: YES (SPEC_CTRL feature bit)
* Indirect Branch Prediction Barrier (IBPB)
* PRED_CMD MSR is available: YES
* CPU indicates IBPB capability: YES (IBPB_SUPPORT feature bit)
* Single Thread Indirect Branch Predictors (STIBP)
* SPEC_CTRL MSR is available: YES
* CPU indicates STIBP capability: YES

However, the VM kernel does not seem to support the mitigations:

CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
* Mitigated according to the /sys interface: NO (kernel confirms your system is vulnerable)
* Mitigation 1
* Kernel is compiled with IBRS/IBPB support: NO
* Currently enabled features
* IBRS enabled for Kernel space: NO
* IBRS enabled for User space: NO
* IBPB enabled: NO
* Mitigation 2
* Kernel compiled with retpoline option: YES
* Kernel compiled with a retpoline-aware compiler: NO (kernel reports minimal retpoline compilation)
> STATUS: VULNERABLE (Vulnerable: Minimal generic ASM retpoline, IBPB)


Does this mean the kernel compiled by Qubes does not support the mitigations yet, or that this test cannot get proper info from the kernel, since the kernel is provided by dom0 instead of the VM? Or are both true?
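
For reference, the sysfs interface the checker script reads can also be
queried directly inside the VM (a sketch; these files exist only on
kernels that expose the mitigation-reporting interface):

  $ grep -r . /sys/devices/system/cpu/vulnerabilities/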

Yuraeitha

Mar 16, 2018, 3:43:12 PM3/16/18
to qubes-users

I by no means have proper insight into this, but I believe that in this particular case it doesn't matter much if the VM's kernel is not updated against these attacks. I stand to be corrected if I'm wrong about that.

My reasoning is that even though information about the CPU can be seen in the VMs, as long as the lower system levels (CPU/BIOS/Xen) can't be exploited, it won't matter if the AppVM's kernel is exploitable, because an attack can't reach deeper down and will be blocked by the fixes at the lower system levels.

However, as Andrew mentioned above, it might still be possible to use it to some extent in combination with other attacks (hypothetically), so it's not deemed completely secure (yet, at least).

An illustrative example:
- The diggable dirt is the exploitable VMs.
- The fence, and the cemented ground below the dirt inside the fence's area, is the secured VM environment.

A successful attack on a VM would be like digging through the soft dirt inside the VM and breaching the cement in order to get out of the protected area (a prison break). If the ground below the fenced area is cemented, you cannot dig down far enough to escape it. So too for the AppVMs: the soft dirt can be dug, but since you can't dig past the lower-level security (the cemented ground), it won't matter anyway.

The issue, however, is that if some places are not fully cemented, it might be possible to escape. And since no one can see the cement without first digging (not the defenders, not the attackers; essentially no one knows without digging first), it remains unknown whether the area is escapable or not.

The aim of Qubes is to secure the cement and the fence, not the dirt, i.e. no matter what you run in the VMs, the system should stay secure. While it's true that securing the VMs themselves adds extra security, that is not the aim here; you can install more secure VMs yourself if you prefer. I believe, while not knowing, that the Qubes team might focus more on securing the VM's dirt (in above's analogy), but right now, it's all on the fence and cemented ground inside it.

Qubes OS's work, as I perceive it, focuses on securing the environment from the bottom up. So if security inside a VM were required, they would not be meeting their own stated goal: to allow any insecure code to run wild in a VM without it compromising the Qubes OS infrastructure.

I have absolutely no deep insight into any of this; however, this is my perspective. Perhaps it can be of use, or perhaps it can't.

Yuraeitha

unread,
Mar 16, 2018, 3:48:49 PM3/16/18
to qubes-users
On Friday, March 16, 2018 at 3:34:05 PM UTC+1, Lorenzo Lamas wrote:

Important typo fix: I forgot to add 'in the future'.

"I believe, while not knowing, that the Qubes team might focus more on securing the VM's dirt (in above's analogy), but right now, it's all on the fence and cemented ground inside it."

should be:

"I believe, while not knowing, that the Qubes team might in the future focus more on securing the VM's dirt (in above's analogy), but right now, it's all on the fence and cemented ground inside it."
