-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
On Thu, Feb 05, 2026 at 04:33:36AM +0000, 'Anderson Rosenberg' via qubes-devel wrote:
> Dear Qubes OS Users and Developer Team,
Hello!
> I am the developer of the
> https://codeberg.org/andersonarc/reliant-system project, which provides support for plausible deniability in Qubes OS via volatile dom0 and
> https://codeberg.org/shufflecake/shufflecake-c encryption system. As of now, it relies on separating qubes.xml into 'shards' located in each deniable volume and merging them during the initramfs stage, as well as applying patches to the core system in order to maintain a RAM-based dom0. Both of these methods are unreliable and that is not something I believe should be kept in a security-focused project. Given that both
> https://github.com/QubesOS/qubes-issues/issues/2402 and
> https://github.com/QubesOS/qubes-issues/issues/4982 have been explicitly requested by users, as well as
> https://forum.qubes-os.org/t/reliant-deniable-encryption-for-qubes-os , I would like to propose a set of changes to the
> https://github.com/QubesOS/qubes-core-admin codebase that would make this possible without relying on unstable workarounds.
This is quite a complicated topic, as you already know. In general, the
proposed features mostly align with some other planned features.
In particular, it would be useful for several other reasons to store
some of the qubes on separate partitions or even external disks - and
make them available only when such a partition/disk is visible. Not
only for plausible deniability reasons.
A stretch goal could be an architecture allowing such qubes (on an
external disk) to be connected to different Qubes OS installations and
just work (imagine having a home PC and a work PC, and being able to
easily take some of the qubes with you and connect them to the other).
Ideally, such a mechanism should treat external qubes as untrusted (in
other words: prevent the "home PC" from compromising the "work PC" via
some malicious XML modification, for example). But I'm not very
optimistic about the feasibility of this scenario (see below)...
But also, due to the complexity involved, I would advise some patience
in having these features properly implemented.
To the collection of your related tickets, I'd also add this one:
https://github.com/QubesOS/qubes-issues/issues/3820
and also kinda related:
https://github.com/QubesOS/qubes-issues/issues/1293
> 1. Support for split qubes.xml.
>
>
>
> We need a robust way to dynamically pick up qubes from the following locations,
>
> /var/lib/qubes/qubes.xml for core domains, templates and sys-*,
>
> /run/shufflecake/sflc_X_X/qubes.xml for deniable domains.
>
>
> Currently this is handled by
> https://codeberg.org/andersonarc/reliant-system/src/branch/master/tools/surgeon-dissect and
> https://codeberg.org/andersonarc/reliant-system/src/branch/master/tools/surgeon-suture . The locations are hardcoded into the script. My proposal is as follows,
>
> Designate the qubes.xml in /var/lib/qubes as authoritative,
>
> Establish a non-authoritative XML format as a set of <domain> entries,
>
> Allow the original qubes.xml or other configuration file deemed appropriate to link to an additional root folder,
>
> Recursively load additional domains.xml files within the linked folder.
>
>
> We could modify
> https://github.com/QubesOS/qubes-core-admin/blob/main/qubes/app.py in qubesd to support sharding on both save and load. Otherwise, there could be an official tool to merge and split qubes.xml which respects any changes to the format. However, I believe the former would be both easier to implement and more robust, despite requiring changes to a core system component.
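The merge step of the proposal above can be sketched in a few lines. This
is a rough illustration only: it assumes a simplified qubes.xml layout (a
top-level element with a <domains> child holding one <domain> per qube)
and ignores everything else qubesd does on load; function and attribute
names here are made up for the example.

```python
import xml.etree.ElementTree as ET

def merge_shards(main_xml, shard_xmls):
    """Merge <domain> entries from non-authoritative shard files into
    the authoritative qubes.xml tree; returns the merged XML string.

    Assumption: qubes.xml has a <domains> child with one <domain>
    element per qube, each carrying a unique "id" attribute.
    """
    root = ET.fromstring(main_xml)
    domains = root.find('domains')
    if domains is None:
        raise ValueError('no <domains> element in main qubes.xml')
    seen = {d.get('id') for d in domains.findall('domain')}
    for shard in shard_xmls:
        # A shard is just a bag of <domain> entries (proposal point 2).
        for dom in ET.fromstring(shard).findall('domain'):
            if dom.get('id') in seen:
                raise ValueError('QID conflict: %s' % dom.get('id'))
            domains.append(dom)
            seen.add(dom.get('id'))
    return ET.tostring(root, encoding='unicode')
```

Doing this inside app.py's load path (rather than as an external
merge/split tool) would automatically keep up with format changes, which
matches the "former is more robust" argument.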
This one is, I think, the most complex of the changes proposed here.
This is because of dependencies between qubes - they must not fail even
if some of the qubes in a separate qubes.xml (let's call them "external
qubes" for now) are not available at the time. For example, imagine the
following situations:
1. You have an external qube based on the debian-12-xfce template. You
start your system with external qubes not visible, and then remove the
debian-12-xfce template (for example: install debian-13-xfce, migrate
all visible qubes to it, and then remove debian-12-xfce). What should
happen the next time you boot with external qubes visible? The same
applies to other types of dependencies (netvm, default_dispvm, audiovm,
guivm, etc.).
2. You create a template as an external qube, or a VPN qube as an
external qube, and then make one of the standard qubes (non-external)
use it (as a template or netvm respectively). What should happen the
next time you boot with external qubes not visible?
The second situation might be "solved" by simply forbidding it - that
would limit the usefulness of the feature (especially forbidding an
external VPN qube), but it wouldn't be too bad. And for the plausible
deniability case, it wouldn't be a problem at all - you don't want
standard qubes to have any trace of any external qubes. Note it might
still be okay to reference one external qube from another external qube
(as long as they are part of the same external storage/qubes.xml
"shard").
But the first case is more problematic. You don't want to save any
information about external qubes in the main qubes.xml (that would be
counter to plausible deniability), but without such info, you cannot
prevent breaking dependencies. So, for this feature to work, it would
require some recovery mechanism, I think. I have two ideas:
1. Allow loading such a broken qube, but prevent starting it until the
user fixes the dependencies (in the above example, by changing the
template to debian-13-xfce). The problem with this approach is that we
generally assume no broken objects all over the code - for example, if
you have vm.template, then you can access vm.template.netvm. When
allowing broken objects to be loaded, that would no longer be the case,
and it would require carefully reviewing all the places where
dependencies are accessed and choosing a fallback in each of them. This
also applies to presenting such a situation in CLI and GUI tools. TBH,
I'm not optimistic about the feasibility of such a change.
2. Automatically change such broken dependencies to some other value
(the obvious choice would be the default one - based on global
properties). While I think in most situations this would just work,
there are surely some corner cases. For example, resetting the netvm
from, say, sys-whonix to sys-firewall (just because you renamed
sys-whonix) might be very undesirable. A workaround might be preventing
such a "recovered" external qube from starting until the user reviews
the change (this could use the 'prohibit-start' feature added in R4.3),
but it would still be clunky and prone to errors IMHO...
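To make idea 2 concrete, the recovery pass could look something like the
sketch below. All names and defaults here are illustrative - the real
core-admin property machinery is much richer - but it shows the shape:
reset dangling references to the global default and flag the qube for
review (prohibit-start) when anything changed.

```python
# Illustrative stand-in for the global default properties.
DEFAULTS = {'template': 'debian-13-xfce', 'netvm': 'sys-firewall'}

def recover_dependencies(qube_props, available, defaults=DEFAULTS):
    """Reset references to qubes that are not currently loadable.

    Returns (fixed_props, needs_review): needs_review is True when any
    dependency was rewritten, signalling that the qube should be held
    back (e.g. via prohibit-start) until the user confirms the change.
    """
    fixed = dict(qube_props)
    changed = []
    for prop, target in qube_props.items():
        if target is not None and target not in available:
            fixed[prop] = defaults.get(prop)
            changed.append(prop)
    return fixed, bool(changed)
```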
> 2. Randomized qube IDs.
>
>
>
> Sequential qube IDs will leak information about the presence of qubes in 'gaps' when the system is booted under duress. Current workaround is to programmatically change them in the XML files. A better solution would be to modify
> https://github.com/QubesOS/qubes-core-admin/blob/main/qubes/app.py to provide randomized QIDs from a CSPRNG following a collision check with existing identifiers. In order to avoid confusing existing users, this feature should be highly optional. Existing qube IDs can be left as-is.
I don't think randomized QIDs are a good idea.
The QID range is just 16 bits, which is IMO too little to avoid
conflicts by merely hoping the CSPRNG will not hit one. Note that you
still need to be able to create qubes while external qubes are not
visible.
I see two alternative options:
1. Use dedicated QID ranges per qubes.xml "shard". You still need to
avoid conflicts between those ranges, but requiring all shards to be
visible when allocating a new range is IMHO an acceptable limitation.
2. Don't store QIDs for external qubes at all - allocate them at load
time (possibly from a single range dedicated to all external qubes).
The QID is used mostly at runtime (in various structures), while
on-disk metadata operates on names (most cases) and UUIDs (very few
cases). The only downside I can think of is dynamic IP allocation
(internal IP addresses are built based on the QID) - this would break
some custom firewall rules. But if you need a static IP address, you
can simply set the "ip" property to a static value (and avoid IP
conflicts on your own...).
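Option 2 could be sketched as follows. The range boundary and names are
arbitrary illustrations (nothing here reflects actual core-admin code);
the real allocator would live in app.py's load path and hand out QIDs
only for the lifetime of the session.

```python
# 16-bit QID space; reserve an (illustrative) upper half for external
# qubes whose QIDs exist only at runtime and are never written to disk.
EXTERNAL_RANGE = range(0x8000, 0x10000)

def allocate_external_qids(external_names, used_qids):
    """Assign runtime-only QIDs to external qubes at load time,
    skipping QIDs already taken by qubes in the authoritative
    qubes.xml."""
    used = set(used_qids)
    alloc = {}
    candidates = iter(EXTERNAL_RANGE)
    for name in sorted(external_names):  # deterministic order
        for qid in candidates:
            if qid not in used:
                used.add(qid)
                alloc[name] = qid
                break
        else:
            raise RuntimeError('external QID range exhausted')
    return alloc
```

Note that since internal IP addresses are derived from the QID, such
runtime allocation makes external qubes' IPs unstable across boots,
matching the firewall-rule caveat above.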
But this also brings up another problem: how to avoid qube name
conflicts? What should happen if you create a qube with the same name
as one of the external ones (while external qubes are not visible)?
> 3. create-snapshot must respect rw="False".
>
>
>
> Otherwise, create-snapshot fails for large images (>~2 GB) such as templates under volatile dom0. For images tagged as rw="False",
> https://github.com/QubesOS/qubes-core-admin/blob/main/linux/system-config/create-snapshot should invoke losetup with the --readonly flag. The condition is to be identified within
> https://github.com/QubesOS/qubes-core-admin/blob/main/qubes/storage/file.py and passed to create-snapshot as a commandline argument. Since readonly images must not be modified, this should not contain any breaking changes and can be seamlessly implemented without opt-in flags.
While I'm not strictly against such a change, there are two things you
need to be aware of:
1. Such a block device (with rw=False) is connected as a read-only
block device to the VM anyway (see templates/libvirt/xen.xml). So,
setting the loop device read-only is not strictly necessary.
2. The "file" storage driver should be avoided, and will eventually be
removed (in Qubes OS 5.0, whenever that happens). Its use of
dm-snapshot is very inefficient (especially when reverting to an
earlier snapshot), and it is incompatible with many features (like
making a backup while the qube is running).
If you need plain files, I'd recommend using the file-reflink driver on
a CoW-enabled filesystem (like xfs or btrfs).
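For completeness, if the rw flag were nonetheless propagated down to
create-snapshot, the losetup invocation could be built as below. This is
only a sketch: the function names are made up, the real change would
span both qubes/storage/file.py and the shell script, and losetup's
read-only switch is spelled --read-only (or -r).

```python
import subprocess

def build_losetup_cmd(image_path, read_only):
    """Build the losetup command create-snapshot could run; the
    read_only flag would have to arrive from the storage driver as a
    command-line argument."""
    cmd = ['losetup', '--find', '--show']
    if read_only:
        cmd.append('--read-only')  # same as -r
    cmd.append(image_path)
    return cmd

def setup_snapshot_loop(image_path, read_only=False):
    """Attach the image and return the allocated /dev/loopN path
    (requires root)."""
    return subprocess.check_output(
        build_losetup_cmd(image_path, read_only), text=True).strip()
```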
> 4. Copying between qubes in deniable volumes must be forbidden.
>
>
>
> Suppose you copy any file from a hidden qube 'secret' to a public qube 'work'. This will clearly reveal the source of the copied files as ~/QubesIncoming/secret, and the worst part is that even when you delete this directory, there will be forensically visible traces in the filesystem journal. Therefore, this must be strictly forbidden. I believe this can be handled via a tagging policy in Qubes RPC, but I will need to look deeper into that. The main point is that there must be a policy filter which can be applied based on storage pools (same/different) or an equivalent measure. An alternative solution could be to optionally obfuscate the source qube name in Qubes RPC for the target qube.
Indeed, a policy is the way to go. And you can quite easily make an
extension that adds tags based on the storage pool. See, for example,
the addon that adds tags based on a template:
https://github.com/QubesOS/qubes-core-admin-addon-kicksecure/blob/main/qubeskicksecure/__init__.py
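The pool-based variant could reduce to logic like this. The pool-name
prefix and tag names are pure assumptions for the example; the real
extension would subclass qubes.ext.Extension and add the tags from a
domain-load event handler, as the linked addon does for templates.

```python
def pool_tags(volume_pools):
    """Derive policy tags from the storage pools backing a qube's
    volumes ("deniable-" prefix is an illustrative convention)."""
    tags = set()
    for pool in volume_pools:
        if pool.startswith('deniable-'):
            tags.add('deniable-storage')
        else:
            tags.add('public-storage')
    return tags
```

A qrexec policy rule could then match on these tags - something like
`qubes.Filecopy * @tag:deniable-storage @tag:public-storage deny`
(exact syntax to be checked against the current policy format) - which
would block exactly the hidden-to-public copies described above.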
- --
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
-----BEGIN PGP SIGNATURE-----
iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmmEpCAACgkQ24/THMrX
1yySFgf+N1XwZ5IkJ5M2AVa3zki/c+CgJwRorTeFk1ntQppQzbj7poOBlH+zC13z
INkXdOiX2HqjSjnAJYoyB2/ocBwGyOpt0I8J19El2PFYojEEkQvl6X8vpk1Ci+M/
0KagqU7e9Lus4zl4A4Pf6s2j7YHF2E9gusyYKKN7nktDUTeecptiIq+AelSrxMgi
cs4+E8axHkuf59n9joFHNIJ2wmSwx2U33I7DpP27KY4CU23JNp7Jzq5sItPOgvTT
X8cHf5FsGX4VIdHPufECEClo+KdZ8VgX/vXUp9FhvdF1ZMrZignUqCu7xcJQgn4w
VSu4TVpm5oy6eYE8/c96dnZ1cyt+bQ==
=WkFQ
-----END PGP SIGNATURE-----