Where is the volatile.img CoW partition associated with an AppVM Root FS?

qube...@4forl1st5.slmail.me

Jul 13, 2025, 4:06:17 AM
to qubes...@googlegroups.com
I'm very much new to Qubes and trying to ease my way into it,
albeit possibly hindered by some previous exposure to non-Qubes
Xen environments.


I'd like to ask a question about the way in which an AppVM's
Copy-on-Write partition, from within the "volatile.img" VBD,
is used.


From reading the Template Implementation page, I note


Block devices of a VM

Every VM has 4 block devices connected:

* xvda – base root device (/) – details described below
* xvdb – private.img – the place where the VM can always write.
* xvdc – volatile.img, discarded at each VM restart – swap and
temporary “/” modifications are placed here (see below)
* xvdd – modules.img – kernel modules and firmware


and then, below,


Snapshot device in Dom0

This device consists of:

* root.img – real template filesystem
* root-cow.img – differences between the device as seen by AppVM
and the current root.img

The above is achieved through creating device-mapper snapshots for each
version of root.img. When an AppVM is started, a xen hotplug script
(/etc/xen/scripts/block-snapshot) reads the inode numbers of root.img and
root-cow.img; these numbers are used as the snapshot device’s name. When a
device with the same name already exists, the new AppVM will use it – therefore,
AppVMs based on the same version of root.img will use the same device. Of
course, the device-mapper cannot use the files directly – it must be
connected through /dev/loop*. The same mechanism detects if there is a
loop device associated with a file determined by the device and inode
numbers – or if creating a new loop device is necessary.
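
As a sketch, the manual equivalent of what that hotplug script sets up
would be something like the following (all paths, names, and inode
numbers below are illustrative, not taken from a real system):

    # attach the template's root image and its CoW file to loop devices
    # (the script reuses an existing loop device if one is already bound
    # to the same file, identified by device and inode numbers)
    losetup /dev/loop0 /var/lib/qubes/vm-templates/fedora/root.img
    losetup /dev/loop1 /var/lib/qubes/vm-templates/fedora/root-cow.img

    # create a non-persistent ("N") snapshot device named after the two
    # inode numbers; AppVMs on the same root.img version share it
    dmsetup create snapshot-123-456 --table \
        "0 $(blockdev --getsz /dev/loop0) snapshot /dev/loop0 /dev/loop1 N 16"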

Then, from inspection of the block devices within a VM, I can see

xvda

Number  Start    End      Size     File system  Name                 Flags
        34s      2047s    2014s    Free Space
 1      1.00MiB  201MiB   200MiB                EFI System           boot, esp
 2      201MiB   203MiB   2.00MiB               BIOS boot partition  bios_grub
 3      0.02GiB  20.0GiB  19.8GiB  ext4         Root filesystem
        20.0GiB  20.0GiB  2015s    Free Space

xvdc

Number  Start    End      Size     Type     File system     Flags
        63s      2047s    1985s    Free Space
 1      0.00GiB  1.00GiB  1.00GiB  primary  linux-swap(v1)
 3      1.00GiB  10.0GiB  9.00GiB  primary
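
For reference, listings like these can be reproduced from inside the
VM, assuming parted is installed there:

    sudo parted /dev/xvda print free
    sudo parted /dev/xvdc print free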


but what I can't seem to work out is where the Copy-on-Write partition
(xvdc3, as I think of it) is being "associated" with the VM's "Root
filesystem" (xvda3), nor where the loop devices required for it all
to hang together are created.

The reference to the

"xen hotplug script (/etc/xen/scripts/block-snapshot)"

has me thinking that the "association" is happening in the Dom0,
but I can't seem to see the "various parts" when taking a look
around the Dom0 or AppVM after invoking an "Xfce Terminal" from
the personal qube.

I do note though, that inside the VM, a 'df' shows the root device
being presented as

/dev/mapper/dmroot

and not

/dev/xvda3

which then has me thinking that the "association" might be
taking place within the AppVM, but again, I can't see any
obvious evidence for that.


I feel that I should be able to see the "various parts" but,
when looking around, am clearly missing them.


Could someone point me to a document, or previous answer, that
makes things clearer, and/or to what I might have missed in
looking around inside the Dom0 and AppVM?



Marek Marczykowski-Górecki

Jul 13, 2025, 7:10:03 AM
to qube...@4forl1st5.slmail.me, qubes...@googlegroups.com

The above documentation is a bit outdated - with LVM thin provisioning
the CoW layer on the root volume is done in dom0, so the VM gets a
read-write snapshot as xvda and doesn't need to do CoW on its own. So,
the volatile volume is used only for swap.

If you want this CoW layer to be done in the VM, that is still a
supported option; you can select it by setting the root volume to
read-only (qvm-volume config VMNAME:root rw false). But it will be a
bit slower.
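
For example (the VM name "personal" and volume group "qubes_dom0" here
are just the common defaults; exact LV names may differ):

    # dom0: the thin snapshot backing a running AppVM's root shows up
    # in lvs, typically as something like vm-personal-root-snap
    sudo lvs qubes_dom0

    # switch a VM to the in-VM CoW scheme: root becomes read-only and
    # the VM does its own CoW using the volatile volume (xvdc3)
    qvm-volume config personal:root rw false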

> The reference to the
>
> "xen hotplug script (/etc/xen/scripts/block-snapshot)"
>
> has me thinking that the "association" is happening in the Dom0,
> but I can't seem to see the "various parts" when taking a look
> around the Dom0 or AppVM after invoking an "Xfce Terminal" from
> the personal qube.
>
> I do note though, that inside the VM, a 'df' shows the root device
> being presented as
>
> /dev/mapper/dmroot
>
> and not
>
> /dev/xvda3
>
> which then has me thinking that the "association" might be
> taking place within the AppVM, but again, I can't see any
> obvious evidence for that.

Generally the VM's initramfs takes care of assembling /dev/mapper/dmroot.
But if you look closely, /dev/mapper/dmroot is simply a symlink to
/dev/xvda3.
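
For example, inside the VM (output trimmed; the exact symlink target
notation may differ):

    $ ls -l /dev/mapper/dmroot
    lrwxrwxrwx 1 root root 8 ... /dev/mapper/dmroot -> ../xvda3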

> I feel that I should be able to see the "various parts" but,
> when looking around, am clearly missing them.
>
>
> Could someone point me to a document, or previous answer, that
> makes things clearer, and/or to what I might have missed in
> looking around inside the Dom0 and AppVM?

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

qube...@4forl1st5.slmail.me

Jul 14, 2025, 6:26:18 AM
to qubes...@googlegroups.com, Marek Marczykowski-Górecki
On Sunday, July 13th, 2025 at 11:10, Marek Marczykowski-Górecki <marm...@invisiblethingslab.com> wrote:
>
> The above documentation is a bit outdated - ...

Ah-ha: thanks for pointing that out.


> ... with LVM thin provisioning
> the CoW layer on the root volume is done in dom0, so the VM gets a
> read-write snapshot as xvda and doesn't need to do CoW on its own.

Yeah: so I did dump and take a look in the virsh files, and saw the
read-write config there, but clearly didn't appreciate the implications,
especially as I was looking for something read-only.

I just assumed I had missed something.


> So, the volatile volume is used only for swap.

To just clarify that last bit though:

the 9G partition in the volatile VBD doesn't even play a part in the
in-Dom0 CoW layering: it's just 9G of unused space in a 10G volume
that will get created for every VM instance?

Asking, as I have an old laptop on which I haven't been able to get
Qubes to install, but I was hoping to still replicate most of the Qubes
compartmentalisation for the VMs while running a vanilla Xen.



> Generally the VM's initramfs takes care of assembling /dev/mapper/dmroot.
> But if you look closely, /dev/mapper/dmroot is simply a symlink to
> /dev/xvda3.

That may well have been a case of not seeing the wood for the trees,
because I thought I was in the middle of a dmsetup forest!


Feedback much appreciated: I think I can get to where I want to be now.


Marek Marczykowski-Górecki

Jul 14, 2025, 6:35:26 AM
to qube...@4forl1st5.slmail.me, qubes...@googlegroups.com

On Mon, Jul 14, 2025 at 10:26:00AM +0000, qube...@4forl1st5.slmail.me wrote:
> On Sunday, July 13th, 2025 at 11:10, Marek Marczykowski-Górecki <marm...@invisiblethingslab.com> wrote:
> >
> > The above documentation is a bit outdated - ...
>
> Ah-ha: thanks for pointing that out.
>
>
> > ... with LVM thin provisioning
> > the CoW layer on the root volume is done in dom0, so the VM gets a
> > read-write snapshot as xvda and doesn't need to do CoW on its own.
>
> Yeah: so I did dump and take a look in the virsh files, and saw the
> read-write config there, but clearly didn't appreciate the implications,
> especially as I was looking for something read-only.
>
> I just assumed I had missed something.
>
>
> > So, the volatile volume is used only for swap.
>
> To just clarify that last bit though:
>
> the 9G partition in the volatile VBD doesn't even play a part in the
> in-Dom0 CoW layering: it's just 9G of unused space in a 10G volume
> that will get created for every VM instance?

Yes, when the CoW layer is done in dom0, that 9G partition is unused.
Thanks to thin provisioning/sparse files it doesn't occupy disk space
either, so it's harmless.

Some users use it for a larger temp directory or more swap.
In fact, it's specifically created as xvdc3 (instead of xvdc2) when it's
unused, to make it easy to detect that it's safe to use.
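
For instance, a sketch of claiming it from inside the VM (the mount
point is made up; since the volatile volume is discarded on every
restart, this has to be redone each boot, e.g. from /rw/config/rc.local):

    # xvdc3 only exists when the CoW layer is done in dom0
    if [ -b /dev/xvdc3 ]; then
        mkfs.ext4 -q /dev/xvdc3        # recreated fresh every boot
        mkdir -p /mnt/scratch
        mount /dev/xvdc3 /mnt/scratch
    fi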

> Asking, as I have an old laptop on which I haven't been able to get
> Qubes to install, but I was hoping to still replicate most of the Qubes
> compartmentalisation for the VMs while running a vanilla Xen.
>
>
>
> > Generally the VM's initramfs takes care of assembling /dev/mapper/dmroot.
> > But if you look closely, /dev/mapper/dmroot is simply a symlink to
> > /dev/xvda3.
>
> That may well have been a case of not seeing the wood for the trees,
> because I thought I was in the middle of a dmsetup forest!

Yeah, it is a bit confusing. In fact, earlier there was device-mapper
involved even in this case, just dm-linear to map it 1:1 to xvda3. But
later we simplified it to a symlink (one layer less) while keeping the
name, so it's always /dev/mapper/dmroot regardless of the configuration
(and configs like /etc/fstab stay the same).
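
Roughly, the two schemes compare like this (a sketch of what the VM's
initramfs would do; names and sizes abbreviated):

    # old: a trivial dm-linear table mapping xvda3 1:1 to dmroot
    dmsetup create dmroot --table \
        "0 $(blockdev --getsz /dev/xvda3) linear /dev/xvda3 0"

    # new: no device-mapper at all, just a symlink with the same name
    ln -s ../xvda3 /dev/mapper/dmroot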

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab