Disk space--R4 lies through its teeth


Bill Wether

Mar 19, 2018, 1:34:05 PM3/19/18
to qubes-users
This has been mentioned before in <https://groups.google.com/forum/#!msg/qubes-users/Y1QjsK5fp1A>, but I don't see anywhere that it's fixed.

In R3.2, df in dom0 would show how much actual disk space remained. That's a critical piece of data for production use, given the sheer amount of breakage caused by running out of space.

I have a 1TB SSD with Qubes 4.0 RC5 and about 450GB of restored VMs, but when I type 'df' in dom0 I get:

Filesystem                  1K-blocks    Used Available Use% Mounted on
devtmpfs                      1995976       0   1995976   0% /dev
tmpfs                         2009828       0   2009828   0% /dev/shm
tmpfs                         2009828    1612   2008216   1% /run
tmpfs                         2009828       0   2009828   0% /sys/fs/cgroup
/dev/mapper/qubes_dom0-root 935037724 3866076 883604596   1% /
tmpfs                         2009828       8   2009820   1% /tmp
xenstore                      2009828     416   2009412   1% /var/lib/xenstored
/dev/sda1                      999320   79676    850832   9% /boot
tmpfs                          401964       8    401956   1% /run/user/1000

You'd never know that the disk is actually half full or a little more. I have no idea how to manage my disk space on Qubes 4.0.

Suggestions?

Thanks

BillW

Unman

Mar 19, 2018, 1:55:39 PM3/19/18
to Bill Wether, qubes-users
Qubes 4.0 uses LVM thin pools.
Try using sudo lvs to see the actual data used in the pool.
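The key fields are LSize and Data%: for a thin pool, actual consumption is roughly LSize x Data%. A quick sketch of reading it (the sample line below is made up so the arithmetic can be shown without root; on a default install the pool is qubes_dom0/pool00):

```shell
# In dom0 you would run something like:
#   sudo lvs -o lv_name,lv_size,data_percent qubes_dom0/pool00
# A line of that output looks like "pool00 100.00g 42.00"; the space
# actually consumed is size times Data%:
sample="pool00 100.00g 42.00"
echo "$sample" | awk '{ printf "%s: %.1f of %.2f GiB in use\n", $1, $2 * $3 / 100, $2 }'
```

(awk happily treats "100.00g" as the number 100.00, so the size column can be fed in as-is.)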

Bill Wether

Mar 19, 2018, 2:22:38 PM3/19/18
to qubes-users

> Qubes 4.0 uses LVM thin pools.
> Try using sudo lvs to see the actual data used in the pool.

Ah, okay, thanks. When I do that, I get

[billw@dom0 Desktop]$ sudo lvs
  LV     VG         Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool00 qubes_dom0 twi-aotz-- 906.96g               52.49  28.32
  root   qubes_dom0 Vwi-aotz-- 906.96g pool00        2.14
  swap   qubes_dom0 -wi-ao---- 7.55g

and so forth.

Does that mean that my drive is actually 81% full with only 450 GB of VMs? I sure hope not. That's over 50% overhead!

Cheers

BillW

Yuraeitha

Mar 19, 2018, 2:28:09 PM3/19/18
to qubes-users

In addition to using "sudo lvs", I believe this may also be relevant.

quote:
"In all versions of Qubes, you may want to set up a periodic job in dom0 to trim the disk. This can be done with either systemd (weekly only) or cron (daily or weekly)."
...
"Although discards can be issued on every delete inside dom0 by adding the discard mount option to /etc/fstab, this option can hurt performance so the above procedure is recommended instead. However, inside App and Template qubes, the discard mount option is on by default to notify the LVM thin pool driver (R4.0) or sparse file driver (R3.2) that the space is no longer needed and can be zeroed and re-used."
https://www.qubes-os.org/doc/disk-trim/
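One way to set up the weekly systemd variant in dom0, assuming the fstrim.timer unit shipped with util-linux is present (see the linked doc for the cron alternative):

```shell
# Enable a weekly TRIM pass in dom0 via the util-linux systemd timer:
sudo systemctl enable --now fstrim.timer

# Or run a one-off trim of all mounted filesystems right away:
sudo fstrim -av
```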

In general, if your trimming is not working correctly, either in VMs or in dom0, you may get wrong numbers even if you use the correct commands to list your drive space usage.

The reason so much drive space is reported as used may well be that your trimming isn't working or isn't enabled.

JonHBit

Mar 19, 2018, 3:23:11 PM3/19/18
to qubes-users

81% is probably not accurate, since the metadata is stored in an LV that seems to start out at 16 GB [1].

If you want more precise info on used space, qvm-pool is useful (specifically, qvm-pool -i lvm).

The attached script will calculate free space in the main lvm pool and percentage used, and you can use it with a Xfce Generic Monitor to add its output to your panel.

Also, note that lvs shows the maximum sizes for the LVs assigned to TemplateVMs & AppVMs, not space used.
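To illustrate with the lvs output quoted earlier in the thread: both pool00 and root report LSize 906.96g, but Data% tells the real story (actual use is LSize x Data%; the awk line just does that multiplication):

```shell
# Contrast virtual size vs actual consumption for the two thin LVs above:
printf '%s\n' "pool00 906.96g 52.49" "root 906.96g 2.14" |
    awk '{ printf "%-6s %7.1f GiB of %.2f GiB virtual\n", $1, $2 * $3 / 100, $2 }'
```

That puts the pool at roughly 476 GiB used, i.e. about 52% full, consistent with ~450 GB of restored VMs, not 81%.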

1 - https://github.com/QubesOS/qubes-issues/issues/3240

diskcheck.sh
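For anyone who can't grab the attachment, a minimal script along those lines might look like this (a hypothetical sketch, not the actual attachment; pool/VG names assume a default R4.0 install, and pool_free is just an illustrative helper):

```shell
#!/bin/sh
# Hypothetical sketch of a diskcheck-style script for the Xfce Generic
# Monitor: print the thin pool's free space and percent used.

# $1 = pool size in GiB, $2 = Data% used (both as reported by lvs)
pool_free() {
    awk -v s="$1" -v p="$2" \
        'BEGIN { printf "%.1fG free (%s%% used)\n", s * (100 - p) / 100, p }'
}

# In dom0 you would feed it the real values, e.g.:
#   set -- $(sudo lvs --noheadings --units g --nosuffix \
#                -o lv_size,data_percent qubes_dom0/pool00)
#   pool_free "$1" "$2"

# Demo with the numbers from earlier in the thread:
pool_free 906.96 52.49
```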

Chris Laprise

Mar 20, 2018, 2:39:35 PM3/20/18
to Yuraeitha, qubes-users
This has become a tricky subject because TRIM and discard have different
but overlapping effects.

The disk-trim doc you linked could give the impression that editing
lvm.conf is necessary (at least within the context of this thread). All
that's required to reclaim unused dom0 space is fstrim (such as the
timed examples in the doc) or adding the 'discard' option to / in fstab.
Some prefer the timer, which is slightly safer for dom0 but could lead
to a greater risk of running out of space overall.
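For reference, the fstab variant would be a one-line change along these lines (illustrative sketch; the device path is taken from the df output earlier in the thread, so double-check the rest of the entry against your own /etc/fstab):

```
# /etc/fstab in dom0 -- add 'discard' to the root entry:
/dev/mapper/qubes_dom0-root  /  ext4  defaults,discard  1 1
```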

Until the disk space widget becomes available, you can view the LVM
pool's free space with the command qubesuser posted here:
https://github.com/QubesOS/qubes-issues/issues/3240#issuecomment-340088432

--

Chris Laprise, tas...@posteo.net
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB 4AB3 1DC4 D106 F07F 1886