In R3.2, df in Dom0 would show how much actual disk space remained. That's a critical piece of data for production use, given the sheer amount of breakage caused by running out of space.
I have a 1TB SSD with Qubes 4.0 RC5 and about 450GB of restored VMs, but when I type 'df' in dom0 I get:
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 1995976 0 1995976 0% /dev
tmpfs 2009828 0 2009828 0% /dev/shm
tmpfs 2009828 1612 2008216 1% /run
tmpfs 2009828 0 2009828 0% /sys/fs/cgroup
/dev/mapper/qubes_dom0-root 935037724 3866076 883604596 1% /
tmpfs 2009828 8 2009820 1% /tmp
xenstore 2009828 416 2009412 1% /var/lib/xenstored
/dev/sda1 999320 79676 850832 9% /boot
tmpfs 401964 8 401956 1% /run/user/1000
You'd never know that the disk is actually half full or a little more. I have no idea how to manage my disk space on Qubes 4.0.
Suggestions?
Thanks
BillW
>
> Qubes 4.0 uses LVM thin pools.
> Try using sudo lvs to see the actual data used in the pool.
Ah, okay, thanks. When I do that, I get
[billw@dom0 Desktop]$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
pool00 qubes_dom0 twi-aotz-- 906.96g 52.49 28.32
root qubes_dom0 Vwi-aotz-- 906.96g pool00 2.14
swap qubes_dom0 -wi-ao---- 7.55g
and so forth.
Does that mean that my drive is actually 81% full with only 450 GB of VMs? I sure hope not. That's over 50% overhead!
Cheers
BillW
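For reference, the pool's real consumption can be worked out from those lvs figures: used ≈ LSize × Data% / 100. A minimal sketch using the numbers above as sample input (the parsing assumes lvs's default column order):

```shell
# Compute real data usage of the thin pool from an lvs line.
# Sample line taken from the output above; in dom0 you would use:
#   sudo lvs --noheadings --units g qubes_dom0/pool00
line="pool00 qubes_dom0 twi-aotz-- 906.96g 52.49 28.32"
size=$(echo "$line" | awk '{sub(/g$/, "", $4); print $4}')
pct=$(echo "$line" | awk '{print $5}')
used=$(awk -v s="$size" -v p="$pct" 'BEGIN{printf "%.1f", s*p/100}')
echo "pool00: ${used}G of ${size}G in use"   # about 476G, close to the ~450GB of restored VMs
```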
In addition to using "sudo lvs", I believe this may also be relevant.
quote:
"In all versions of Qubes, you may want to set up a periodic job in dom0 to trim the disk. This can be done with either systemd (weekly only) or cron (daily or weekly)."
...
"Although discards can be issued on every delete inside dom0 by adding the discard mount option to /etc/fstab, this option can hurt performance so the above procedure is recommended instead. However, inside App and Template qubes, the discard mount option is on by default to notify the LVM thin pool driver (R4.0) or sparse file driver (R3.2) that the space is no longer needed and can be zeroed and re-used."
https://www.qubes-os.org/doc/disk-trim/
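As a sketch of the cron option that doc describes, the weekly job is just a tiny script trimming the root filesystem. For illustration this writes it to a temp file; in dom0 you would install it under the standard Fedora cron.weekly path (an assumption about your dom0 layout):

```shell
# Weekly trim job, per the linked doc's cron variant.
# In dom0, install it with:
#   sudo cp "$job" /etc/cron.weekly/fstrim && sudo chmod +x /etc/cron.weekly/fstrim
job=$(mktemp)
printf '#!/bin/sh\nfstrim -v /\n' > "$job"
chmod +x "$job"
```

If your dom0 ships util-linux's timer unit, the systemd alternative is simply "sudo systemctl enable --now fstrim.timer".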
In general, if trimming is not working correctly, either in VMs or in dom0, you may get wrong numbers even when you use the correct commands to list your drive space usage. The high usage being reported may well be because trimming isn't working or isn't enabled.
81% is probably not accurate: Data% and Meta% are percentages of different volumes, so they can't just be added, and the metadata is stored in a separate LV that seems to start out at 16 GB [1].
If you want more precise info on used space, qvm-pool is useful (specifically, qvm-pool -i lvm)
The attached script will calculate free space in the main lvm pool and percentage used, and you can use it with a Xfce Generic Monitor to add its output to your panel.
Also, note that lvs shows the maximum sizes for the LVs assigned to TemplateVMs & AppVMs, not space used.
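A panel script like the attached one might look roughly as follows (a hypothetical sketch, not the actual attachment; it reads lv_size and data_percent for the pool, so it reports real usage rather than LSize):

```shell
# Hypothetical panel script: free space and percent used in pool00.
# In dom0 the two fields would come from:
#   sudo lvs --noheadings --units g -o lv_size,data_percent qubes_dom0/pool00
# Here the figures from the thread stand in as sample input.
set -- 906.96g 52.49
size=${1%g}
pct=$2
free=$(awk -v s="$size" -v p="$pct" 'BEGIN{printf "%.1f", s*(100-p)/100}')
echo "pool00: ${free}G free (${pct}% used)"
```

The echoed line is what an Xfce Generic Monitor item would display.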