Disk usage warning


Franz

Aug 12, 2019, 12:05:18 PM8/12/19
to qubes...@googlegroups.com
In the upper right corner of the screen, a black alert message appears:

Disk usage warning!
You are running out of disk space. 4.8% space left in pool lvm.

Sometimes the message is slightly different, referring to "rw" instead.

If I run df in dom0, every item shows usage below 24%.

So what should I do with that alert?

IMHO these alerts are too cryptic
Best. 

Chris Laprise

Aug 12, 2019, 12:22:23 PM8/12/19
to Franz, qubes...@googlegroups.com
df is completely inaccurate when Qubes 4 is using the default (lvm) storage.

The best overall indicator to use is the disk space widget in the systray.

The best way to view individual VM disk usage is from Qube Manager.

To graphically examine disk usage within a VM, you can use the gnome
"Disk usage" app. The shell executable name for this is "baobab".
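A rough command-line equivalent of the two suggestions above, as a sketch only: the --fields spelling and the "work" VM name are assumptions to verify against qvm-ls --help and your own setup.

```shell
# From a dom0 terminal: list per-VM disk usage (roughly what Qube
# Manager's Disk Usage column shows), then open baobab in a chosen VM.
inspect_vm_disk() {
    qvm-ls --fields NAME,DISK      # per-VM disk usage summary
    qvm-run "$1" baobab            # GNOME "Disk usage" app inside the VM
}

# Example (in dom0): inspect_vm_disk work
```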

--

Chris Laprise, tas...@posteo.net
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB 4AB3 1DC4 D106 F07F 1886

Franz

Aug 12, 2019, 7:41:47 PM8/12/19
to Chris Laprise, qubes-users
@Chris

On Mon, Aug 12, 2019 at 1:22 PM Chris Laprise <tas...@posteo.net> wrote:
On 8/12/19 12:03 PM, Franz wrote:
> On the upper right corner of the screen a black message alert:
>
> Disk usage warning!
> You are running out of disk space. 4.8% space left in pool lvm.
>
> Sometimes the message is a bit different referring to rw instead
>
> If I run df in dom0 all items usage Is below 24%
>
> So what should I do with that alert?
>
> IMHO these alerts are too cryptic
> Best.

Df is completely inaccurate when Qubes 4 is using the default (lvm).

The best overall indicator to use is the disk space widget in the systray.

Thanks. I understand you are referring to a new plugin to add to the panel if you are using the default Xfce. There is one called "Free Space Checker".

Once installed, it says 416 GB are free, so there seems to be no reason for the alert here.


The best way to view individual VM disk usage is from Qube Manager.

Yes, this shows disk usage for each VM, but there is no way to tell which VM may have problems.

In other words, an alert that does not tell which VM is involved has a sort of terror effect. You understand that something could break at any moment, but have no way to tell where the problem is.


To graphically examine disk usage within a VM, you can use the gnome
"Disk usage" app. The shell executable name for this is "baobab".


This can be done if you already know which VM has problems. But starting tens of VMs, fishing for the one with problems, seems too much. Also, I suspect that those black alert messages in the upper right corner of the screen relate to dom0 rather than to specific VMs.

So from my viewpoint these alerts have no practical use, other than reminding you that computers are fragile things and that it may be time to do a backup.
Best

brenda...@gmail.com

Aug 12, 2019, 8:55:24 PM8/12/19
to qubes-users
On Monday, August 12, 2019 at 7:41:47 PM UTC-4, Francesco wrote:
@Chris
On Mon, Aug 12, 2019 at 1:22 PM Chris Laprise <tas...@posteo.net> wrote:
On 8/12/19 12:03 PM, Franz wrote:
> On the upper right corner of the screen a black message alert:
>
> Disk usage warning!
> You are running out of disk space. 4.8% space left in pool lvm.
> So what should I do with that alert?

Df is completely inaccurate when Qubes 4 is using the default (lvm).

The best overall indicator to use is the disk space widget in the systray.

Thanks. I understand you are referring to a new plugin to add to the panel if you are using the default Xfce. There is one called "Free Space Checker".

No, there should already be one running in the upper right called "Qubes Disk Space Monitor".
 
The best way to view individual VM disk usage is from Qube Manager.

Yes, this shows disk usage for each VM, but there is no way to tell which VM may have problems.

In other words, an alert that does not tell which VM is involved has a sort of terror effect. You understand that something could break at any moment, but have no way to tell where the problem is.

My general recommendation is:

1. Use the following to show the data usage in your thin pool (it should give the same percentage as the "Qubes Disk Space Monitor"):
        sudo lvs qubes_dom0/pool00

2. In Qube Manager, sort your VMs by size from largest to smallest, then start each one up, execute the following inside it, and repeat as necessary, working from largest to smallest:
    sudo fstrim -av ; sudo shutdown -h now

3. Periodically check the pool usage via the "Qubes Disk Space Monitor" or the command in #1 above. If my hunch is correct, step #2 may recover some space.

If not, then start deleting VMs you don't need.

Also: dom0 usage, as well as the combined usage of all domU VMs, is allocated from the same shared thin pool (pool00) in a default setup.
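Steps 1-2 above could be wrapped up as a small dom0 shell function. This is only a sketch: the VM names in the example are placeholders, and it assumes the default qubes_dom0/pool00 pool and Qubes 4.x qvm-* tools.

```shell
# Sketch: fstrim a list of VMs, largest first, then re-check the pool.
# Not run here -- intended for a dom0 terminal on Qubes 4.x.
trim_vms() {
    for vm in "$@"; do
        qvm-start "$vm" || continue
        # run fstrim as root inside the VM (-p passes through the output)
        qvm-run -u root -p "$vm" 'fstrim -av'
        qvm-shutdown --wait "$vm"
    done
    # Data% here should match the "Qubes Disk Space Monitor" percentage
    sudo lvs qubes_dom0/pool00
}

# Example (in dom0): trim_vms work personal untrusted
```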

Brendan

Chris Laprise

Aug 12, 2019, 9:28:24 PM8/12/19
to Franz, qubes-users
On 8/12/19 7:41 PM, Franz wrote:
> @Chris
>
> On Mon, Aug 12, 2019 at 1:22 PM Chris Laprise <tas...@posteo.net
> <mailto:tas...@posteo.net>> wrote:
>
> On 8/12/19 12:03 PM, Franz wrote:
> > On the upper right corner of the screen a black message alert:
> >
> > Disk usage warning!
> > You are running out of disk space. 4.8% space left in pool lvm.
> >
> > Sometimes the message is a bit different referring to rw instead
> >
> > If I run df in dom0 all items usage Is below 24%
> >
> > So what should I do with that alert?
> >
> > IMHO these alerts are too cryptic
> > Best.
>
> Df is completely inaccurate when Qubes 4 is using the default (lvm).
>
> The best overall indicator to use is the disk space widget in the
> systray.
>
>
> Thanks, I understand you are referring to a new plugin to add to the
> panel. if you are using default Xfce. There is one called "Free Space
> Checker" .
>
> Once installed it tells 416GB are free. So no reason for the alert here

This doesn't sound like the same tool. The Qubes disk space widget says
'qui-disk-space' when you mouse over it, and when you click it, it shows
total disk usage along with a line for 'lvm' near the bottom. If it's not
running in your systray already, you can start it manually from the
shell by typing 'qui-disk-space'.

>
> The best way to view individual VM disk usage is from Qube Manager.
>
>
> Yes this shows disk usage for each VM , but no way to understand which
> VM may have problems.

Because Qubes is using over-provisioning, none of the individual VMs may
have a problem per se, even though you are running out of space in the
overall lvm storage.

If you add up the numbers shown in Qube Manager, you can get an idea of
how each VM is contributing to the low space warning.

Also, doing an 'lvs' in the shell will show meaningful stats if you know
what to look for (IMO, using the space widget and QM is easier).
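What to look for in 'lvs' is mainly the Data% column for the thin pool. A minimal sketch of reading it from a script, where the qubes_dom0/pool00 pool name and the 90% threshold are assumptions for illustration:

```shell
# Hypothetical helper: classify thin-pool usage from lvs output.
# On a real Qubes 4 system you would feed it:
#   sudo lvs --noheadings -o data_percent qubes_dom0/pool00
check_pool() {
    # $1 = data_percent as printed by lvs, e.g. "  71.23"
    pct=$(echo "$1" | tr -d ' ')
    # compare the integer part against a 90% warning threshold
    if [ "${pct%.*}" -ge 90 ]; then
        echo "WARNING: pool ${pct}% full"
    else
        echo "pool ${pct}% full, OK"
    fi
}

check_pool "  71.23"   # prints: pool 71.23% full, OK
check_pool "  95.20"   # prints: WARNING: pool 95.20% full
```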

>
> In other words, an alert that does not tell which VM is involved have a
> sort of terror effect. You understand that something can break every
> moment, but no way to understand  where the problem is.

I doubt it's designed to say which of the VMs has most recently expanded,
if that's what you mean by "where the problem is". The basic problem is
being aware of your total space and how much usage each VM _should_
have. It may be that you forgot about disk space and unexpectedly ran
low, so there is a decision to make: whether to remove unneeded items or
upgrade to more storage.

OTOH, maybe one or two VMs expanded beyond reasonable expectations.
Looking inside the larger VMs with 'baobab' is the best way to start
investigating that possibility.

>
>
> To graphically examine disk usage within a VM, you can use the gnome
> "Disk usage" app. The shell executable name for this is "baobab".
>
>
> This can be done if you already know which VM has problems. But starting
> tens of VMs  fishing for the one with problems seems too much. Also I
> suspect that those black alert messages in the upper right corner of the
> screen are related to dom0, rather than to specific VMs.

In QM, click on the 'Disk Usage' column heading and it will sort the VMs
by disk space used. That should point you in the right direction, and
maybe you'll only need to look at 2-3 of the largest VMs.

>
> So from my viewpoint these alerts have no practical use, other then
> remembering that computers are a fragile thing  and that it may be time
> to do a backup.
> Best

The practical part is that you know you should avoid activities that
would use significant additional disk space, such as downloading or
duplicating large files, at least until you've cleared out some space
or added more storage. It's not terribly unlike a regular disk warning,
only instead of just files, it is VMs plus files you have to think about.

Franz

Aug 13, 2019, 4:52:52 AM8/13/19
to brenda...@gmail.com, qubes-users
@brendan, @Chris Many thanks
 
Also: dom0 VM usage as well as all combined domU VMs usage is allocated from the same shared thinpool pool00 in a default setup.

Now I understand. Since Qube Settings in Qubes Manager lets you set the maximum system storage size for each VM, I imagined that the alerts were connected to that, i.e. to a problem with a single VM. But now I understand that the alerts are connected to a general pool hosting all VMs. So it is not a problem of a single VM but of the pool.

This way it is much simpler. The Qubes disk space monitor shows that 71% of the space is already used, and when I create and verify a backup, it uses much more space to extract the backup, so that is the reason for the alerts.

Many thanks brothers

Brendan Hoar

Aug 13, 2019, 5:19:16 AM8/13/19
to Franz, qubes-users
Ok, that makes sense.

However, now I am concerned that Qubes backup verification can easily lead to a pool-full situation, which can be a fatal condition* for the pool and the Qubes install.

:(

B

* for typical users to resolve

Chris Laprise

Aug 13, 2019, 4:12:30 PM8/13/19
to Brendan Hoar, Franz, qubes-users
Indeed. There is an old issue with qubes-backup that asks for leaner
operation.

Wyng backup (name changed from 'sparsebak') treats volume data like a
stream, so no extra copies are stored. Using it avoids the problem, in
addition to other large reductions in time and resource use.