Contradictory measurements of disk space in a VM


Franz

Apr 27, 2020, 8:06:59 PM
to qubes-users
Hello all,

Various applications complain about a lack of disk space in a particular qube, but measuring it gives very different results.

1. Qubes Manager says Disk Usage is 4973 MB
2. Settings says Private Storage Max Size is 18524 MB
3. Baobab says 2.1 GB available and 12.4 GB total
4. Nautilus home properties says 5.3 MB free space and 4.2 GB of contents
5. df says /rw is 100% used with a total of 4.9 GB

So it seems that Qubes Manager, Nautilus and df are consistent, reporting that the total space is almost 5 GB and completely used.

But the Qubes VM settings, with a Private Storage Max Size of 18524 MB, are totally out of sync, as is Baobab with 2.1 GB available and 12.4 GB total.

It may sound like theoretical speculation, but increasing the space via the Qubes VM settings only changes a number in the settings dialog; nothing changes in practice.

So how is one supposed to increase the space for this VM?
Also, am I right in thinking this is inconsistent? If so, is there a way to fix it?

A workaround might be to create a new qube, copy the content of the old one to the new one, hope that everything works, and then delete the old qube. But I may be wrong, or there may be an easier fix.

Best
Franz

dhorf-hfre...@hashmail.org

Apr 28, 2020, 5:02:55 AM
to Franz, qubes-users
On Mon, Apr 27, 2020 at 09:06:41PM -0300, Franz wrote:
> So it seems that Qubes Manager, Nautilus and df are consistent, reporting
> that the total space is almost 5 GB and completely used.
>
> But the Qubes VM settings, with a Private Storage Max Size of 18524 MB, are totally

a) check what the actual size is:
dom0$ sudo lvs -a | grep yourvmname
=> what's the size of the yourvm-private volume?

b1) if it is 18GB already, check that it is actually 18GB inside the vm:
yourvm$ grep xvdb /proc/partitions

c1) if it is 18GB inside the VM too, you are just missing the fs resize:
yourvm$ sudo resize2fs /dev/xvdb

d1) check with df that the full size is now available:
yourvm$ df -h /rw

b2) if the device in dom0 is _not_ the right size, or you don't want to
bother with b1/c1/d1, just give it a bump with the qvm tools again:
dom0$ qvm-volume resize yourvm:private 19GiB

c2) check with lvs/df that it is now the right size. (see above)
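(a quick way to see where things got stuck is to compare the device size
with the size the filesystem itself believes it has. a rough sketch,
assuming an ext4 private volume on xvdb; dumpe2fs field names may vary
slightly between e2fsprogs versions:
yourvm$ cat /sys/block/xvdb/size
=> device size in 512-byte sectors
yourvm$ sudo dumpe2fs -h /dev/xvdb | grep -E 'Block count|Block size'
=> filesystem size = block count * block size; if that is much smaller
than the device, the fs resize from c1 is still missing)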


this should all have happened automatically on resize, but there are a
bunch of ways/reasons why it might not have happened.
for example, if you resized the vm while it was running (which is supported)
but then had a hard full-system crash/stop/reboot before shutting
down the vm, or the vm was running an image without resize2fs at the
time of the resize...
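
(you can also ask qubes directly what size it has configured for the
volume and compare that with lvs/df. assuming a 4.x qvm-volume:
dom0$ qvm-volume info yourvm:private
=> look at the "size" line)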


Franz

Apr 28, 2020, 2:27:05 PM
to dhorf-hfre...@hashmail.org, qubes-users
On Tue, Apr 28, 2020 at 6:02 AM <dhorf-hfre...@hashmail.org> wrote:
On Mon, Apr 27, 2020 at 09:06:41PM -0300, Franz wrote:
> So it seems that Qubes Manager, Nautilus and df are consistent, reporting
> that the total space is almost 5 GB and completely used.
>
> But the Qubes VM settings, with a Private Storage Max Size of 18524 MB, are totally

a) check what the actual size is:
        dom0$ sudo lvs -a | grep yourvmname
        => what's the size of the yourvm-private volume?


Did it, but there is no unit, so the numbers are difficult to interpret; screenshot enclosed

b1) if it is 18GB already, check that it is actually 18GB inside the vm:
        yourvm$ grep xvdb /proc/partitions

Same; screenshot enclosed


c1) if it is 18GB inside the VM too, you are just missing the fs resize:
        yourvm$ sudo resize2fs /dev/xvdb


Permission denied, see screenshot
Thanks
Franz

lvs.png
xvdb.png
resize.png

dhorf-hfre...@hashmail.org

Apr 28, 2020, 3:37:57 PM
to Franz, qubes-users
On Tue, Apr 28, 2020 at 03:26:44PM -0300, Franz wrote:
> > a) check what the actual size is:
> > dom0$ sudo lvs -a | grep yourvmname
> > => what's the size of the yourvm-private volume?
> Did it, but there is no unit, so the numbers are difficult to interpret;
> screenshot enclosed

not sure what that means.
the "g" in "18.09g" is short for "giga".
and the unit is most likely "bytes".

so the blockdevice is actually the right size.
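
(if you prefer an explicit unit, lvs can print one. for example:
dom0$ sudo lvs -o lv_name,lv_size --units m | grep yourvmname
or use --units b for plain bytes)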


> b1) if it is 18GB already, check that it is actually 18GB inside the vm:
> > yourvm$ grep xvdb /proc/partitions
> Same; screenshot enclosed

the size is again 18.x GB.


> > c1) if it is 18GB inside the VM too, you are just missing the fs resize:
> > yourvm$ sudo resize2fs /dev/xvdb
> Permission denied, see screenshot

no idea what that means.
but you are using an ancient version of resize2fs there.

please try with a template/distro that is not shipping outdated versions.
a fedora 30 or 31 should work.
just starting the appvm once with a working template should be enough.
(you could also resize the FS from dom0, but that has security
implications i don't want to explain)
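
(to check which e2fsprogs version an appvm is actually running, assuming
a fedora-based template:
yourvm$ rpm -q e2fsprogs
on debian-based templates, dpkg -s e2fsprogs | grep Version does the same)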

but, yes, if your appvm has no working resize2fs, resizing the
volumes is not going to work too well and could result in the
symptoms you are describing.




Franz

Apr 29, 2020, 7:11:54 AM
to dhorf-hfre...@hashmail.org, qubes-users
On Tue, Apr 28, 2020 at 4:37 PM <dhorf-hfre...@hashmail.org> wrote:
On Tue, Apr 28, 2020 at 03:26:44PM -0300, Franz wrote:
> > a) check what the actual size is:
> >         dom0$ sudo lvs -a | grep yourvmname
> >         => what's the size of the yourvm-private volume?
> Did it, but there is no unit, so the numbers are difficult to interpret;
> screenshot enclosed

not sure what that means.
the "g" in "18.09g" is short for "giga".
and the unit is most likely "bytes".

so the blockdevice is actually the right size.


> b1) if it is 18GB already, check that it is actually 18GB inside the vm:
> >         yourvm$ grep xvdb /proc/partitions
> Same; screenshot enclosed

the size is again 18.x GB.


> > c1) if it is 18GB inside the VM too, you are just missing the fs resize:
> >         yourvm$ sudo resize2fs /dev/xvdb
> Permission denied, see screenshot

no idea what that means.
but you are using an ancient version of resize2fs there.

please try with a template/distro that is not shipping outdated versions.
a fedora 30 or 31 should work.

Did it with Fedora 30, but with exactly the same result

But checking other VMs, I am getting the same error on some of them :((
So this is a widespread problem.

Could it be that this is caused by using Qubes Manager / VM settings / increase private size while the VM is running? Sometimes I get an error on that. It may be that this creates a problem that cannot be undone.

But if this is the case, users should not be allowed to increase the size while the VM is running.

dhorf-hfre...@hashmail.org

Apr 29, 2020, 8:42:11 AM
to Franz, qubes-users
On Wed, Apr 29, 2020 at 08:11:37AM -0300, Franz wrote:

> Did it with Fedora 30, but with exactly the same result
> But checking other VMs I am getting the same error on some of them :((
> So this is a widespread problem.

did you try to google your problem?
because at this point it is starting to look less like a qubes problem
and more like generic filesystem damage.

fsck+repair the volume, then resize it.
consult the documentation of your favorite distro template on how
to do this inside the vm.
you may have to connect to the vm through "xl console" in dom0.

or spawn a fsck-vm, and temporarily attach the volumes to that.
https://www.qubes-os.org/doc/mount-lvm-image/

or just fsck+resize from dom0 if you consider the risk of something
exploiting e2fsprogs through this acceptable.
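(roughly like this, assuming the default qubes_dom0 volume group and
that the vm is shut down; the volume name is only an example:
dom0$ qvm-shutdown --wait yourvm
dom0$ sudo fsck -f /dev/qubes_dom0/vm-yourvm-private
dom0$ sudo resize2fs /dev/qubes_dom0/vm-yourvm-private)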



> Could it be that this is caused by using Qubes Manager / VM settings /
> increase private size while the VM is running? Sometimes I get an error on
> that. It may be that this creates a problem that cannot be undone.

no, online resize is completely normal and supported.



Franz

Apr 29, 2020, 2:11:07 PM
to dhorf-hfre...@hashmail.org, qubes-users
Dear Dhorf,
I tried fsck on dom0, but got the enclosed terror screen.
That is enough for me: I created a new VM and copied the content of the old, corrupted one to the new one. Everything works, and I will not attempt more because it is too complicated for my little mind.
But I want to thank you; with your help I understood where the problem is, and in the future I'll avoid changing the size of a running VM. I understand it can theoretically be done, but it does not work for me. On the other hand, it works if the VM is not running.
So thanks, Dhorf, I appreciate the time you devoted to me.
fsck.png

dhorf-hfre...@hashmail.org

Apr 29, 2020, 2:27:16 PM
to Franz, qubes-users
On Wed, Apr 29, 2020 at 03:10:48PM -0300, Franz wrote:

> I tried fsck on dom0, but got the enclosed terror screen.

it would have helped if you had tried to fsck the right filesystem:
fsck /dev/qubes_dom0/vm-per-dec-private

and if it is asking for repair a bazillion times, perhaps with -y -f
(and i would keep running it with "-y -f" until at least one run
is completely clean...)
"how to repair a filesystem" is basic system management.

the only mildly qubes-specific part about it is that the fsck needs to
be done while the filesystem is not mounted, which in most cases means
the vm is not running.


> and in the future I'll avoid changing the size of a running VM. I understand
> it can theoretically be done, but it does not work for me. On the other
> hand, it works if the VM is not running.

you will run into exactly the same problems if you try to resize
a damaged filesystem while the vm is not running.

actually, for some combinations of resize and system, an offline
resize is _more_ likely to go wrong.
(it will do the resize on the next vm startup, and if the resize takes
longer than the vm-startup-timeout... ouch)




Franz

May 1, 2020, 9:27:10 AM
to dhorf-hfre...@hashmail.org, qubes-users
On Wed, Apr 29, 2020 at 3:27 PM <dhorf-hfre...@hashmail.org> wrote:
On Wed, Apr 29, 2020 at 03:10:48PM -0300, Franz wrote:

> I tried fsck on dom0, but got the enclosed terror screen.

it would have helped if you had tried to fsck the right filesystem:
        fsck /dev/qubes_dom0/vm-per-dec-private

and if it is asking for repair a bazillion times, perhaps with -y -f
(and i would keep running it with "-y -f" until at least one run
 is completely clean...)
"how to repair a filesystem" is basic system management.


Dhorf, you are right. What happened is exactly what you foresaw. It found a lot of errors and corrected them all, and running it again now says it is clean.

Well, now I have two similar VMs: the repaired original and the new one onto which I copied the content of the original. Which one should I choose? Which is better?

Also thanks; now I understand things I did not even imagine existed.
Best

Ulrich Windl

May 4, 2020, 3:13:08 PM
to 169...@gmail.com, dhorf-hfre...@hashmail.org, qubes...@googlegroups.com
>>> Franz <169...@gmail.com> wrote on 29.04.2020 at 20:10 in message
<12446_1588183865_5EA9C338_12446_12_1_CAPzH-qB-p_Q3kCn7kGUE8F4=Bsn4mPPrmU-w76NO0
2L7f...@mail.gmail.com>:
> Dear Dhorf,
> I tried fsck on dom0, but got the enclosed terror screen.
> That is enough for me: I created a new VM and copied the content of the old,
> corrupted one to the new one. Everything works, and I will not attempt more
> because it is too complicated for my little mind.
> But I want to thank you; with your help I understood where the problem is,
> and in the future I'll avoid changing the size of a running VM. I understand
> it can theoretically be done, but it does not work for me. On the other
> hand, it works if the VM is not running.
> So thanks, Dhorf, I appreciate the time you devoted to me.

Hi!

While I'd be able to perform the needed fsck, I wonder:
Is it possible to set a flag that makes fsck do a full filesystem check on the next boot?
Haven't found one for ext3...

If the fs tools don't have that feature (yet), maybe it could be built into the initrd (some flag that just causes a full fsck to be performed before the filesystems are mounted read-write).

Regards,
Ulrich






dhorf-hfre...@hashmail.org

May 4, 2020, 4:19:22 PM
to Ulrich Windl, 169...@gmail.com, qubes...@googlegroups.com
On Mon, May 04, 2020 at 09:12:59PM +0200, Ulrich Windl wrote:
> Is it possible to set a flag that makes fsck do a full filesystem
> check on the next boot?
> Haven't found one for ext3...

use tune2fs to set the current mount count (-C) to something bigger
than the maximum mount count (check with -l, adjust with -c).
reading the tune2fs documentation is recommended before use.
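
(for example, something like this; the device name is just an
illustration, and note the max mount count is often -1 / disabled by
default, so you may need to enable it first:
yourvm$ sudo tune2fs -l /dev/xvdb | grep -i 'mount count'
yourvm$ sudo tune2fs -c 1 /dev/xvdb
yourvm$ sudo tune2fs -C 2 /dev/xvdb
=> with the current count above the max count, the next mount forces a
full check)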


> If the fs tools don't have that feature (yet), maybe it could be built
> into the initrd (some flag that just causes a full fsck to be performed
> before the filesystems are mounted read-write).

they do. how to trigger it depends on your initrd.
reading the distro-specific initrd documentation is recommended.
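
(for example, on systemd-based templates booting once with
fsck.mode=force on the kernel command line is the usual trigger, and on
older sysvinit setups the classic
yourvm$ sudo touch /forcefsck
did the same. details depend on the distro/initrd.)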



