Deleting app VMs in Qubes 4.0 doesn't free up disk space


floasretch

Oct 10, 2018, 3:26:57 AM10/10/18
to qubes...@googlegroups.com
My hard disk was 95% full, as shown by usage/size in the output of qvm-pool -i lvm. So I decided to delete some old unneeded app qubes, each taking a few hundred MB or a few GB of space.

Before deleting each one, I started it to check the contents, to verify there was nothing I needed to save. HD usage crept up a little, then a little more after I killed the qube. But surprisingly, HD usage crept up a little more after I deleted the qube, instead of decreasing!

I guessed it was a fluke, so I did the same with the next qube. The same thing happened! Then again with the third and fourth qubes. At this point, my HD is 99% full. I ran fstrim / in dom0, and it reported about 5GB freed, but HD is still 99% full.

I'm starting to panic, since I know Linux fails badly when an over-committed LVM thin pool (which is how Qubes 4.0 is set up by default) fills up.

I know the qubes were deleted successfully, since lvs | grep <qube name> no longer shows them.

No running qubes are writing any significant data to disk. Now that I'm not deleting any more qubes, HD usage is holding steady at 99%.

Why isn't my HD usage decreasing when I delete qubes? How can I free up some space?
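
(For anyone diagnosing the same symptom: the pool numbers can be read straight from LVM in dom0. A sketch only; `qubes_dom0/pool00` is the default pool name on a Qubes 4.0 install and may differ on yours.)

```shell
# Thin-pool usage as LVM sees it (Data% and Meta% are the columns to watch).
# qubes_dom0/pool00 is the Qubes 4.0 default; adjust if your VG is named differently.
sudo lvs -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0/pool00

# The same pool as Qubes reports it, for comparison:
qvm-pool -i lvm
```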


Sent with ProtonMail Secure Email.

unman

Oct 10, 2018, 8:53:00 AM10/10/18
to qubes...@googlegroups.com
> Sent with [ProtonMail](https://protonmail.com) Secure Email.

I admire your persistence in continuing to remove qubes.

Have you tried running 'sudo fstrim -av' in dom0?

floasretch

Oct 10, 2018, 1:26:14 PM10/10/18
to unman, qubes...@googlegroups.com
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Wednesday, October 10, 2018 6:52 AM, unman <un...@thirdeyesecurity.org> wrote:

> I admire your persistence in continuing to remove qubes.
>
> Have you tried running 'sudo fstrim -av' in dom0?

After I already trimmed / in dom0 as I mentioned (and it reported about 5GB freed), I ran sudo fstrim -av per your suggestion, and it reported 0 bytes trimmed.

On further investigation, I discovered that /var/log/xen/xenstored-trace.log in dom0 was 155GB.

So I did sudo truncate /var/log/xen/xenstored-trace.log --size 0
Then I ran sudo fstrim -av again, and this time it reported 80.7GiB trimmed for /. (I don't know why the full 155GB wasn't trimmed.)

But qvm-pool -vi lvm still reports 99% HD usage (468GB used out of 473GB size for pool00)! The HD is 500GB, so trimming 80.7GiB should have freed 17%.
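
(Unit sanity check: 80.7 GiB is a binary unit while the 500GB disk is decimal, and the ratio still comes out to about 17%:)

```shell
# 80.7 GiB (binary, 1 GiB = 1073741824 bytes) as a share of a 500 GB (decimal) disk.
awk 'BEGIN { printf "%.1f%%\n", 80.7 * 1073741824 / 500e9 * 100 }'   # prints 17.3%
```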

OTOH, sudo lvs did report that pool00 data% dropped from 99 to 67, and meta% dropped from 55 to 39. So the problem is clearly with Qubes, not LVM.
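
(Collected in one place, a sketch of the recovery sequence from this thread, all run in dom0; the log path is the one found above, and on a default install the pool shown by lvs is pool00:)

```shell
# Find what is filling dom0's root filesystem (largest entries last).
sudo du -xh /var/log | sort -h | tail -20

# Empty the runaway log in place; truncating keeps the file handle valid
# for any process that still has it open, unlike deleting the file.
sudo truncate --size 0 /var/log/xen/xenstored-trace.log

# Return the now-unused blocks to the LVM thin pool.
sudo fstrim -av

# Confirm the pool recovered: Data% and Meta% should drop.
sudo lvs
```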

awokd

Oct 12, 2018, 1:03:20 AM10/12/18
to qubes...@googlegroups.com
'floasretch' via qubes-users wrote on 10/10/18 5:26 PM:
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> On Wednesday, October 10, 2018 6:52 AM, unman <un...@thirdeyesecurity.org> wrote:
>
>> I admire your persistence in continuing to remove qubes.
>>
>> Have you tried running 'sudo fstrim -av' in dom0?
>
> After I already trimmed / in dom0 as I mentioned (and it reported about 5GB freed), I ran sudo fstrim -av per your suggestion, and it reported 0 bytes trimmed.
>
> On further investigation, I discovered that /var/log/xen/xenstored-trace.log in dom0 was 155GB.

I don't have this log on mine. Did you maybe enable it somewhere?

> So I did sudo truncate /var/log/xen/xenstored-trace.log --size 0
> Then I ran sudo fstrim -av again, and this time it reported 80.7GiB trimmed for /. (I don't know why the full 155GB wasn't trimmed.)
>
> But qvm-pool -vi lvm still reports 99% HD usage (468GB used out of 473GB size for pool00)! The HD is 500GB, so trimming 80.7GiB should have freed 17%.
>
> OTOH, sudo lvs did report that pool00 data% dropped from 99 to 67, and meta% dropped from 55 to 39. So the problem is clearly with Qubes, not LVM.
>

Have you rebooted since cleaning up? Maybe qvm-pool just hasn't caught
up yet.

floasretch

Oct 15, 2018, 1:30:55 AM10/15/18
to qubes...@googlegroups.com
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, October 11, 2018 11:02 PM, 'awokd' via qubes-users <qubes...@googlegroups.com> wrote:
> > On further investigation, I discovered that /var/log/xen/xenstored-trace.log in dom0 was 155GB.
>
> I don't have this log on mine. Did you maybe enable it somewhere?

Yes, and then I forgot to disable it when I was done with it. Then the extreme growth of the log coincidentally (!) canceled out the space freed up by deleting qubes, which accounted for the surprising behavior I was seeing.

> > But qvm-pool -vi lvm still reports 99% HD usage (468GB used out of 473GB size for pool00)! The HD is 500GB, so trimming 80.7GiB should have freed 17%.
> > OTOH, sudo lvs did report that pool00 data% dropped from 99 to 67, and meta% dropped from 55 to 39. So the problem is clearly with Qubes, not LVM.
>
> Have you rebooted since cleaning up? Maybe qvm-pool just hasn't caught
> up yet.

I noticed some hours later that qvm-pool finally caught up. I don't know what triggered it (it wasn't a reboot). I also don't know why lvs and qvm-pool should ever report different free-space values, but in any case my immediate problem is solved.
