"Root File out of memory warning"?


V C

Sep 16, 2019, 9:38:01 PM
to qubes-users
Sorry for the noob question, but I am getting a pop-up warning that my "Root File is almost out of memory". It's kinda scary...

The error pops up more regularly now... is there any maintenance I can do? I have had this setup for over a year, working well... not sure of the reason; possibly:

- The way I am deleting old templates (I recently upgraded Whonix and made a few clones, deleted a few, etc.)
- I haven't added too many files or photos, but I have added some. I am not sure of the exact language in the pop-up, but it included "Root", "Low-memory", and "File"...

Any guidance would be appreciated...

Thanks,
VC

awokd

Sep 17, 2019, 8:31:55 AM
to qubes...@googlegroups.com
V C:
> Sorry for the noob question but I am getting a pop-up warning that my "Root
> File is almost out of memory"? Its kinda scary...

Not noob to ask and fix before it causes more problems! Click on the
Qubes disk monitor widget in the top right; that's probably where the
warnings are coming from. Also, check "sudo lvdisplay qubes_dom0/pool00"
in dom0. It should show a similar pool data %. If your allocated
metadata is over 80%, don't do the next steps.

Try running "sudo fstrim -av" in dom0 first, and see if that helps. If
not, find your largest qube with the most free space and run the same
command inside it. Check lvdisplay again and see if the data % went
down, and that metadata % didn't increase too much.
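The checks above can be sketched as a small script (a hypothetical helper, not part of Qubes; the sample line stands in for real `lvs` output, and the 80% metadata threshold is the rule of thumb from the advice above):

```shell
# Sketch: parse Data% and Meta% for the thin pool and only suggest trimming
# when metadata is under 80% full. The sample line mimics output like
#   sudo lvs --noheadings -o lv_name,vg_name,lv_attr,lv_size,data_percent,metadata_percent qubes_dom0/pool00
# run in dom0; the values here are made up for illustration.
sample='pool00 qubes_dom0 twi-aotz-- 400.00g 75.21 82.50'
data_pct=$(echo "$sample" | awk '{print $5}')
meta_pct=$(echo "$sample" | awk '{print $6}')
echo "data ${data_pct}%, metadata ${meta_pct}%"
if awk -v m="$meta_pct" 'BEGIN { exit !(m > 80) }'; then
  echo "metadata over 80% - skip fstrim for now"
else
  echo "metadata OK - safe to try: sudo fstrim -av"
fi
```

With the sample values above, the script reports the pool usage and warns that metadata is over the 80% threshold.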

--
- don't top post
Mailing list etiquette:
- trim quoted reply to only relevant portions
- when possible, copy and paste text instead of screenshots

V C

Sep 17, 2019, 4:45:37 PM
to qubes-users
Thanks Awokd...

I ran the commands; the changes were very small, but a reduction did occur. The pop-up did come from the top right, near the disk monitor widget. My "Total disk usage" states it is only at 31%...

It seems to have settled down now that my new templates are finalized and the old ones deleted... I'll grab a screenshot with more details if it happens again.

As always...thank you!

awokd

Sep 17, 2019, 6:15:12 PM
to qubes...@googlegroups.com
V C:

> I ran the commands, the changes were very small but a reduction did occur.
> The pop-up did come from the top right near the disk monitor widget. My
> "Total disc usage" states it is only at 31%...

No problem; it looks like you caught it in time anyway. Good thing, because it's a lot easier to clean up before the disk (or worse, the metadata) gets full!

On a side note, anyone know why "sudo fstrim -av" in dom0 now says 0
bytes trimmed for root? I double-checked and have discard specified
everywhere it should be. Only thing I don't remember seeing before is
stripe=64 in the mount, but I searched issues and qubes-src for "stripe"
and didn't find anything related.

/dev/mapper/qubes_dom0-root on / type ext4 (rw,relatime,discard,stripe=64)

brenda...@gmail.com

Sep 18, 2019, 6:29:58 AM
to qubes-users
On Tuesday, September 17, 2019 at 6:15:12 PM UTC-4, awokd wrote:
> On a side note, anyone know why "sudo fstrim -av" in dom0 now says 0
> bytes trimmed for root? I double-checked and have discard specified
> everywhere it should be. Only thing I don't remember seeing before is
> stripe=64 in the mount, but I searched issues and qubes-src for "stripe"
> and didn't find anything related.
>
> /dev/mapper/qubes_dom0-root on / type ext4 (rw,relatime,discard,stripe=64)

You have discards enabled at all layers (fs and crypt)?

What I think you are seeing is this: Linux keeps track of discards in the current session and won't re-issue discards if it hasn't subsequently written to the already-discarded area. Reboot and try again; the first time after a reboot, it should issue discards to the non-allocated portion of the volume.

This performance-oriented kernel behavior is one reason I am a proponent of enabling and issuing discards at all layers. Another is that SSDs consume the actual discards very quickly: hundreds of GBs can be discarded in seconds, thanks to range discard requests and the drive's internal queuing of them.
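Checking that discard is enabled at each layer can be sketched like this (the file paths are the usual Qubes dom0 defaults and may differ on other setups; the mount line is the one quoted earlier in the thread):

```shell
# Filesystem layer: confirm "discard" appears in the root mount options.
# Here we grep the mount line quoted above; in dom0 you would run
# `mount | grep ' / '` instead.
mount_line='/dev/mapper/qubes_dom0-root on / type ext4 (rw,relatime,discard,stripe=64)'
fs_discard=$(echo "$mount_line" | grep -c 'discard')
echo "fs-layer discard lines found: $fs_discard"
# The other two layers (run these in dom0; output varies per install):
#   grep discard /etc/crypttab             # LUKS/dm-crypt layer
#   grep issue_discards /etc/lvm/lvm.conf  # LVM layer: issue_discards = 1
```

If any layer is missing the discard option, fstrim can succeed at the filesystem level without the TRIM ever reaching the SSD.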

Brendan

awokd

Sep 18, 2019, 2:22:56 PM
to qubes...@googlegroups.com
brenda...@gmail.com:

> What I think you are seeing is this: Linux keeps tracks of discards in the
> current session and won't re-issue discards if it hasn't subsequently
> written to the already-discarded area. Reboot and try again. The first time
> after reboot, it should issue discards to the non-allocated portion of the
> volume.

That was it; thank you. I have a daily job that runs trim, and I must have checked right after it ran.