
[gentoo-user] tmpfs filling up with nothing


Neil Bothwick

Nov 8, 2023, 9:20:05 AM
I have PORTAGE_TMPDIR on /tmp, which is a 24GB tmpfs. Last night, an
update failed with an out of space error. df showed only 440MB free but
du and ncdu both showed well under 1GB in use (including hidden files).
This has happened on the odd occasion in the past and the only solution
appears to be to reboot. Of course, that means I cannot provide any more
information until it happens again.

Has anyone else experienced this or, hopefully, resolved it without
rebooting?


--
Neil Bothwick

Your lack of organisation does not represent an
emergency in my world.

Alan McKinnon

Nov 8, 2023, 9:20:05 AM
Hey Neil,

Yeah, I've had this a few times. It always turns out to be deleted files that something still has a handle on.

Run this:
# lsof /tmp | grep deleted
sddm-help 324615 root   13u   REG   0,32       96  184 /tmp/#184 (deleted)

So here I have one. To release the file, kill the process holding it open.
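Alan's check can also be done without lsof by scanning /proc directly. The helper below is a hypothetical sketch, not something from the thread; it assumes a Linux /proc, where the fd symlink of an unlinked file ends in " (deleted)":

```shell
# List PID and path of deleted-but-still-open files under a directory,
# by reading /proc/PID/fd symlinks (no lsof needed).
find_deleted() {
    dir="${1:-/tmp}"
    for fd in /proc/[0-9]*/fd/*; do
        # readlink prints e.g. "/tmp/foo (deleted)" for unlinked files;
        # unreadable fds (other users' processes) are skipped
        target=$(readlink "$fd" 2>/dev/null) || continue
        case "$target" in
            "$dir"/*" (deleted)")
                pid=${fd#/proc/}
                printf '%s\t%s\n' "${pid%%/*}" "$target"
                ;;
        esac
    done
}

find_deleted /tmp
```

With lsof itself, `lsof +L1 /tmp` (files with link count below 1) gives the same list without the grep.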

Alan


--
Alan McKinnon
alan dot mckinnon at gmail dot com

Neil Bothwick

Nov 8, 2023, 2:10:06 PM
On Wed, 8 Nov 2023 16:17:19 +0200, Alan McKinnon wrote:

> On Wed, Nov 8, 2023 at 4:10 PM Neil Bothwick <ne...@digimed.co.uk> wrote:
>
> > I have PORTAGE_TMPDIR on /tmp, which is a 24GB tmpfs. Last night, an
> > update failed with an out of space error. df showed only 440MB free
> > but du and ncdu both showed well under 1GB in use (including hidden
> > files). This has happened on the odd occasion in the past and the
> > only solution appears to be to reboot. Of course, that means I cannot
> > provide any more information until it happens again.
> >
> > Has anyone else experienced this or, hopefully, resolved it without
> > rebooting?

> Hey Neil,
>
> Yeah had this a few times. Always turns out to be deleted files that
> something still has a handle on

Hah! I never thought of that one. I'll try that next time it happens.


--
Neil Bothwick

Irritable? Who the bloody hell are you calling irritable?

Neil Bothwick

Nov 9, 2023, 3:30:07 AM
On Wed, 8 Nov 2023 16:17:19 +0200, Alan McKinnon wrote:

> On Wed, Nov 8, 2023 at 4:10 PM Neil Bothwick <ne...@digimed.co.uk> wrote:
>
> > I have PORTAGE_TMPDIR on /tmp, which is a 24GB tmpfs. Last night, an
> > update failed with an out of space error. df showed only 440MB free
> > but du and ncdu both showed well under 1GB in use (including hidden
> > files). This has happened on the odd occasion in the past and the
> > only solution appears to be to reboot. Of course, that means I cannot
> > provide any more information until it happens again.
>
> Yeah had this a few times. Always turns out to be deleted files that
> something still has a handle on
>
> Run this:
> # lsof /tmp | grep deleted
> sddm-help 324615 root   13u   REG   0,32       96  184 /tmp/#184 (deleted)

That was it. It happened again today; it was my duplicity backup script
filling /tmp. Killing duplicity released the space.


--
Neil Bothwick

Sometimes too much to drink is not enough.

Mart Raudsepp

Nov 13, 2023, 8:20:05 AM
Another common case is running out of inodes rather than space,
especially if df actually says there is free space. Check
df -i /tmp
instead; it may show IFree as 0, with IUsed and Inodes at the same
non-zero value.
The tmpfs mount option for this is:

nr_inodes: The maximum number of inodes for this instance. The default
is half of the number of your physical RAM pages, or (on a
machine with highmem) the number of lowmem RAM pages,
whichever is the lower.

And for me with a 32GB tmpfs, it could easily hit the default limit
when e.g. firefox + webkit-gtk + chromium are unpacked and built at
once, or one of them fails without cleaning up.
A value of 0 disables the limit altogether, but that comes with the
caveat of a possible memory DoS: something malicious could keep writing
zero-length files there, which never count against the size limit, yet
tracking the inodes itself takes memory and there would be no inode
limit left to stop it.
So it is best to pick a sensible value, perhaps e.g. 8 times the
default (what you see for your tmpfs under the Inodes column with
`df -i /tmp`).
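Mart's nr_inodes suggestion is a tmpfs mount option, so it can go in fstab or be applied live with a remount. The values below are only illustrative (assuming the 24GB /tmp from this thread), not numbers from the thread:

```
# /etc/fstab: explicit size and a generous inode cap for /tmp
tmpfs  /tmp  tmpfs  size=24G,nr_inodes=16M  0 0

# or raise the limit on a running system, no reboot needed:
# mount -o remount,nr_inodes=16M /tmp
```

nr_inodes accepts the same k/m/g suffixes as size; check the result afterwards with `df -i /tmp`.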


HTH,
Mart

Alan McKinnon

Nov 13, 2023, 10:00:06 AM
Interesting side note:

I used to worry about free inodes a lot, but stopped when I realised I had only ever run into the problem once:

Some damn fool had created an account on the company FTP server for CDRs to be uploaded, crunched and sent somewhere in the bowels of the billing dept.
The same damn fool neglected to write any kind of cleanup code, and when the sender started having difficulties I had myself a look.
That upload/ dir had 1.5 million files in it and yet the server was working fine, except if you tried to ls or do anything that needed to read the dir.
Deleting that lot took IIRC 6 or 8 hours!
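As an aside, a directory that size is removed fastest by streaming it rather than globbing. A minimal sketch (a hypothetical helper, not Alan's actual cleanup):

```shell
# Delete every file under a directory without expanding a shell glob:
# `rm upload/*` would build a multi-million-entry argument list (and
# likely exceed ARG_MAX), while find unlinks entries as it reads the
# directory, one at a time.
purge_files() {
    find "$1" -mindepth 1 -type f -delete
}
```

`-mindepth 1` keeps the directory itself; `-delete` is a GNU find extension, which is a safe assumption on a Gentoo box.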

I suppose this and things like it are why the big players are now making XFS the default fs on install.
Even a mid-sized machine these days can max out ext4


Neil Bothwick

Nov 13, 2023, 4:00:05 PM
It was not inodes, df was showing close to 100% full. The problem was as
Alan suggested, deleted files still locked.


--
Neil Bothwick

Dolly Parton-- silicone based life