
Subject: solution to / full


From: lina
Date: Mar 1, 2023, 8:40:07 AM
Hi,

My / is almost full.

# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            126G     0  126G   0% /dev
tmpfs            26G  2.3M   26G   1% /run
/dev/nvme0n1p2   23G   21G  966M  96% /
tmpfs           126G   15M  126G   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
/dev/nvme0n1p6  267M   83M  166M  34% /boot
/dev/nvme0n1p1  511M  5.8M  506M   2% /boot/efi
/dev/nvme0n1p3  9.1G  3.2G  5.5G  37% /var
/dev/nvme0n1p5  1.8G   14M  1.7G   1% /tmp
/dev/nvme0n1p7  630G  116G  482G  20% /home

# ncdu -x
--- / --------------------------------------------------------------------------
   17.4 GiB [##########] /usr                                                  
    3.2 GiB [#         ] /opt
   16.5 MiB [          ] /etc
    7.3 MiB [          ] /root

What is the best solution so far?

I have done some purging already.
:/usr# du -sh *
742M bin
4.0K games
260M include
8.1G lib
36M lib32
4.0K lib64
140M libexec
33M libx32
3.4G local
53M sbin
4.6G share
215M src


Thanks,

From: Jochen Spieker
Date: Mar 1, 2023, 9:20:07 AM
lina:
>
> My / is almost full.
>
> # df -h
> Filesystem Size Used Avail Use% Mounted on
> udev 126G 0 126G 0% /dev
> tmpfs 26G 2.3M 26G 1% /run
> /dev/nvme0n1p2 23G 21G 966M 96% /
> tmpfs 126G 15M 126G 1% /dev/shm
> tmpfs 5.0M 4.0K 5.0M 1% /run/lock
> /dev/nvme0n1p6 267M 83M 166M 34% /boot
> /dev/nvme0n1p1 511M 5.8M 506M 2% /boot/efi
> /dev/nvme0n1p3 9.1G 3.2G 5.5G 37% /var
> /dev/nvme0n1p5 1.8G 14M 1.7G 1% /tmp
> /dev/nvme0n1p7 630G 116G 482G 20% /home

This is a good example of why it often makes sense to use LVM even on a
private system. With LVM you could have allocated only, say, 20% of the
space up front, where you actually need it, and resized filesystems
on demand (and online). But that does not help you now, sorry.

> I have done some purging already.
> :/usr# du -sh *
> 742M bin
> 4.0K games
> 260M include
> 8.1G lib
> 36M lib32
> 4.0K lib64
> 140M libexec
> 33M libx32
> 3.4G local
> 53M sbin
> 4.6G share
> 215M src

/usr/local might be worth a look. You probably have some stuff there
that you put in manually.

The program dpigs from the package debian-goodies can help you find the
biggest Debian packages you have installed. Of course you need to check
yourself whether you need them.

J.
--
I frequently find myself at the top of the stairs with absolutely
nothing happening in my brain.
[Agree] [Disagree]
<http://archive.slowlydownward.com/NODATA/data_enter2.html>

From: Klaus Singvogel
Date: Mar 1, 2023, 11:00:05 AM
lina wrote:
> Filesystem Size Used Avail Use% Mounted on
[...]
> /dev/nvme0n1p2 23G 21G 966M 96% /
> /dev/nvme0n1p3 9.1G 3.2G 5.5G 37% /var
> /dev/nvme0n1p5 1.8G 14M 1.7G 1% /tmp
> /dev/nvme0n1p7 630G 116G 482G 20% /home

[...]
> I have done some purging already.
> :/usr# du -sh *
[...]
> 742M bin
> 8.1G lib
> 3.4G local

Perhaps it might be a solution to:
- move your /usr/local to /home (as root: mv /usr/local /home)
- create a symlink from /home/local to /usr/local (as root: ln -s /home/local /usr/)

I can't recommend doing this with /usr/*bin or with any /usr/*lib*
directories, as booting might not work anymore, or at least not properly.

I can't say for sure that my suggestion has no impact on starting services
on your system: the risk exists that some services using /usr/local are
started before /home is mounted at boot.

And as a final word: even this suggestion might not work forever, as your / partition is really small (and so is your /tmp, by the way). I would suggest buying and installing a second 500 GB disk (don't segment it so heavily; use LVM) and working on that disk instead. You could later use the current disk as backup space, or for a RAID-1.

Best regards,
Klaus.
--
Klaus Singvogel
GnuPG-Key-ID: 1024R/5068792D 1994-06-27

From: to...@tuxteam.de
Date: Mar 1, 2023, 11:50:06 AM
The one which sticks out a bit is /lib, but not outrageously so.
My /usr/lib is 4.1G.

You just might need a bigger disk?

In a pinch, you can "sudo apt-get clean", which purges the APT
package cache, which lives in /var. You didn't show us /var,
which might be interesting too (/var/log, in case some logs
aren't rotated properly?)

Cheers
--
t
>
>
> Thanks,

From: The Wanderer
Date: Mar 1, 2023, 12:01:56 PM
It might also be worth having a look at the output of

# du -hx --max-depth=1 /

rather than just looking at /usr alone. The '-x' means it won't
cross the boundaries into the other filesystems, so you'll still just be
looking at what's on / ; '--max-depth=1' means it'll report one directory
level deep from the items you specified on the command line
('--max-depth=0' is equivalent to '-s').

You might even find benefit from repeating that same command with /usr,
or with any other directory that specifically looks to be bigger than
expected, to find out what part of it is taking up so much of the space.
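As a safe, throwaway illustration of those two flags (run against a temporary directory instead of /, with made-up file names, so nothing on the real system is touched):

```shell
# Build a tiny directory tree and summarize it one level deep.
# Illustrative only; for the real diagnosis run the same du flags
# against / as root.
tmp=$(mktemp -d)
mkdir -p "$tmp/usr" "$tmp/opt"
dd if=/dev/zero of="$tmp/usr/big" bs=1024 count=64 2>/dev/null
dd if=/dev/zero of="$tmp/opt/small" bs=1024 count=8 2>/dev/null
# -x: stay on one filesystem; --max-depth=1: one level of detail
out=$(du -x --max-depth=1 "$tmp")
echo "$out"
rm -rf "$tmp"
```

The output has one line per immediate subdirectory plus a total for the directory itself.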

--
The Wanderer

The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all
progress depends on the unreasonable man. -- George Bernard Shaw


From: Andy Smith
Date: Mar 1, 2023, 12:10:05 PM
Hi,

On Wed, Mar 01, 2023 at 02:35:17PM +0100, lina wrote:
> My / is almost full.
>
> # df -h
> Filesystem Size Used Avail Use% Mounted on
> udev 126G 0 126G 0% /dev
> tmpfs 26G 2.3M 26G 1% /run
> /dev/nvme0n1p2 23G 21G 966M 96% /
> tmpfs 126G 15M 126G 1% /dev/shm
> tmpfs 5.0M 4.0K 5.0M 1% /run/lock
> /dev/nvme0n1p6 267M 83M 166M 34% /boot
> /dev/nvme0n1p1 511M 5.8M 506M 2% /boot/efi
> /dev/nvme0n1p3 9.1G 3.2G 5.5G 37% /var
> /dev/nvme0n1p5 1.8G 14M 1.7G 1% /tmp
> /dev/nvme0n1p7 630G 116G 482G 20% /home

This is an excellent illustration of why creating tons of partitions
like it's 1999 can leave you in a difficult spot. You are bound to
make poor guesses as to what size you actually need, which later
leads to situations where some partitions are hardly used while
others get full.

Ideally you'd use LVM, or a filesystem that does volume management
like btrfs or zfs, so you can start small and adjust later based on
your needs. If that seems daunting, the user is probably not ready
for complex partitioning and would almost always be best off just
using one big / for everything.

It is difficult to say if you have things installed that you don't
need, because we don't know your needs nor what you have installed!

If you have many old kernel packages installed these can take up a
lot of space and it's probably safe to purge all but the newest one
and the one before it (and the one you're currently using).

$ dpkg -l | grep linux-image

for a quick overview.
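If you want just the names of installed kernel packages, a small awk filter over dpkg -l output does it. Here it runs over canned sample lines (the package versions are made up for illustration; on a real system you'd pipe `dpkg -l` in):

```shell
# Filter "ii" (installed) linux-image packages from dpkg -l style output.
# The sample data below is invented for illustration only.
sample='ii  linux-image-5.10.0-20-amd64  5.10.158-2  amd64  Linux 5.10 image
ii  linux-image-5.10.0-21-amd64  5.10.162-1  amd64  Linux 5.10 image
rc  linux-image-5.10.0-19-amd64  5.10.149-2  amd64  Linux 5.10 image'
# "rc" means removed but not purged, so it is excluded here.
kernels=$(printf '%s\n' "$sample" | awk '$1 == "ii" && $2 ~ /^linux-image-/ { print $2 }')
echo "$kernels"
```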

> # ncdu -x
> --- / --------------------------------------------------------------------------
>    17.4 GiB [##########] /usr
>     3.2 GiB [#         ] /opt

3GiB seems quite large for /opt so you probably have some manually
installed things in there that you might no longer need.

> What is the best solution so far?

Here's how I'd get out of this. These steps are off the top of my
head; although I have done them hundreds of times before, I may not
have remembered exactly the correct syntax, so please research every
step and understand what it does.

1. Install lvm2 package

# apt install lvm2

2. Boot into single user mode¹ and shrink /dev/nvme0n1p7 (/home) to
about 150G down from its ridiculous current size of 630G.

# umount /home

I'm assuming you're using the ext4 filesystem there, so it would be:

# resize2fs -p /dev/nvme0n1p7 145G

It will probably tell you to run e2fsck first; if so, do that and
then repeat the resize2fs command.

The choice of 145G is deliberate: next we'll use parted to shrink
the actual partition, and we don't want to be doing any mathematics.
(Note that resizepart takes the new *end* position of the partition,
not its size, so check the partition's start with 'print' and make
sure at least 145G remains inside it.) So the next step is to shrink
the partition, then grow the filesystem again to the maximum for the
partition it's in:

# parted /dev/nvme0n1 resizepart 7 150GB
# resize2fs -p /dev/nvme0n1p7

3. Create /dev/nvme0n1p8 in the newly freed space.

# parted /dev/nvme0n1
(parted) print
(parted) mkpart

(accept all the default suggestions as it will just put it at the
end)

4. Turn that into an LVM Physical Volume (PV)

# pvcreate /dev/nvme0n1p8

5. Create an LVM Volume Group (VG) that uses that PV. Pick any name
you like for "myvg".

# vgcreate myvg /dev/nvme0n1p8

6. Create a new Logical Volume (LV) that you'll use for what is
currently /opt. You currently have about 3.2G in there; assuming
you don't want to delete any of it, a 5GiB LV should work and
allow for some growth.

# lvcreate -L 5g -n opt myvg
# mkfs.ext4 /dev/myvg/opt

7. Temporarily mount new filesystem somewhere.

# mkdir -vp /mnt/newopt
# mount /dev/myvg/opt /mnt/newopt

8. Copy all the data from current /opt to /mnt/newopt then switch
them around

# tar -C /opt -Spcf - . | tar -C /mnt/newopt -xvf -
# mv -v /opt /opt.old
# mkdir -vp /opt
# umount /mnt/newopt
# rmdir /mnt/newopt

Note the >> on this next command to append to — NOT overwrite —
/etc/fstab!

# cat <<EOF >> /etc/fstab
/dev/myvg/opt /opt ext4 noatime 0 2
EOF

# mount -v /opt

(at this point if you are at all unsure about what you're doing,
make sure you have a backup of /opt.old)

# rm -vr /opt.old

At this point you have moved what was in /opt into LVM in the
space freed up by shrinking your /home. This will have freed up
about 3GiB from /.

9. Reboot and hope you did everything correctly.

I chose /opt because it was a simple example and probably not
catastrophic to the running of your system if you mess it up. It is
a very small win, though. You can easily do the same procedure to
move your current /usr/share and /usr/local into there, moving
another 8GiB or so away from /. You could also just put whole of
/usr into LVM. This works fine as long as the /usr filesystem is
mounted by the initramfs which is what happens by default.

Ultimately I personally would move every partition into LVM and then
repurpose each partition as an LVM PV as I went, adding the PV to
the volume group. Eventually the aggregate space of all the
partitions /dev/nvme0n1p3 (currently /var), /dev/nvme0n1p7
(currently /home) and /dev/nvme0n1p8 (proposed new PV) would be
available to LVM.

You can't put /boot/efi in LVM and it's not worth putting /boot in
there in my opinion. In your case I expect I'd stop after putting
/opt, /usr/local, /usr/share, /var and /home in LVM.

If you don't like LVM then you can do the same thing with btrfs
after shrinking /dev/nvme0n1p7 and creating /dev/nvme0n1p8. zfs is a
bit less flexible and not well-suited to this kind of
reorganisation of an existing system.

Finally, if you are terrified of doing this sort of thing and don't
mind gross and ugly hacks, you could just move /opt/, /usr/local and
/usr/share into somewhere under /home and symlink them back to where
they should be.

Cheers,
Andy

¹ For the example of /opt it's highly likely that you could do this
without dropping to single user mode. The main point is that you
won't be able to completely remove and unmount things while there
is a process running from it; neither is it a good idea to be
copying data files that may be currently in use by running
processes. You are likely to want to go on and do more of this, so
I am advising doing it in single user mode.

If you know what you're doing you can find the particular running
processes, kill them, and move around all of /opt, /home, /var,
/usr/local and /usr/share without rebooting or dropping down to
single user, but explaining how to do that probably makes this
email four times longer and a lot more error-prone.

--
https://bitfolk.com/ -- No-nonsense VPS hosting

From: Brian
Date: Mar 1, 2023, 1:20:06 PM
On Wed 01 Mar 2023 at 17:43:41 +0100, to...@tuxteam.de wrote:

[...]

> In a pinch, you can "sudo apt-get clean", which purges the APT
> package cache, which lives in /var. You didn't show us /var,
> which might be interesting too (/var/log, in case some logs
> aren't rotated properly?)

There should not be any actual packages in /var/cache/apt.
Cleaning out pkgcache.bin and srcpkgcache.bin is not really
of permanent value, as they reappear after 'apt update'.

--
Brian.

From: to...@tuxteam.de
Date: Mar 1, 2023, 1:40:05 PM
Doh. Forget my post anyway. I've had a better look at the mount
table now.

Sorry for the noise
--
t

From: Greg Wooledge
Date: Mar 1, 2023, 1:40:05 PM
On Wed, Mar 01, 2023 at 06:12:09PM +0000, Brian wrote:
> On Wed 01 Mar 2023 at 17:43:41 +0100, to...@tuxteam.de wrote:
>
> [...]
>
> > In a pinch, you can "sudo apt-get clean", which purges the APT
> > package cache, which lives in /var. You didn't show us /var,
> > which might be interesting too (/var/log, in case some logs
> > aren't rotated properly?)
>
> There should not be any actual packages in /var/cache/apt.

This depends on whether one uses "apt" or "apt-get" or some other
program to install packages. By default, "apt" removes the .deb files
from /var/cache/apt/archives/ after installing them, but "apt-get"
does not. For other programs, who knows.

From: Nicolas George
Date: Mar 1, 2023, 2:02:03 PM
Andy Smith (12023-03-01):
> > /dev/nvme0n1p2 23G 21G 966M 96% /
> > /dev/nvme0n1p6 267M 83M 166M 34% /boot
> > /dev/nvme0n1p1 511M 5.8M 506M 2% /boot/efi
> > /dev/nvme0n1p3 9.1G 3.2G 5.5G 37% /var
> > /dev/nvme0n1p5 1.8G 14M 1.7G 1% /tmp
> > /dev/nvme0n1p7 630G 116G 482G 20% /home
> This is an excellent illustration of why creating tons of partitions
> like it's 1999 can leave you in a difficult spot.

No it is not. The /boot and /tmp partitions are superfluous, and
/boot/efi is too large (but at a guess it was already there), but they
would barely make a difference.

On the other hand, in 2023, it is still a very good idea to separate the
system filesystem that gets written frequently from the one that gets
written rarely from the user data filesystem.

A good illustration of that fact (which I do not contest) would be if
you saw a /usr separate from / or a /usr/local separate from /+/usr with
very unbalanced usage ratio.

> It is difficult to say if you have things installed that you don't
> need, because we don't know your needs nor what you have installed!

Ah, finally the only relevant answer!

Regards,

--
Nicolas George

From: Joe
Date: Mar 1, 2023, 2:50:06 PM
On Wed, 1 Mar 2023 18:12:09 +0000
Brian <ad...@cityscape.co.uk> wrote:

> On Wed 01 Mar 2023 at 17:43:41 +0100, to...@tuxteam.de wrote:
>
> [...]
>
> > In a pinch, you can "sudo apt-get clean", which purges the APT
> > package cache, which lives in /var. You didn't show us /var,
> > which might be interesting too (/var/log, in case some logs
> > aren't rotated properly?)
>
> There should not be any actual packages in /var/cache/apt.

What should cause/prevent that?

On unstable, I have a /var/cache/apt/archives directory, from which apt
autoclean, which I do occasionally, recently removed about 5G of
packages (obviously too occasionally). There's still quite a bit there
as it was only autoclean and I prefer to keep downloads around for a
while, as this is unstable.

--
Joe

From: Joe
Date: Mar 1, 2023, 3:00:05 PM
I've just asked about this but forgot to mention that I use apt, I'll
only use apt-get if a version upgrade recommends it. As I said, I have a
fairly well-used archives directory. I do recall, when apt became a
thing, reading what you posted there about apt removing the debs. There
must be a configuration which prevents that.

--
Joe

From: Andy Smith
Date: Mar 1, 2023, 3:00:05 PM
Hello,

On Wed, Mar 01, 2023 at 07:53:19PM +0100, Nicolas George wrote:
> Andy Smith (12023-03-01):
> > > /dev/nvme0n1p2 23G 21G 966M 96% /
> > > /dev/nvme0n1p6 267M 83M 166M 34% /boot
> > > /dev/nvme0n1p1 511M 5.8M 506M 2% /boot/efi
> > > /dev/nvme0n1p3 9.1G 3.2G 5.5G 37% /var
> > > /dev/nvme0n1p5 1.8G 14M 1.7G 1% /tmp
> > > /dev/nvme0n1p7 630G 116G 482G 20% /home
> > This is an excellent illustration of why creating tons of partitions
> > like it's 1999 can leave you in a difficult spot.
>
> No it is not. The /boot and /tmp partitions are superfluous, and
> /boot/efi is too large (but at a guess it was already there), but they
> would barely make a difference.

I was talking about them going to the effort of separating /home and
/var and ending up with completely inappropriate sizings. They would
have been much better off just not bothering and having it all in /.
The mere presence of all these other partitions laid out on this
disk after the one for / makes resizing things a lot harder than it
needs to be.

> On the other hand, in 2023, it is still a very good idea to separate the
> system filesystem that gets written frequently from the one that gets
> written rarely from the user data filesystem.

No argument there, but not with disk partitions as they end up hard
to resize, as seen here. OP is quite fortunate that their last
partition is one that can be most easily shrunk as that at least
gives them some easier options. I'd agree it would be a better
example of a tight spot if their last partition were one they
couldn't shrink!

Cheers,
Andy

From: Greg Wooledge
Date: Mar 1, 2023, 3:10:06 PM
Indeed. That's why I said "by default".

unicorn:~$ apt-config dump | grep Keep
Binary::apt::APT::Keep-Downloaded-Packages "0";

You probably created or modified some file under /etc/apt/apt.conf.d/
which changes the default behavior.
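For completeness, the override that makes apt keep its downloaded .debs is a one-line apt.conf fragment. This is a sketch: the file name under /etc/apt/apt.conf.d/ is arbitrary, but the option key is exactly the one shown in the apt-config dump above:

```
// /etc/apt/apt.conf.d/99keep-debs  (file name is your choice)
// Make the "apt" front-end keep downloaded .deb files in
// /var/cache/apt/archives/, as apt-get does:
Binary::apt::APT::Keep-Downloaded-Packages "true";
```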

From: Brian
Date: Mar 1, 2023, 3:20:06 PM
On Wed 01 Mar 2023 at 19:48:59 +0000, Joe wrote:

> On Wed, 1 Mar 2023 18:12:09 +0000
> Brian <ad...@cityscape.co.uk> wrote:
>
> > On Wed 01 Mar 2023 at 17:43:41 +0100, to...@tuxteam.de wrote:
> >
> > [...]
> >
> > > In a pinch, you can "sudo apt-get clean", which purges the APT
> > > package cache, which lives in /var. You didn't show us /var,
> > > which might be interesting too (/var/log, in case some logs
> > > aren't rotated properly?)
> >
> > There should not be any actual packages in /var/cache/apt.
>
> What should cause/prevent that?

apt deletes them after they are installed. Unless, of course,
other arrangements are made.

--
Brian.

From: Brian
Date: Mar 1, 2023, 3:30:06 PM
No problem. I've been caught out by this regeneration on a
space-constrained system in the past.

--
Brian.

From: Brian
Date: Mar 1, 2023, 3:30:06 PM
Of course it depends. I assumed default usage of package
management; it's been around long enough.

--
Brian.

From: David Christensen
Date: Mar 1, 2023, 4:00:10 PM
I put the vast majority of my user data on a file server. I keep my
FOSS system images small enough to fit onto "16 GB" devices (USB flash,
SDHC, HDD, SSD, etc.), to facilitate portability, migration, imaging
time, and image storage. I am the sole user of my systems, and keep my
home directory inside the root filesystem. Root drive space is an
ongoing issue for me.


I use apt-get(8) to update/ upgrade/ dist-upgrade my Debian systems on a
monthly cycle. Afterwards, I run 'apt autoremove' and 'apt clean'. I
install, remove, and/or upgrade packages at random intervals in between.
Doing autoremove and clean today:

2023-03-01 10:40:15 root@laalaa ~
# df /
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/mapper/sdb3_crypt 12084M 8486M 2964M 75% /

2023-03-01 10:40:18 root@laalaa ~
# apt autoremove
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

2023-03-01 10:40:25 root@laalaa ~
# apt clean

2023-03-01 10:40:30 root@laalaa ~
# df /
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/mapper/sdb3_crypt 12084M 8418M 3031M 74% /


So, 68E+6 bytes reclaimed.


When I want to clean deeper, I use a pipeline with du(1), sort(1),
head(1) to find likely candidates:

2023-03-01 10:44:39 root@laalaa ~
# du -mx / | sort -rn | head -n 20
8418 /
5113 /usr
3158 /usr/lib
1993 /home
1893 /home/dpchrist
1578 /usr/share
1247 /var
973 /var/log
970 /usr/lib/x86_64-linux-gnu
945 /var/log/journal/0ef88c23a8cf40c883469b3b34665f5f
945 /var/log/journal
790 /home/dpchrist/.cache
729 /home/dpchrist/.cache/thumbnails
664 /usr/lib/modules
612 /home/dpchrist/.thunderbird
596 /home/dpchrist/.thunderbird/dpchrist
396 /home/dpchrist/.cache/thumbnails/large
334 /usr/share/locale
333 /home/dpchrist/.cache/thumbnails/normal
330 /home/dpchrist/.thunderbird/dpchrist/Mail/Local
Folders/dpchrist-mail.sbd


I know that /home/dpchrist/.cache/thumbnails is managed by the Xfce
desktop (notably thunar(1)). I believe I can remove the contents by
hand without damaging my installation:

2023-03-01 10:52:34 root@laalaa ~
# ls -l /home/dpchrist/.cache/thumbnails/normal | head -n 3
total 339876
-rw-r--r-- 1 dpchrist dpchrist 26496 Sep 22 10:57
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.png
-rw-r--r-- 1 dpchrist dpchrist 6273 Jun 2 2022
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.png

2023-03-01 11:01:41 root@laalaa ~
# find /home/dpchrist/.cache -name '*.png' -delete

2023-03-01 11:01:50 root@laalaa ~
# du -mx / | sort -rn | head -n 20
7692 /
5113 /usr
3158 /usr/lib
1578 /usr/share
1266 /home
1247 /var
1167 /home/dpchrist
973 /var/log
970 /usr/lib/x86_64-linux-gnu
945 /var/log/journal/0ef88c23a8cf40c883469b3b34665f5f
945 /var/log/journal
664 /usr/lib/modules
612 /home/dpchrist/.thunderbird
597 /home/dpchrist/.thunderbird/dpchrist
334 /usr/share/locale
330 /home/dpchrist/.thunderbird/dpchrist/Mail/Local
Folders/dpchrist-mail.sbd
330 /home/dpchrist/.thunderbird/dpchrist/Mail/Local Folders
330 /home/dpchrist/.thunderbird/dpchrist/Mail
316 /usr/lib/modules/5.10.0-21-amd64
316 /usr/lib/modules/5.10.0-20-amd64


So, 726E+6 bytes reclaimed.


Looking at /home/dpchrist/.thunderbird:

2023-03-01 11:08:02 root@laalaa ~
# du -sm /home/dpchrist/.thunderbird/*
1 /home/dpchrist/.thunderbird/6rfpmfz4.default
1 /home/dpchrist/.thunderbird/Crash Reports
1 /home/dpchrist/.thunderbird/Pending Pings
596 /home/dpchrist/.thunderbird/dpchrist
1 /home/dpchrist/.thunderbird/installs.ini
16 /home/dpchrist/.thunderbird/otrar7nk.default-default
1 /home/dpchrist/.thunderbird/profiles.ini


I have a basic understanding of Thunderbird and its data directories.
6rfpmfz4.default, otrar7nk.default-default, and dpchrist are Thunderbird
profile directories. 6rfpmfz4.default was likely created the first time
I ran Thunderbird. otrar7nk.default-default was likely created when I
connected to my mail server. I restored dpchrist from backup when I
migrated from my previous daily driver computer, I configured
Thunderbird to use it, and it now contains my live profile and e-mail
files. If I want to touch any of those profile directories, or their
contents, I must use Thunderbird -- rm(1) is a bad idea (been there,
done that).


My approach to ~/.thunderbird applies to every other directory on the
system -- you have to know what application or service uses that
directory, and the proper way to do housekeeping.


Of course, it is wise to backup, archive, and/or image regularly; and
especially before going on a cleaning rampage.


I keep a plaintext sysadmin log file with console sessions for every
computer.


I check in the sysadmin log and all system configuration files to a
networked configuration management system (cvs(1)).


I think your best answer is to do a backup, wipe, fresh install, restore
cycle, taking into account the usage information you posted. I would
partition the SSD with a 1E+9 byte EFI partition, a 1E+9 byte ext4 /boot,
1E+9 bytes of random-key encrypted swap, and a 28E+9 byte LUKS ext4 root
(e.g. for "32 GB" devices). When there is a non-trivial amount of space
left on the device, I typically make a LUKS ext4 "scratch" partition/
filesystem. Your usage would indicate "home".


David

From: Charles Curley
Date: Mar 1, 2023, 4:30:06 PM
On Wed, 1 Mar 2023 14:35:17 +0100
lina <lina.l...@gmail.com> wrote:

> My / is almost full.
>
> # df -h
> Filesystem Size Used Avail Use% Mounted on
> udev 126G 0 126G 0% /dev
> tmpfs 26G 2.3M 26G 1% /run
> /dev/nvme0n1p2 23G 21G 966M 96% /

You can find the large directory culprits quickly enough with

cd /
du -h | sort -h

Then move down into likely directories as you go.

Of course, don't just delete indiscriminately; some care is in order.



--
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/

From: Jeffrey Walton
Date: Mar 1, 2023, 6:00:06 PM
On Wed, Mar 1, 2023 at 8:35 AM lina <lina.l...@gmail.com> wrote:
>
> My / is almost full.
>
> # df -h
> Filesystem Size Used Avail Use% Mounted on
> udev 126G 0 126G 0% /dev
> tmpfs 26G 2.3M 26G 1% /run
> /dev/nvme0n1p2 23G 21G 966M 96% /
> tmpfs 126G 15M 126G 1% /dev/shm
> tmpfs 5.0M 4.0K 5.0M 1% /run/lock
> /dev/nvme0n1p6 267M 83M 166M 34% /boot
> /dev/nvme0n1p1 511M 5.8M 506M 2% /boot/efi
> /dev/nvme0n1p3 9.1G 3.2G 5.5G 37% /var
> /dev/nvme0n1p5 1.8G 14M 1.7G 1% /tmp
> /dev/nvme0n1p7 630G 116G 482G 20% /home

You can probably reclaim a couple of GB by trimming systemd logs. It
should get you some room to work. Something like:

journalctl --vacuum-time=14d

I need to clear systemd logs on some IoT gadgets on occasion. They use
SDcards, though. Not the big, juicy disks you have.

Jeff

From: Felix Miata
Date: Mar 1, 2023, 6:10:07 PM
Jeffrey Walton composed on 2023-03-01 17:53 (UTC-0500):

> You can probably reclaim a couple of GB by trimming systemd logs. It
> should get you some room to work. Something like:

> journalctl --vacuum-time=14d

I limit journal size this way:
# cat /etc/systemd/journald.conf.d/local.conf
[Journal]
Storage=persistent
SystemMaxFiles=10
RuntimeMaxFiles=12
--
Evolution as taught in public schools is, like religion,
based on faith, not based on science.

Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata

From: Greg Wooledge
Date: Mar 1, 2023, 6:20:06 PM
On Wed, Mar 01, 2023 at 05:53:18PM -0500, Jeffrey Walton wrote:
> On Wed, Mar 1, 2023 at 8:35 AM lina <lina.l...@gmail.com> wrote:
> >
> > My / is almost full.
> >
> > # df -h
> > Filesystem Size Used Avail Use% Mounted on
> > udev 126G 0 126G 0% /dev
> > tmpfs 26G 2.3M 26G 1% /run
> > /dev/nvme0n1p2 23G 21G 966M 96% /
> > tmpfs 126G 15M 126G 1% /dev/shm
> > tmpfs 5.0M 4.0K 5.0M 1% /run/lock
> > /dev/nvme0n1p6 267M 83M 166M 34% /boot
> > /dev/nvme0n1p1 511M 5.8M 506M 2% /boot/efi
> > /dev/nvme0n1p3 9.1G 3.2G 5.5G 37% /var
> > /dev/nvme0n1p5 1.8G 14M 1.7G 1% /tmp
> > /dev/nvme0n1p7 630G 116G 482G 20% /home
>
> You can probably reclaim a couple of GB by trimming systemd logs. It
> should get you some room to work. Something like:
>
> journalctl --vacuum-time=14d

Aren't those stored in /var, though? There's a separate /var file system,
which isn't low on space.

From: David Wright
Date: Mar 1, 2023, 6:30:10 PM
On Wed 01 Mar 2023 at 19:53:09 (+0000), Andy Smith wrote:
> On Wed, Mar 01, 2023 at 07:53:19PM +0100, Nicolas George wrote:
> I was talking about them going to the effort of separating /home and
> /var and ending up with completely inappropriate sizings. They would
> have been much better off just not bothering and having it all in /.
> The mere presence of all these other partitions laid out on this
> disk after the one for / makes resizing things a lot harder than it
> needs to be.

I always keep /home separate from the root filesystem(s). It makes
upgrading more flexible (in-place vs reinstall), and I also typically
encrypt /home.

> > On the other hand, in 2023, it is still a very good idea to separate the
> > system filesystem that gets written frequently from the one that gets
> > written rarely from the user data filesystem.
>
> No argument there, but not with disk partitions as they end up hard
> to resize, as seen here. OP is quite fortunate that their last
> partition is one that can be most easily shrunk as that at least
> gives them some easier options. I'd agree it would be a better
> example of a tight spot if their last partition were one they
> couldn't shrink!

I don't understand why being the last partition matters. The partition
shouldn't be aware of your shrinking and growing the filesystem within
it, and the partitioner should be able to repartition a disk without
being aware of the contents of the sectors themselves.

(Mind you, I don't partition disks in units of GB, but always sectors,
and I keep a sector listing of the partitions in the disk's log.)

Cheers,
David.

From: to...@tuxteam.de
Date: Mar 2, 2023, 12:50:06 AM
On Wed, Mar 01, 2023 at 06:12:05PM -0500, Greg Wooledge wrote:
> On Wed, Mar 01, 2023 at 05:53:18PM -0500, Jeffrey Walton wrote:
> > On Wed, Mar 1, 2023 at 8:35 AM lina <lina.l...@gmail.com> wrote:
> > >
> > > My / is almost full.

[...]

> > > /dev/nvme0n1p2 23G 21G 966M 96% /

[...]

> > > /dev/nvme0n1p3 9.1G 3.2G 5.5G 37% /var

[...]

> > You can probably reclaim a couple of GB by trimming systemd logs. It
> > should get you some room to work. Something like:
> >
> > journalctl --vacuum-time=14d
>
> Aren't those stored in /var, though? There's a separate /var file system,
> which isn't low on space.

I'd hope so. I made the same mistake earlier in the thread :)

Cheers
--
t

From: lina
Date: Mar 2, 2023, 3:50:06 AM
Hi all,

Thanks for your suggestions,

I took the least-risk route and just moved things out of /opt.

I hope I can make it through the next few months; the biggest problem was created by the R-associated packages.

/dev/nvme0n1p2   23G   18G  4.5G  80% /

Thanks again, lina

From: lina
Date: Mar 2, 2023, 4:00:06 AM
:/usr/lib$ du -sh * | sort -nr | grep -v K  | head
981M R
591M rstudio
591M jvm
554M mega
538M llvm-11
343M modules
313M libreoffice
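(One caveat with that pipeline: 'sort -nr' compares only the leading number, so mixed units can come out misordered, e.g. 981M sorting above a hypothetical 1.1G entry; 'sort -h' understands the suffixes. A toy check with made-up names:)

```shell
# -n treats "1.1G" and "981M" as plain numbers, so 981 > 1.1 and the
# megabyte entry wrongly wins; -h parses the unit suffixes correctly.
# The package names below are invented for illustration.
sizes='981M R
1.1G bigpkg
554M mega'
wrong=$(printf '%s\n' "$sizes" | sort -rn | head -n 1)
right=$(printf '%s\n' "$sizes" | sort -rh | head -n 1)
echo "$wrong"
echo "$right"
```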

From: to...@tuxteam.de
Date: Mar 2, 2023, 4:20:06 AM
On Thu, Mar 02, 2023 at 09:53:29AM +0100, lina wrote:
> :/usr/lib$ du -sh * | sort -nr | grep -v K | head
> 981M R
> 591M rstudio
> 591M jvm
> 554M mega
> 538M llvm-11
> 343M modules
> 313M libreoffice

Insightful, thanks :)

Cheers
--
t

From: Jonathan Dowland
Date: Mar 2, 2023, 4:40:05 AM
On Wed, Mar 01, 2023 at 02:27:58PM -0700, Charles Curley wrote:
>You can find the large directory culprits quickly enough with
>
>cd /
>du -h | sort -h

OP demonstrated that they know how to use ncdu, which is a far superior
way of achieving the same result.

Personally I like duc for this job (and so I took over maintaining it):
https://duc.zevv.nl/

--
Please do not CC me for listmail.

👱🏻 Jonathan Dowland
jm...@debian.org
🔗 https://jmtd.net

From: Jonathan Dowland
Date: Mar 2, 2023, 4:50:06 AM
On Wed, Mar 01, 2023 at 03:15:07PM +0100, Jochen Spieker wrote:
>The program dpigs from the package debian-goodies can help you find the
>biggest debian packages you have installed. Of course you need to check
>yourself whether you need them.

It's a shame that this requires installing debian-goodies (and
associated transitive dependencies), which can be a problem when the
root filesystem is full or nearly so.

A while ago I (privately) re-wrote dpigs in standard tools for this
reason (mostly for operating inside small containers). Once I got to
feature parity I was going to submit a wishlist bug to split it out from
debian-goodies, but the last feature was awkward to implement and I
never finished it.

Anyway, for OP's purpose, what I have is good enough. Presented in case
it's useful:

--✂--✂--✂--✂--✂--✂--✂--✂--✂--✂ --✂--✂--✂--✂--✂--✂--✂--✂--✂--✂--

STATUS_FILE=/var/lib/dpkg/status
dpigs()
{
    TL=${1-10}
    awk -v RS='' '/Status:.*installed\n/' "$STATUS_FILE" \
        | grep -E '^(Installed-Size|Package)' \
        | cut -d: -f2- \
        | paste - - \
        | sort -rnk2 \
        | awk '{ print $2 "\t" $1 }' \
        | head -n "$TL" \
        | tac
}
dpigs "$@"

--✂--✂--✂--✂--✂--✂--✂--✂--✂--✂ --✂--✂--✂--✂--✂--✂--✂--✂--✂--✂--

From: Greg Wooledge
Date: Mar 2, 2023, 7:30:06 AM
On Thu, Mar 02, 2023 at 09:45:38AM +0000, Jonathan Dowland wrote:
> --✂--✂--✂--✂--✂--✂--✂--✂--✂--✂ --✂--✂--✂--✂--✂--✂--✂--✂--✂--✂--
>
> STATUS_FILE=/var/lib/dpkg/status
> dpigs()
> {
>     TL=${1-10}
>     awk -v RS='' '/Status:.*installed\n/' "$STATUS_FILE" \
>         | grep -E '^(Installed-Size|Package)' \
>         | cut -d: -f2- \
>         | paste - - \
>         | sort -rnk2 \
>         | awk '{ print $2 "\t" $1 }' \
>         | head -n "$TL" \
>         | tac
> }
> dpigs "$@"
>
> --✂--✂--✂--✂--✂--✂--✂--✂--✂--✂ --✂--✂--✂--✂--✂--✂--✂--✂--✂--✂--

I don't understand why you used sort -r, but then reversed it again with
tac at the end. You could drop both of the reversals, and just change
head to tail.
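Spelled out, that reversal-free version would look like this (a sketch; the name dpigs_simple is made up here, and the function body otherwise carries over Jonathan's pipeline rather than the real dpigs):

```shell
# Greg's simplification of Jonathan's dpigs: sort ascending and take
# the tail, instead of sort -r | head | tac.
STATUS_FILE=${STATUS_FILE:-/var/lib/dpkg/status}
dpigs_simple()
{
    TL=${1-10}
    awk -v RS='' '/Status:.*installed\n/' "$STATUS_FILE" \
        | grep -E '^(Installed-Size|Package)' \
        | cut -d: -f2- \
        | paste - - \
        | sort -nk2 \
        | awk '{ print $2 "\t" $1 }' \
        | tail -n "$TL"
}
```

Calling `dpigs_simple 5` lists the five largest installed packages, biggest last, with sizes in KiB as recorded in the Installed-Size field.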

Anyway... I wrote mine in perl, quite a few years ago (timestamp says
September 2004). There's a copy at <https://wooledge.org/~greg/ds>.
I named it before I even knew that "dpigs" existed. I can't blame past
me for not knowing... dpigs is extremely well hidden.

Jonathan Dowland

Mar 2, 2023, 8:20:06 AM
On Thu, Mar 02, 2023 at 07:25:58AM -0500, Greg Wooledge wrote:
>I don't understand why you used sort -r, but then reversed it again with
>tac at the end. You could drop both of the reversals, and just change
>head to tail.

The short answer is because I wrote all but the last "tac" several years
ago, and added the last "tac" in writing the mail, when I realised the
output was the other way around to how I'd prefer.

David Christensen

Mar 2, 2023, 5:50:06 PM
On 3/1/23 05:35, lina wrote:
> My / is almost full.
>
> # df -h
> Filesystem      Size  Used Avail Use% Mounted on
> udev            126G     0  126G   0% /dev
> tmpfs            26G  2.3M   26G   1% /run
> /dev/nvme0n1p2   23G   21G  966M  96% /


On 3/1/23 15:03, Felix Miata wrote:
> I limit journal size this way:
> # cat /etc/systemd/journald.conf.d/local.conf
> [Journal]
> Storage=persistent
> SystemMaxFiles=10
> RuntimeMaxFiles=12


On 3/1/23 14:53, Jeffrey Walton wrote:
> You can probably reclaim a couple of GB by trimming systemd logs. It
> should get you some room to work. Something like:
>
> journalctl --vacuum-time=14d
>
> I need to clear systemd logs on some IoT gadgets on occasion. They use
> SDcards, though. Not the big, juicy disks you have.


On 3/2/23 00:48, lina wrote:
> Hi all,
>
> Thanks for your suggestions,
>
> I take the least risk way, just move the things from /opt away,
>
> I hope I can make it in the next few months, the biggest problem was
> created by the R associated package.
>
> /dev/nvme0n1p2 23G 18G 4.5G 80% /
>
> Thanks again, lina


On 3/2/23 00:53, lina wrote:
> :/usr/lib$ du -sh * | sort -nr | grep -v K | head
> 981M R
> 591M rstudio
> 591M jvm
> 554M mega
> 538M llvm-11
> 343M modules
> 313M libreoffice
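As an aside, GNU sort's -h flag understands the K/M/G suffixes that du -h prints, so the `grep -v K` workaround (which also hides any entries measured in K) isn't needed. A sketch, with `biggest` being a made-up helper name:

```shell
biggest()
{
    # Largest entries under a directory: sort -h parses the K/M/G
    # suffixes that du -h emits, and -r puts the biggest first.
    du -sh "${1:-.}"/* 2>/dev/null | sort -rh | head
}
```

So `biggest /usr/lib` gives the same listing, correctly ordered across all size units.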


So, your computer has 3911M of apps in /usr/share.


My /usr/share has somewhat less, but /usr/share is also non-trivial:

2023-03-02 14:27:31 root@laalaa ~
# du -m -d 2 /usr | sort -rn | head
5113 /usr
3158 /usr/lib
1578 /usr/share
970 /usr/lib/x86_64-linux-gnu
664 /usr/lib/modules
334 /usr/share/locale
286 /usr/lib/libreoffice
252 /usr/lib/go-1.15
215 /usr/lib/firefox-esr
206 /usr/share/doc


Looking at my /var:

2023-03-02 14:29:33 root@laalaa ~
# du -m -d 2 /var | sort -rn | head
1317 /var
974 /var/log
945 /var/log/journal
194 /var/lib
123 /var/lib/apt
86 /var/cache
68 /var/cache/apt
61 /var/spool
60 /var/spool/cups
60 /var/lib/dpkg


/var/log/journal is getting big. Following Felix' and Jeffrey's
suggestions, and RTFM journald.conf(5) and journalctl(1), I can imagine:

2023-03-02 14:40:48 root@laalaa ~
# cat /etc/systemd/journald.conf.d/local.conf
[Journal]
SystemMaxFileSize=5M
SystemMaxUse=50M
RuntimeMaxFileSize=5M
RuntimeMaxUse=50M


How do I verify the syntax?


How do I make the settings live (other than rebooting, which might hang
if there is a syntax error)?


David

David Christensen

Mar 2, 2023, 5:50:06 PM
On 3/2/23 14:41, David Christensen wrote:

> On 3/2/23 00:53, lina wrote:
> > :/usr/lib$ du -sh * | sort -nr | grep -v K  | head
> > 981M R
> > 591M rstudio
> > 591M jvm
> > 554M mega
> > 538M llvm-11
> > 343M modules
> > 313M libreoffice
>
>
> So, your computer has 3911M of apps in /usr/share.

Corrections:

So, your computer has 3911M of apps in /usr/lib.


> My /usr/share has a somewhat less, but /usr/share is also non-trivial:

My /usr/lib has somewhat less


David

Felix Miata

Mar 2, 2023, 6:20:06 PM
David Christensen composed on 2023-03-02 14:41 (UTC-0800):

> How do I make the settings live (other than rebooting, which might hang
> if there is a syntax error)?

I think this is one of those things that systemctl daemon-reload does.

[quote]
So, it's a "soft" reload, essentially; taking changed configurations from filesystem and regenerating dependency trees.
[/quote]
https://unix.stackexchange.com/questions/364782/what-does-systemctl-daemon-reload-do

I know there's a way to confirm syntax, but I never remember what to search for to rediscover.

David Christensen

Mar 2, 2023, 7:20:06 PM
On 3/2/23 15:19, Felix Miata wrote:
> David Christensen composed on 2023-03-02 14:41 (UTC-0800):
>
>> How do I make the settings live (other than rebooting, which might hang
>> if there is a syntax error)?
>
> I think this is one of those things that systemctl daemon-reload does.
>
> [quote]
> So, it's a "soft" reload, essentially; taking changed configurations from filesystem and regenerating dependency trees.
> [/quote]
> https://unix.stackexchange.com/questions/364782/what-does-systemctl-daemon-reload-do
>
> I know there's a way to confirm syntax, but I never remember what to search for to rediscover.


STFW "reload journald.conf" I see:

https://unix.stackexchange.com/questions/253203/how-to-tell-journald-to-re-read-its-configuration


Try a reload:

2023-03-02 15:50:15 root@laalaa ~
# systemctl reload systemd-journald
Failed to reload systemd-journald.service: Job type reload is not
applicable for unit systemd-journald.service.


Reading the above further, I see that the first answer, third comment,
shows the same result. The fourth comment links to:

https://unix.stackexchange.com/questions/379288/reloading-systemd-journald-config


Reading that answer is not encouraging.


Oh, well. I guess I'll try a restart:

2023-03-02 15:52:49 root@laalaa ~
# systemctl restart systemd-journald

2023-03-02 16:05:08 root@laalaa ~
#


Looks okay (?).


Reboot.


Now I am wondering where to look for systemd errors (?). STFW "systemd
error message":

https://unix.stackexchange.com/questions/332886/how-to-see-error-message-in-journald

2023-03-02 16:15:14 root@laalaa ~
# journalctl -p 3 -xb --no-pager
-- Journal begins at Fri 2023-02-24 11:38:23 PST, ends at Thu 2023-03-02
16:17:01 PST. --
Mar 02 16:06:25 laalaa kernel: iwlwifi 0000:03:00.0: firmware: failed to
load iwl-debug-yoyo.bin (-2)
Mar 02 16:06:25 laalaa kernel: firmware_class: See
https://wiki.debian.org/Firmware for information about missing firmware
Mar 02 16:06:30 laalaa pipewire[1165]: Failed to receive portal pid:
org.freedesktop.DBus.Error.NameHasNoOwner: Could not get PID of name
'org.freedesktop.portal.Desktop': no such name
Mar 02 16:06:36 laalaa lightdm[1227]: gkr-pam: unable to locate daemon
control file
Mar 02 16:06:36 laalaa pipewire[1247]: Failed to receive portal pid:
org.freedesktop.DBus.Error.NameHasNoOwner: Could not get PID of name
'org.freedesktop.portal.Desktop': no such name
Mar 02 16:06:38 laalaa kernel: nouveau 0000:01:00.0: firmware: failed to
load nouveau/nvd9_fuc084 (-2)
Mar 02 16:06:38 laalaa kernel: nouveau 0000:01:00.0: firmware: failed to
load nouveau/nvd9_fuc084d (-2)
Mar 02 16:06:38 laalaa kernel: nouveau 0000:01:00.0: msvld: unable to
load firmware data
Mar 02 16:06:38 laalaa kernel: nouveau 0000:01:00.0: msvld: init failed, -19


Nothing about /etc/systemd/journald.conf.d/local.conf.


Check disk usage of /var/log:

2023-03-02 16:13:39 root@laalaa ~
# du -mx -d 1 /var/log | sort -rn
94 /var/log
65 /var/log/journal
15 /var/log/installer
1 /var/log/speech-dispatcher
1 /var/log/samba
1 /var/log/runit
1 /var/log/private
1 /var/log/lightdm
1 /var/log/cups
1 /var/log/apt


Check disk free of root:

2023-03-02 16:15:04 root@laalaa ~
# df /
Filesystem             1M-blocks   Used Available Use% Mounted on
/dev/mapper/sdb3_crypt    12084M  6876M     4573M  61% /


It appears my /etc/systemd/journald.conf.d/local.conf is working. :-)


David

songbird

Mar 2, 2023, 11:10:06 PM
Joe wrote:
...
> On unstable, I have a /var/cache/apt/archives directory, from which apt
> autoclean, which I do occasionally, recently removed about 5G of
> packages (obviously too occasionally). There's still quite a bit there
> as it was only autoclean and I prefer to keep downloads around for a
> while, as this is unstable.

yes, i prefer to keep at least one version back from the
most recent, but apt-get autoclean doesn't do that either
so once in a while i run that manually and hope for the best.

a Keep_Plus_One flag would be nice. :)
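Lacking such a flag, something like the following sketch approximates a keep-the-newest-two policy. It assumes the standard package_version_arch.deb naming in the cache, and it only prints removal candidates; nothing here is a vetted tool:

```shell
prune_candidates()
{
    # Print cached .debs older than the newest two versions of each
    # package. Review the list, then pipe it to `xargs -r rm --` to act.
    # Epochs (the %3a escape) and exotic version strings may fool sort -V.
    dir=${1:-/var/cache/apt/archives}
    tab=$(printf '\t')
    ls "$dir"/*.deb 2>/dev/null \
        | awk -F/ '{ split($NF, a, "_"); print a[1] "\t" $0 }' \
        | sort -t "$tab" -k1,1 -k2,2Vr \
        | awk -F '\t' '++seen[$1] > 2 { print $2 }'
}
```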


songbird

songbird

Mar 2, 2023, 11:10:06 PM
Andy Smith wrote:
> Hello,
>
> On Wed, Mar 01, 2023 at 07:53:19PM +0100, Nicolas George wrote:
>> Andy Smith (12023-03-01):
>> > > /dev/nvme0n1p2   23G   21G  966M  96% /
>> > > /dev/nvme0n1p6  267M   83M  166M  34% /boot
>> > > /dev/nvme0n1p1  511M  5.8M  506M   2% /boot/efi
>> > > /dev/nvme0n1p3  9.1G  3.2G  5.5G  37% /var
>> > > /dev/nvme0n1p5  1.8G   14M  1.7G   1% /tmp
>> > > /dev/nvme0n1p7  630G  116G  482G  20% /home
>> > This is an excellent illustration of why creating tons of partitions
>> > like it's 1999 can leave you in a difficult spot.
>>
>> No it is not. The /boot and /tmp partitions are superfluous, and
>> /boot/efi is too large (but at a guess it was already there), but they
>> would barely make a difference.
>
> I was talking about them going to the effort of separating /home and
> /var and ending up with completely inappropriate sizings. They would
> have been much better off just not bothering and having it all in /.
> The mere presence of all these other partitions laid out on this
> disk after the one for / makes resizing things a lot harder than it
> needs to be.

yes, but old habits can die hard... and there weren't
these wonderful gadgets called SSDs around. in my own more
recent installations i got away from too much fragmentation
and i'm glad for that as then i do have more space to work
with.

mainly the other partitions are now backup, pictures or
a spare bootable stable partition and that has been working
out well.

i do not do lvm because i don't have that much need for
that level of complexity.


>> On the other hand, in 2023, it is still a very good idea to separate the
>> system filesystem that gets written frequently from the one that gets
>> written rarely from the user data filesystem.
>
> No argument there, but not with disk partitions as they end up hard
> to resize, as seen here. OP is quite fortunate that their last
> partition is one that can be most easily shrunk as that at least
> gives them some easier options. I'd agree it would be a better
> example of a tight spot if their last partition were one they
> couldn't shrink!

i could find a lot of space by deduping backups and pictures
but that is on my TODO list for the year 2026 at the rate i'm
going. it may end up being much more time efficient to just
go out and buy another 2TB SSD and swap that for my smaller
one and call it good enough.


songbird

David Wright

Mar 3, 2023, 12:20:06 AM
apt-cacher-ng will do that, with modification to acng.conf:

# Regular expiration algorithm finds package files which are no longer listed
# in any index file and removes them after a safety period.
# This option allows to keep more versions of a package in the cache after
# the safety period is over.
#
# KeepExtraVersions: 0
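Presumably, then, uncommenting it with a value of 1 approximates the keep-one-back behaviour (untested; check the apt-cacher-ng documentation for the exact semantics):

```
KeepExtraVersions: 1
```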

Cheers,
David.

Richard Hector

Mar 3, 2023, 1:50:07 AM
On 2/03/23 06:00, Andy Smith wrote:
> Hi,
>
> On Wed, Mar 01, 2023 at 02:35:17PM +0100, lina wrote:
>> My / is almost full.
>>
>> # df -h
>> Filesystem      Size  Used Avail Use% Mounted on
>> udev            126G     0  126G   0% /dev
>> tmpfs            26G  2.3M   26G   1% /run
>> /dev/nvme0n1p2   23G   21G  966M  96% /
>> tmpfs           126G   15M  126G   1% /dev/shm
>> tmpfs           5.0M  4.0K  5.0M   1% /run/lock
>> /dev/nvme0n1p6  267M   83M  166M  34% /boot
>> /dev/nvme0n1p1  511M  5.8M  506M   2% /boot/efi
>> /dev/nvme0n1p3  9.1G  3.2G  5.5G  37% /var
>> /dev/nvme0n1p5  1.8G   14M  1.7G   1% /tmp
>> /dev/nvme0n1p7  630G  116G  482G  20% /home
>
> This is an excellent illustration of why creating tons of partitions
> like it's 1999 can leave you in a difficult spot. You are bound to
> make poor guesses as to what actual size you need, which leads later
> situations where some partitions are hardly used while others get
> full.

Of course you can also get into this situation if you had everything in
one filesystem, and ran out of space, and had to split off /home, /var
etc to save room ...

Richard

Curt

Mar 3, 2023, 10:50:05 AM
On 2023-03-02, Jonathan Dowland <jon+deb...@dow.land> wrote:
> On Thu, Mar 02, 2023 at 07:25:58AM -0500, Greg Wooledge wrote:
>>I don't understand why you used sort -r, but then reversed it again with
>>tac at the end. You could drop both of the reversals, and just change
>>head to tail.
>
> The short answer is because I wrote all but the last "tac" several years
> ago, and added the last "tac" in writing the mail, when I realised the
> output was the other way around to how I'd prefer.

You'd think you'd want the biggest pigs listed first.

But I haven't been following.


--

davidson

Mar 3, 2023, 2:10:06 PM
Yeah, it makes no sense backwards

Home
All the way
This little pig went wee wee wee
This little pig had none
This little pig had roast beef
This little pig stayed home
This little pig went to market

> But I haven't been following.
>
>
>

--
Ce qui est important est rarement urgent
et ce qui est urgent est rarement important
-- Dwight David Eisenhower

David Wright

Mar 4, 2023, 12:20:05 PM
But then when there's a drove, the biggest go AWOL
off the top of screen.

> But I haven't been following.

Cheers,
David.

Jonathan Dowland

Mar 6, 2023, 5:50:05 AM
On Sat, Mar 04, 2023 at 11:10:48AM -0600, David Wright wrote:
>But then when there's a drove, the biggest go AWOL
>off the top of screen.

Quite. I habitually alias ls to 'ls -lhrt' (and cdls() { cd "$@" && ls
-lhrt; }; alias cd=cdls), so I'm very used to only looking at the bottom
of a long list of size-sorted-ascending. But I think it's a matter of
taste.

Jonathan Dowland

Mar 6, 2023, 9:20:05 AM
On Mon, Mar 06, 2023 at 10:41:22AM +0000, Jonathan Dowland wrote:
>Quite. I habitually alias ls to 'ls -lhrt', (and cdls() { cd "$@" && ls
>-lhrt; }; alias cd=cdls) so I'm very used to only looking at the bottom
>of a long list of size-sorted-ascending.

Err, of course, that's date-sort-ascending, not size. But I hope the
point I was making got through regardless.
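The actual size-sorted equivalent swaps -t for -S; with -r the biggest entries still land at the bottom of the listing, nearest the prompt:

```shell
# -S sorts by size, largest first; -r reverses that, and -h prints
# human-readable sizes, so the biggest files end up at the bottom.
ls -lhrS
```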