
Resizing LVM partitions


sko...@uns.ac.rs

Jan 22, 2024, 10:00:07 AM
I am getting the following message at any boot:

"The volume "Filesystem root" has only 221.1 MB disk space remaining."

df -h says:

Filesystem Size Used Avail Use% Mounted on
udev 1.5G 0 1.5G 0% /dev
tmpfs 297M 9.0M 288M 4% /run
/dev/mapper/localhost-root 5.2G 4.7G 211M 96% /
/dev/mapper/localhost-usr 14G 12G 948M 93% /usr
tmpfs 1.5G 0 1.5G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 1.5G 0 1.5G 0% /sys/fs/cgroup
/dev/sda1 228M 133M 84M 62% /boot
/dev/mapper/localhost-tmp 2.3G 57K 2.2G 1% /tmp
/dev/mapper/localhost-var 2.7G 2.5G 55M 98% /var
/dev/mapper/localhost-home 257G 73G 172G 30% /home
tmpfs 297M 40K 297M 1% /run/user/1000

As my system has encrypted LVM, I suppose that I should reduce some of the
space used for /home, and then use it to extend the /, /usr, and /var
logical volumes. I think I did (or tried to do) something similar several
years ago, but forgot the proper procedure. Any link to a good tutorial is
welcome. Thanks.

Misko

Alain D D Williams

Jan 22, 2024, 10:20:06 AM
The shrinking of /home is the hard part. You MUST first unmount /home, then
resize the file system, then resize the logical volume.

umount /home

Find out how big it is:
tune2fs -l /dev/mapper/localhost-home    (see "Block count" and "Block size")

Change the filesystem size:
resize2fs /dev/mapper/localhost-home NEW-SIZE

Change the logical volume size:
lvreduce --size 200G /dev/mapper/localhost-home

The hard bit is working out what NEW-SIZE should be, such that you use all
of the logical volume but without making the file system size greater than
the volume size - i.e. getting the last few megabytes right.
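The last-few-megabytes arithmetic can be sketched like this (hypothetical numbers; the 200 GiB target, 4 MiB extent size, and 4 KiB filesystem block size are assumptions - check yours with `vgdisplay` and `tune2fs -l`):

```shell
# A 200 GiB LV is 51200 extents of 4 MiB; at a 4 KiB ext block size that
# is exactly 52428800 filesystem blocks.
lv_mib=$((200 * 1024))                   # target LV size, in MiB
fs_block_kib=4                           # filesystem block size, in KiB
blocks=$((lv_mib * 1024 / fs_block_kib)) # blocks that exactly fill the LV
echo "$blocks"
# then (with the fs unmounted): resize2fs /dev/mapper/localhost-home ${blocks}
```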

What I do is make NEW-SIZE 2GB smaller than I want (assuming that it still fits),
and the size I give to lvreduce 1GB smaller - so it all works, but there is
wasted space and it is not quite big enough. I then do:

lvextend --size +1G --resizefs /dev/mapper/localhost-home

I.e. get lvextend to do the maths and work it out for me.

Those who are cleverer than me might be able to tell you how to get it right
first time!

mount /home

Extending the others is easy and can be done when the system is running &
active, something like:

lvextend --size +1G --resizefs /dev/mapper/localhost-var

Finally: ensure that you have a good backup of /home before you start.

--
Alain Williams
Linux/GNU Consultant - Mail systems, Web sites, Networking, Programmer, IT Lecturer.
+44 (0) 787 668 0256 https://www.phcomp.co.uk/
Parliament Hill Computers. Registration Information: https://www.phcomp.co.uk/Contact.html
#include <std_disclaimer.h>

Stefan Monnier

Jan 22, 2024, 10:40:05 AM
> lvextend --size +1G --resizefs /dev/mapper/localhost-home
>
> Ie get lvextend to do the maths & work it out for me.
>
> Those who are cleverer than me might be able to tell you how to get it right
> first time!

lvreduce --size -50G --resizefs /dev/mapper/localhost-home

?


Stefan

Alain D D Williams

Jan 22, 2024, 10:50:06 AM
Oh, even better. It is a long time since I looked at that man page.

Does this still need to be done with the file system unmounted, or can it be
done with an active file system these days?

Greg Wooledge

Jan 22, 2024, 11:10:06 AM
On Mon, Jan 22, 2024 at 03:17:36PM +0000, Alain D D Williams wrote:
> The shrinking of /home is the hard part. You MUST first unmount /home, then
> resize the file system, then resize the logical volume.

Before doing any of that, one should check the volume group and see
if there are unallocated hunks of free space that can simply be assigned
to the root LV.

One of the fundamental *reasons* to use LVM is to leave a bunch of space
unallocated, and assign it to whatever needs it later, once the storage
needs become known. Leaving some unallocated space also allows the
use of snapshots, which are nice when doing backups.
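Checking for unallocated space is a one-liner; a sketch of pulling the free-extent count out of vgdisplay output (the sample line is hard-coded for illustration; in practice you would pipe `vgdisplay` itself through the awk):

```shell
# Extract the "Free PE" count from a vgdisplay output line.
sample='  Free  PE / Size       1871 / 7.31 GiB'
free_pe=$(echo "$sample" | awk '{print $5}')   # 5th whitespace field
echo "$free_pe free extents"
```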

I heard someone say, once, that the Debian installer will assign all of
the space in a VG during installation, if you follow its "guided" path.
This is a tragedy, if it's still true.

to...@tuxteam.de

Jan 22, 2024, 1:00:08 PM
On Mon, Jan 22, 2024 at 03:40:06PM +0000, Alain D D Williams wrote:
> On Mon, Jan 22, 2024 at 10:29:55AM -0500, Stefan Monnier wrote:
> > > lvextend --size +1G --resizefs /dev/mapper/localhost-home
> > >
> > > Ie get lvextend to do the maths & work it out for me.
> > >
> > > Those who are cleverer than me might be able to tell you how to get it right
> > > first time!
> >
> > lvreduce --size -50G --resizefs /dev/mapper/localhost-home
>
> Oh, even better. It is a long time since I looked at than man page.
>
> Does this still need to be done with the file system unmounted or can it be
> done with an active file system these days ?

You have first to shrink the file system (if it's ext4, you can use
resize2fs): note that you can only *grow* an ext4 which is mounted
(called "online resizing") -- to *shrink* it, it has to be unmounted.

Since I wasn't quite sure whether ext2's Gs are the same as LVM's
and didn't want to bother with whatever clippings each process
takes, what I did in this situation was:

- shrink (resize2fs) the file system to a size clearly below target
- resize the LVM to my target size
- resize2fs again without params, which lets it take whatever the
partition offers

Sounds complicated, but is not :-)
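The three steps above might look like this - a sketch only, with a hypothetical 200 GiB target and a 2 GiB safety margin; the commands are echoed rather than run:

```shell
target_gib=200                         # hypothetical final LV size
margin_gib=2                           # slack so the first shrink surely fits
fs_first_gib=$((target_gib - margin_gib))
# Step 1: shrink the fs clearly below target (fs must be unmounted):
echo "resize2fs /dev/mapper/localhost-home ${fs_first_gib}G"
# Step 2: shrink the LV to the real target:
echo "lvreduce --size ${target_gib}G /dev/mapper/localhost-home"
# Step 3: bare resize2fs grows the fs to fill whatever the LV offers:
echo "resize2fs /dev/mapper/localhost-home"
```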

You can shrink the partition to be smaller than the file system,
but then you'll trash it sooner or later, when two file systems
start quibbling over blocks on the fence like angry neighbours :)

Cheers
--
t

to...@tuxteam.de

Jan 22, 2024, 1:10:05 PM
On Mon, Jan 22, 2024 at 11:02:06AM -0500, Greg Wooledge wrote:
> On Mon, Jan 22, 2024 at 03:17:36PM +0000, Alain D D Williams wrote:
> > The shrinking of /home is the hard part. You MUST first unmount /home, then
> > resize the file system, then resize the logical volume.
>
> Before doing any of that, one should check the volume group and see
> if there are unallocated hunks of free space that can simply be assigned
> to the root LV.

Ah, forgot to say: "pvdisplay -m" will give you a "physical" map of
your physical volume. So you get an idea what is where and where
you find gaps.

Cheers
--
t

Gremlin

Jan 22, 2024, 1:20:06 PM
I used to use LVM and RAID, but I quit using that after finding out that
partitioning the drive and using gparted was way easier

Greg Wooledge

Jan 22, 2024, 1:50:05 PM
A volume group (VG) may be comprised of one or more physical volumes
(PV), and the free space would be counted at the VG level. So I'd suggest
"vgdisplay" instead. This tells you how many "PE" (physical extents,
aka hunks of space) are allocated, and how many are free.
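The free space follows directly from the PE arithmetic; a sketch with a 4 MiB PE size and a hypothetical free-PE count:

```shell
free_pe=1871                  # "Free PE" count from vgdisplay (hypothetical)
pe_mib=4                      # "PE Size: 4.00 MiB"
echo "$((free_pe * pe_mib)) MiB free"   # 7484 MiB, i.e. about 7.31 GiB
```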

Greg Wooledge

Jan 22, 2024, 1:50:05 PM
On Mon, Jan 22, 2024 at 01:06:16PM -0500, Gremlin wrote:
> I use to use LVM and RAID but I quit using that after finding out that
> partition the drive and using gparted was way more easier

If you allocate all the space during installation and don't leave any
to make adjustments, or to make snapshots, then you're not getting
any of the benefits of LVM. In this case, you're just doing static
partitioning with extra complexity, and your conclusion would be correct.

The key to LVM is to leave some space unallocated. Then you get *options*.

Miroslav Skoric

Jan 22, 2024, 5:20:06 PM
On 1/22/24 4:40 PM, Alain D D Williams wrote:
> On Mon, Jan 22, 2024 at 10:29:55AM -0500, Stefan Monnier wrote:
>>> lvextend --size +1G --resizefs /dev/mapper/localhost-home
>>>
>>> Ie get lvextend to do the maths & work it out for me.
>>>
>>> Those who are cleverer than me might be able to tell you how to get it right
>>> first time!
>>
>> lvreduce --size -50G --resizefs /dev/mapper/localhost-home
>
> Oh, even better. It is a long time since I looked at than man page.
>
> Does this still need to be done with the file system unmounted or can it be
> done with an active file system these days ?
>

As I need to extend & resize more than one LV (/, /usr, and /var), do they
all need to be unmounted before the operation? As I remember, it is an
ext3 system on that comp.

Miroslav Skoric

Jan 22, 2024, 5:20:06 PM
Sounds interesting. Thank you. Will see other opinions too.

Miroslav Skoric

Jan 22, 2024, 5:20:06 PM
On 1/22/24 6:59 PM, to...@tuxteam.de wrote:
> On Mon, Jan 22, 2024 at 03:40:06PM +0000, Alain D D Williams wrote:
>> On Mon, Jan 22, 2024 at 10:29:55AM -0500, Stefan Monnier wrote:
>>>> lvextend --size +1G --resizefs /dev/mapper/localhost-home
>>>>
>>>> Ie get lvextend to do the maths & work it out for me.
>>>>
>>>> Those who are cleverer than me might be able to tell you how to get it right
>>>> first time!
>>>
>>> lvreduce --size -50G --resizefs /dev/mapper/localhost-home
>>
>> Oh, even better. It is a long time since I looked at than man page.
>>
>> Does this still need to be done with the file system unmounted or can it be
>> done with an active file system these days ?
>
> You have first to shrink the file system (if it's ext4, you can use
> resize2fs: note that you can only *grow* an ext4 which is mounted
> (called "online resizing) -- to *shrink* it, it has to be unmounted.
>

I will check it again, but I think that the file systems in that LVM are
ext3. So does it require all of them to be unmounted prior to resizing?

> Since I wasn't quite sure whether ext2's Gs are the same as LVM's
> and didn't want to bother with whatever clippings each process
> takes, what I did in this situation was:
>
> - shrink (resize2fs) the file system to a size clearly below target
> - resize the LVM to my target size
> - resize2fs again without params, which lets it take whatever the
> partition offers
>

That last resize2fs (without params) would not work here, or at least it
would not work for my three file systems that need to be extended: /,
/usr, and /var. Maybe I should extend each of them separately, like this:

lvextend --size +1G --resizefs /dev/mapper/localhost-root
lvextend --size +1G --resizefs /dev/mapper/localhost-usr
lvextend --size +1G --resizefs /dev/mapper/localhost-var

?

Greg Wooledge

Jan 22, 2024, 5:30:06 PM
On Mon, Jan 22, 2024 at 10:41:57PM +0100, Miroslav Skoric wrote:
> As I need to extend & resize more than one LV in the file system (/, /usr,
> and /var), should they all need to be unmounted before the operation? As I
> remember, it is ext3 system on that comp.

Whaaaaat?? I don't think these words mean what you think they mean.

An LV is a logical volume, which is like a virtual partition. It's a
block device, like /dev/sda2. You can use an LV the same way you would
use a partition -- you can use it for swap space, or a file system, or
other purposes.

A file system is a mountable directory structure that you can put inside
a partition, or an LV. File system types include ext4, ext3, xfs, vfat,
and so on.

If your system has separately mounted file systems for /, /usr and
/var and you want to shrink ALL of them, then yes, you would need to
unmount all three of them, shrink them, then (re)boot. You can't
unmount / during normal operations, so the only ways to shrink / would
involve booting in a special way, either from some external medium,
or with specific kernel parameters. Thus, you'd typically reboot to
get back to normal operations afterward.

However, if you're in a position where you think you need to make
dramatic changes to FOUR of your mounted file systems, perhaps you
might want to consider restarting from scratch. Ponder why you have
separate file systems at all. Are they really giving you a benefit?
Have you ever filled up one of them and thought "Oh wow, I am *so*
glad I separated these file systems so I didn't fill up ___ as well!"
Or are they just giving you grief with no benefits?

to...@tuxteam.de

Jan 23, 2024, 12:50:05 AM
On Mon, Jan 22, 2024 at 10:59:55PM +0100, Miroslav Skoric wrote:

[...]

> That last resize2fs (without params) would not work here, or at least it
> would not work for my three file systems that need to be extended: / , /usr
> , and /var . Maybe to extend each of them separately like this:
>
> lvextend --size +1G --resizefs /dev/mapper/localhost-root
> lvextend --size +1G --resizefs /dev/mapper/localhost-usr
> lvextend --size +1G --resizefs /dev/mapper/localhost-var

Ah, I didn't know of lvextend's --resizefs option. It seems lvreduce
has the same. Their man pages refer to fsadm for that, which is short
on details.

Still, yes, you have to unmount ext2/ext3/ext4 to reduce their sizes
(you can "grow" them while mounted).

fsadm has an option to do that for you; no idea whether lvextend
or lvreduce can pass it to fsadm via the --resizefs option.

Cheers
--
t

Andy Smith

Jan 23, 2024, 1:40:05 AM
Hi,

On Mon, Jan 22, 2024 at 10:59:55PM +0100, Miroslav Skoric wrote:
> On 1/22/24 6:59 PM, to...@tuxteam.de wrote:
> > On Mon, Jan 22, 2024 at 03:40:06PM +0000, Alain D D Williams wrote:
> > > On Mon, Jan 22, 2024 at 10:29:55AM -0500, Stefan Monnier wrote:
> > > > lvreduce --size -50G --resizefs /dev/mapper/localhost-home
> > >
> > > Oh, even better. It is a long time since I looked at than man page.
> > >
> > > Does this still need to be done with the file system unmounted or can it be
> > > done with an active file system these days ?
> >
> > You have first to shrink the file system (if it's ext4, you can use
> > resize2fs: note that you can only *grow* an ext4 which is mounted
> > (called "online resizing) -- to *shrink* it, it has to be unmounted.
> >
>
> I will check it again but I think that file systems in that LVM are ext3. So
> it requires all of them to be unmounted prior to resizing ?

ext filesystems do need to be unmounted when shrinking them (they can
grow online, though). When you use the --resizefs (-r) option, LVM asks
you if you wish to unmount. Obviously you cannot do that on a
filesystem which is in use, which means you'll need a live or rescue
environment to do it for the root filesystem.

I'd shrink what else I could and then see where I am at. It's okay to do
them one at a time. LVM will just not do it if there's a problem.
Another thing I sometimes do in these situations is make a new LV and
move some of the things in / out into it where possible, to free up some
more space on /.

Thanks,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting

Miroslav Skoric

Jan 23, 2024, 6:40:07 PM
On 1/22/24 7:01 PM, to...@tuxteam.de wrote:
>
> Ah, forgot to say: "pvdisplay -m" will give you a "physical" map of
> your physical volume. So you get an idea what is where and where
> you find gaps.
>


"pvdisplay -m" provided some idea that there was some free space but (if
I am not wrong) not how much in MB, GB, or else.

I found gvdisplay more precise in that direction.

Miroslav Skoric

Jan 23, 2024, 6:40:07 PM
On 1/23/24 7:36 AM, Andy Smith wrote:
>
> ext filesystems do need to be unmounted when shrinking them (they can
> grow online, though). When you use the --resizefs (-r) option, LVM asks
> you if you wish to unmount. Obviously you cannot do that on a
> fiulesystme which is in use, which means you'll need a live or rescue
> environment to do it for the root filesystem.
>
> I'd shrink what else I could and then see where I am at. It's okay to do
> them one at a time. LVM will just not do it if there's a problem.
> Another thing I sometimes do in these situations is make a new LV and
> move some of the things in / out into it where possible, to free up some
> more space on /.
>

Dunno ... in any case, for some reason the rescue mode I went to by
booting from an old installation CD (dated back to Debian 6.0.1A
Squeeze!) did not see the volumes in the form /dev/mapper/localhost-home,
but rather /dev/localhost/home, so lvreduce refused to proceed.
So I tried vgdisplay. It returned ... among the others ...

...
Total PE 76249
Alloc PE / Size 74378 / 290.54 GiB
Free PE / Size 1871 / 7.31 GiB

... so I considered that 7.31 GB could be used for extending /, /usr,
and /var file systems. I rebooted machine into normal operation and did
the following:

# vgdisplay

--- Volume group ---
VG Name localhost
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 17
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 6
Open LV 6
Max PV 0
Cur PV 1
Act PV 1
VG Size <297.85 GiB
PE Size 4.00 MiB
Total PE 76249
Alloc PE / Size 74378 / <290.54 GiB
Free PE / Size 1871 / <7.31 GiB
VG UUID fbCaw1-u3SN-2HCy-w6y8-v0nK-QsFE-FETNZM

# df -h

Filesystem Size Used Avail Use% Mounted on
udev 1.5G 0 1.5G 0% /dev
tmpfs 297M 8.8M 288M 3% /run
/dev/mapper/localhost-root 5.2G 4.7G 211M 96% /
/dev/mapper/localhost-usr 14G 12G 948M 93% /usr
tmpfs 1.5G 0 1.5G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 1.5G 0 1.5G 0% /sys/fs/cgroup
/dev/sda1 228M 133M 84M 62% /boot
/dev/mapper/localhost-tmp 2.3G 55K 2.2G 1% /tmp
/dev/mapper/localhost-var 2.7G 1.9G 659M 75% /var
/dev/mapper/localhost-home 257G 63G 182G 26% /home
tmpfs 297M 32K 297M 1% /run/user/1000

# lvextend --size +1G --resizefs /dev/mapper/localhost-root
Size of logical volume localhost/root changed from 5.32 GiB (1363
extents) to 6.32 GiB (1619 extents).
Logical volume localhost/root successfully resized.
resize2fs 1.44.5 (15-Dec-2018)
Filesystem at /dev/mapper/localhost-root is mounted on /; on-line
resizing required
old_desc_blocks = 22, new_desc_blocks = 26
The filesystem on /dev/mapper/localhost-root is now 6631424 (1k) blocks
long.

# df -h (to check the new status)

Filesystem Size Used Avail Use% Mounted on
udev 1.5G 0 1.5G 0% /dev
tmpfs 297M 8.8M 288M 3% /run
/dev/mapper/localhost-root 6.2G 4.7G 1.2G 81% /
/dev/mapper/localhost-usr 14G 12G 948M 93% /usr
tmpfs 1.5G 0 1.5G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 1.5G 0 1.5G 0% /sys/fs/cgroup
/dev/sda1 228M 133M 84M 62% /boot
/dev/mapper/localhost-tmp 2.3G 55K 2.2G 1% /tmp
/dev/mapper/localhost-var 2.7G 1.9G 659M 75% /var
/dev/mapper/localhost-home 257G 63G 182G 26% /home
tmpfs 297M 32K 297M 1% /run/user/1000

# lvextend --size +1G --resizefs /dev/mapper/localhost-usr
Size of logical volume localhost/usr changed from <13.38 GiB (3425
extents) to <14.38 GiB (3681 extents).
Logical volume localhost/usr successfully resized.
resize2fs 1.44.5 (15-Dec-2018)
Filesystem at /dev/mapper/localhost-usr is mounted on /usr; on-line
resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/localhost-usr is now 3769344 (4k) blocks long.

# df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.5G 0 1.5G 0% /dev
tmpfs 297M 8.8M 288M 3% /run
/dev/mapper/localhost-root 6.2G 4.7G 1.2G 81% /
/dev/mapper/localhost-usr 15G 12G 1.9G 86% /usr
tmpfs 1.5G 0 1.5G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 1.5G 0 1.5G 0% /sys/fs/cgroup
/dev/sda1 228M 133M 84M 62% /boot
/dev/mapper/localhost-tmp 2.3G 55K 2.2G 1% /tmp
/dev/mapper/localhost-var 2.7G 1.9G 659M 75% /var
/dev/mapper/localhost-home 257G 63G 182G 26% /home
tmpfs 297M 32K 297M 1% /run/user/1000

# lvextend --size +1G --resizefs /dev/mapper/localhost-var
Size of logical volume localhost/var changed from 2.79 GiB (715
extents) to 3.79 GiB (971 extents).
Logical volume localhost/var successfully resized.
resize2fs 1.44.5 (15-Dec-2018)
Filesystem at /dev/mapper/localhost-var is mounted on /var; on-line
resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/localhost-var is now 994304 (4k) blocks long.

# df -h

Filesystem Size Used Avail Use% Mounted on
udev 1.5G 0 1.5G 0% /dev
tmpfs 297M 8.8M 288M 3% /run
/dev/mapper/localhost-root 6.2G 4.7G 1.2G 81% /
/dev/mapper/localhost-usr 15G 12G 1.9G 86% /usr
tmpfs 1.5G 0 1.5G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 1.5G 0 1.5G 0% /sys/fs/cgroup
/dev/sda1 228M 133M 84M 62% /boot
/dev/mapper/localhost-tmp 2.3G 55K 2.2G 1% /tmp
/dev/mapper/localhost-var 3.7G 1.9G 1.6G 55% /var
/dev/mapper/localhost-home 257G 63G 182G 26% /home
tmpfs 297M 32K 297M 1% /run/user/1000

# vgdisplay
--- Volume group ---
VG Name localhost
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 20
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 6
Open LV 6
Max PV 0
Cur PV 1
Act PV 1
VG Size <297.85 GiB
PE Size 4.00 MiB
Total PE 76249
Alloc PE / Size 75146 / <293.54 GiB
Free PE / Size 1103 / <4.31 GiB
VG UUID fbCaw1-u3SN-2HCy-w6y8-v0nK-QsFE-FETNZM

... seems that I still have some 4 GB of unallocated space to add
somewhere if/when needed. (I remain unsure whether the above-mentioned
7.31 GB of free space was left over from the earlier resizing some
years ago, or from the initial installation.)

In any case, what is left to do is to find the best way to take some
space from /home which is largely underused.

Thank you all for comments ...

Miroslav Skoric

Jan 23, 2024, 6:40:07 PM
On 1/22/24 5:02 PM, Greg Wooledge wrote:
> On Mon, Jan 22, 2024 at 03:17:36PM +0000, Alain D D Williams wrote:
>> The shrinking of /home is the hard part. You MUST first unmount /home, then
>> resize the file system, then resize the logical volume.
>
> Before doing any of that, one should check the volume group and see
> if there are unallocated hunks of free space that can simply be assigned
> to the root LV.
>

vgdisplay

?

It helped me for now, see my other responses to the topic ...

Miroslav Skoric

Jan 23, 2024, 6:40:07 PM
On 1/22/24 11:21 PM, Greg Wooledge wrote:
> On Mon, Jan 22, 2024 at 10:41:57PM +0100, Miroslav Skoric wrote:
>> As I need to extend & resize more than one LV in the file system (/, /usr,
>> and /var), should they all need to be unmounted before the operation? As I
>> remember, it is ext3 system on that comp.
>
> Whaaaaat?? I don't think these words mean what you think they mean.
>
> An LV is a logical volume, which is like a virtual partition. It's a
> block device, like /dev/sda2. You can use an LV the same way you would
> use a partition -- you can use it for swap space, or a file system, or
> other purposes.
>
> A file system is a mountable directory structure that you can put inside
> a partition, or an LV. File system types include ext4, ext3, xfs, vfat,
> and so on.
>

Sorry for my ignorance regarding terminology, I mix terms sometimes :-)

> If your system has separately mounted file systems for /, /usr and
> /var and you want to shrink ALL of them, then yes, you would need to
> unmount all three of them, shrink them, then (re)boot. You can't
> unmount / during normal operations, so the only ways to shrink / would
> involved booting in a special way, either with some external medium,
> or with specific kernel parameters. Thus, you'd typically reboot to
> get back to normal operations afterward.
>

Let me clarify: I did not plan to shrink all of those, but rather just
one (/home). The other three (/, /usr, and /var) will be extended using
the released space.

I managed to locate the first CD of my very old initial installation set
(squeeze). However, booting from that one did not help me to get /home
available for shrinking. See later what I did instead.

> However, if you're in a position where you think you need to make
> dramatic changes to FOUR of your mounted file systems, perhaps you
> might want to consider restarting from scratch. Ponder why you have
> separate file systems at all. Are they really giving you a benefit?
> Have you ever filled up one of them and thought "Oh wow, I am *so*
> glad I separated these file systems so I didn't fill up ___ as well!"
> Or are they just giving you grief with no benefits?
>
>

Well I belong to those who are going to exercise any possible way to
prolong the life of an existing installation, no matter how old it is.
In my case it started from squeeze a decade or more ago and gradually
upgraded during the years. And I knew that some years ago I resized the
file system because of similar reasons, and that worked at the time. But
the procedure disappeared from memory :-)

Reinstalling from scratch is always possible, of course.

Greg Wooledge

Jan 23, 2024, 6:50:06 PM
On Wed, Jan 24, 2024 at 12:29:18AM +0100, Miroslav Skoric wrote:
> Total PE 76249
> Alloc PE / Size 75146 / <293.54 GiB
> Free PE / Size 1103 / <4.31 GiB
> VG UUID fbCaw1-u3SN-2HCy-w6y8-v0nK-QsFE-FETNZM
>
> ... seems that I still have some 4 GB of unallocated space to add somewhere
> if/when needed.

Yes. Everything looks fine.

> In any case, what is left to do is to find the best way to take some space
> from /home which is largely underused.

You'll have to unmount it, which generally means you will have to reboot
in single-user mode, or from rescue media, whichever is easier.

If you aren't opposed to setting a root password (some people have *weird*
self-imposed restrictions, seriously), single-user mode (aka "rescue mode"
from the GRUB menu) is the standard way to do this. Boot to the GRUB menu,
select rescue mode, give the root password when prompted, then you should
end up with a root shell prompt. I don't recall whether /home will be
mounted at that point; if it is, unmount it. Then you should be able
to do whatever resizing is needed. When done, exit from the shell, and
the system should boot normally.

Max Nikulin

Jan 23, 2024, 9:30:06 PM
On 24/01/2024 06:29, Miroslav Skoric wrote:
> # df -h

> /dev/mapper/localhost-root  6.2G  4.7G  1.2G  81% /

Taking into account the size of kernel packages, I would allocate a few GB
more for the root partition.

dpkg -s linux-image-6.1.0-17-amd64 | grep -i size
Installed-Size: 398452
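Installed-Size is reported in KiB, so the cost per installed kernel works out to roughly (a quick check of the number above):

```shell
size_kib=398452                    # Installed-Size from dpkg -s, in KiB
echo "$((size_kib / 1024)) MiB"    # about 389 MiB per installed kernel
```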

Notice that a separate /usr is not supported by the latest systemd, which
should be part of the next Debian release.

to...@tuxteam.de

Jan 24, 2024, 12:50:05 AM
On Tue, Jan 23, 2024 at 06:42:43PM -0500, Greg Wooledge wrote:
> On Wed, Jan 24, 2024 at 12:29:18AM +0100, Miroslav Skoric wrote:
> > Total PE 76249
> > Alloc PE / Size 75146 / <293.54 GiB
> > Free PE / Size 1103 / <4.31 GiB
> > VG UUID fbCaw1-u3SN-2HCy-w6y8-v0nK-QsFE-FETNZM
> >
> > ... seems that I still have some 4 GB of unallocated space to add somewhere
> > if/when needed.
>
> Yes. Everything looks fine.
>
> > In any case, what is left to do is to find the best way to take some space
> > from /home which is largely underused.
>
> You'll have to unmount it, which generally means you will have to reboot
> in single-user mode, or from rescue media, whichever is easier.

If you log in as root in a Linux console before the graphical
thing gets started, you might get a stab at it, too. No reason
for /home to be in use if no user has a session running (I can
only vouch for a pretty minimal graphical system with no DE,
but it might work for the newfangled things, too).

Cheers
--
t

Greg Wooledge

Jan 24, 2024, 7:10:06 AM
On Wed, Jan 24, 2024 at 06:45:12AM +0100, to...@tuxteam.de wrote:
> On Tue, Jan 23, 2024 at 06:42:43PM -0500, Greg Wooledge wrote:
> > You'll have to unmount it, which generally means you will have to reboot
> > in single-user mode, or from rescue media, whichever is easier.
>
> If you log in as root in a Linux console before the graphical
> thing gets started, you might get a stab at it, too. No reason
> for /home to be in use if no user has a session running

Depends on the system. If you've got user crontabs that run @reboot
(or their systemd equivalents, if such a thing exists), those might
try to use files in $HOME. If you're running a mail transfer agent
that receives email, it might attempt deliveries, which would involve
looking for ~/.forward or similar files, and deliveries could be done
to the home directory (but not by default on Debian).

But yeah, for *most* users, what you said is probably accurate.

Andy Smith

Jan 24, 2024, 7:10:06 AM
Hi,

On Wed, Jan 24, 2024 at 12:29:18AM +0100, Miroslav Skoric wrote:
> Dunno ... in any case, for some reason the rescue mode I went to by booting
> from an old installation CD (dated back to Debian 6.0.1A Squeeze!) did not
> see partitions in form of e.g. /dev/mapper/localhost-home, but rather
> /dev/localhost/home, so lvreduce rejected to proceed.

Booting into an ancient userland like Debian 6 to do vital work on
your storage stack is completely insane. Bear in mind the amount of
changes and bug fixes that will have taken place in kernel,
filesystem and LVM tools between Debian 6 and Debian 12. You are
lucky we are not now having a very different kind of conversation.

Always try to use a rescue/live environment that is close to, or
newer than your actual system. Anything else risks catastrophe.

> So I tried vgdisplay. It returned ... among the others ...
>
> ...
> Total PE 76249
> Alloc PE / Size 74378 / 290.54 GiB
> Free PE / Size 1871 / 7.31 GiB

Summary: you managed to use some of that available space.

> In any case, what is left to do is to find the best way to take some space
> from /home which is largely underused.

You should be able to do this bit without going into a live/rescue
env. You won't be able to do it while any user is logged in, so shut
down any desktop environment and log out of all users. Log back in
as root from console and just do the lvreduce --resizefs from there.
It should ask if you are willing to unmount /home.

If there's anything left running from /home the unmount won't work
and you'll have to track down those stray processes, but should be
easily doable.

Miroslav Skoric

Jan 24, 2024, 5:10:06 PM
On 1/24/24 12:42 AM, Greg Wooledge wrote:
>
> You'll have to unmount it, which generally means you will have to reboot
> in single-user mode, or from rescue media, whichever is easier.
>
> If you aren't opposed to setting a root password (some people have *weird*
> self-imposed restrictions, seriously), single-user mode (aka "rescue mode"
> from the GRUB menu) is the standard way to do this. Boot to the GRUB menu,
> select rescue mode, give the root password when prompted, then you should
> end up with a root shell prompt. I don't recall whether /home will be
> mounted at that point; if it is, unmount it. Then you should be able
> to do whatever resizing is needed. When done, exit from the shell, and
> the system should boot normally.
>
>

I do not have root account. (I use sudo from my user account.) I think I
already tried rescue mode in the past but was not prompted for root
password.

Miroslav Skoric

Jan 24, 2024, 5:20:06 PM
Thank you. Will consider that.

Greg Wooledge

Jan 24, 2024, 5:30:07 PM
On Wed, Jan 24, 2024 at 10:43:51PM +0100, Miroslav Skoric wrote:
> I do not have root account.

Sure you do. You might not have a root *password* set.

> (I use sudo from my user account.) I think I
> already tried rescue mode in the past but was not prompted for root
> password.

You can set a root password:

sudo passwd root

That should allow you to enter single-user mode, or to login directly
as root on a text console, both of which are things that you may need
to do as a system administrator. Especially if you're trying to
unmount /home.

Andy Smith

Jan 24, 2024, 5:30:07 PM
Hello,

On Wed, Jan 24, 2024 at 09:20:47AM +0700, Max Nikulin wrote:
> Notice that separate /usr is not supported by latest systemd that should be
> a part of the next Debian release.

I don't think this is the case. What I think is not supported is a
separate /usr that is not mounted by initramfs. On Debian, if you do
nothing special, any separate /usr will be mounted by initramfs. As
far as I'm aware it is only a concern for:

people who have a /usr mount point
&& (
(do not use an initramfs)
||
(have meddled with their initramfs to stop it from mounting
/usr)
)

What systemd has decided to no longer support is what they call
"split /usr":

https://lists.freedesktop.org/archives/systemd-devel/2022-April/047673.html

They define that as "/usr that is not populated at boot time". i.e.
a /usr that would be mounted during boot from /etc/fstab or similar.
If /usr is mounted by the initramfs, that is before userland boot,
and systemd doesn't care about that. Debian does that where there is
a separate mount point for /usr.

Stefan Monnier

Jan 25, 2024, 9:20:05 AM
BTW, instead of rescue mode, you can use the initramfs to do such things
(I like to do that when I don't have a LiveUSB at hand because it lets
you manipulate *all* partitions, including /).

I.e. do something like:

- Reboot
- In Grub, edit your boot script (with `e`) to add `break=mount` to the
kernel command line.
- Use `F10` to boot with that boot script.
- You should very quickly be dropped into a fairly minimal shell,
without any password.
- None of your volumes are mounted yet. Even LVM isn't initialized yet.
- Then type something like (guaranteed 100% untested)

lvm vgchange -ay # Activate your LVM volumes.
mount /dev/mapper/localhost-root /mnt # Mount /
mount --bind /dev /mnt/dev
chroot /mnt /bin/bash
lvreduce --size -50G --resizefs /dev/mapper/localhost-home
exit
umount /mnt/dev
umount /mnt
exit


--- Stefan

Miroslav Skoric

Jan 26, 2024, 7:40:06 AM
Of course, sorry for my mixing terms. In fact I have never logged in
directly as root so I thought the account was disabled or unusable.

In any case, after setting a root password I did this:

1. Log-out as user (in GUI)
2. Ctrl-Alt-F2
3. Log-in as root (in CLI)
4. # lvreduce --size -50G --resizefs /dev/mapper/localhost-home
Do you want to unmount "/home" ? [Y|n] y
...
...
Size of logical volume localhost/home changed from 261.00 GiB (66816
extents) to 211.00 GiB (54016 extents).
Logical volume localhost/home successfully resized.
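The extent counts in that message can be cross-checked with a little arithmetic (4 MiB per extent, as vgdisplay reported earlier):

```shell
# 66816 - 54016 = 12800 extents freed; at 4 MiB per extent that is
# 50 GiB, matching the requested -50G.
echo "$(( (66816 - 54016) * 4 / 1024 )) GiB"
```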

... after reboot ...

# df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.5G 0 1.5G 0% /dev
tmpfs 297M 8.9M 288M 3% /run
/dev/mapper/localhost-root 6.2G 4.7G 1.2G 81% /
/dev/mapper/localhost-usr 15G 11G 2.7G 80% /usr
tmpfs 1.5G 0 1.5G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 1.5G 0 1.5G 0% /sys/fs/cgroup
/dev/sda1 228M 142M 74M 66% /boot
/dev/mapper/localhost-home 208G 60G 138G 31% /home
/dev/mapper/localhost-var 3.7G 2.0G 1.6G 57% /var
/dev/mapper/localhost-tmp 2.3G 57K 2.2G 1% /tmp
tmpfs 297M 32K 297M 1% /run/user/1000

# vgdisplay
--- Volume group ---
VG Name localhost
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 21
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 6
Open LV 6
Max PV 0
Cur PV 1
Act PV 1
VG Size <297.85 GiB
PE Size 4.00 MiB
Total PE 76249
Alloc PE / Size 62346 / <243.54 GiB
Free PE / Size 13903 / <54.31 GiB
VG UUID fbCaw1-u3SN-2HCy-w6y8-v0nK-QsFE-FETNZM

... and then I extended /, /usr, and /var by 1 GB each. Seems all ok.

Thank you!