On Thu, Jun 25, 2020 at 08:54:51PM +0000, Matt Drez wrote:
> This is where I am at now (below). What would be the best course of
> action. What would you if this was your mess to clean up? :)
> WARNING: Device
> /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576 has size of
> 1948924598 sectors which is smaller than corresponding PV size of
> 1949320573 sectors. Was device resized?
This is a bit worrying, and I am not sure how you ended up with that.
To clean this up, I would try ...
pvresize --setphysicalvolumesize 900g /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576
pvresize /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576
The first command force-sets the PV to a size safely below the real device
size; the second (with no size argument) grows it back to match the actual
device.
Then check that the warning is gone (pvs or vgs).
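For what it's worth, the gap in that warning is small. A quick
back-of-the-envelope check (sector counts copied from the warning above;
512-byte sectors assumed):

```shell
# Difference between what the PV metadata claims and what the device provides.
pv_sectors=1949320573
dev_sectors=1948924598
missing=$(( pv_sectors - dev_sectors ))
echo "${missing} sectors = $(( missing * 512 / 1024 / 1024 )) MiB missing"
# -> 395975 sectors = 193 MiB missing
```

So the PV thinks it has roughly 193 MiB more than the device can back,
which the pvresize dance above corrects.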
AFTER resolving the PV size problem, I would grow the thin pool.
> VG #PV #LV #SN Attr VSize VFree
> qubes_dom0 1 141 0 wz--n- 929.51g 556.70g
> [@dom0 ~]$ sudo lvs qubes_dom0/pool00
> LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
> pool00 qubes_dom0 twi-aotz-- 347.98g 86.33 14.67
The Meta% here suggests you resized the pool metadata already?
If you did NOT:
lvresize --poolmetadatasize +1G qubes_dom0/pool00
Then resize the pool itself:
lvresize -L +550G qubes_dom0/pool00
Then check it worked by running "lvs qubes_dom0/pool00" again.
The new LSize should be just under 900g (347.98g + 550G, so about 898g),
and the Meta% should still be a lot smaller than the Data%.
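If you want a more detailed look afterwards, something like this should
work (the column names are standard lvs report fields; -a additionally
lists the hidden _tdata/_tmeta sub-LVs, which is an assumption about
what you want to see):

```shell
# Show the pool and its hidden data/metadata sub-LVs with explicit columns.
sudo lvs -a -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0
```

The [pool00_tmeta] row lets you confirm the metadata LV actually grew by
the +1G from the earlier lvresize.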
> [@dom0 ~]$ sudo df -h /
> Filesystem Size Used Avail Use% Mounted on
> /dev/mapper/qubes_dom0-root 1.1T 9.7G 1.1T 1% /
This is unfortunate, but not critical.
If you really want to shrink the FS again, look up how to
shrink an ext4 FS.
Basically you will have to boot from some rescue disk or live image,
open the LUKS container + LVM, fsck and resize2fs the FS, then shrink
the LV, then grow the FS again to fit.
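If you do go that route, a rough sketch of those steps from a live image
(the LUKS UUID and LV names match your output; /dev/sdXn and the 150G/140G
sizes are placeholders -- the FS shrink target must stay well above the
space actually used):

```shell
# From a rescue/live environment -- nothing here may be mounted.
# 1. Open the LUKS container and activate the volume group.
cryptsetup open /dev/sdXn luks-55a20051-8c1a-435e-a1c8-862493f2d576  # sdXn: placeholder
vgchange -ay qubes_dom0

# 2. Check the FS, then shrink it to well below the target LV size.
e2fsck -f /dev/qubes_dom0/root
resize2fs /dev/qubes_dom0/root 140G   # placeholder: must exceed used space

# 3. Shrink the LV to the target, leaving a safety margin above the FS.
lvreduce -L 150G qubes_dom0/root

# 4. Grow the FS back to fill the (now smaller) LV exactly.
resize2fs /dev/qubes_dom0/root
```

Getting step 2 and 3 in the wrong order (or too close together) destroys
the filesystem, which is why the safety margin matters.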
If it were my system, I wouldn't bother with that.