LVM not using full partition size


jigger...@posteo.de

May 10, 2020, 11:12:13 AM
to qubes...@googlegroups.com
Hi,

I just reinstalled Qubes 4.0.3 (I had 4.0 before) on my machine because of
some issues I was having. Now my LVM has significantly less space than
before: the old installation's LVM had about 670 GiB, the new one only 584 GiB.

In the installer, I deleted all the existing Qubes partitions (I also
have a Windows partition) through the auto-partitioner with "make
additional space available", the same as I did for the previous install.

The partition size for the LVM correctly shows about 700 GiB, but the
root storage then only gets 584 GiB, and I am not able to set it higher.

Where did the rest of the space end up? Did I do something wrong during
the partitioning? Is there a way to fix this without having to reinstall
again? I am not very familiar with the whole LVM concept, so please
excuse me if I get any terminology wrong.

Best

dhorf-hfre...@hashmail.org

May 10, 2020, 1:05:20 PM
to jigger...@posteo.de, qubes...@googlegroups.com
On Sun, May 10, 2020 at 05:12:04PM +0200, jigger...@posteo.de wrote:
> The partition size for the LVM correctly shows about 700 GiB, but the
> root storage then only gets 584, and I am not able to set it higher.

the root volume is just the root filesystem for dom0, not the whole pool.
if you want to see how much space is available to lvm, you need
to check with "vgs" or "lvs qubes_dom0/pool00".


> Where did the rest of the space end up? Did I do something wrong during

for the full picture you can try "lvs -a".
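
for example, run in a dom0 terminal (a minimal sketch; sudo is needed
for the lvm tools, and the exact output columns depend on your lvm
version):

    sudo vgs                       # volume group totals, incl. VFree
    sudo lvs qubes_dom0/pool00     # thin pool size plus Data%/Meta%
    sudo lvs -a qubes_dom0         # also shows hidden tdata/tmeta volumes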



jigger...@posteo.de

May 10, 2020, 1:51:08 PM
to dhorf-hfre...@hashmail.org, qubes...@googlegroups.com
Yes, that's what I meant; I mixed up the terminology there. The pool
now has 584 GiB, while before the reinstall it had around 670 GiB. I am
wondering where the additional 84 GiB ended up (and I need it;
currently I can't restore all my VMs).

lvs -a shows

  LV               VG         Attr       LSize   Pool   Data%  Meta%
  [lvol0_pmspare]  qubes_dom0 ewi------- 600.00m
  pool00           qubes_dom0 twi-aotz-- 584.84g         99.11  56.59
  [pool00_tdata]   qubes_dom0 Twi-ao---- 584.84g
  [pool00_tmeta]   qubes_dom0 ewi-ao---- 600.00m
  root             qubes_dom0 Vwi-aotz-- 584.84g pool00   2.53
  swap             qubes_dom0 -wi-ao----  15.27g

dhorf-hfre...@hashmail.org

May 10, 2020, 2:08:04 PM
to jigger...@posteo.de, qubes...@googlegroups.com
(please don't top-post)

On Sun, May 10, 2020 at 07:51:01PM +0200, jigger...@posteo.de wrote:
> Yes, that's what I mean, mixed up terminology there. The pool has now
> 584 GiB, before the reinstall it had around 670 GiB - I am wondering
> where that additional 84 GiB ended up (and I need it, currently I can't
> restore all my VMs).

"fdisk -l" should show you how big the partition is.
"pvs" should show you how big the luks device is.
"vgs" should show you how big the volume group is.

pvs + vgs also have a "free" column that shows if there is unused
space on that level.
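
a quick sketch of checking each layer (the disk device below is
illustrative, yours may differ):

    sudo fdisk -l /dev/sda    # partition table and partition sizes
    sudo pvs                  # physical volume (the luks device), PSize/PFree
    sudo vgs                  # volume group, VSize/VFree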




jigger...@posteo.de

May 10, 2020, 2:21:13 PM
to dhorf-hfre...@hashmail.org, qubes...@googlegroups.com
Okay, there seems to be 100 GiB of unused space? How can I claim it?

[user@dom0 ~]$ sudo pvs
  PV                                                    VG         Fmt  Attr PSize   PFree
  /dev/mapper/luks-f81958da-4f7e-4146-8475-554f4ce32289 qubes_dom0 lvm2 a--  701.28g 100.00g
[user@dom0 ~]$ sudo vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  qubes_dom0   1 168   0 wz--n- 701.28g 100.00g

dhorf-hfre...@hashmail.org

May 10, 2020, 2:50:22 PM
to jigger...@posteo.de, qubes...@googlegroups.com
On Sun, May 10, 2020 at 08:21:03PM +0200, jigger...@posteo.de wrote:
> [user@dom0 ~]$ sudo vgs
>   VG         #PV #LV #SN Attr   VSize   VFree 
>   qubes_dom0   1 168   0 wz--n- 701.28g 100.00g

this means the VG is already the right size.
for resizing the pool, google "how to resize an lvm thin pool".

it is really important that you resize the metadata volume too, and
probably better to do this before resizing the data volume.

going from your earlier numbers, something like ...
lvresize --poolmetadatasize 1G qubes_dom0/pool00

then resizing the data volume ...
lvresize -L +99G qubes_dom0/pool00

(if it complains on that step, reduce to +98G or so)

i would leave any "fractional GB" leftovers free as a reserve in
case i have to dig myself out of a "metadata full" situation.

verify with pvs/vgs that the "Free" space is (mostly) gone,
then check with lvs that pool00 now is bigger and still has
a (much) lower meta% than data%.
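
putting the steps together as one sketch (sizes taken from your
output above; adjust the +99G downward if lvresize complains about
insufficient free extents):

    # grow the pool metadata volume first (600m -> 1G)
    sudo lvresize --poolmetadatasize 1G qubes_dom0/pool00
    # then grow the pool data volume by (almost) all the free space
    sudo lvresize -L +99G qubes_dom0/pool00
    # verify: PFree/VFree (mostly) gone, Meta% well below Data%
    sudo pvs && sudo vgs
    sudo lvs qubes_dom0/pool00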



jigger...@posteo.de

May 10, 2020, 4:22:09 PM
to dhorf-hfre...@hashmail.org, qubes...@googlegroups.com

On 5/10/20 8:50 PM, dhorf-hfre...@hashmail.org wrote:
> going from your earlier numbers, something like ...
> lvresize --poolmetadatasize 1G qubes_dom0/pool00
>
> then resizing the data volume ...
> lvresize -L +99G qubes_dom0/pool00
>
> (if it complains on that step, reduce to +98G or so)
>
> i would leave any "fractional GB" leftovers free as a reserve in
> case i have to dig myself out of a "metadata full" situation.
>
> verify with pvs/vgs that the "Free" space is (mostly) gone,
> then check with lvs that pool00 now is bigger and still has
> a (much) lower meta% than data%.
>
Thanks a lot, that did it! Any idea why the installer would leave that
space free in the first place?