Move to a Larger Disk


Matt Drez
Jun 23, 2020, 12:12:10 PM
to qubes-users
Hey Guys,

I was trying to move my Qubes installation to a bigger disk. I used Clonezilla to clone it. I was able to boot up successfully, but Qubes was still complaining that my disk is full. For some reason it doesn't want to use the free disk space that's available. I have no clue how to troubleshoot this. (Please see the attachment.)

Can someone please help?

PS: Since we're on the topic of disks, I was never able to figure out why Qubes was telling me that I only have 348 GB when my disk was 500 GB. Any ideas?



gparted2.png

dhorf-hfre...@hashmail.org
Jun 23, 2020, 5:24:21 PM
to Matt Drez, qubes-users
On Tue, Jun 23, 2020 at 04:11:56PM +0000, 'Matt Drez' via qubes-users wrote:
> I was trying to move my Qubes to a bigger disk. I used clonezilla to
> clone it. I was able to successfully boot up but Qubes was still
> complaining that my disk is full. For some reason it doesn't want to
> use the free disk space that's available. I have no clue how to
> troubleshoot this. (please see attachment)

i don't fully understand that gparted screenshot, but it looks like
you resized the partition already.

so what's missing is on one of the LVM layers of the stack.

check with "pvs", it should list a /dev/mapper/luks-something.
does that PV have the right PSize of 900+ GB?
if not, google: pvresize

next check with "vgs", it should list a qubes_dom0.
does that vg have the right VSize of 900+ GB?
if not, google: vgextend

next check with "lvs qubes_dom0/pool00".
does that pool lv have the right LSize of 900+GB?
if not, google: resize lvm thin pool
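
(a rough sketch of that whole check sequence, in case it helps -- the
luks-XXXX name below is a placeholder, use the one "ls /dev/mapper" shows
on your system, and only run the resize step if its check came up short:)

  # 1. PV check: PSize should match the new disk
  sudo pvs
  # if not, grow the PV to fill the luks device (the VG grows with it)
  sudo pvresize /dev/mapper/luks-XXXX

  # 2. VG check: VSize should match the new disk
  sudo vgs

  # 3. thin pool check: LSize should match the new disk
  sudo lvs qubes_dom0/pool00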




Matt Drez
Jun 24, 2020, 12:38:29 PM
to dhorf-hfre...@hashmail.org, qubes-users
> i don't fully understand that gparted screenshot, but it looks like
> you resized the partition already.
>
> so what's missing is on one of the LVM layers of the stack.
>
> check with "pvs", it should list a /dev/mapper/luks-something.
> does that PV have the right PSize of 900+ GB?
> if not, google: pvresize
>
> next check with "vgs", it should list a qubes_dom0.
> does that vg have the right VSize of 900+ GB?
> if not, google: vgextend
>
> next check with "lvs qubes_dom0/pool00".
> does that pool lv have the right LSize of 900+GB?
> if not, google: resize lvm thin pool

Thank you for getting back to me.

None of the three commands you gave showed 900+GB

Now, do I have to do all 3 steps in that specific order to solve the problem?

dhorf-hfre...@hashmail.org
Jun 24, 2020, 12:55:34 PM
to Matt Drez, qubes-users
On Wed, Jun 24, 2020 at 04:38:21PM +0000, Matt Drez wrote:
> None of the three commands you gave showed 900+GB

you can double-check that the partition and luks device already have the
right size with:

lsblk -d /dev/nvme* /dev/mapper/luks-*

if these show 900+GB you are good to go with the lvm resizing.

> Now, do I have to do all 3 steps in that specific order to solve the problem?

and yes, you need to do at least the "resize the pv" and
"resize the thin pool" (and its metadata device!) parts.
i expect the "vg" will resize itself when you resize the existing pv.
(so recheck with pvs/vgs/lvs after each step)
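
(putting that order into commands -- the luks-XXXX name is a placeholder
again, and the +550G is just an example, size it to whatever VFree you
actually have:)

  # grow the PV; the VG picks up the new space by itself
  sudo pvresize /dev/mapper/luks-XXXX
  # grow the thin pool metadata first, then the pool itself
  sudo lvresize --poolmetadatasize +1G qubes_dom0/pool00
  sudo lvresize -L +550G qubes_dom0/pool00
  # then recheck everything
  sudo pvs && sudo vgs && sudo lvs qubes_dom0/pool00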



Matt Drez
Jun 24, 2020, 3:50:08 PM
to dhorf-hfre...@hashmail.org, qubes-users
I've got this far but got stuck (see last output):

[@dom0 ~]$ sudo pvs
/dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576: read failed after 0 of 512 at 998053052416: Input/output error
/dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576: read failed after 0 of 512 at 998053171200: Input/output error
PV VG Fmt Attr PSize PFree

/dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576 qubes_dom0 lvm2 a-- 929.51g 557.70g


[@dom0 ~]$ sudo vgs
VG #PV #LV #SN Attr VSize VFree

qubes_dom0 1 141 0 wz--n- 929.51g 557.70g


[@dom0 ~]$ sudo lvs qubes_dom0/pool00
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
pool00 qubes_dom0 twi-aotz-- 347.98g 87.01 58.34



[@dom0 ~]$ sudo lvextend -l +100%FREE /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576

skip_dev_dir: Couldn't split up device name luks-55a20051-8c1a-435e-a1c8-862493f2d576.
"/dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576": Invalid path for Logical Volume.

Matt Drez
Jun 24, 2020, 4:03:14 PM
to Matt Drez, dhorf-hfre...@hashmail.org, qubes-users
Sorry, I was being a cotton-headed ninny-muggins. I was supposed to run lvextend against /dev/mapper/qubes_dom0-root.

Now I've done that, but for some reason lvs still won't show the full size:


[@dom0 ~]$ sudo lvextend -l +100%FREE /dev/mapper/qubes_dom0-root

WARNING: Sum of all thin volume sizes (3.37 TiB) exceeds the size of thin pool qubes_dom0/pool00 and the size of whole volume group (929.51 GiB)!
For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
Size of logical volume qubes_dom0/root changed from 557.36 GiB (142683 extents) to 1.09 TiB (285455 extents).
Logical volume qubes_dom0/root successfully resized.


[@dom0 ~]$ sudo pvs
/dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576: read failed after 0 of 512 at 998053052416: Input/output error
/dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576: read failed after 0 of 512 at 998053171200: Input/output error
PV VG Fmt Attr PSize PFree

/dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576 qubes_dom0 lvm2 a-- 929.51g 557.70g


[@dom0 ~]$ sudo vgs
VG #PV #LV #SN Attr VSize VFree

qubes_dom0 1 141 0 wz--n- 929.51g 557.70g
[Abraham@dom0 ~]$ sudo lvs qubes_dom0/pool00
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
pool00 qubes_dom0 twi-aotz-- 347.98g 87.05 58.35


dhorf-hfre...@hashmail.org
Jun 25, 2020, 2:18:47 PM
to Matt Drez, qubes-users
On Wed, Jun 24, 2020 at 07:49:56PM +0000, Matt Drez wrote:


> [@dom0 ~]$ sudo lvextend -l +100%FREE /dev/mapper/qubes_dom0-root

this means you resized your dom0 root volume.
that is probably not what you were trying to do.
you can check if that worked with "df -h /" or
"lsblk -d /dev/mapper/qubes_dom0-root" in dom0.


> [@dom0 ~]$ sudo lvextend -l +100%FREE /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576

it is dangerous to press random buttons when dealing with disk space.


> [@dom0 ~]$ sudo lvs qubes_dom0/pool00
> LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
> pool00 qubes_dom0 twi-aotz-- 347.98g 87.01 58.34

if you want to enlarge logical volume qubes_dom0/pool00, you
should enlarge qubes_dom0/pool00, not random other devices.

and it is important you enlarge the metadata volume first.
something like:
lvresize --poolmetadatasize +1G qubes_dom0/pool00
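
(to see the pool's hidden metadata and data volumes and their sizes, a
plain lvm query works -- nothing qubes-specific about it:)

  sudo lvs -a -o lv_name,lv_size qubes_dom0 | grep pool00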





Matt Drez
Jun 25, 2020, 4:05:24 PM
to dhorf-hfre...@hashmail.org, qubes-users
> > [@dom0 ~]$ sudo lvs qubes_dom0/pool00
> > LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
> > pool00 qubes_dom0 twi-aotz-- 347.98g 87.01 58.34
>
> if you want to enlarge logical volume qubes_dom0/pool00, you
> should enlarge qubes_dom0/pool00, not random other devices.
>
> and it is important you enlarge the metadata volume first.
> something like:
> lvresize --poolmetadatasize +1G qubes_dom0/pool00

That makes total sense. Well, I'm glad I didn't screw everything up, but now how do I reverse what I have done so I can achieve my goal? How can I "take away" the 100% free space I assigned to root and give it to the whole system?


Thanks for all your help and your patience with me.



dhorf-hfre...@hashmail.org
Jun 25, 2020, 4:42:29 PM
to Matt Drez, qubes-users
On Thu, Jun 25, 2020 at 08:05:11PM +0000, Matt Drez wrote:
> That makes total sense. Well, I'm glad I didn't screw everything up
> but now how do I reverse what I have done so I could achieve my goal?
> How can I "take away" the 100% free space I assigned to root and give
> it to the whole system?

i am not sure you have to do anything.

check with "vgs", does it still report 500+ VFree?
then you can just go ahead with the "resizing pool00 meta+data" parts.

if you really want to clean up the qubes_dom0-root situation...
check with "lsblk -d /dev/mapper/qubes_dom0-root".
does it say 500-something GB? all good, nothing to do.
does it say 1TB? meh, continue.
check with "df -h /", does it report 500-something GB total? good!
IF the df total is 500-something GB, you can just lvresize
qubes_dom0-root back down to something like 600GB.
pick something that is 10% or so bigger than your old disk and the df total.

DO NOT try to hit the exact size!
DO NOT try this if df says 1TB total!




Matt Drez
Jun 25, 2020, 4:54:59 PM
to dhorf-hfre...@hashmail.org, qubes-users
This is where I'm at now (below). What would be the best course of action? What would you do if this was your mess to clean up? :)

[@dom0 ~]$ sudo pvs
WARNING: Device /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576 has size of 1948924598 sectors which is smaller than corresponding PV size of 1949320573 sectors. Was device resized?
One or more devices used as PVs in VG qubes_dom0 have changed sizes.
PV VG Fmt Attr PSize PFree

/dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576 qubes_dom0 lvm2 a-- 929.51g 556.70g


[@dom0 ~]$ sudo vgs
WARNING: Device /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576 has size of 1948924598 sectors which is smaller than corresponding PV size of 1949320573 sectors. Was device resized?
One or more devices used as PVs in VG qubes_dom0 have changed sizes.
VG #PV #LV #SN Attr VSize VFree

qubes_dom0 1 141 0 wz--n- 929.51g 556.70g


[@dom0 ~]$ sudo lvs qubes_dom0/pool00
WARNING: Device /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576 has size of 1948924598 sectors which is smaller than corresponding PV size of 1949320573 sectors. Was device resized?
One or more devices used as PVs in VG qubes_dom0 have changed sizes.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
pool00 qubes_dom0 twi-aotz-- 347.98g 86.33 14.67




[@dom0 ~]$ sudo lsblk -d /dev/mapper/qubes_dom0-root

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
qubes_dom0-root 253:4 0 1.1T 0 lvm /


[@dom0 ~]$ sudo df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/qubes_dom0-root 1.1T 9.7G 1.1T 1% /



dhorf-hfre...@hashmail.org
Jun 25, 2020, 5:40:34 PM
to Matt Drez, qubes-users
On Thu, Jun 25, 2020 at 08:54:51PM +0000, Matt Drez wrote:
> This is where I'm at now (below). What would be the best course of
> action? What would you do if this was your mess to clean up? :)


> WARNING: Device
> /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576 has size of
> 1948924598 sectors which is smaller than corresponding PV size of
> 1949320573 sectors. Was device resized?

this is a bit worrying and i am not sure how you ended up with that.

to clean this up, i would try ...
pvresize --setphysicalvolumesize 900g /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576
pvresize /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576

then check that the warning is gone (pvs or vgs).
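
(a quick way to compare the PV size against the size of the underlying
device, using standard pvs report fields:)

  # pv_size and dev_size should agree once the warning is gone
  sudo pvs -o pv_name,pv_size,dev_size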

AFTER resolving the PV size problem, i would grow the thin pool.

> VG #PV #LV #SN Attr VSize VFree
> qubes_dom0 1 141 0 wz--n- 929.51g 556.70g

> [@dom0 ~]$ sudo lvs qubes_dom0/pool00
> LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
> pool00 qubes_dom0 twi-aotz-- 347.98g 86.33 14.67

this suggests you already resized the pool metadata?

if you did NOT:
lvresize --poolmetadatasize +1G qubes_dom0/pool00

then resize the pool itself:
lvresize -L +550G qubes_dom0/pool00

then check it worked by running "lvs qubes_dom0/pool00" again,
the new LSize should be 900g, and the Meta% should still be a lot
smaller than the Data%.



> [@dom0 ~]$ sudo df -h /
> Filesystem Size Used Avail Use% Mounted on
> /dev/mapper/qubes_dom0-root 1.1T 9.7G 1.1T 1% /

this is unfortunate, but not critical.
if you really want to shrink the FS again, look up how to
shrink an ext4 FS.
basically you will have to boot from some rescue disk or live image,
open the luks container + lvm, fsck and resize2fs the fs, then shrink
the LV, then grow the fs again to fit.
if it were my system, i wouldn't bother with that.
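
(for completeness, a very rough sketch of that rescue-boot dance -- the
partition name is an assumption, the sizes only need to satisfy
"filesystem smaller than the shrunken LV", and a tested backup comes
first; i would not run this casually:)

  # from a live/rescue system, open the encrypted container and the VG
  sudo cryptsetup luksOpen /dev/nvme0n1p3 qubes-luks
  sudo vgchange -ay qubes_dom0
  # shrink the filesystem below the target LV size, then the LV itself
  sudo e2fsck -f /dev/qubes_dom0/root
  sudo resize2fs /dev/qubes_dom0/root 550G
  sudo lvreduce -L 600G qubes_dom0/root
  # finally grow the filesystem back out to exactly fill the LV
  sudo resize2fs /dev/qubes_dom0/root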



Matt Drez
Jun 25, 2020, 6:29:48 PM
to dhorf-hfre...@hashmail.org, qubes-users
This worked (see below). Thank you so much. You rock!


Only one question remains:
How do I safely raise it to the maximum size (1 TB) and not just an arbitrary number that's close enough?


[@dom0 ~]$ sudo pvresize --setphysicalvolumesize 900g /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576

WARNING: Device /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576 has size of 1948924598 sectors which is smaller than corresponding PV size of 1949320573 sectors. Was device resized?
One or more devices used as PVs in VG qubes_dom0 have changed sizes.
Physical volume "/dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576" changed
1 physical volume(s) resized / 0 physical volume(s) not resized


[@dom0 ~]$ sudo pvresize /dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576

Physical volume "/dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576" changed
1 physical volume(s) resized / 0 physical volume(s) not resized


[@dom0 ~]$ sudo pvs
PV VG Fmt Attr PSize PFree

/dev/mapper/luks-55a20051-8c1a-435e-a1c8-862493f2d576 qubes_dom0 lvm2 a-- 929.32g 556.51g
[Abraham@dom0 ~]$ sudo vgs
VG #PV #LV #SN Attr VSize VFree

qubes_dom0 1 141 0 wz--n- 929.32g 556.51g


[@dom0 ~]$ sudo lvresize -L +550G qubes_dom0/pool00
WARNING: Sum of all thin volume sizes (3.34 TiB) exceeds the size of thin pools and the size of whole volume group (929.32 GiB)!
For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
Size of logical volume qubes_dom0/pool00_tdata changed from 347.98 GiB (89083 extents) to 897.98 GiB (229883 extents).
Logical volume qubes_dom0/pool00_tdata successfully resized.


[@dom0 ~]$ sudo lvs qubes_dom0/pool00
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
pool00 qubes_dom0 twi-aotz-- 897.98g 33.45 14.82


dhorf-hfre...@hashmail.org
Jun 25, 2020, 6:45:38 PM
to Matt Drez, qubes-users
On Thu, Jun 25, 2020 at 10:29:33PM +0000, 'Matt Drez' via qubes-users wrote:
> How do I safely raise it to the maximum size (1 TB) and not just an
> arbitrary number that's close enough?

which "it"?
if you bought a "1TB disk", that usualy means 931GiB.

the pvresize dance means the pv is now as big as the partition/luks.

> VG #PV #LV #SN Attr VSize VFree
> qubes_dom0 1 141 0 wz--n- 929.32g 556.51g

so 929.32g sounds about right, that's the size of your disk minus
the boot partition minus some headers.
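
(back-of-the-envelope: a marketing "1 TB" drive is 10^12 bytes, roughly
931.3 GiB; subtract the /boot and EFI partitions plus the luks/lvm
headers, about 2 GiB in total here, and you land right around the
929.32g that pvs reports.)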

> pool00 qubes_dom0 twi-aotz-- 897.98g 33.45 14.82

you can probably grow that pool some more.
check "vgs" again, the VFree is now something like 6.5g?

so run "lvresize -L +6g qubes_dom0/pool00"

i would NOT assign the last few megabytes to a device, but keep
them "free" in case i need to dig myself out of an "oops, metadata
is full because i have too many snapshots" situation.
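
(concretely, something like this -- the +6G is only an example, read the
real VFree off vgs first and keep a little of it unallocated:)

  sudo vgs -o vg_name,vg_size,vg_free --units g qubes_dom0
  sudo lvresize -L +6G qubes_dom0/pool00
  sudo lvs qubes_dom0/pool00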



Franz
Jun 26, 2020, 5:38:26 AM
to dhorf-hfre...@hashmail.org, Matt Drez, qubes-users
Next time, I would suggest a very simple and reliable route: back up the old disk, install Qubes on the new disk, restore the backup onto the new disk, and you'll have a clean new system with no problems.
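
(For reference, the dom0 tools for that route are qvm-backup and
qvm-backup-restore; a minimal sketch, with the destination path being just
an example -- both commands ask for a passphrase:)

  # on the old system: back everything up to an external drive
  qvm-backup /run/media/user/backupdrive/qubes-backup
  # on the freshly installed system: restore from the same location
  qvm-backup-restore /run/media/user/backupdrive/qubes-backup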

dhorf-hfre...@hashmail.org
Jun 26, 2020, 5:43:26 AM
to Franz, qubes-users
On Fri, Jun 26, 2020 at 06:38:09AM -0300, Franz wrote:

> Next time, I would suggest a very simple and reliable route: back up the
> old disk, install Qubes on the new disk, restore the backup onto the new
> disk, and you'll have a clean new system with no problems.

uh. even in the best case, this is a _lot_ more effort than just
copying over the disk and resizing the partition.

but yes. it is important to have backups and know how to restore them.
i recommend borgbackup for this.
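
(a minimal borg sketch, with example paths -- see the borgbackup docs for
the encryption and pruning options:)

  borg init --encryption=repokey /mnt/backup/qubes-borg
  borg create --stats /mnt/backup/qubes-borg::{now} /home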



Franz
Jun 26, 2020, 8:52:35 AM
to dhorf-hfre...@hashmail.org, qubes-users
On Fri, Jun 26, 2020 at 6:43 AM <dhorf-hfre...@hashmail.org> wrote:
> On Fri, Jun 26, 2020 at 06:38:09AM -0300, Franz wrote:
>
> > Next time, I would suggest a very simple and reliable route: back up the
> > old disk, install Qubes on the new disk, restore the backup onto the new
> > disk, and you'll have a clean new system with no problems.
>
> uh. even in the best case, this is a _lot_ more effort than just
> copying over the disk and resizing the partition.

:-) For you, Dhorf, who are a brilliant computer scientist, perhaps, or for others who have the capacity and the memory to store all this information. But for more normal users, the backup route is easy enough to do safely, without headaches and without ending up at a dead end where the only option is asking for help; it just takes some time. Not only that, the Qubes backup system is very good and safe, better than all the other systems I have tried, Windows or Linux. Over about 10 years I have restored my systems lots of times and Qubes has never failed. But Dhorf, you are a good teacher: the other time you taught me how to repair a filesystem with fsck, and I remembered it well enough to use it on another non-Qubes system and it solved the problem. Thanks.