Shrinking guest DRBD disks.


Bill Broadley

unread,
Mar 15, 2012, 7:52:14 PM3/15/12
to gan...@googlegroups.com

I needed to shrink a disk of a DRBD-using instance.  Each of my nodes has 10GB for / and the rest of the disk for LVM.

The official way seems to be gnt-backup export, then gnt-backup import, manually specifying the new smaller disk.

That seemed time-intensive, and the export writes to the host filesystem, where I didn't have enough room.

So the best I came up with, with some help from #ganeti (very helpful, thanks), is:
1) inside the guest, resize the filesystem down to the new size
2) shut down the instance, then activate disks
3) lvresize both halves of the DRBD pair (one LV on each node)
4) drbdsetup /dev/drbd<device id> -f
5) gnt-instance deactivate-disks
6) /etc/init.d/ganeti stop on the head node
7) edit config.data, changing the disk's old size to the new correct size
8) /etc/init.d/ganeti start on the head node
9) gnt-instance start
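The steps above can be sketched end to end as shell commands. Everything below is illustrative, not from the post: the instance name, the guest device, the VG/LV names, the DRBD minor, and the target size are assumptions, and exact drbdsetup syntax varies by DRBD version.

```shell
# 1) inside the guest: shrink the filesystem first (ext3/4 shown);
#    resize2fs refuses to shrink below the used space
resize2fs /dev/xvda1 20G

# 2) on the master: stop the instance, then bring up its disks without booting it
gnt-instance shutdown inst1
gnt-instance activate-disks inst1

# 3) on EACH node: shrink that node's half of the DRBD pair
lvreduce -L 20G /dev/xenvg/<instance-uuid>.disk0_data

# 4) tell DRBD about the smaller backing device (DRBD 8.x syntax; flags vary)
drbdsetup /dev/drbd0 resize

# 5) release the disks again
gnt-instance deactivate-disks inst1

# 6-8) on the master: stop ganeti, fix the recorded size, restart
/etc/init.d/ganeti stop
#    ... edit /var/lib/ganeti/config.data: set the disk's size to the new value ...
/etc/init.d/ganeti start

# 9) boot the instance
gnt-instance start inst1
```

These commands need a live Ganeti cluster, so treat them as a checklist rather than a script.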

Does anyone have something safer and/or cleaner that doesn't involve copying the entire device to a new device?






Iustin Pop

unread,
Mar 16, 2012, 4:55:26 AM3/16/12
to gan...@googlegroups.com

The only thing that could be automated more is the sequence of steps 6-8,
via Ganeti's "gnt-cluster repair-disk-sizes". I've never tried it for
shrinking, but it works for growing.

But for more automated shrinking, nope, we don't have anything :(
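For reference, that repair step is a one-liner on the master; the instance name here is illustrative, and with no arguments it checks all instances.

```shell
# recompute disk sizes from what LVM actually reports
# and record them in the cluster configuration
gnt-cluster repair-disk-sizes inst1
```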

regards,
iustin

Nilshar

unread,
Dec 26, 2012, 6:06:41 AM12/26/12
to gan...@googlegroups.com
Hello,

I wanted to shrink an existing partition, so I tried the above, but so far, no luck :/
I resized the ext3 filesystem with no problem, then used lvreduce (I do not use drbd for that disk), but after lvreduce, when I try kpartx -a it says:

kpartx -v -a /dev/xenvg/2eb0d7a6-6173-4e46-a9b9-2663994716d9.disk0 
device-mapper: resume ioctl failed: Invalid argument
create/reload failed on xenvg-2eb0d7a6--6173--4e46--a9b9--2663994716d9.disk0p1

dmesg says :
[1067185.624489] device-mapper: table: 254:9: dm-6 too small for target: start=1, len=209712509, dev_size=104857600

So it seems that kpartx still wants a 100G disk...
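Those dmesg numbers decode directly: device-mapper reports lengths in 512-byte sectors, so the partition table still describes the old ~100 GiB partition while the reduced LV is only 50 GiB. A quick sanity check (the figures are copied from the dmesg line above):

```python
SECTOR = 512  # device-mapper lengths are in 512-byte sectors

def sectors_to_gib(sectors: int) -> float:
    """Convert a sector count to GiB."""
    return sectors * SECTOR / 2**30

part_len = 209712509   # "len=": partition 1 as recorded in the partition table
lv_size = 104857600    # "dev_size=": the LV after lvreduce

print(round(sectors_to_gib(part_len), 2))  # ~100 GiB: the old partition size
print(sectors_to_gib(lv_size))             # 50.0 GiB: the shrunken LV
```

The partition (as the table describes it) no longer fits on the device, which is exactly the "dm-6 too small for target" complaint.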

If I lvresize back to 100G, I can use kpartx again.
Am I missing something?

Guido Trotter

unread,
Dec 26, 2012, 7:16:52 AM12/26/12
to gan...@googlegroups.com
Yes, if you use kpartx you have 3 levels:

1. filesystem level (which you resized)
2. partition table level (which you need to resize via parted, cfdisk,
fdisk, etc.) on the main device
(/dev/xenvg/2eb0d7a6-6173-4e46-a9b9-2663994716d9.disk0)
3. LVM level (which you did)

So you're missing step 2.
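A hedged sketch of the missing step 2, using parted on the whole-disk device from the post. The 50GiB target is an assumption based on the dmesg output, `resizepart` needs a reasonably recent parted (older releases used `resize` instead), and when shrinking the order matters: filesystem first, then partition table, then LV.

```shell
# shrink partition 1 inside the whole-disk LV so the partition table
# matches the smaller device
parted /dev/xenvg/2eb0d7a6-6173-4e46-a9b9-2663994716d9.disk0 resizepart 1 50GiB

# re-map the partitions; with the table consistent this should now succeed
kpartx -v -a /dev/xenvg/2eb0d7a6-6173-4e46-a9b9-2663994716d9.disk0
```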

Best of luck,

Guido




--
Guido Trotter
Ganeti engineering
Google Germany

Nilshar

unread,
Dec 26, 2012, 7:41:02 AM12/26/12
to gan...@googlegroups.com
Damnit... of course !
Thanks Guido :) works perfectly.. !

Sean Reifschneider

unread,
Jun 27, 2016, 6:12:36 PM6/27/16
to ganeti, bi...@broadley.org
Thanks for this post. I was able to use it to shrink a volume I had accidentally grown to >2TB, which my MBR partition table couldn't handle.  It looks like repair-disk-sizes will take what is in LVM and record it in the config, so editing the config may not have been necessary.  I also was not using DRBD for this instance, so the drbdsetup step wasn't necessary, nor was activating the disks.