
moving LVM logical volumes to new disks


lee

Nov 12, 2014, 4:30:04 PM
Hi,

what's the best way to move existing logical volumes or a whole volume
group to new disks?

The target disks cannot be installed at the same time as the source
disks. I will have to make some sort of copy over the network to
another machine, remove the old disks, install the new disks and put the
copy in place.

Using dd doesn't seem to be a good option because extent sizes in the
old VG can be different from the extent sizes used in the new VG.

The LVs contain VMs. The VMs can be shut down during the migration.
It's not possible to make snapshots because the VG is full.

New disks will be 6x1TB RAID-5, old ones are 2x74GB RAID-1 on a
ServeRaid 8k. No more than 6 discs can be installed at the same time.


--
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us. Finally, this fear has become reasonable.


--
To UNSUBSCRIBE, email to debian-us...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listm...@lists.debian.org
Archive: https://lists.debian.org/874mu4c...@yun.yagibdah.de

Don Armstrong

Nov 12, 2014, 5:40:03 PM
On Wed, 12 Nov 2014, lee wrote:
> what's the best way to move existing logical volumes or a whole volume
> group to new disks?
>
> The target disks cannot be installed at the same time as the source
> disks. I will have to make some sort of copy over the network to
> another machine, remove the old disks, install the new disks and put the
> copy in place.
>
> The LVs contain VMs. The VMs can be shut down during the migration.
> It's not possible to make snapshots because the VG is full.
>
> New disks will be 6x1TB RAID-5, old ones are 2x74GB RAID-1 on a
> ServeRaid 8k. No more than 6 discs can be installed at the same time.

You can remove one of the RAID-1 drives, install 5 of the 1T drives, and
start both raids in degraded mode temporarily. Once you've done that,
add the new PVs to the VG, and pvmove.

Alternatively, you can start with three drives in raid-5, and then grow
the array out to the additional three drives, once you've done the
migration, or have two different raid-5 arrays in the same vg.

Alternatively, you can use an external enclosure to house the RAID1 or
RAID5 temporarily. USB is slow, but workable.

--
Don Armstrong http://www.donarmstrong.com

There is no more concentrated form of evil
than apathy.



Igor Cicimov

Nov 12, 2014, 5:40:05 PM

How about this: sdf is one of the new disks, sdb is the old one that needs replacement.

Attach sdf and add it to the vg

# pvcreate /dev/sdf
# vgextend vg1 /dev/sdf

Move the data

# pvmove /dev/sdb /dev/sdf

Remove the old disk from vg1

# vgreduce vg1 /dev/sdb

Take out sdb, attach the new drive and repeat the procedure. There is no need to unmount the filesystem for pvmove. Having a backup is, of course, recommended.

Karl E. Jorgensen

Nov 12, 2014, 6:00:05 PM
Hi

On Wed, Nov 12, 2014 at 10:09:43PM +0100, lee wrote:
> Hi,
>
> what's the best way to move existing logical volumes or a whole volume
> group to new disks?
>
> The target disks cannot be installed at the same time as the source
> disks. I will have to make some sort of copy over the network to
> another machine, remove the old disks, install the new disks and put the
> copy in place.

Having to do this over the network makes it slightly
complicated.... But not impossible.

> Using dd doesn't seem to be a good option because extent sizes in the
> old VG can be different from the extent sizes used in the new VG.
>
> The LVs contain VMs. The VMs can be shut down during the migration.
> It's not possible to make snapshots because the VG is full.

Ok.

> New disks will be 6x1TB RAID-5, old ones are 2x74GB RAID-1 on a
> ServeRaid 8k. No more than 6 discs can be installed at the same time.

Assuming that:

* both machines can be online at the same time

* there is a good network connection between them. The fatter the pipe
the better

* both run Debian. Obviously

* The VMs are happy to (eventually) migrate to the new hardware box

Then there is a sneaky way, which can help minimize the downtime: LVM
and network block devices (or iSCSI. Either can work). Chunky,
slightly hacky, but worth considering.

The basic idea is:

* On the receiving machine, prepare the disks. Export the *whole*
disks (or rather: the RAID device(s)) using nbd, xnbd or iSCSI.

* On the sending machine: attach the disks over the network, using nbd
client, xnbd client or iSCSI.

* On the sending machine: 'pvcreate' the disks, and 'vgextend' them
into your volume group. So you end up with a volume group that spans
*both* machines. Some of the PVs will be accessed over the network,
but LVM doesn't care. Obviously, the I/O characteristics of the
"remote" disks will be a lot worse.

* Avoid running any LVM commands on the receiving machine just yet -
if you did, it would see a partial volume group and probably
complain like mad. It may even update the metadata on the PVs it
*can* see to say that the "other" PVs are unavailable, which is
tricky to fix.

* On the sending machine, use 'pvmove' to move each LV to the new
disks of your choice. This will send them over the network. This
doesn't *require* any downtime on the VMs, but be prepared for slow
I/O on them, as they will now (increasingly) be accessing stuff over
the network.

* Once all your LVs have been moved, shut down the VMs on the sending
machine and quiesce everything. You want to 'deactivate' the LVs with:

lvchange -an vgname/lvname

This will (amongst other things) remove the entries in /dev for the
LVs, and make them unavailable.

* On the sending machine, use 'vgsplit' to split the volume group into
two volume groups. The remote disks should be moved into a new
volume group.

* On the sending machine: "sync;sync;sync". Just for paranoia's
sake. Paranoia is good, and not a vice.

* On the receiving machine, run 'pvscan', 'vgscan' and similar: This
should now see a complete VG.

* shut down the nbd client/xnbd client/iscsi client on the sending
machine. You don't want the two machines accessing the same
disks. Therein lies madness.

* Activate the LVs on the receiving machine ("lvchange -ay"), copy the
VM definitions across (exactly how depends on your virtualisation)

* Start up the VMs. Pray that they have network etc as before.

* Profit.

I'm sure that there are (hopefully minor) details here that I've
forgotten (backups?), but it should give you the general idea.
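The walkthrough above, reduced to commands. Everything here is a
placeholder sketch, not from the thread: the device names (/dev/md0,
/dev/nbd0, /dev/sda2), the port, and the VG names are assumptions, and
it presumes plain nbd rather than xnbd or iSCSI.

```shell
# On the receiving machine: export the whole RAID device over nbd.
# /dev/md0 and port 10809 are assumed names; adjust to taste.
nbd-server 10809 /dev/md0

# On the sending machine: attach the exported device as /dev/nbd0 ...
modprobe nbd
nbd-client receiving-host 10809 /dev/nbd0

# ... then make it a PV and stretch the VG across both machines.
pvcreate /dev/nbd0
vgextend vg_guests /dev/nbd0
pvmove /dev/sda2 /dev/nbd0     # /dev/sda2 = old local PV (placeholder)

# Once everything has moved: deactivate the LVs, split the remote PV
# off into a new VG, sync, and detach the network device.
lvchange -an vg_guests
vgsplit vg_guests vg_new /dev/nbd0
sync
nbd-client -d /dev/nbd0
```

On the receiving machine a pvscan/vgscan should then see vg_new as a
complete volume group, as described above.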

Bottom line: Accessing disks over the network is perfectly possible,
if you are willing to live with the added latency. Not a good idea for
database servers or other IO intensive VMs.

It may be a better alternative than extended downtime. As an
administrator, you get to make that trade-off.

Hope this helps
--
Karl E. Jorgensen



lee

Nov 13, 2014, 3:10:06 PM
Don Armstrong <d...@debian.org> writes:

> On Wed, 12 Nov 2014, lee wrote:
>> what's the best way to move existing logical volumes or a whole volume
>> group to new disks?
>>
>> The target disks cannot be installed at the same time as the source
>> disks. I will have to make some sort of copy over the network to
>> another machine, remove the old disks, install the new disks and put the
>> copy in place.
>>
>> The LVs contain VMs. The VMs can be shut down during the migration.
>> It's not possible to make snapshots because the VG is full.
>>
>> New disks will be 6x1TB RAID-5, old ones are 2x74GB RAID-1 on a
>> ServeRaid 8k. No more than 6 discs can be installed at the same time.
>
> You can remove one of the RAID-1 drives, install 5 of the 1T drives, and
> start both raids in degraded mode temporarily. Once you've done that,
> add the new PVs to the VG, and pvmove.
>
> Alternatively, you can start with three drives in raid-5, and then grow
> the array out to the additional three drives, once you've done the
> migration, or have two different raid-5 arrays in the same vg.
>
> Alternatively, you can use an external enclosure to house the RAID1 or
> RAID5 temporarily. USB is slow, but workable.

Yes, I thought about something like this. I'm not sure whether I can
actually grow the RAID-5. I could create the new RAID-5 with 6 disks
and then remove one to run both arrays in degraded mode to copy things
over.

However, I currently have dom0 on a non-LVM partition, and I want to
make a single LVM partition from the new RAID-5 with several LVs. The
largest LV will be for data. I want to have more room for VMs, too, and
I want to be able to increase the size of the existing LVs --- either in
the future or while I am at it.

I want to convert dom0 to a LV, and I'm not sure whether it's better to
re-install dom0 on LV or to copy and convert the existing one somehow.

I'm also not sure what limits copying with dd would introduce. I might
be better off making copies of everything over the network --- with the
added benefit of having a backup which I currently don't have --- and
then copying everything back with something like 'cp -a'. Using 'cp -a'
worked fine in the past, and I was able to boot a rescue system to
install grub.

I've never done that with VMs in LVs, though. When I boot a rescue
system, can I still install grub in each of the LVs?

I also have a spare 1TB disk which I can use in another machine
to make the backups to.


--
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us. Finally, this fear has become reasonable.



lee

Nov 13, 2014, 3:10:07 PM
"Karl E. Jorgensen" <ka...@jorgensen.org.uk> writes:

> Hi
>
> On Wed, Nov 12, 2014 at 10:09:43PM +0100, lee wrote:
>> Hi,
>>
>> what's the best way to move existing logical volumes or a whole volume
>> group to new disks?
>>
>> The target disks cannot be installed at the same time as the source
>> disks. I will have to make some sort of copy over the network to
>> another machine, remove the old disks, install the new disks and put the
>> copy in place.
>
> Having to do this over the network makes it slightly
> complicated.... But not impossible.
>
>> Using dd doesn't seem to be a good option because extent sizes in the
>> old VG can be different from the extent sizes used in the new VG.
>>
>> The LVs contain VMs. The VMs can be shut down during the migration.
>> It's not possible to make snapshots because the VG is full.
>
> Ok.
>
>> New disks will be 6x1TB RAID-5, old ones are 2x74GB RAID-1 on a
>> ServeRaid 8k. No more than 6 discs can be installed at the same time.
>
> Assuming that:
>
> * both machines can be online at the same time
>
> * there is a good network connection between them. The fatter the pipe
> the better
>
> * both run Debian. Obviously

One is running Fedora. I could install Debian on it if that helps.

> * The VMs are happy to (eventually) migrate to the new hardware box

The server remains the same, only the disks are being replaced. The
ServeRaid 8k doesn't like the WD20EARS (and they are slow), so I'm
replacing them.

> Then there is a sneaky way, which can help minimize the downtime: LVM
> and network block devices (or iSCSI. Either can work). Chunky,
> slightly hacky, but worth considering.
>
> The basic idea is:
>
> * On the receiving machine, prepare the disks. Export the *whole*
> disks (or rather: the RAID device(s)) using nbd, xnbd or iSCSI.

I'd have to attach the disks to a Smart Array P800, and the ServeRaid 8k
won't be able to read them. I'd have to use the 1TB spare disk instead
to move the volumes to. Once they're moved over, I could replace the
disks in the sending machine and move the LVs back the other way round.

> * On the sending machine: attach the disks over the network, using nbd
> client, xnbd client or iSCSI.

Hm, I need to learn about that ...

Looking at things, AoE (ATA over Ethernet) seems to be a good idea ---
if that works with hardware RAID and SATA.

> [...]
>
> Hope this helps

Yes, thank you :) It's a very interesting idea.

Fortunately, downtime isn't an issue. I also have a 32GB USB stick, and
all the LVs are smaller than 32GB.

Since there seems to be some agreement that it would be best to use
pvmove, I think I could, one after the other, move all the LVs to the
USB stick with pvmove, plug the USB stick into the other machine, move
the LVs onto a hard disk in the other machine, replace disks and move
the LVs back the same way.

I can keep the VMs shut down while doing this, which allows me to just
move the USB stick rather than moving over the network. However, over
the network might be more reliable, and I could move all VMs at once
with minimal downtime. Hmmm ...


--
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us. Finally, this fear has become reasonable.



lee

Nov 13, 2014, 3:10:07 PM
Afaik it's rather difficult, if not impossible, to convince a ServeRaid
8k controller to present disks as JBOD.

If I could somehow install a 7th disk, moving would be much easier ...
The only USB disk I have is broken.


--
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us. Finally, this fear has become reasonable.



Patrick Ouellette

Nov 13, 2014, 4:30:06 PM
On Thu, Nov 13, 2014 at 08:52:58PM +0100, lee wrote:
>
> Fortunately, downtime isn't an issue. I also have a 32GB USB stick, and
> all the LVs are smaller than 32GB.
>
> Since there seems to be some agreement that it would be best to use
> pvmove, I think I could, one after the other, move all the LVs to the
> USB stick with pvmove, plug the USB stick into the other machine, move
> the LVs onto a hard disk in the other machine, replace disks and move
> the LVs back the same way.
>
> I can keep the VMs shut down while doing this, which allows me to just
> move the USB stick rather than moving over the network. However, over
> the network might be more reliable, and I could move all VMs at once
> with minimal downtime. Hmmm ...
>

Call me unimaginative or simple, but what about tar or rsync??

Just backup to the other host on the network; swap around drives
as needed; create new volume groups; restore from other host.
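A minimal sketch of that file-level route. The hostnames, mount
points, sizes and LV names below are placeholders, not from the
thread, and it only covers an LV with a filesystem directly inside it
(no embedded partition table):

```shell
# On the old machine: mount one LV's filesystem read-only and copy it
# file-by-file to the backup host, preserving hard links, ACLs, xattrs.
mount -o ro /dev/vg_guests/lv_data /mnt/lv
rsync -aHAX /mnt/lv/ backuphost:/backup/lv_data/
umount /mnt/lv

# Later, on the rebuilt machine: create the new LV, make a fresh
# filesystem, and restore from the backup host.
lvcreate -L 10G -n lv_data vg_new
mkfs.ext4 /dev/vg_new/lv_data
mount /dev/vg_new/lv_data /mnt/lv
rsync -aHAX backuphost:/backup/lv_data/ /mnt/lv/
```

Unlike pvmove or dd, this copies only the live files, so the new LV
can be any size large enough to hold the data.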

Pat



lee

Nov 15, 2014, 4:10:05 PM
Patrick Ouellette <pou...@debian.org> writes:

> On Thu, Nov 13, 2014 at 08:52:58PM +0100, lee wrote:
>>
>> Fortunately, downtime isn't an issue. I also have a 32GB USB stick, and
>> all the LVs are smaller than 32GB.
>>
>> Since there seems to be some agreement that it would be best to use
>> pvmove, I think I could, one after the other, move all the LVs to the
>> USB stick with pvmove, plug the USB stick into the other machine, move
>> the LVs onto a hard disk in the other machine, replace disks and move
>> the LVs back the same way.
>>
>> I can keep the VMs shut down while doing this, which allows me to just
>> move the USB stick rather than moving over the network. However, over
>> the network might be more reliable, and I could move all VMs at once
>> with minimal downtime. Hmmm ...
>>
>
> Call me unimaginative or simple, but what about tar or rsync??
>
> Just backup to the other host on the network; swap around drives
> as needed; create new volume groups; restore from other host.

Because it's too simple? ;)

Why didn't I think of rsync? I'm using it for backups all the time.

How do I make the VMs bootable after copying them back?


--
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us. Finally, this fear has become reasonable.



Patrick Ouellette

Nov 16, 2014, 7:30:03 PM
On Sat, Nov 15, 2014 at 09:53:40PM +0100, lee wrote:
> Patrick Ouellette <pou...@debian.org> writes:
>
> > On Thu, Nov 13, 2014 at 08:52:58PM +0100, lee wrote:
> >>
> >> Fortunately, downtime isn't an issue. I also have a 32GB USB stick, and
> >> all the LVs are smaller than 32GB.
> >>
> >> Since there seems to be some agreement that it would be best to use
> >> pvmove, I think I could, one after the other, move all the LVs to the
> >> USB stick with pvmove, plug the USB stick into the other machine, move
> >> the LVs onto a hard disk in the other machine, replace disks and move
> >> the LVs back the same way.
> >>
> >> I can keep the VMs shut down while doing this, which allows me to just
> >> move the USB stick rather than moving over the network. However, over
> >> the network might be more reliable, and I could move all VMs at once
> >> with minimal downtime. Hmmm ...
> >>
> >
> > Call me unimaginative or simple, but what about tar or rsync??
> >
> > Just backup to the other host on the network; swap around drives
> > as needed; create new volume groups; restore from other host.
>
> Because it's too simple? ;)
>
> Why didn't I think of rsync? I'm using it for backups all the time.
>
> How do I make the VMs bootable after copying them back?
>

Maybe try a SuperGrub Boot Disk (or USB drive) if you are using GRUB.

I would probably install a minimal system on the new disks so they are
bootable, create the new volumes, rsync, move to the desired machine and
boot with the SuperGrub Boot Disk if the machine didn't just boot from the
drives on its own.

Pat



lee

Nov 18, 2014, 6:50:04 PM
Hi,

so far, I managed to pvmove an LV to my USB stick and from there to a
backup disk in another machine. Doing so, I found that I can split off
LVs from a volume group and that this inevitably creates a new VG. That
leaves you stuck because it's impossible to move an LV from one VG to
another, and it's also impossible to merge multiple VGs into one VG :(
How stupid is that??

This means that I must move the remaining LVs from the server to the USB
stick all at once because otherwise I'd end up with a number of VGs,
each representing one LV, rather than a number of LVs within one VG.

I could move the whole VG with all the remaining LVs to the USB stick if
the VG wasn't too large for the stick by a few extents (about 1GB).
Hence I need to shrink LVs of the VG before I can move the whole VG to
the USB stick.

I'm reading that I could first shrink the file system in an LV, then
lvreduce the LV. However, what's inside each LV is two partitions, a
1GB swap partition and another partition with ext4.
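For reference, the sequence being read about applies to an ext4
filesystem sitting directly in an LV, with no partition table in
between. The LV name and sizes here are purely illustrative, and it
does not cover the partitioned-LV layout described next:

```shell
# Shrink the filesystem to well below the target LV size first, then
# shrink the LV, then let the filesystem grow back to fill the LV.
umount /dev/vg_guests/lv_example     # lv_example is a placeholder name
e2fsck -f /dev/vg_guests/lv_example  # resize2fs requires a clean check
resize2fs /dev/vg_guests/lv_example 8G
lvreduce -L 9G /dev/vg_guests/lv_example
resize2fs /dev/vg_guests/lv_example  # grow fs to fit the reduced LV
```

Ordering matters: shrinking the LV below the filesystem size destroys
the filesystem, which is why the filesystem is shrunk first with a
safety margin.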

This shows up like this:


root@heimdall:~# lvdisplay
[...]
  --- Logical volume ---
  LV Path                /dev/vg_guests/lv_gulltop
  LV Name                lv_gulltop
  VG Name                vg_guests
  LV UUID                rKVBWY-vycv-KEsl-CQYL-iAbV-ipqL-WtDuWy
  LV Write Access        read/write
  LV Creation host, time heimdall, 2014-06-10 11:35:45 +0200
  LV Status              available
  # open                 0
  LV Size                10.25 GiB
  Current LE             656
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:4


root@heimdall:~# fdisk -l
[...]
Device                            Boot   Start      End  Blocks  Id System
/dev/mapper/vg_guests-lv_gulltop1         2048  1953791  975872  82 Linux swap / Solaris
/dev/mapper/vg_guests-lv_gulltop2 *    1953792 21493759 9769984  83 Linux


Each LV was partitioned from within its VM. Inside the VM, the above
"disks" show up as /dev/xvda1 and /dev/xvda2. The VMs can still be
started (some are running atm).

The volume group now has 6GB free I can use to work with.


Can I remove or shrink the swap partition contained in each LV (instead
of the ext4 file system) to then shrink the whole VG so that it will fit
onto the USB stick?

This should somehow be possible because the only difference to shrinking
the file system and then the partition holding the file system is that
the swap partition is at the beginning of the LV while the partition
with the file system is at the end of the LV. In any case, with the
swap partition removed or shrunk, less space would be occupied by what
the LV contains. So lvreduce would have to shrink the LV at its
beginning rather than at its end --- and it needs a way to figure out
where to shrink an LV anyway. In the end, the LVs in the VG must require
a few extents fewer so that I can pvmove them.

Or what do I do (letting aside making disks available over the network
with iSCSI or the like)?


(Note to self: Do not partition LVs but create LVs for swap partitions
instead.)

--
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us. Finally, this fear has become reasonable.



lee

Nov 18, 2014, 6:50:05 PM
Patrick Ouellette <pou...@debian.org> writes:

>> How do I make the VMs bootable after copying them back?
>>
>
> Maybe try a SuperGrub Boot Disk (or USB drive) if you are using GRUB.
>
> I would probably install on a minimal system on the new disks so they are
> bootable, create the new volumes, rsync, move to the desired machine and
> boot with the SuperGrub Boot Disk if the machine didn't just boot from the
> drives on it's own.

I'm not sure what you mean. Xen uses something called pygrub (or
pvgrub) instead of grub as a boot loader for the VMs. If I copy the
VMs with rsync, I need to re-install this boot loader in the VM, and I
have no idea how I would do that.


--
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us. Finally, this fear has become reasonable.



Karl E. Jorgensen

Nov 18, 2014, 7:40:04 PM
Hi

On Wed, Nov 19, 2014 at 12:24:27AM +0100, lee wrote:
> Hi,
>
> so far, I managed to pvmove a LV to my USB stick and from there to a
> backup disk in another machine. Doing so, I found that I can split off
> LVs from a volume group and that this inevitably creates a new VG. That
> leaves you stuck because it's impossible to move a LV from one VG to
> another, and it's also impossible to merge multiple VGs into one VG :(
> How stupid is that??

Well - you can merge VGs, and you can split VGs, so you could:

- pvcreate /dev/${usbdevice}
- use pvmove to move your LV of choice onto the USB stick.
- use vgsplit to split /dev/${usbdevice} into its own VG
- Sneakernet the USB stick to a new box
- Use vgmerge to join the USB stick to the box's VG
- Use pvmove to move the LV onto a local disk
- Use vgsplit to make the USB its own VG again
- Sneakernet the USB stick back to the original box
- Rinse and repeat.
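Spelled out as commands, the first half of that cycle might look like
this. All names here (vg_old, vg_usb, vg_new, lv_guest1, /dev/sda2,
/dev/sdg) are placeholders, not from the thread:

```shell
# On the original box: put the stick into the VG, move one LV onto it,
# then split the stick off into its own VG so it can be unplugged.
pvcreate /dev/sdg                       # the USB stick
vgextend vg_old /dev/sdg
pvmove -n lv_guest1 /dev/sda2 /dev/sdg  # move just this one LV
lvchange -an vg_old/lv_guest1           # vgsplit needs inactive LVs
vgsplit vg_old vg_usb /dev/sdg

# On the other box: merge the stick's VG into the local one, then
# move the LV's extents off the stick onto the local disks.
vgmerge vg_new vg_usb
pvmove /dev/sdg
```

The return trip is the same commands with the VG names swapped.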

But that's a fair number of steps... You could also just:

- Create a file system on the USB device and mount it.
- dd if=/dev/oldvg/lvname | gzip --best > /media/usbstick/somefile.gz
- Unmount and sneakernet to the new box

etc. I'm sure you get the idea. No need to make it more complicated
than absolutely necessary.

> This means that I must move the remaining LVs from the server to the USB
> stick all at once because otherwise I'd end up with a number of VGs,
> each representing one LV, rather than a number of LVs within one VG.

Not quite...

[snipped lots of stuff that my tired braincells cannot cope with]

> (Note to self: Do not partition LVs but create LVs for swap partitions
> instead.)

Sounds like a sensible note. Unfortunately, if you use virtualisation,
you will often end up slicing off LVs to be presented to the virtual
machines as disks. And the VMs then partition them and/or create PVs
on them. Nested stuff galore.

--
Karl E. Jorgensen



lee

Nov 19, 2014, 3:20:05 PM
"Karl E. Jorgensen" <ka...@jorgensen.org.uk> writes:

> Hi
>
> On Wed, Nov 19, 2014 at 12:24:27AM +0100, lee wrote:
>> Hi,
>>
>> so far, I managed to pvmove a LV to my USB stick and from there to a
>> backup disk in another machine. Doing so, I found that I can split off
>> LVs from a volume group and that this inevitably creates a new VG. That
>> leaves you stuck because it's impossible to move a LV from one VG to
>> another, and it's also impossible to merge multiple VGs into one VG :(
>> How stupid is that??
>
> Well - you can merge VGs, and you can split VGs, so you could:
>
> - pvcreate /dev/${usbdevice}
> - use pvmove to move your LV of choice onto the USB stick.
> - use vgsplit to split /dev/${usbdevice} into its own VG
> - Sneakernet the USB stick to a new box
> - Use vgmerge to join the USB stick to the box's VG
> - Use pvmove to move the LV onto a local disk
> - Use vgsplit to make the USB its own VG again
> - Sneakernet the USB stick back to the original box
> - Rinse and repeat.
>
> But that's a fair number of steps... You could also just:

Oh! Thank you, that solves my problem! :) I looked for something like
vgmerge and didn't find it.

I've done one step already, and when I move another VM to the USB stick,
I should be able to move the whole remaining VG onto the stick. Three
steps isn't too bad.

> - Create a file system on the USB device and mount it.
> - dd if=/dev/oldvg/lvname | gzip --best > /media/usbstick/somefile.gz
> - Unmount and sneakernet to the new box

Hmmm, how would I put the copy of the VG back? Once the copying is all
done, I'll remove the remaining two discs from the server and plug 6
discs in from which I'll make a RAID-5 which will have about 6TB. Then
I'll pvcreate the whole RAID volume and probably re-install dom0 in its
own LV. The next step is to create two VGs, one for the guest VMs and a
large one for data.

I don't want to re-install dom0, but apparently just copying it with 'cp
-a' into the new LV might not allow me to make it bootable without
trouble.

What does Debian do in regard to grub? Can I just 'cp -a' dom0 over,
boot into a rescue system and install grub? Dom0 is currently not in a
LV.
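The usual rescue-system procedure for that is roughly the following
sketch. The LV name and target device are placeholders, and this
assumes a plain grub-pc setup; it is untested for this particular box:

```shell
# From a rescue system: mount the copied dom0 root LV, bind-mount the
# virtual filesystems, then reinstall grub from inside a chroot.
vgchange -ay                       # activate LVs seen by the rescue system
mount /dev/vg_system/lv_dom0 /mnt  # placeholder LV name
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt grub-install /dev/sda  # /dev/sda = the new RAID volume
chroot /mnt update-grub            # regenerate grub.cfg for the new disks
```

/etc/fstab inside the copied system would also need updating if the
root device name changes.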

> etc.. I'm sure you get the idea. No need to make it more complicated
> than absolutely necessary.

Yes --- though I don't mind when it's a bit more complicated now and
easier later. I can simply go through the same procedure a couple times
and do it forth and back; it's not like I had hundreds of VMs :)

>> (Note to self: Do not partition LVs but create LVs for swap partitions
>> instead.)
>
> Sounds like a sensible note. Unfortunately, if you use virtualisation,
> you will often end up slicing off LVs to be presented to the virtual
> machines as disks. And the VMs then partition them and/or create PVs
> on them. Nested stuff galore.

Guess what :)

I found out that I accidentally had managed to give dom0 and a VM the
very same partition as swap partition. It even worked ...


--
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us. Finally, this fear has become reasonable.

