Volgroup in use


eni....@gmail.com

Oct 3, 2017, 5:13:46 AM
to kiwi
Hi Guys,

Since a day or two I have run into an issue while building kiwi images using LVM. Before, kiwi would build the image perfectly with LVM, and clean everything up after it was done.

Suddenly, over the last two days or so, I noticed that it does not clean everything up. This causes the following build to stop just before the end, complaining that VolGroup01 is in use.

To fix this issue, I have to remove the volume group by hand, reboot the system, and build everything again.

I have just tried a clean build, which was successful, followed by the vgdisplay command in my terminal. This shows that there is still a volume group on my system, which kiwi did not clean up.

Has there been a recent change in a kiwi release that could have affected this? My system checks for updates daily, so I have been using almost every new release over the last few weeks.
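A quick read-only way to check for the leftover group after a build (a sketch; `volgroup01` is the group name from the log below, adjust it to match your image description):

```shell
# Read-only check for a leftover volume group from a previous build.
# "volgroup01" is the group name from the log below; adjust as needed.
# vgs exits non-zero when the group is not registered, so this is safe
# to run repeatedly and changes nothing on the host.
if vgs volgroup01 >/dev/null 2>&1; then
    echo "volgroup01 is still registered - previous build did not clean up"
else
    echo "no leftover volume group found"
fi
```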

My system:
openSUSE Leap 42.2
KIWI NG: 9.11.8

Snippet from the log:


INFO: Cleaning up VolumeManagerLVM instance
DEBUG: EXEC: [mountpoint /tmp/kiwi_volumes.pm1_3tvn//var]
DEBUG: EXEC: [mountpoint /tmp/kiwi_volumes.pm1_3tvn//tmp]
DEBUG: EXEC: [mountpoint /tmp/kiwi_volumes.pm1_3tvn//home]
DEBUG: EXEC: [mountpoint /tmp/kiwi_volumes.pm1_3tvn]
DEBUG: EXEC: [rm -r -f /tmp/kiwi_volumes.pm1_3tvn]
DEBUG: EXEC: [vgchange -an volgroup01]
INFO: Cleaning up Disk instance


Thanks in advance!

Roger Oberholtzer

Oct 3, 2017, 9:44:22 AM
to kiwi-...@googlegroups.com
On Tue, Oct 3, 2017 at 11:13 AM, <eni....@gmail.com> wrote:

> To fix this issue, I have to remove the volume group by hand, reboot the
> system, and build everything again.

How do you remove the volume group by hand? I have also been fighting
with getting a kiwi image that contains a volume group to work.

--
Roger Oberholtzer

eni....@gmail.com

Oct 3, 2017, 9:59:44 AM
to kiwi
You can use vgremove followed by the volume group name (found by running vgdisplay). But, I have discovered that a simple reboot already removes the volume group.

A few days ago I never had to do this, because kiwi simply tidied everything up at the end of the build. I do not know whether this is due to a change in one of the recent kiwi releases.

David Cassany

Oct 4, 2017, 8:47:28 AM
to kiwi-...@googlegroups.com
Hi


> Suddenly, over the last two days or so, I noticed that it does not clean
> everything up. This causes the following build to stop just before the
> end, complaining that VolGroup01 is in use.

Yes, I could just reproduce this issue and am currently investigating it.
As you noted, there is a workaround: remove the volume group by hand
(using vgremove or rebooting), or change the volume group name between
builds.

Thanks for reporting it. Regards,
David

eni....@gmail.com

Oct 4, 2017, 9:25:47 AM
to kiwi
Hi David,

It seems that running vgremove does not work for volgroup01:

# vgremove volgroup01
  WARNING: Device for PV KlkyOD-ePPd-vFPG-xUet-C6kH-w2hh-DijVif not found or rejected by a filter.
  WARNING: 1 physical volumes are currently missing from the system.
Do you really want to remove volume group "volgroup01" containing 5 logical volumes? [y/n]: y
  Aborting vg_write: No metadata areas to write to!

Could this be a result of running the following command (suggested in one of the other LVM-related threads here, and which worked up until two days ago)? cp /usr/bin/true /sbin/lvmetad

So for now the only option that works for me is a reboot, which is not really ideal.

Thanks,

eni....@gmail.com

Oct 4, 2017, 9:26:20 AM
to kiwi
Just to be clear, I was running as superuser.

David Cassany

Oct 4, 2017, 9:32:34 AM
to kiwi-...@googlegroups.com
Hi,

> It seems that running vgremove does not work for volgroup01:
>
> # vgremove volgroup01
> WARNING: Device for PV KlkyOD-ePPd-vFPG-xUet-C6kH-w2hh-DijVif not found
> or rejected by a filter.
> WARNING: 1 physical volumes are currently missing from the system.
> Do you really want to remove volume group "volgroup01" containing 5 logical
> volumes? [y/n]: y
> Aborting vg_write: No metadata areas to write to!

This is happening because the disk is not there anymore. Note that the
device is a loop device that is no longer present after the build. I
usually copy the disk (raw image) to somewhere and then:

$ sudo losetup -f --show <raw image>
$ sudo kpartx -sa <loop device>
$ sudo vgremove --force <volume group>
$ sudo kpartx -d <loop device>
$ sudo losetup -d <loop device>

I copy the raw image because these commands modify the image; this way I
can keep the original one.
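The commands above can be wrapped into a small helper; this is only a sketch (the function name is made up, nothing here is kiwi API), and it works on a scratch copy so the original image survives:

```shell
#!/bin/bash
# Sketch of the cleanup recipe above as a reusable function; run as root.
# cleanup_vg_from_image is a made-up name; the calls are plain
# cp/losetup/kpartx/lvm2 commands.
set -eu

cleanup_vg_from_image() {
    local image=$1 vgname=$2 work loopdev
    # work on a scratch copy so the original raw image stays untouched
    work=$(mktemp --suffix=.raw)
    cp "$image" "$work"
    # attach the copy as a loop device and map its partitions
    loopdev=$(losetup -f --show "$work")
    kpartx -sa "$loopdev"
    # the PVs are visible again now, so the group can be removed
    vgremove --force "$vgname"
    # unmap the partitions, detach the device, drop the scratch copy
    kpartx -d "$loopdev"
    losetup -d "$loopdev"
    rm -f "$work"
}

# example: cleanup_vg_from_image /path/to/image.raw volgroup01
```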

Regards,
David


Marcus Schäfer

Oct 4, 2017, 12:09:24 PM
to kiwi-...@googlegroups.com
Hi,
lvmetad running on the build host is usually what causes this kind
of problem. As it is started automatically again and again, I usually
do

cp /usr/bin/true /sbin/lvmetad

and reboot the system, because in the above state several other
inconsistencies could have stacked up.

Of course, all this is a nasty workaround and should not be done
when the build host itself uses LVM for its own system.

...I think I'm repeating myself :)
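A sketch of an alternative that avoids overwriting the binary (assuming an lvm2 version with lvmetad support, as shipped with Leap 42.2): tell lvm2 not to use the daemon via its config file, which also survives lvm2 package updates. The same caveat applies when the host itself uses LVM.

```
# /etc/lvm/lvm.conf (excerpt) - disable the metadata daemon on the
# build host; check your distribution's lvm.conf for the exact section
global {
    use_lvmetad = 0
}
```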

Regards,
Marcus
--
Public Key available via: https://keybase.io/marcus_schaefer/key.asc
keybase search marcus_schaefer
-------------------------------------------------------
Marcus Schäfer (Res. & Dev.) SUSE Linux GmbH
Tel: 0911-740 53 0 Maxfeldstrasse 5
FAX: 0911-740 53 479 D-90409 Nürnberg
HRB: 21284 (AG Nürnberg) Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton
http://www.suse.de
-------------------------------------------------------

eni....@gmail.com

Oct 9, 2017, 3:03:31 AM
to kiwi
Hi all,

I had used cp /usr/bin/true /sbin/lvmetad for quite a while with success. I tried to do it again last week, but it wouldn't let me.
This morning I ran "diff -u /usr/bin/true /sbin/lvmetad" and it said that they differed! I tried the copy again, and this time it succeeded (maybe some other issue that was fixed by rebooting).

Now everything works again as it did a week ago. It seems that, possibly, an openSUSE update had restored the original lvmetad? That's my only guess. Consider the topic closed for me.

Thanks!

Roger Oberholtzer

Oct 9, 2017, 8:34:00 AM
to kiwi-...@googlegroups.com
If my / is btrfs, is it safe to cp /usr/bin/true /sbin/lvmetad?

--
Roger Oberholtzer

Marcus Schäfer

Oct 9, 2017, 10:05:07 AM
to kiwi-...@googlegroups.com
Hi,

> If my / is btrfs, is it safe to cp /usr/bin/true /sbin/lvmetad?

I would say yes. lvmetad is the metadata daemon used exclusively by
lvm2. If your system does not use lvm2 itself, there is IMHO no risk.
Even without a running lvmetad, lvm2 stays operational.
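A read-only way to double-check that before applying the workaround (a sketch; neither branch modifies anything on the host):

```shell
# Read-only sanity check before applying the lvmetad workaround:
# report whether the host itself has any logical volumes.
if command -v lvs >/dev/null 2>&1 && [ -n "$(lvs --noheadings 2>/dev/null)" ]; then
    echo "host has logical volumes - keep lvmetad intact"
else
    echo "no logical volumes found - the workaround should be safe"
fi
```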

Roger Oberholtzer

Oct 9, 2017, 10:08:14 AM
to kiwi-...@googlegroups.com
On Mon, Oct 9, 2017 at 4:05 PM, Marcus Schäfer <m...@suse.de> wrote:
> Hi,
>
>> If my / is btrfs, is it safe to cp /usr/bin/true /sbin/lvmetad?
>
> I would say yes. lvmetad is the metadata daemon used exclusively by
> lvm2. If your system does not use lvm2 itself, there is IMHO no risk.
> Even without a running lvmetad, lvm2 stays operational.

Okay. I was just afraid to reboot while /sbin/lvmetad was really
/usr/bin/true. But since I no longer have any logical volumes on this
system, I guess I'm okay.


--
Roger Oberholtzer