Error: volume group missing


Diego Bianchetti

Mar 30, 2011, 11:33:51 AM3/30/11
to ganeti
Hi people,

I'm trying to install Ganeti on Ubuntu Server 10.04.2 LTS, with kernel
2.6.32-21-server x86_64.

The installation apparently went fine: configure completed without
problems, and the compilation and installation showed no errors. But when
I try to create a new cluster, I can't attach a VG to it.

If I create the new cluster without a VG (--no-lvm-storage) the cluster
works fine, but I need to install the guest VMs onto that storage.

The error and other details are below.

Does anybody have an idea how to solve this?



root@node01:~# ganeti-masterd --version
ganeti-masterd (ganeti) 2.3.1


root@node01:~# vgs
Skipping clustered volume group db_arch_bkpvm_vg
Skipping clustered volume group mail_vg
Skipping clustered volume group dados_conf_vg
  VG   #PV #LV #SN Attr   VSize   VFree
  vms    1   0   0 wz--n- 279.37g 279.37g

root@node01:~# vgdisplay
Skipping clustered volume group db_arch_bkpvm_vg
Skipping clustered volume group mail_vg
Skipping clustered volume group dados_conf_vg

  --- Volume group ---
  VG Name               vms
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               279.37 GiB
  PE Size               4.00 MiB
  Total PE              71519
  Alloc PE / Size       0 / 0
  Free PE / Size        71519 / 279.37 GiB
  VG UUID               5pwxWN-8F7T-R2SG-xZWY-TUbj-JSon-GVqGdE


root@node01:~# gnt-cluster init --vg-name=vms --master-netdev=br0 --nic-parameters link=br0 --enabled-hypervisors=kvm CLUSTER01
Failure: prerequisites not met for this operation:
error type: wrong_input, error details:
Error: volume group 'vms' missing
specify --no-lvm-storage if you are not using lvm



Thanks for any help!!
Diego

Iustin Pop

Mar 30, 2011, 11:57:21 AM3/30/11
to gan...@googlegroups.com

Strange. There should be some errors in /var/log/ganeti/commands.log
and/or node-daemon.log, could you please check?

I'm not sure if the clustered volumes interact badly with Ganeti (but
they shouldn't).

regards,
iustin

Diego Bianchetti

Mar 30, 2011, 12:57:06 PM3/30/11
to gan...@googlegroups.com

Here are the logs Ganeti generated when the command was executed:


/var/log/ganeti/commands.log

OpPrereqError: ("Error: volume group 'vms' missing\nspecify
--no-lvm-storage if you are not using lvm", 'wrong_input')
2011-03-30 12:20:43,380: gnt-cluster init pid=31693 INFO run with
arguments '--vg-name=vms --master-netdev=br0 --nic-parameters link=br0
--enabled-hypervisors=kvm CLUSTER01'
2011-03-30 12:20:43,380: gnt-cluster init pid=31693 INFO Using PycURL
libcurl/7.19.7 GnuTLS/2.8.5 zlib/1.2.3.3 libidn/1.15
2011-03-30 12:20:46,474: gnt-cluster init pid=31693 ERROR Error during
command processing
Traceback (most recent call last):
  File "/usr/local/lib/python2.6/dist-packages/ganeti/cli.py", line 1880, in GenericMain
    result = func(options, args)
  File "/usr/local/lib/python2.6/dist-packages/ganeti/rpc.py", line 176, in wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/ganeti/client/gnt_cluster.py", line 134, in InitCluster
    prealloc_wipe_disks=opts.prealloc_wipe_disks,
  File "/usr/local/lib/python2.6/dist-packages/ganeti/bootstrap.py", line 308, in InitCluster
    errors.ECODE_INVAL)
OpPrereqError: ("Error: volume group 'vms' missing\nspecify --no-lvm-storage if you are not using lvm", 'wrong_input')

Diego Bianchetti

Mar 30, 2011, 3:36:42 PM3/30/11
to gan...@googlegroups.com
Hello Guys!

More information about this case. The only thing I haven't tried is removing the LUN on the storage side, but I believe that isn't needed, because the storage disk (LUN) is found by the server.



I upgraded the server system; it is now running:

root@node01:~# uname -a
Linux node01 2.6.32-30-server #59-Ubuntu SMP Tue Mar 1 22:46:09 UTC 2011 x86_64 GNU/Linux

root@node01:~# cat /etc/issue
Ubuntu 10.04.2 LTS \n \l


root@node01:~# ganeti-masterd --version
ganeti-masterd (ganeti) 2.3.1



Does anybody have any suggestions?

Thanks!!
Diego



root@node01:~# pvscan
  PV /dev/mapper/mpath3-part1   VG db_arch_bkpvm_vg   lvm2 [242.15 GiB / 0    free]
  PV /dev/mapper/mpath3-part2   VG db_arch_bkpvm_vg   lvm2 [304.71 GiB / 4.00 MiB free]
  PV /dev/mapper/mpath2-part1   VG mail_vg            lvm2 [838.14 GiB / 8.00 MiB free]
  PV /dev/mapper/mpath0-part2   VG dados_conf_vg      lvm2 [46.57 GiB / 46.57 GiB free]
  PV /dev/mapper/mpath0-part3   VG dados_conf_vg      lvm2 [512.08 GiB / 111.43 GiB free]
  Total: 5 [1.90 TiB] / in use: 5 [1.90 TiB] / in no VG: 0 [0   ]

root@node01:~# pvcreate /dev/mapper/mpath1-part1
  Physical volume "/dev/mapper/mpath1-part1" successfully created

root@node01:~# pvscan
  PV /dev/mapper/mpath3-part1   VG db_arch_bkpvm_vg   lvm2 [242.15 GiB / 0    free]
  PV /dev/mapper/mpath3-part2   VG db_arch_bkpvm_vg   lvm2 [304.71 GiB / 4.00 MiB free]
  PV /dev/mapper/mpath2-part1   VG mail_vg            lvm2 [838.14 GiB / 8.00 MiB free]
  PV /dev/mapper/mpath0-part2   VG dados_conf_vg      lvm2 [46.57 GiB / 46.57 GiB free]
  PV /dev/mapper/mpath0-part3   VG dados_conf_vg      lvm2 [512.08 GiB / 111.43 GiB free]
  PV /dev/mapper/mpath1-part1                         lvm2 [279.38 GiB]
  Total: 6 [2.17 TiB] / in use: 5 [1.90 TiB] / in no VG: 1 [279.38 GiB]


root@node01:~# vgscan
  Reading all physical volumes.  This may take a while...

  Skipping clustered volume group db_arch_bkpvm_vg
  Skipping clustered volume group mail_vg
  Skipping clustered volume group dados_conf_vg


root@node01:~# vgcreate VMs /dev/mapper/mpath1-part1
  Volume group "VMs" successfully created


root@node01:~# vgscan
  Reading all physical volumes.  This may take a while...

  Skipping clustered volume group db_arch_bkpvm_vg
  Found volume group "VMs" using metadata type lvm2

  Skipping clustered volume group mail_vg
  Skipping clustered volume group dados_conf_vg


root@node01:~# gnt-cluster init --vg-name=VMs --nic-parameters link=br0 --master-netdev=br0 --enabled-hypervisors=kvm --no-drbd-storage CLUSTER01

Failure: prerequisites not met for this operation:
error type: wrong_input, error details:
Error: volume group 'VMs' missing

Iustin Pop

Mar 31, 2011, 5:10:22 AM3/31/11
to gan...@googlegroups.com
On Wed, Mar 30, 2011 at 04:36:42PM -0300, Diego Bianchetti wrote:
> root@node01:~# gnt-cluster init --vg-name=VMs --nic-parameters
> link=br0 --master-netdev=br0 --enabled-hypervisors=kvm --no-drbd-storage
> CLUSTER01
> Failure: prerequisites not met for this operation:
> error type: wrong_input, error details:
> Error: volume group 'VMs' missing
> specify --no-lvm-storage if you are not using lvm

Hmm, still strange. Ganeti runs the following command:

vgs --noheadings --units m --nosuffix -o name,size

which should give a reasonable output.
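To make the failure mode concrete, here is a rough sketch (in Python, since that is what Ganeti is written in) of how such a wrapper around `vgs` might behave. This is my own illustration, not Ganeti's actual code; the function names are hypothetical.

```python
# Illustrative sketch only -- NOT Ganeti's actual implementation.
# It mimics a wrapper around the `vgs` command quoted above.
import subprocess

VGS_CMD = ["vgs", "--noheadings", "--units", "m", "--nosuffix",
           "-o", "name,size"]

def parse_vg_lines(output):
    """Turn `vgs` output into {name: size_in_mb}, skipping
    informational lines such as 'Skipping clustered volume group ...'."""
    result = {}
    for line in output.splitlines():
        fields = line.split()
        if len(fields) != 2:
            continue  # not a "<name> <size>" data line
        try:
            result[fields[0]] = float(fields[1])
        except ValueError:
            pass  # second field is not a size; ignore the line
    return result

def list_volume_groups():
    proc = subprocess.run(VGS_CMD, capture_output=True, text=True)
    # A strict caller that treats any non-zero exit code as total
    # failure will report the VG as missing even when a valid data
    # line was printed -- which matches the symptom in this thread.
    if proc.returncode != 0:
        return None
    return parse_vg_lines(proc.stdout)
```

The interesting question is therefore not whether the data line for the VG is printed (it is), but whether `vgs` as a whole exits 0.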

Can you run gnt-cluster init with the --debug argument and check the
logs again?

thanks,
iustin

Diego Bianchetti

Mar 31, 2011, 11:22:52 AM3/31/11
to gan...@googlegroups.com
Hi Iustin,

I ran the commands, follows the output.

------------------

root@node01:~# vgs --noheadings --units m --nosuffix -o name,size

  Skipping clustered volume group db_arch_bkpvm_vg
  Skipping clustered volume group mail_vg
  Skipping clustered volume group dados_conf_vg
  VMs  286084.00


root@node01:~# gnt-cluster init --vg-name=VMs --nic-parameters link=br0 --master-netdev=br0 --enabled-hypervisors=kvm --no-drbd-storage --debug CLUSTER01
2011-03-31 12:07:19,400: gnt-cluster init pid=4105 cli:1875 INFO run with arguments '--vg-name=VMs --nic-parameters link=br0 --master-netdev=br0 --enabled-hypervisors=kvm --no-drbd-storage --debug CLUSTER01'
2011-03-31 12:07:19,400: gnt-cluster init pid=4105 rpc:93 INFO Using PycURL libcurl/7.19.7 GnuTLS/2.8.5 zlib/1.2.3.3 libidn/1.15
2011-03-31 12:07:22,407: gnt-cluster init pid=4105 utils:213 DEBUG RunCmd vgs --noheadings --units m --nosuffix -o name,size
2011-03-31 12:07:22,486: gnt-cluster init pid=4105 utils:140 DEBUG Command 'vgs --noheadings --units m --nosuffix -o name,size' failed (exited with exit code 5); output:   VMs  286084.00

  Skipping clustered volume group db_arch_bkpvm_vg
  Skipping clustered volume group mail_vg
  Skipping clustered volume group dados_conf_vg

2011-03-31 12:07:22,487: gnt-cluster init pid=4105 cli:1884 ERROR Error during command processing

Traceback (most recent call last):
  File "/usr/local/lib/python2.6/dist-packages/ganeti/cli.py", line 1880, in GenericMain
    result = func(options, args)
  File "/usr/local/lib/python2.6/dist-packages/ganeti/rpc.py", line 176, in wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/ganeti/client/gnt_cluster.py", line 134, in InitCluster
    prealloc_wipe_disks=opts.prealloc_wipe_disks,
  File "/usr/local/lib/python2.6/dist-packages/ganeti/bootstrap.py", line 308, in InitCluster
    errors.ECODE_INVAL)
OpPrereqError: ("Error: volume group 'VMs' missing\nspecify --no-lvm-storage if you are not using lvm", 'wrong_input')

Failure: prerequisites not met for this operation:
error type: wrong_input, error details:
Error: volume group 'VMs' missing
specify --no-lvm-storage if you are not using lvm

---------------

I don't understand this output, because the VG exists. If I create LVs, I can use them... but Ganeti doesn't recognize it as a valid VG.
 
2011-03-31 12:07:22,486: gnt-cluster init pid=4105 utils:140 DEBUG Command 'vgs --noheadings --units m --nosuffix -o name,size' failed (exited with exit code 5); output:   VMs  286084.00



** There are no system logs beyond those presented above, neither in syslog nor in dmesg.

I'm almost giving up on using Ganeti to manage guest VMs with HA. Does anybody have another suggestion of software to do it? linux-ha.org maybe?


Thanks!!!
diego

Iustin Pop

Mar 31, 2011, 11:25:42 AM3/31/11
to gan...@googlegroups.com

This is interesting.

> Skipping clustered volume group db_arch_bkpvm_vg
> Skipping clustered volume group mail_vg
> Skipping clustered volume group dados_conf_vg

> ---------------


>
> I don't understand this output, because the VG exists. If I create LVs, I
> can use them... but Ganeti doesn't recognize it as a valid VG.

> 2011-03-31 12:07:22,486: gnt-cluster init pid=4105 utils:140 DEBUG
> Command 'vgs --noheadings --units m --nosuffix -o name,size' failed
> (exited with exit code 5); output: VMs 286084.00

The key part in there is that 'vgs' exits with code 5, instead of code
0. Can you confirm that:

vgs --noheadings --units m --nosuffix -o name,size; echo $?

Shows a 5 instead of 0 at the end?

> ** Do not have any system log beyond these presented above, neither in
> syslog, neither in dmesg

That is not unexpected, if vgs simply has a non-zero exit code.

> I'm almost giving up on using Ganeti to manage guest VMs with HA.
> Does anybody have another suggestion of software to do it? linux-ha.org maybe?

Sure, that's a very good set of tools.

regards,
iustin

Diego Bianchetti

Mar 31, 2011, 2:36:39 PM3/31/11
to gan...@googlegroups.com
I guess the exit code is 5 because the other VGs are to be used by
another cluster. This server is one node of the new cluster, but it can
"see" all the LUNs. I will try to disable the other LUNs for this server
to run one last test with Ganeti.


root@node01:~# vgs --noheadings --units m --nosuffix -o name,size; echo $?


Skipping clustered volume group db_arch_bkpvm_vg
Skipping clustered volume group mail_vg
Skipping clustered volume group dados_conf_vg
VMs 286084.00

5
root@node01:~# vgs VMs --noheadings --units m --nosuffix -o name,size; echo $?
VMs 286084.00
0
root@node01:~# vgs mail_vg --noheadings --units m --nosuffix -o name,size; echo $?


Skipping clustered volume group mail_vg

5


Thanks!
Diego

Iustin Pop

Apr 1, 2011, 12:24:09 PM4/1/11
to gan...@googlegroups.com

I see.

> root@node01:~# vgs --noheadings --units m --nosuffix -o name,size; echo $?
> Skipping clustered volume group db_arch_bkpvm_vg
> Skipping clustered volume group mail_vg
> Skipping clustered volume group dados_conf_vg
> VMs 286084.00
> 5
> root@node01:~# vgs VMs --noheadings --units m --nosuffix -o name,size;
> echo $?
> VMs 286084.00
> 0
> root@node01:~# vgs mail_vg --noheadings --units m --nosuffix -o
> name,size; echo $?
> Skipping clustered volume group mail_vg
> 5

Thanks for the confirmation. I don't have any solution for you right
now, it seems we should change Ganeti so that it only cares and asks for
its own volume groups.

I've filed http://code.google.com/p/ganeti/issues/detail?id=152 for
this.
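Diego's tests above suggest what such a fix could look like: `vgs VMs` exits 0 even when unrelated clustered VGs are present, so the check could query only the cluster's own VG. A hypothetical sketch of that idea (this is not the actual patch for issue 152):

```python
# Hypothetical sketch of the idea behind issue 152 -- ask `vgs` about
# the one VG the cluster cares about, so the command is not tripped up
# by unrelated clustered VGs. NOT the actual Ganeti patch.
import subprocess

def parse_vg_size(output, vg_name):
    """Extract the size (in MiB) of vg_name from `vgs <vg_name>` output,
    or return None when no matching data line is present."""
    for line in output.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[0] == vg_name:
            try:
                return float(fields[1])
            except ValueError:
                return None
    return None

def query_own_vg(vg_name):
    proc = subprocess.run(
        ["vgs", vg_name, "--noheadings", "--units", "m",
         "--nosuffix", "-o", "name,size"],
        capture_output=True, text=True)
    if proc.returncode != 0:  # the VG itself is missing, or an LVM error
        return None
    return parse_vg_size(proc.stdout, vg_name)
```

With the query narrowed like this, the "Skipping clustered volume group" warnings for the other VGs never affect the exit code of the command.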

thanks,
iustin

Diego Bianchetti

Apr 1, 2011, 1:06:41 PM4/1/11
to gan...@googlegroups.com


Hello Iustin,


I created one LV and started a cluster with the --no-lvm-storage option;
after the cluster came up, I added the LV I had created to the new
cluster.

Now my doubt is... does the cluster work fine this way? How do I manage
the LVs? Do I need to create one for each guest? What is the best way to
manage this structure?

The output of the commands is below.


Thank you very much for all support!
Diego


root@node01:~# vgs


Skipping clustered volume group db_arch_bkpvm_vg
Skipping clustered volume group mail_vg
Skipping clustered volume group dados_conf_vg

  VG  #PV #LV #SN Attr   VSize   VFree
  VMs   1   1   0 wz--n- 279.38g 249.38g


root@node01:~# lvs


Skipping clustered volume group db_arch_bkpvm_vg

Skipping volume group db_arch_bkpvm_vg


Skipping clustered volume group mail_vg
Skipping clustered volume group dados_conf_vg

  LV      VG  Attr   LSize  Origin Snap% Move Log Copy% Convert
  teste01 VMs -wi-a- 30.00g


root@node01:~# gnt-cluster init --nic-parameters link=br0
--master-netdev=br0 --enabled-hypervisors=kvm --no-drbd-storage
--no-lvm-storage --debug CLUSTER01

2011-04-01 13:57:12,067: gnt-cluster init pid=6936 cli:1875 INFO run
with arguments '--nic-parameters link=br0 --master-netdev=br0
--enabled-hypervisors=kvm --no-drbd-storage --no-lvm-storage --debug
CLUSTER01'
2011-04-01 13:57:12,068: gnt-cluster init pid=6936 rpc:93 INFO Using


PycURL libcurl/7.19.7 GnuTLS/2.8.5 zlib/1.2.3.3 libidn/1.15

2011-04-01 13:57:15,081: gnt-cluster init pid=6936 utils:213 DEBUG
RunCmd ip link show dev br0
2011-04-01 13:57:15,117: gnt-cluster init pid=6936 utils:213 DEBUG
RunCmd ssh-keygen -t dsa -f /root/.ssh/id_dsa -q -N ''
2011-04-01 13:57:15,982: gnt-cluster init pid=6936 bootstrap:119 DEBUG
Generating new cluster certificate at /var/lib/ganeti/server.pem
2011-04-01 13:57:16,145: gnt-cluster init pid=6936 bootstrap:124 DEBUG
Writing new confd HMAC key to /var/lib/ganeti/hmac.key
2011-04-01 13:57:16,163: gnt-cluster init pid=6936 bootstrap:139 DEBUG
Generating new RAPI certificate at /var/lib/ganeti/rapi.pem
2011-04-01 13:57:16,410: gnt-cluster init pid=6936 bootstrap:148 DEBUG
Generating new cluster domain secret at
/var/lib/ganeti/cluster-domain-secret
2011-04-01 13:57:16,434: gnt-cluster init pid=6936 utils:213 DEBUG
RunCmd /usr/local/lib/ganeti/daemon-util start ganeti-noded
2011-04-01 13:57:16,784: gnt-cluster init pid=6936 client:335 DEBUG
Starting request <ganeti.http.client.HttpClientRequest 192.168.9.12:1811
PUT /version at 0x1e397d0>
2011-04-01 13:57:16,785: gnt-cluster init pid=6936 client:320 DEBUG
Created new client <ganeti.http.client._PooledHttpClient
id=192.168.9.12/1811 lastuse=0 <ganeti.http.client._HttpClient object at
0x1e39b10> at 0x1e42998>
2011-04-01 13:57:16,847: gnt-cluster init pid=6936 client:232 DEBUG
Request <ganeti.http.client.HttpClientRequest 192.168.9.12:1811 PUT
/version at 0x1e397d0> finished, errmsg=None
2011-04-01 13:57:16,848: gnt-cluster init pid=6936 client:350 DEBUG
Returning client <ganeti.http.client._PooledHttpClient
id=192.168.9.12/1811 lastuse=1 <ganeti.http.client._HttpClient object at
0x1e39b10> at 0x1e42998> to pool
2011-04-01 13:57:16,848: gnt-cluster init pid=6936 client:335 DEBUG
Starting request <ganeti.http.client.HttpClientRequest 192.168.9.12:1811
PUT /node_start_master at 0x1e39790>
2011-04-01 13:57:16,848: gnt-cluster init pid=6936 client:322 DEBUG
Reusing client <ganeti.http.client._PooledHttpClient
id=192.168.9.12/1811 lastuse=1 <ganeti.http.client._HttpClient object at
0x1e39b10> at 0x1e42998>
2011-04-01 13:57:20,467: gnt-cluster init pid=6936 client:232 DEBUG
Request <ganeti.http.client.HttpClientRequest 192.168.9.12:1811 PUT
/node_start_master at 0x1e39790> finished, errmsg=None
2011-04-01 13:57:20,467: gnt-cluster init pid=6936 client:350 DEBUG
Returning client <ganeti.http.client._PooledHttpClient
id=192.168.9.12/1811 lastuse=2 <ganeti.http.client._HttpClient object at
0x1e39b10> at 0x1e42998> to pool


root@node01:~# gnt-cluster info
Cluster name: agv_vms
Cluster UUID: 149f8257-650f-4212-9ba8-53c83f5fb886
Creation time: 2011-04-01 13:57:15
Modification time: 2011-04-01 13:57:15
Master node: node01.agr
Architecture (this node): 64bit (x86_64)
Tags: (none)
Default hypervisor: kvm
Enabled hypervisors: kvm
Hypervisor parameters:
- kvm:
    acpi: True
    boot_order: disk
    cdrom_image_path:
    disk_cache: default
    disk_type: paravirtual
    initrd_path:
    kernel_args: ro
    kernel_path: /boot/vmlinuz-2.6-kvmU
    kvm_flag:
    mem_path:
    migration_bandwidth: 32
    migration_downtime: 30
    migration_mode: live
    migration_port: 8102
    nic_type: paravirtual
    root_path: /dev/vda1
    security_domain:
    security_model: none
    serial_console: True
    usb_mouse:
    use_chroot: False
    use_localtime: False
    vhost_net: False
    vnc_bind_address:
    vnc_password_file:
    vnc_tls: False
    vnc_x509_path:
    vnc_x509_verify: False
OS-specific hypervisor parameters:
OS parameters:
Cluster parameters:
- candidate pool size: 10
- master netdev: br0
- lvm volume group: None
- lvm reserved volumes: (none)
- drbd usermode helper: None
- file storage path: /srv/ganeti/file-storage
- maintenance of node health: False
- uid pool:
- default instance allocator:
- primary ip version: 4
- preallocation wipe disks: False
Default instance parameters:
- default:
    auto_balance: True
    memory: 128
    vcpus: 1
Default nic parameters:
- default:
    link: br0
    mode: bridged

root@node01:~# gnt-cluster modify --reserved-lvs=teste01


root@node01:~# gnt-cluster info
Cluster name: agv_vms
Cluster UUID: 149f8257-650f-4212-9ba8-53c83f5fb886
Creation time: 2011-04-01 13:57:15
Modification time: 2011-04-01 13:57:36
Master node: node01.agr
Architecture (this node): 64bit (x86_64)
Tags: (none)
Default hypervisor: kvm
Enabled hypervisors: kvm
Hypervisor parameters:
- kvm:
    acpi: True
    boot_order: disk
    cdrom_image_path:
    disk_cache: default
    disk_type: paravirtual
    initrd_path:
    kernel_args: ro
    kernel_path: /boot/vmlinuz-2.6-kvmU
    kvm_flag:
    mem_path:
    migration_bandwidth: 32
    migration_downtime: 30
    migration_mode: live
    migration_port: 8102
    nic_type: paravirtual
    root_path: /dev/vda1
    security_domain:
    security_model: none
    serial_console: True
    usb_mouse:
    use_chroot: False
    use_localtime: False
    vhost_net: False
    vnc_bind_address:
    vnc_password_file:
    vnc_tls: False
    vnc_x509_path:
    vnc_x509_verify: False
OS-specific hypervisor parameters:
OS parameters:
Cluster parameters:
- candidate pool size: 10
- master netdev: br0
- lvm volume group: None
- lvm reserved volumes: teste01
- drbd usermode helper: None
- file storage path: /srv/ganeti/file-storage
- maintenance of node health: False
- uid pool:
- default instance allocator:
- primary ip version: 4
- preallocation wipe disks: False
Default instance parameters:
- default:
    auto_balance: True
    memory: 128
    vcpus: 1
Default nic parameters:
- default:
    link: br0
    mode: bridged


Iustin Pop

Apr 4, 2011, 4:06:51 AM4/4/11
to gan...@googlegroups.com

I'm a bit confused. Are you talking about LVs or about VGs? I would
recommend you read the documentation, but the short version is that you
just need one volume group and that Ganeti will take care of
creating/removing/managing the LVs in that VG.

regards,
iustin

Diego Bianchetti

Apr 4, 2011, 6:51:04 AM4/4/11
to gan...@googlegroups.com

Iustin,

Sorry about the confusion. I did more tests and understood my error
from the last email.

Thank you for the answers.
Diego
