Strange. There should be some errors in /var/log/ganeti/commands.log
and/or node-daemon.log, could you please check?
I'm not sure if the clustered volumes interact badly with Ganeti (but
they shouldn't).
regards,
iustin
Here are the logs that Ganeti generated when the command was executed:
/var/log/ganeti/commands.log
OpPrereqError: ("Error: volume group 'vms' missing\nspecify
--no-lvm-storage if you are not using lvm", 'wrong_input')
2011-03-30 12:20:43,380: gnt-cluster init pid=31693 INFO run with
arguments '--vg-name=vms --master-netdev=br0 --nic-parameters link=br0
--enabled-hypervisors=kvm CLUSTER01'
2011-03-30 12:20:43,380: gnt-cluster init pid=31693 INFO Using PycURL
libcurl/7.19.7 GnuTLS/2.8.5 zlib/1.2.3.3 libidn/1.15
2011-03-30 12:20:46,474: gnt-cluster init pid=31693 ERROR Error during
command processing
Traceback (most recent call last):
  File "/usr/local/lib/python2.6/dist-packages/ganeti/cli.py", line 1880, in GenericMain
    result = func(options, args)
  File "/usr/local/lib/python2.6/dist-packages/ganeti/rpc.py", line 176, in wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/ganeti/client/gnt_cluster.py", line 134, in InitCluster
    prealloc_wipe_disks=opts.prealloc_wipe_disks,
  File "/usr/local/lib/python2.6/dist-packages/ganeti/bootstrap.py", line 308, in InitCluster
    errors.ECODE_INVAL)
OpPrereqError: ("Error: volume group 'vms' missing\nspecify --no-lvm-storage if you are not using lvm", 'wrong_input')
Hmm, still strange. Ganeti runs the following command:
vgs --noheadings --units m --nosuffix -o name,size
which should give a reasonable output.
Can you run gnt-cluster init with the --debug argument and check the
logs again?
thanks,
iustin
This is interesting.
> Skipping clustered volume group db_arch_bkpvm_vg
> Skipping clustered volume group mail_vg
> Skipping clustered volume group dados_conf_vg
> ---------------
>
> I don't understand this output, because the VG exists. If I create LVs, I
> can use them... but Ganeti doesn't identify it as a valid VG.
> 2011-03-31 12:07:22,486: gnt-cluster init pid=4105 utils:140 DEBUG
> Command 'vgs --noheadings --units m --nosuffix -o name,size' failed
> (exited with exit code 5); output: VMs 286084.00
The key part in there is that 'vgs' exits with code 5 instead of code
0. Can you confirm that:
vgs --noheadings --units m --nosuffix -o name,size; echo $?
shows a 5 instead of 0 at the end?
> I don't have any system logs beyond those presented above, neither in
> syslog nor in dmesg.
That is not unexpected if vgs simply exits with a non-zero code.
> I'm almost giving up on using Ganeti to manage guest VMs with HA.
> Does anybody have another suggestion for software to do this? linux-ha.org maybe?
Sure, that's a very good set of tools.
regards,
iustin
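The failure mode Iustin describes can be reproduced without LVM at all: a command that prints perfectly usable output yet exits non-zero is treated as a hard failure by a strict runner. A minimal sketch (fake_vgs is a hypothetical stand-in, not part of LVM or Ganeti):

```shell
# fake_vgs mimics 'vgs' on a host with clustered VGs: it prints the
# data for the local VG but returns exit code 5, the way LVM does
# when it skips clustered volume groups.
fake_vgs() {
  echo "  Skipping clustered volume group mail_vg" >&2
  echo "  VMs 286084.00"
  return 5
}

# A caller that checks only the exit code (as a strict command runner
# would) discards the output even though it is usable.
if out=$(fake_vgs 2>/dev/null); then
  echo "vgs succeeded: $out"
else
  echo "vgs failed with exit code $?"
fi
```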
root@node01:~# vgs --noheadings --units m --nosuffix -o name,size; echo $?
Skipping clustered volume group db_arch_bkpvm_vg
Skipping clustered volume group mail_vg
Skipping clustered volume group dados_conf_vg
VMs 286084.00
5
root@node01:~# vgs VMs --noheadings --units m --nosuffix -o name,size; echo $?
VMs 286084.00
0
root@node01:~# vgs mail_vg --noheadings --units m --nosuffix -o name,size; echo $?
Skipping clustered volume group mail_vg
5
Thanks!
Diego
I see.
> root@node01:~# vgs --noheadings --units m --nosuffix -o name,size; echo $?
> Skipping clustered volume group db_arch_bkpvm_vg
> Skipping clustered volume group mail_vg
> Skipping clustered volume group dados_conf_vg
> VMs 286084.00
> 5
> root@node01:~# vgs VMs --noheadings --units m --nosuffix -o name,size; echo $?
> VMs 286084.00
> 0
> root@node01:~# vgs mail_vg --noheadings --units m --nosuffix -o name,size; echo $?
> Skipping clustered volume group mail_vg
> 5
Thanks for the confirmation. I don't have any solution for you right
now; it seems we should change Ganeti so that it only cares about and
asks for its own volume groups.
I've filed http://code.google.com/p/ganeti/issues/detail?id=152 for
this.
thanks,
iustin
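The direction hinted at in the issue (only querying the VG Ganeti owns, and tolerating LVM's warning lines) can be sketched in shell. The sample output below is copied from the transcript earlier in the thread; the awk filter is illustrative, not Ganeti's actual parser:

```shell
# Sample output of 'vgs --noheadings --units m --nosuffix -o name,size'
# on a host with clustered VGs, as pasted earlier in the thread; the
# "Skipping ..." warnings are interleaved with the real data.
out='  Skipping clustered volume group db_arch_bkpvm_vg
  Skipping clustered volume group mail_vg
  Skipping clustered volume group dados_conf_vg
  VMs 286084.00'

# Keep only "<name> <size>" data lines: any line whose second field is
# not a number is a warning, not a volume group.
printf '%s\n' "$out" | awk 'NF == 2 && $2 ~ /^[0-9.]+$/ { print $1, $2 }'
```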
Hello Iustin,
I created one LV and started a cluster with the --no-lvm-storage
option; after the cluster came up I added the LV that I had created to
the new cluster.
Now my question is... does the cluster work fine this way? How do I
manage the LVs? Do I need to create one for each guest? What is the
best way to manage this structure?
The output of the commands follows below.
Thank you very much for all the support!
Diego
root@node01:~# vgs
Skipping clustered volume group db_arch_bkpvm_vg
Skipping clustered volume group mail_vg
Skipping clustered volume group dados_conf_vg
VG #PV #LV #SN Attr VSize VFree
VMs 1 1 0 wz--n- 279.38g 249.38g
root@node01:~# lvs
Skipping clustered volume group db_arch_bkpvm_vg
Skipping volume group db_arch_bkpvm_vg
Skipping clustered volume group mail_vg
Skipping clustered volume group dados_conf_vg
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
teste01 VMs -wi-a- 30.00g
root@node01:~# gnt-cluster init --nic-parameters link=br0
--master-netdev=br0 --enabled-hypervisors=kvm --no-drbd-storage
--no-lvm-storage --debug CLUSTER01
2011-04-01 13:57:12,067: gnt-cluster init pid=6936 cli:1875 INFO run
with arguments '--nic-parameters link=br0 --master-netdev=br0
--enabled-hypervisors=kvm --no-drbd-storage --no-lvm-storage --debug
CLUSTER01'
2011-04-01 13:57:12,068: gnt-cluster init pid=6936 rpc:93 INFO Using
PycURL libcurl/7.19.7 GnuTLS/2.8.5 zlib/1.2.3.3 libidn/1.15
2011-04-01 13:57:15,081: gnt-cluster init pid=6936 utils:213 DEBUG
RunCmd ip link show dev br0
2011-04-01 13:57:15,117: gnt-cluster init pid=6936 utils:213 DEBUG
RunCmd ssh-keygen -t dsa -f /root/.ssh/id_dsa -q -N ''
2011-04-01 13:57:15,982: gnt-cluster init pid=6936 bootstrap:119 DEBUG
Generating new cluster certificate at /var/lib/ganeti/server.pem
2011-04-01 13:57:16,145: gnt-cluster init pid=6936 bootstrap:124 DEBUG
Writing new confd HMAC key to /var/lib/ganeti/hmac.key
2011-04-01 13:57:16,163: gnt-cluster init pid=6936 bootstrap:139 DEBUG
Generating new RAPI certificate at /var/lib/ganeti/rapi.pem
2011-04-01 13:57:16,410: gnt-cluster init pid=6936 bootstrap:148 DEBUG
Generating new cluster domain secret at
/var/lib/ganeti/cluster-domain-secret
2011-04-01 13:57:16,434: gnt-cluster init pid=6936 utils:213 DEBUG
RunCmd /usr/local/lib/ganeti/daemon-util start ganeti-noded
2011-04-01 13:57:16,784: gnt-cluster init pid=6936 client:335 DEBUG
Starting request <ganeti.http.client.HttpClientRequest 192.168.9.12:1811
PUT /version at 0x1e397d0>
2011-04-01 13:57:16,785: gnt-cluster init pid=6936 client:320 DEBUG
Created new client <ganeti.http.client._PooledHttpClient
id=192.168.9.12/1811 lastuse=0 <ganeti.http.client._HttpClient object at
0x1e39b10> at 0x1e42998>
2011-04-01 13:57:16,847: gnt-cluster init pid=6936 client:232 DEBUG
Request <ganeti.http.client.HttpClientRequest 192.168.9.12:1811 PUT
/version at 0x1e397d0> finished, errmsg=None
2011-04-01 13:57:16,848: gnt-cluster init pid=6936 client:350 DEBUG
Returning client <ganeti.http.client._PooledHttpClient
id=192.168.9.12/1811 lastuse=1 <ganeti.http.client._HttpClient object at
0x1e39b10> at 0x1e42998> to pool
2011-04-01 13:57:16,848: gnt-cluster init pid=6936 client:335 DEBUG
Starting request <ganeti.http.client.HttpClientRequest 192.168.9.12:1811
PUT /node_start_master at 0x1e39790>
2011-04-01 13:57:16,848: gnt-cluster init pid=6936 client:322 DEBUG
Reusing client <ganeti.http.client._PooledHttpClient
id=192.168.9.12/1811 lastuse=1 <ganeti.http.client._HttpClient object at
0x1e39b10> at 0x1e42998>
2011-04-01 13:57:20,467: gnt-cluster init pid=6936 client:232 DEBUG
Request <ganeti.http.client.HttpClientRequest 192.168.9.12:1811 PUT
/node_start_master at 0x1e39790> finished, errmsg=None
2011-04-01 13:57:20,467: gnt-cluster init pid=6936 client:350 DEBUG
Returning client <ganeti.http.client._PooledHttpClient
id=192.168.9.12/1811 lastuse=2 <ganeti.http.client._HttpClient object at
0x1e39b10> at 0x1e42998> to pool
root@node01:~# gnt-cluster info
Cluster name: agv_vms
Cluster UUID: 149f8257-650f-4212-9ba8-53c83f5fb886
Creation time: 2011-04-01 13:57:15
Modification time: 2011-04-01 13:57:15
Master node: node01.agr
Architecture (this node): 64bit (x86_64)
Tags: (none)
Default hypervisor: kvm
Enabled hypervisors: kvm
Hypervisor parameters:
- kvm:
acpi: True
boot_order: disk
cdrom_image_path:
disk_cache: default
disk_type: paravirtual
initrd_path:
kernel_args: ro
kernel_path: /boot/vmlinuz-2.6-kvmU
kvm_flag:
mem_path:
migration_bandwidth: 32
migration_downtime: 30
migration_mode: live
migration_port: 8102
nic_type: paravirtual
root_path: /dev/vda1
security_domain:
security_model: none
serial_console: True
usb_mouse:
use_chroot: False
use_localtime: False
vhost_net: False
vnc_bind_address:
vnc_password_file:
vnc_tls: False
vnc_x509_path:
vnc_x509_verify: False
OS-specific hypervisor parameters:
OS parameters:
Cluster parameters:
- candidate pool size: 10
- master netdev: br0
- lvm volume group: None
- lvm reserved volumes: (none)
- drbd usermode helper: None
- file storage path: /srv/ganeti/file-storage
- maintenance of node health: False
- uid pool:
- default instance allocator:
- primary ip version: 4
- preallocation wipe disks: False
Default instance parameters:
- default:
auto_balance: True
memory: 128
vcpus: 1
Default nic parameters:
- default:
link: br0
mode: bridged
root@node01:~# gnt-cluster modify --reserved-lvs=teste01
root@node01:~# gnt-cluster info
Cluster name: agv_vms
Cluster UUID: 149f8257-650f-4212-9ba8-53c83f5fb886
Creation time: 2011-04-01 13:57:15
Modification time: 2011-04-01 13:57:36
Master node: node01.agr
Architecture (this node): 64bit (x86_64)
Tags: (none)
Default hypervisor: kvm
Enabled hypervisors: kvm
Hypervisor parameters:
- kvm:
acpi: True
boot_order: disk
cdrom_image_path:
disk_cache: default
disk_type: paravirtual
initrd_path:
kernel_args: ro
kernel_path: /boot/vmlinuz-2.6-kvmU
kvm_flag:
mem_path:
migration_bandwidth: 32
migration_downtime: 30
migration_mode: live
migration_port: 8102
nic_type: paravirtual
root_path: /dev/vda1
security_domain:
security_model: none
serial_console: True
usb_mouse:
use_chroot: False
use_localtime: False
vhost_net: False
vnc_bind_address:
vnc_password_file:
vnc_tls: False
vnc_x509_path:
vnc_x509_verify: False
OS-specific hypervisor parameters:
OS parameters:
Cluster parameters:
- candidate pool size: 10
- master netdev: br0
- lvm volume group: None
- lvm reserved volumes: teste01
- drbd usermode helper: None
- file storage path: /srv/ganeti/file-storage
- maintenance of node health: False
- uid pool:
- default instance allocator:
- primary ip version: 4
- preallocation wipe disks: False
Default instance parameters:
- default:
auto_balance: True
memory: 128
vcpus: 1
Default nic parameters:
- default:
link: br0
mode: bridged
I'm a bit confused. Are you talking about LVs or about VGs? I would
recommend you read the documentation, but the short version is that you
just need one volume group and that Ganeti will take care of
creating/removing/managing the LVs in that VG.
regards,
iustin
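To spell out that short version: when a single VG is handed to Ganeti at init time, instance disks become LVs that Ganeti creates and removes by itself, with no manual lvcreate. A hedged sketch of that workflow (the instance name, OS name, and disk size are made up for illustration, and with the vgs exit-code bug above still unfixed, the init step would still hit the clustered-VG problem):

```shell
# Hand the 'VMs' volume group to Ganeti at cluster creation...
gnt-cluster init --vg-name=VMs --master-netdev=br0 \
  --nic-parameters link=br0 --enabled-hypervisors=kvm CLUSTER01

# ...then let Ganeti carve an LV out of that VG per instance disk;
# 'guest01.example.com' and the 10g size are illustrative only.
gnt-instance add -t plain -s 10g -o debootstrap guest01.example.com
```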
Iustin,
Sorry about the confusion. I ran the tests and understood my error
from the last email.
Thank you for the answers.
Diego