IP addresses are muddled from multiple iterations, and credentials/URLs are munged for security.
From the aforementioned FAQ:
"A28: First of all, instruct CBTOOL to deploy the instances without attempting to establish contact with these, by running the following commands on the CLI:
cldalter vm_defaults check_boot_complete wait_for_0
cldalter vm_defaults transfer_files False
cldalter vm_defaults run_generic_scripts False"
I go back into the CB CLI:
root@cloudbench-server:/opt/cbtool# ./cb
Cbtool version is "0545f81"
Parsing "cloud definitions" file..... "/opt/cbtool/lib/auxiliary//../..//configs/root_cloud_definitions.txt" opened and parsed successfully.
Checking "Object Store".....An Object Store of the kind "Redis" (shared) on node 10.0.2.113, TCP port 6379, database id "0" seems to be running.
Checking "Log Store".....A Log Store of the kind "rsyslog" (private) on node 10.0.2.113, UDP port 5114 seems to be running.
Checking "Metric Store".....A Metric Store of the kind "MongoDB" (shared) on node 10.0.2.113, TCP port 27017, database id "metrics" seems to be running.
Checking "File Store".....A File Store of the kind "rsync" (private) on node 10.0.2.113, TCP port 873 seems to be running.
Checking for a running API service daemon.....API Service daemon was successfully started. The process id is ['5518'] (http://10.0.2.113:7070).
Checking for a running GUI service daemon.....GUI Service daemon was successfully started. The process id is ['5542', '5543'], listening on port 8080. Full url is "http://10.0.2.113:8080".
The "osk" cloud named "MYOPENSTACK" was already attached to this experiment.
The experiment identifier is EXP-02-16-2016-08-23-59-PM-UTC
All VMCs successfully attached to this experiment. It looks like all VMCs were already attached.
(MYOPENSTACK) cldalter vm_defaults check_boot_complete wait_for_0
The global object "vm_defaults" on Cloud MYOPENSTACK was modified:
|"sub-attribute" (key) |old value |new value
|check_boot_complete |tcp_on_22 |wait_for_0
(MYOPENSTACK) cldalter vm_defaults transfer_files False
The global object "vm_defaults" on Cloud MYOPENSTACK was modified:
|"sub-attribute" (key) |old value |new value
|transfer_files |True |False
(MYOPENSTACK) cldalter vm_defaults run_generic_scripts False
The global object "vm_defaults" on Cloud MYOPENSTACK was modified:
|"sub-attribute" (key) |old value |new value
|run_generic_scripts |True |False
"After that, just run vmattach tinyvm once. Check that the instance was properly deployed by issuing vmlist."
Then at the prompt:
(MYOPENSTACK) vmattach tinyvm
/usr/local/lib/python2.7/dist-packages/novaclient/v2/client.py:109: UserWarning: 'novaclient.v2.client.Client' is not designed to be initialized directly. It is inner class of novaclient. Please, use 'novaclient.client.Client' instead. Related lp bug-report: 1493576
_LW("'novaclient.v2.client.Client' is not designed to be "
status: Flavor (m1.tiny ) not found: Please check if the defined flavor is present on this OpenStack Cloud
VM object 8B5EE009-4F22-5B9F-91A6-B9F98614E02F (named "vm_2") could not be attached to this experiment: vm_2 (cloud-assigned uuid NA) could not be created on OpenStack Cloud "MYOPENSTACK"Flavor (m1.tiny ) not found: Please check if the defined flavor is present on this OpenStack Cloud.
Flavor not found. Using a flavor that DOES exist:
(MYOPENSTACK) vmattach GP3-Medium
VM object 0065BA58-99E5-5B42-A170-AA3F5B366460 (named "vm_3") could not be attached to this experiment: VM object initialization failure: 'GP3-Medium'
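The two failures above look similar but are probably different. The first (m1.tiny) is a plain missing flavor and can be caught before vmattach by listing the cloud's flavors (e.g. `openstack flavor list` with the cloud's RC file sourced). A toy sketch of that pre-check against a captured flavor list; the sample flavor names here are an assumption, not from the real cloud:

```shell
# Pre-check: is the flavor cbtool will request actually defined on the cloud?
# flavors.txt stands in for the output of `openstack flavor list -c Name -f value`;
# the names below are hypothetical.
cat > flavors.txt <<'EOF'
m1.small
GP3-Medium
GP3-Large
EOF
for f in m1.tiny GP3-Medium; do
    if grep -qx "$f" flavors.txt; then
        echo "$f: present"
    else
        echo "$f: MISSING - a vmattach requesting this flavor will fail"
    fi
done
```

The second failure ("VM object initialization failure: 'GP3-Medium'") happens before the cloud is even contacted: vmattach's first positional argument is a role (like tinyvm or iperfserver), not a flavor, so cbtool is probably rejecting GP3-Medium as an unknown role. The flavor appears to be controlled by the `size` attribute instead (it shows up as `size` in the vmshow output below), e.g. something like `cldalter vm_defaults size GP3-Medium`.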
I also tried using the dashboard to attach to MYOPENSTACK and spawn an instance, to try to get at the logs. From the FAQ:
"Then get all the relevant information about the instance with vmshow vm_1."
(MYOPENSTACK) vmshow vm_1
The VM object vm_1, attached to this experiment, has the following attributes (Cloud MYOPENSTACK) :
|attribute (VM object key) |value
|ai |none
|ai_arrived |0
|ai_arriving |0
|ai_departed |0
|ai_failed |0
|ai_issued |0
|ai_name |none
|ai_reservations |0
|aidrs |none
|aidrs_name |none
|alternative_remote_mtu |False
|alternative_remote_mtu_default |1200
|alternative_remote_mtu_interface |default
|always_create_floating_ip |True
|arrival |1455655504
|async |true
|attempts |30
|availability_zone |
|base_dir |/opt/cbtool/lib/auxiliary//../..
|capture_supported |True
|cgroups_base_dir |/sys/fs/cgroup/
|check_boot_complete |tcp_on_22
|check_boot_started |poll_cloud
|client_pref_pool |lg
|cloud_hostname |cb-root-MYOPENSTACK-vm4-iperfserver
|cloud_init_bootstrap |False
|cloud_init_rsync |False
|cloud_ip |10.0.2.160
|cloud_mac |N/A
|cloud_name |MYOPENSTACK
|cloud_vm_name |cb-root-MYOPENSTACK-vm4-iperfserver
|cloud_vm_uuid |uid
|cloud_vv_uuid |none
|command |vmattach MYOPENSTACK iperfserver auto empty default continue
|command_originated |1455655374
|comments |
|counter |6
|create_jumphost |False
|credentials |admin-nunya-admin
|credentials_dir |/opt/cbtool/lib/auxiliary//../../credentials
|daemon_dir |/home/klabuser
|debug_remote_commands |False
|detach_parallelism |20
|discover_hosts |False
|driver_pref_pool |lg
|eclipsed |False
|errors |no
|exclude_list |/opt/cbtool/lib/auxiliary//../../exclude_list.txt
|execute_json_filename_prefix |cb
|execute_script_name |execute_on_staging.sh
|expected_mtu |1500
|experiment_id |EXP-02-16-2016-08-23-59-PM-UTC
|filestore_hostname |10.0.2.113
|filestore_port |873
|filestore_username |root
|floating_pool |
|force_failure |False
|host_name |csx-a-nova1-013
|hostname_key |cloud_vm_name
|identity |/opt/cbtool/lib/auxiliary//../../credentials/cbtool_rsa
|imageid1 |cloudbench-software-0
|instance_name |instance-000118d4
|is_jumphost |False
|jars_dir |/home/klabuser/cloudbench/jar
|jumphost_base_name |cb-jumphost
|jumphost_login |cbuser
|jumphost_name |root-cb-jumphost
|jumphost_netnames |all
|key_name |root_default_mythos-key
|last_known_state |ACTIVE with ip assigned
|leave_instance_on_failure |False
|local_dir_name |cbtool
|login |cbuser
|meta_tags |empty
|mgt_001_provisioning_request_originated |1455655374
|mgt_002_provisioning_request_sent |4
|mgt_003_provisioning_request_completed |101
|mgt_004_network_acessible |25
|mgt_005_file_transfer |0
|mgt_006_instance_preparation |0
|mgt_007_application_start |0
|migrate_protocol |tcp
|migrate_protocol_supported |tcp,rdma
|migrate_supported |True
|mode |controllable
|model |osk
|name |vm_4
|netname |cloudbench-shared
|notification |False
|notification_channel |auto
|pattern |none
|project |default
|protect_protocol |tcp
|protect_protocol_supported |tcp,rdma
|protect_supported |True
|prov_cloud_ip |10.0.2.160
|prov_netname |cloudbench-shared
|qemu_debug |False
|randomize_image_name |False
|remote_dir_name |cbtool
|resize_supported |False
|role |iperfserver
|run_cloud_ip |10.0.2.160
|run_generic_scripts |True
|run_netname |cloudbench-shared
|runstate_parallelism |5
|runstate_supported |True
|security_groups |default
|size |GP3-Medium
|sla_runtime |ok
|ssh_key_name |cbtool_rsa
|staging |continue
|state |attached
|sticky_app_status |False
|temp_attr_list |empty=empty
|tenant |default
|timeout |30000
|tracking |none
|transfer_files |True
|type |none
|update_attempts |36
|update_frequency |5
|use_cinderclient |True
|use_floating_ip |False
|use_jumphost |False
|use_neutronclient |false
|use_vpn_ip |False
|userdata |/var/lib/cloud/uid/user-data.txt
|username |root
|utc_offset_on_orchestrator |0
|uuid |uid
|vm_arrived |0
|vm_arriving |1
|vm_departed |0
|vm_failed |3
|vm_issued |4
|vm_reservations |0
|vmc |1munged1
|vmc_arrived |1
|vmc_cloud_ip |IP
|vmc_departed |0
|vmc_failed |0
|vmc_issued |1
|vmc_name |some_region
|vmc_pool |SUT
|vpn_only |False
|vpn_server_bootstrap |192.168.0.6
|vpn_server_ip |10.0.2.113
|vpn_server_port |1194
"Please take note of the following attributes: "credentials_dir", "ssh_key_name", "login", and "prov_cloud_ip". Then move to a separate bash prompt and:"
"a) try to ping "prov_cloud_ip""
root@cloudbench-server:/opt/cbtool# ping 10.0.2.179
PING 10.0.2.179 (10.0.2.179) 56(84) bytes of data.
64 bytes from 10.0.2.179: icmp_seq=1 ttl=64 time=1.12 ms
(This wasn't working before I turned the connectivity checks off...)
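Incidentally, cbtool's default boot check (check_boot_complete was tcp_on_22 before I changed it, per the vmshow output above) can be reproduced by hand. A sketch of that probe; HOST is the instance's prov_cloud_ip (munged here), so adjust it before running:

```shell
# Probe TCP port 22 by hand, mimicking cbtool's default tcp_on_22 boot check.
# HOST is the instance's prov_cloud_ip (munged here); adjust before running.
HOST=10.0.2.160
for attempt in 1 2 3; do
    # /dev/tcp is a bash feature; timeout caps each connect attempt at 2 seconds
    if timeout 2 bash -c ">/dev/tcp/$HOST/22" 2>/dev/null; then
        echo "port 22 on $HOST is reachable"
        break
    fi
    echo "attempt $attempt: port 22 on $HOST not reachable yet"
    sleep 1
done
```

If this never succeeds while ping does, sshd simply isn't up yet, which would point at cloud-init still running inside the guest.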
"b) try to ssh with a command line such as ssh -i "credentials_dir"/"ssh_key_name" "login"@"prov_cloud_ip""
root@cloudbench-server:/opt/cbtool# ssh -i /opt/cbtool/lib/auxiliary//../../credentials/cbtool_rsa cbu...@10.0.2.179
The authenticity of host '10.0.2.179 (10.0.2.179)' can't be established.
ECDSA key fingerprint is SHA256:5AYEsQ5ROoYFopogcXRxBJG6qZVIMiv3w79UkOYhBro.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.2.179' (ECDSA) to the list of known hosts.
Permission denied (publickey).
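For reference, the FAQ's step (b) just glues four vmshow attributes together. A quick sketch of that composition, using the (munged) values from the vmshow dump above:

```shell
# Compose the ssh command from a saved vmshow dump (values copied from the
# session above; IPs/names are munged per the note at the top).
cat > vmshow.txt <<'EOF'
|credentials_dir |/opt/cbtool/lib/auxiliary//../../credentials
|ssh_key_name |cbtool_rsa
|login |cbuser
|prov_cloud_ip |10.0.2.160
EOF
# attr <key>: pull the value column for a given key out of the dump
attr() { awk -F'|' -v k="$1" '$2 ~ "^"k" *$" { gsub(/^ +| +$/, "", $3); print $3 }' vmshow.txt; }
echo "ssh -i $(attr credentials_dir)/$(attr ssh_key_name) $(attr login)@$(attr prov_cloud_ip)"
```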
"NOTE: There is a small utility, called "cbssh", that could be used directly. To try it, just run - on a bash prompt - "cd ~cbtool; ~/cbtool/cbssh vm_1", and you should be able to login on the node."
root@cloudbench-server:/opt/cbtool# ./cbssh vm_1
Warning: Permanently added '10.0.2.139' (ECDSA) to the list of known hosts.
Permission denied (publickey).
Exit code for command
ssh -i /opt/cbtool/lib/auxiliary//../../credentials/cbtool_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l cbuser 10.0.2.139 ""
has the value of 255
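Given the "Permission denied (publickey)", the obvious next check is whether the private key cbssh uses (credentials/cbtool_rsa) actually corresponds to the keypair OpenStack injected into the instance (key_name was root_default_mythos-key in the vmshow output). Against the real cloud that would mean comparing `ssh-keygen -y -f credentials/cbtool_rsa` with the public_key reported by `nova keypair-show root_default_mythos-key`. The sketch below demonstrates the comparison with a throwaway key, since the real key material obviously isn't reproducible here:

```shell
# Demonstrate the key-match check with a throwaway key (the real check would
# use credentials/cbtool_rsa and the public key registered in OpenStack).
rm -f demo_rsa demo_rsa.pub
ssh-keygen -q -t rsa -N '' -f demo_rsa
# Derive the public key from the private half, as would be done for cbtool's key
ssh-keygen -y -f demo_rsa > derived.pub
# Compare key type + key material (the third field of a .pub line is just a comment)
awk '{print $1, $2}' demo_rsa.pub > local_key
awk '{print $1, $2}' derived.pub > cloud_key
cmp -s local_key cloud_key && echo "key material matches" || echo "MISMATCH: instance has a different key injected"
```

A mismatch here would confirm that cbtool is booting instances with a keypair other than the one it later uses to log in.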
We seem to have a situation where cbtool is either not creating the instance with the appropriate key name, or the key is not getting set correctly when the instance spawns, which would leave the instance inaccessible. This is sort of a chicken-vs-egg scenario: I have to create a key to create an image to create a VM, and then cbtool has to either use my key for that image or add its own key to OpenStack and nova boot the VM with that key name.
It looks as though SSH key management is not being handled correctly here for some reason. Also, for some reason I have been unable to even ping one of these instances during the five-minute interval before CB deletes them. I can't tell whether cloud-init is taking far too long to bring up a new instance, or whether some change I made to the cbtool configs actually fixed the issue. Probably the former.
If you managed to read this entire thing you deserve a medal.
:)