Unrecognized option "openstack" in cfy init


Manik Sidana

Aug 25, 2015, 12:12:09 AM
to cloudify-users
Hi,

After following the steps described in https://pypi.python.org/pypi/cloudify/3.0, I get an error when I execute:
cfy init openstack



root@controller:/home/controller/cldfy# cfy init openstack
usage: cfy [-h] [--version]
           {status,blueprints,recover,teardown,bootstrap,node-instances,dev,deployments,init,ssh,use,workflows,nodes,executions,local,events}
           ...
cfy: error: unrecognized arguments: openstack
root@controller:/home/controller/cldfy#

Below is the list of installed Python packages:

root@controller:/home/controller/cldfy# pip list | grep cloudify
cloudify (3.3a4)
cloudify-dsl-parser (3.3a4)
cloudify-openstack (0.3)
cloudify-plugins-common (3.3a4)
cloudify-rest-client (3.3a4)
cloudify-script-plugin (1.3a4)
root@controller:/home/controller/cldfy#

It would be a great help if anybody could guide me.

Thanks

Manik Sidana

Aug 25, 2015, 12:49:18 AM
to cloudify-users
I installed the python-dev package. After that, I was able to execute cfy init openstack.
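
For anyone else hitting this on Ubuntu, the install is along these lines (a sketch; the package provides the headers pip needs to build C extensions):

  sudo apt-get update
  sudo apt-get install python-dev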

(cvenv)root@controller:/home/controller/testcfy# sudo cfy init openstack
Initializing Cloudify
Initialization complete
(cvenv)root@controller:/home/controller/testcfy# ls -l
total 8
-rw-rw-r-- 1 root root 6234 Aug 25 10:09 cloudify-config.yaml
(cvenv)root@controller:/home/controller/testcfy# 

tram...@gigaspaces.com

Aug 25, 2015, 2:44:59 AM
to cloudify-users
These instructions are for 3.0. Please use 3.2.
Please follow these installation instructions: http://getcloudify.org/guide/3.2/installation.html
For a quick walk-through of using Cloudify, follow: http://getcloudify.org/guide/3.2/quickstart.html

To summarize:

Install pip:
  - curl -O https://bootstrap.pypa.io/get-pip.py
  - [sudo] python get-pip.py
Install Virtualenv:
  - [sudo] pip install virtualenv
  - virtualenv cloudify
  - source cloudify/bin/activate
Install Cloudify:
  - pip install cloudify==3.2
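
To verify the installation before proceeding (the CLI version reported later in this thread is 3.2.0):
  - cfy --version   # should print: Cloudify CLI 3.2.0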

Now you can bootstrap a manager on OpenStack:
  - wget https://github.com/cloudify-cosmo/cloudify-manager-blueprints/archive/3.2.zip # download the manager blueprints archive if you don't have it yet
  - unzip 3.2.zip
  - cd cloudify-manager-blueprints-3.2/openstack
  - cp inputs.yaml.template inputs.yaml # At this stage you need to customize this file. See "Configuring your manager blueprints": http://getcloudify.org/guide/3.2/getting-started-bootstrapping.html
  - cfy bootstrap --install-plugins -p openstack-manager-blueprint.yaml -i inputs.yaml

When that has succeeded, you can deploy the nodecellar example blueprint:
  - wget https://github.com/cloudify-cosmo/cloudify-nodecellar-example/archive/3.2.zip # download the example archive if you don't have it yet
  - unzip 3.2.zip
  - cd cloudify-nodecellar-example-3.2/
  - cp inputs/openstack.yaml.template inputs.yaml # At this stage you need to customize this file
  - cfy blueprints upload -p openstack-blueprint.yaml -b nodecellar
  - cfy deployments create -b nodecellar -d nodecellar -i inputs.yaml
  - cfy executions start -w install -d nodecellar

Other helpful commands:
  - cfy blueprints list
  - cfy deployments list
  - cfy executions list -d [DEPLOYMENT ID]
  - cfy events list -e [EXECUTION ID] -l

Manik Sidana

Aug 26, 2015, 5:39:12 AM
to cloudify-users
Hi,

Thanks a lot for the detailed steps.
However, after following these steps, I am stuck at the error below when I try to bootstrap [cfy bootstrap --install-plugins -p openstack-manager-blueprint.yaml -i inputs.yaml]:


2015-08-26 14:55:47 LOG <manager> [manager_server_04bdd.creation] ERROR: Exception raised on operation [nova_plugin.server.creation_validation] invocation

Below is the entire error log

__________________________________________________________________START_____________________________________________________________________________________
2015-08-26 14:55:42 CFY <manager> Starting 'execute_operation' workflow execution
2015-08-26 14:55:42 CFY <manager> [volume_bd8e9] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [management_keypair_39ee6] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [agent_keypair_12b9e] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [external_network_ad91e] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [management_network_9fc95] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [manager_server_ip_b5e0b] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [manager_data_5c2dd] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [manager_port_4b9d6] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [agents_security_group_aee6c] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [management_subnet_5b4eb] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [management_security_group_6cf6f] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [openstack_configuration_9a6b1] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [manager_server_04bdd] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [router_f925e] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [manager_9ebff] Starting operation cloudify.interfaces.validation.creation
2015-08-26 14:55:42 CFY <manager> [manager_9ebff.creation] Sending task 'cloudify_cli.bootstrap.tasks.creation_validation'
2015-08-26 14:55:42 CFY <manager> [management_subnet_5b4eb.creation] Sending task 'neutron_plugin.subnet.creation_validation'
2015-08-26 14:55:42 CFY <manager> [manager_server_ip_b5e0b.creation] Sending task 'neutron_plugin.floatingip.creation_validation'
2015-08-26 14:55:42 CFY <manager> [management_security_group_6cf6f.creation] Sending task 'neutron_plugin.security_group.creation_validation'
2015-08-26 14:55:42 CFY <manager> [agent_keypair_12b9e.creation] Sending task 'nova_plugin.keypair.creation_validation'
2015-08-26 14:55:42 CFY <manager> [management_keypair_39ee6.creation] Sending task 'nova_plugin.keypair.creation_validation'
2015-08-26 14:55:42 CFY <manager> [external_network_ad91e.creation] Sending task 'neutron_plugin.network.creation_validation'
2015-08-26 14:55:42 CFY <manager> [volume_bd8e9.creation] Sending task 'cinder_plugin.volume.creation_validation'
2015-08-26 14:55:42 CFY <manager> [management_network_9fc95.creation] Sending task 'neutron_plugin.network.creation_validation'
2015-08-26 14:55:42 CFY <manager> [manager_port_4b9d6.creation] Sending task 'neutron_plugin.port.creation_validation'
2015-08-26 14:55:42 CFY <manager> [manager_server_04bdd.creation] Sending task 'nova_plugin.server.creation_validation'
2015-08-26 14:55:42 CFY <manager> [router_f925e.creation] Sending task 'neutron_plugin.router.creation_validation'
2015-08-26 14:55:42 CFY <manager> [manager_9ebff.creation] Task started 'cloudify_cli.bootstrap.tasks.creation_validation'
2015-08-26 14:55:42 CFY <manager> [agents_security_group_aee6c.creation] Sending task 'neutron_plugin.security_group.creation_validation'
2015-08-26 14:55:45 CFY <manager> [manager_9ebff.creation] Task succeeded 'cloudify_cli.bootstrap.tasks.creation_validation'
2015-08-26 14:55:45 CFY <manager> [management_subnet_5b4eb.creation] Task started 'neutron_plugin.subnet.creation_validation'
2015-08-26 14:55:45 CFY <manager> [management_subnet_5b4eb.creation] Task succeeded 'neutron_plugin.subnet.creation_validation'
2015-08-26 14:55:45 CFY <manager> [manager_server_ip_b5e0b.creation] Task started 'neutron_plugin.floatingip.creation_validation'
2015-08-26 14:55:46 CFY <manager> [manager_server_ip_b5e0b.creation] Task succeeded 'neutron_plugin.floatingip.creation_validation'
2015-08-26 14:55:46 CFY <manager> [management_security_group_6cf6f.creation] Task started 'neutron_plugin.security_group.creation_validation'
2015-08-26 14:55:46 CFY <manager> [management_security_group_6cf6f.creation] Task succeeded 'neutron_plugin.security_group.creation_validation'
2015-08-26 14:55:46 CFY <manager> [agent_keypair_12b9e.creation] Task started 'nova_plugin.keypair.creation_validation'
2015-08-26 14:55:46 CFY <manager> [agent_keypair_12b9e.creation] Task succeeded 'nova_plugin.keypair.creation_validation'
2015-08-26 14:55:46 CFY <manager> [management_keypair_39ee6.creation] Task started 'nova_plugin.keypair.creation_validation'
2015-08-26 14:55:46 CFY <manager> [management_keypair_39ee6.creation] Task succeeded 'nova_plugin.keypair.creation_validation'
2015-08-26 14:55:46 CFY <manager> [external_network_ad91e.creation] Task started 'neutron_plugin.network.creation_validation'
2015-08-26 14:55:46 CFY <manager> [external_network_ad91e.creation] Task succeeded 'neutron_plugin.network.creation_validation'
2015-08-26 14:55:46 CFY <manager> [volume_bd8e9.creation] Task started 'cinder_plugin.volume.creation_validation'
2015-08-26 14:55:46 CFY <manager> [volume_bd8e9.creation] Task succeeded 'cinder_plugin.volume.creation_validation'
2015-08-26 14:55:46 CFY <manager> [management_network_9fc95.creation] Task started 'neutron_plugin.network.creation_validation'
2015-08-26 14:55:46 CFY <manager> [management_network_9fc95.creation] Task succeeded 'neutron_plugin.network.creation_validation'
2015-08-26 14:55:46 CFY <manager> [manager_port_4b9d6.creation] Task started 'neutron_plugin.port.creation_validation'
2015-08-26 14:55:46 CFY <manager> [manager_port_4b9d6.creation] Task succeeded 'neutron_plugin.port.creation_validation'
2015-08-26 14:55:46 CFY <manager> [manager_server_04bdd.creation] Task started 'nova_plugin.server.creation_validation'
2015-08-26 14:55:47 LOG <manager> [manager_server_04bdd.creation] ERROR: Exception raised on operation [nova_plugin.server.creation_validation] invocation
Traceback (most recent call last):
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/cloudify/decorators.py", line 125, in wrapper
    result = func(*args, **kwargs)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/openstack_plugin_common/__init__.py", line 532, in wrapper
    _re_raise(e, recoverable=False, status_code=e.code)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/openstack_plugin_common/__init__.py", line 527, in wrapper
    return f(*args, **kw)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/nova_plugin/server.py", line 766, in creation_validation
    validate_server_property_value_exists(server_props, 'image')
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/nova_plugin/server.py", line 747, in validate_server_property_value_exists
    prop_values = list(nova_client.cosmo_list(property_name))
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/openstack_plugin_common/__init__.py", line 621, in cosmo_list
    for obj in getattr(self, obj_type_plural).findall(**kw):
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/novaclient/base.py", line 228, in findall
    listing = self.list(**list_kwargs)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/novaclient/v1_1/images.py", line 69, in list
    return self._list('/images%s%s' % (detail, query), 'images')
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/novaclient/base.py", line 64, in _list
    _resp, body = self.api.client.get(url)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/novaclient/client.py", line 283, in get
    return self._cs_request(url, 'GET', **kwargs)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/novaclient/client.py", line 272, in _cs_request
    method, **kwargs)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/novaclient/client.py", line 242, in _time_request
    resp, body = self.request(url, method, **kwargs)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/novaclient/client.py", line 236, in request
    raise exceptions.from_response(resp, body, url, method)
NonRecoverableError: Not found [status_code=404]
2015-08-26 14:55:47 CFY <manager> [manager_server_04bdd.creation] Task failed 'nova_plugin.server.creation_validation' -> Not found [status_code=404] [attempt 1/6]
2015-08-26 14:55:47 CFY <manager> [router_f925e.creation] Task started 'neutron_plugin.router.creation_validation'
2015-08-26 14:55:47 CFY <manager> 'execute_operation' workflow execution failed: Workflow failed: Task failed 'nova_plugin.server.creation_validation' -> Not found [status_code=404]
Workflow failed: Task failed 'nova_plugin.server.creation_validation' -> Not found [status_code=404]

__________________________________________________________________END_____________________________________________________________________________________


My inputs file is given below:
______________________________________________________________INPUTS.YAML____________________________________________________________________________________
keystone_username: admin
keystone_password: password
keystone_tenant_name: admin
region: regionOne
manager_public_key_name: cfy_key
agent_public_key_name: cfy_key
image_id: 7e5b4c28-ff1d-4af4-ad06-d9079e54d67d
flavor_id: 2
external_network_name: public

use_existing_manager_keypair: false
use_existing_agent_keypair: false
manager_server_name: cloudify-manager-server
manager_server_user: ubuntu
manager_private_key_path: ~/.ssh/cloudify-manager-kp.pem
agent_private_key_path: ~/.ssh/cloudify-agent-kp.pem
agents_user: ubuntu
management_network_name: cloudify-management-network
management_subnet_name: cloudify-management-network-subnet
management_router: cloudify-management-router
manager_security_group_name: cloudify-sg-manager
agents_security_group_name: cloudify-sg-agents
manager_port_name: cloudify-manager-port
manager_volume_name: cloudify-manager-volume
resources_prefix: cloudify


------------------------Glance Image-------------------------
root@controller:/home/controller# glance image-list
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
| 515d3cd3-3b68-44ea-b785-fd8a9e51d4b8 | cirros-0.3.3-x86_64 | qcow2       | bare             | 13200896  | active |
| 7e5b4c28-ff1d-4af4-ad06-d9079e54d67d | ubuntu-trusty-amd64 | qcow2       | bare             | 258474496 | active |
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
root@controller:/home/controller#

______________________________________________________________________________________________________



________________________________________________________NOVA KEYPAIR__________________________________________
root@controller:/home/controller# nova keypair-list
+---------+-------------------------------------------------+
| Name    | Fingerprint                                     |
+---------+-------------------------------------------------+
| cfy_key | 55:59:79:a8:a6:e5:7f:49:b5:dc:0b:aa:d7:db:e5:ca |
+---------+-------------------------------------------------+
root@controller:/home/controller#


# cfy --version
Cloudify CLI 3.2.0
#

Looking forward to your guidance and support.

Thanks

Manik Sidana

Aug 28, 2015, 2:18:08 AM
to cloudify-users
Hi,

I would summarize my problem as follows:

Cloudify is not able to locate/fetch the Ubuntu image in Glance, even though the image exists in the Glance database.
Using PDB, I found that Cloudify gets the correct image ID 7e5b4c28-ff1d-4af4-ad06-d9079e54d67d in validate_server_property_value_exists(), but for some reason an exception is thrown.

I am using the same image ID in inputs.yaml:

image_id: 7e5b4c28-ff1d-4af4-ad06-d9079e54d67d
flavor_id: 2

The available flavors are listed below:

 nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

I have been stuck on this for 1.5 days.
Any help/pointers would be highly appreciated.

Thanks

tram...@gigaspaces.com

Aug 30, 2015, 9:45:07 AM
to cloudify-users

In a separate post, I suggested that perhaps the nova_url you are providing is unnecessary. Can you provide empty values for nova_url and neutron_url and let me know if the error is the same?

Cloudify doesn't use Glance. It only uses the nova plugin.
Can you confirm if the nova CLI client referenced below uses the same credentials as you provided in the inputs.yaml file?
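For example, one quick way to compare (a sketch assuming the standard OS_* environment variables; substitute your own Keystone endpoint for the placeholder):

  export OS_USERNAME=admin
  export OS_PASSWORD=password
  export OS_TENANT_NAME=admin
  export OS_AUTH_URL=http://<keystone-host>:5000/v2.0   # placeholder endpoint
  nova image-list    # should show the same IDs as 'glance image-list'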

Also, can you tell us who your OpenStack provider is?



Manik Sidana

Aug 31, 2015, 1:05:36 AM
to cloudify-users
Thanks a lot for your help and time.

>> In a separate post, I suggested that perhaps the nova_url you are providing is unnecessary. Can you provide empty values for nova_url and neutron_url and let me know if the error is the same?
It worked after providing empty URLs.

>>Cloudify doesn't use glance. It only uses the nova plugin. 
>>Can you confirm if the nova CLI client referenced below uses the same credentials as you provided in the inputs.yaml file?
Yes, the nova credentials were correct.

>>Also can you tell us who is your Openstack provider?
I am using OpenStack Juno (the freely available release). The underlying OS is Ubuntu 14.04.

The bootstrap worked after giving empty URLs.

I am now facing errors from the Cinder and Fabric plugins.
Before this error, I was able to see the ports, networks, subnets, router, and one VM created by Cloudify.
But when the execution terminated due to the error, all the resources vanished, presumably due to some error-cleanup routine.

Let me check if there's anything wrong with my Cinder configuration.
Do I need to create a Cinder volume named "cloudify-manager-volume" before bootstrapping?
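
(In the meantime, I plan to check the Cinder service logs on the controller; a sketch assuming the default Ubuntu log locations. The failed volume itself was already cleaned up during teardown, as the empty 'cinder list' output below shows, so the logs are the only trace left:)

  grep -i error /var/log/cinder/cinder-scheduler.log   # e.g. "No valid host was found"
  grep -i error /var/log/cinder/cinder-volume.log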

Pasting my error dump.


----------------------------------------------------- CINDER ERROR--------------------------------------------------------------------------------------
2015-08-31 10:02:10 CFY <manager> [volume_f7650.create] Task started 'cinder_plugin.volume.create'
2015-08-31 10:02:14 LOG <manager> [volume_f7650.create] ERROR: Exception raised on operation [cinder_plugin.volume.create] invocation
Traceback (most recent call last):
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/cloudify/decorators.py", line 125, in wrapper
    result = func(*args, **kwargs)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/openstack_plugin_common/__init__.py", line 544, in wrapper
    return f(*args, **kw)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/cinder_plugin/volume.py", line 74, in create
    status=VOLUME_STATUS_AVAILABLE)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/openstack_plugin_common/__init__.py", line 544, in wrapper
    return f(*args, **kw)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/cinder_plugin/volume.py", line 92, in wait_until_status
    "Volume {0} is in error state".format(volume_id))
NonRecoverableError: Volume ad7db3ff-c8b3-4b7a-a8d3-4e1aa124b9ea is in error state
2015-08-31 10:02:14 CFY <manager> [volume_f7650.create] Task failed 'cinder_plugin.volume.create' -> Volume ad7db3ff-c8b3-4b7a-a8d3-4e1aa124b9ea is in error state [attempt 1/6]
2015-08-31 10:02:14 CFY <manager> 'install' workflow execution failed: Workflow failed: Task failed 'cinder_plugin.volume.create' -> Volume ad7db3ff-c8b3-4b7a-a8d3-4e1aa124b9ea is in error state
bootstrap failed!
executing teardown due to failed bootstrap
2015-08-31 10:02:14 CFY <manager> Starting 'uninstall' workflow execution
2015-08-31 10:02:14 CFY <manager> [openstack_configuration_20caa] Stopping node
-------------------------------------------------------------------------------------------------------------------------------------------------------



----------------------------------------------------- FABRIC PLUGIN ERROR--------------------------------------------------------------------------------------
2015-08-31 10:03:10 LOG <manager> [manager_7ebab.delete] INFO: preparing fabric environment...
2015-08-31 10:03:10 LOG <manager> [manager_7ebab.delete] INFO: environment prepared successfully
[192.168.56.118] run: sudo service docker stop
2015-08-31 10:04:02 LOG <manager> [manager_7ebab.delete] ERROR: Exception raised on operation [fabric_plugin.tasks.run_module_task] invocation
Traceback (most recent call last):
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/cloudify/decorators.py", line 125, in wrapper
    result = func(*args, **kwargs)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/fabric_plugin/tasks.py", line 89, in run_module_task
    return _run_task(task, task_properties, fabric_env)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/fabric_plugin/tasks.py", line 95, in _run_task
    return task(**task_properties)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/cloudify_cli/bootstrap/tasks.py", line 102, in stop_docker_service
    _run_command(docker_service_stop_command)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/cloudify_cli/bootstrap/tasks.py", line 660, in _run_command
    return fabric.api.run(command, shell_escape=shell_escape)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/fabric/network.py", line 639, in host_prompting_wrapper
    return func(*args, **kwargs)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/fabric/operations.py", line 1042, in run
    shell_escape=shell_escape)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/fabric/operations.py", line 909, in _run_command
    channel=default_channel(), command=wrapped_command, pty=pty,
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/fabric/state.py", line 388, in default_channel
    chan = _open_session()
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/fabric/state.py", line 380, in _open_session
    return connections[env.host_string].get_transport().open_session()
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/fabric/network.py", line 151, in __getitem__
    self.connect(key)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/fabric/network.py", line 143, in connect
    self[key] = connect(user, host, port, cache=self)
  File "/home/controller/inst/cloudify/local/lib/python2.7/site-packages/fabric/network.py", line 565, in connect
    raise NetworkError(msg, e)
NetworkError: Low level socket error connecting to host 192.168.56.118 on port 22: Connection refused (tried 5 times)
2015-08-31 10:04:02 CFY <manager> [manager_7ebab.delete] Task failed 'fabric_plugin.tasks.run_module_task' -> Low level socket error connecting to host 192.168.56.118 on port 22: Connection refused (tried 5 times) [attempt 1/6]
2015-08-31 10:04:02 CFY <manager> [manager_data_799b1] Stopping node
2015-08-31 10:04:03 CFY <manager> [manager_data_799b1->manager_server_b57a7|unlink] Sending task 'fabric_plugin.tasks.run_script'
2015-08-31 10:04:03 CFY <manager> [manager_data_799b1->manager_server_b57a7|unlink] Task started 'fabric_plugin.tasks.run_script'
-------------------------------------------------------------------------------------------------------------------------------------------------------



------------------------------------------------------------------------TEARDOWN TAIL (ENDS WITH THE CINDER ERROR)-------------------------------------------------------------------------------
2015-08-31 10:06:02 LOG <manager> [router_07eb0.delete] INFO: deleting router
2015-08-31 10:06:02 CFY <manager> [router_07eb0.delete] Task succeeded 'neutron_plugin.router.delete'
2015-08-31 10:06:02 CFY <manager> [external_network_7fc5e] Stopping node
2015-08-31 10:06:03 CFY <manager> [external_network_7fc5e] Deleting node
2015-08-31 10:06:03 CFY <manager> [external_network_7fc5e.delete] Sending task 'neutron_plugin.network.delete'
2015-08-31 10:06:03 CFY <manager> [external_network_7fc5e.delete] Task started 'neutron_plugin.network.delete'
2015-08-31 10:06:03 LOG <manager> [external_network_7fc5e.delete] INFO: not deleting network since an external network is being used
2015-08-31 10:06:03 CFY <manager> [external_network_7fc5e.delete] Task succeeded 'neutron_plugin.network.delete'
2015-08-31 10:06:03 CFY <manager> 'uninstall' workflow execution succeeded
Workflow failed: Task failed 'cinder_plugin.volume.create' -> Volume ad7db3ff-c8b3-4b7a-a8d3-4e1aa124b9ea is in error state
-------------------------------------------------------------------------------------------------------------------------------------------------------


(cloudify)root@controller:/home/controller/inst/cloudify/cloudify-manager-blueprints-3.2/openstack# cinder list
+----+--------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+----+--------+--------------+------+-------------+----------+-------------+
+----+--------+--------------+------+-------------+----------+-------------+
(cloudify)root@controller:/home/controller/inst/cloudify/cloudify-manager-blueprints-3.2/openstack# 


-----------------------------------------------------------------------INPUTS.YAML--------------------------------------------------------------

(cloudify)root@controller:/home/controller/inst/cloudify/cloudify-manager-blueprints-3.2/openstack# cat inputs.yaml
keystone_username: admin
keystone_password: password
keystone_tenant_name: admin
region: regionOne
manager_public_key_name: cfy_key2
agent_public_key_name: cfy_key3
image_id: 7e5b4c28-ff1d-4af4-ad06-d9079e54d67d
flavor_id: 2
external_network_name: public
use_existing_manager_keypair: false
use_existing_agent_keypair: false
manager_server_name: cloudify-manager-server
manager_server_user: ubuntu
manager_private_key_path: ~/.ssh/cloudify-manager-kp.pem
agent_private_key_path: ~/.ssh/cloudify-agent-kp.pem
agents_user: ubuntu
management_network_name: cloudify-management-network
management_subnet_name: cloudify-management-network-subnet
management_router: cloudify-management-router
manager_security_group_name: cloudify-sg-manager
agents_security_group_name: cloudify-sg-agents
manager_port_name: cloudify-manager-port
manager_volume_name: cloudify-manager-volume
nova_url:
neutron_url:
resources_prefix: cloudify
----------------------------------------------------------------------------------------------------------------------------------------------------------------

tram...@gigaspaces.com

Aug 31, 2015, 3:24:36 AM
to cloudify-users
It sounds like you've guessed that you don't really need the block storage for the manager to work.

The manager blueprint defines two software components of the manager: the data container and the cfy manager container. The data container is configured to store its data on the Cinder volume. The idea is that if your manager fails, you can bring up a new manager and attach it to your old data container.

The cfy manager Docker container does not require the cfy data container to be hosted on a Cinder volume; it can be hosted on the VM's own storage. So you should be able to remove the volume and put the data container on the manager VM.

I don't have an example for 3.2.1 where this is done in OpenStack. If you look at the AWS-EC2 manager blueprint, you can compare the topology and figure out how this should work. It shouldn't require many changes to the OpenStack blueprint at all.




Manik Sidana

Sep 4, 2015, 12:23:59 AM
to cloudify-users
Hi,

I tried executing bootstrap with an Ubuntu 14.04 image.
During the install workflow, I get the error "sudo: unable to resolve host cloudify-manager-server".
I am bootstrapping Cloudify on the OpenStack controller node.

Do I need to add cloudify-manager-server to /etc/hosts on the controller?
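
(I suspect the fix belongs on the manager VM itself, i.e. the machine that prints the "unable to resolve host" message, rather than on the controller; a minimal sketch:)

  # on the manager VM: map its own hostname to the loopback address
  echo "127.0.0.1 cloudify-manager-server" | sudo tee -a /etc/hosts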



-----------------------------------------------------INPUTS.YAML------------------------------------------------------------------


keystone_username: admin
keystone_password: password
keystone_tenant_name: admin
region: regionOne
manager_public_key_name: cfy_key1
agent_public_key_name: cfy_key2
#image_id: 00e857ec-6c69-44f4-816f-2124070a4c60
image_id: 583c5b73-3d02-4d16-b2e0-202827f8eb52
#image_id: 5e80dd61-e29a-45fb-8dea-8577da5ab36a
flavor_id: 2
external_network_name: public

use_existing_manager_keypair: false
use_existing_agent_keypair: false
manager_server_name: cloudify-manager-server
manager_server_user: ubuntu
manager_private_key_path: ~/.ssh/cloudify-manager-kp.pem
agent_private_key_path: ~/.ssh/cloudify-agent-kp.pem
agents_user: ubuntu
management_network_name: cloudify-management-network
management_subnet_name: cloudify-management-network-subnet
management_router: cloudify-management-router
manager_security_group_name: cloudify-sg-manager
agents_security_group_name: cloudify-sg-agents
manager_port_name: cloudify-manager-port
#manager_volume_name: cloudify-manager-volume
manager_volume_name:
nova_url:
neutron_url:
resources_prefix: cloudify

------------------------------------------------------------------------------------------------------------------------------------------






Pasting logs:


----------------------------------------------Start---------------------------------------------------------------------------------------

[100.100.150.38] put: <file obj> -> /tmp/cloudify-ctx/scripts/env-tmpbfac9O-fdisk.sh
[100.100.150.38] run: source /tmp/cloudify-ctx/scripts/env-tmpbfac9O-fdisk.sh && /tmp/cloudify-ctx/scripts/tmpbfac9O-fdisk.sh
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 39765) -> ('100.100.150.38', 22) -> ('localhost', 57090)
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 39766) -> ('100.100.150.38', 22) -> ('localhost', 57090)
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 39767) -> ('100.100.150.38', 22) -> ('localhost', 57090)
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 39768) -> ('100.100.150.38', 22) -> ('localhost', 57090)
2015-09-04 09:22:00 LOG <manager> [manager_data_18981->volume_3d100|preconfigure] INFO: Creating disk partition on device /dev/vdb
[100.100.150.38] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out: Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
[100.100.150.38] out: Building a new DOS disklabel with disk identifier 0x0c3462f9.
[100.100.150.38] out: Changes will remain in memory only, until you decide to write them.
[100.100.150.38] out: After that, of course, the previous content won't be recoverable.
[100.100.150.38] out:
[100.100.150.38] out: Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
[100.100.150.38] out:
[100.100.150.38] out: Command (m for help): Partition type:
[100.100.150.38] out:    p   primary (0 primary, 0 extended, 4 free)
[100.100.150.38] out:    e   extended
[100.100.150.38] out: Select (default p): Partition number (1-4, default 1): First sector (2048-20971519, default 2048): Using default value 2048
[100.100.150.38] out: Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): Using default value 20971519
[100.100.150.38] out:
[100.100.150.38] out: Command (m for help): Selected partition 1
[100.100.150.38] out: Hex code (type L to list codes):
[100.100.150.38] out: Command (m for help): The partition table has been altered!
[100.100.150.38] out:
[100.100.150.38] out: Calling ioctl() to re-read partition table.
[100.100.150.38] out: Syncing disks.
[100.100.150.38] out: [100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 39769) -> ('100.100.150.38', 22) -> ('localhost', 57090)
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 39770) -> ('100.100.150.38', 22) -> ('localhost', 57090)


2015-09-04 09:22:07 CFY <manager> [manager_data_18981->volume_3d100|preconfigure] Task succeeded 'fabric_plugin.tasks.run_script'
2015-09-04 09:22:07 CFY <manager> [manager_data_18981] Configuring node
2015-09-04 09:22:07 CFY <manager> [manager_data_18981.configure] Sending task 'fabric_plugin.tasks.run_script'
2015-09-04 09:22:07 CFY <manager> [manager_data_18981.configure] Task started 'fabric_plugin.tasks.run_script'
2015-09-04 09:22:09 LOG <manager> [manager_data_18981.configure] INFO: preparing fabric environment...
2015-09-04 09:22:09 LOG <manager> [manager_data_18981.configure] INFO: environment prepared successfully
[100.100.150.38] put: /tmp/tmpbTJBZw-mkfs.sh -> /tmp/cloudify-ctx/scripts/tmpbTJBZw-mkfs.sh
[100.100.150.38] put: <file obj> -> /tmp/cloudify-ctx/scripts/env-tmpbTJBZw-mkfs.sh
[100.100.150.38] run: source /tmp/cloudify-ctx/scripts/env-tmpbTJBZw-mkfs.sh && /tmp/cloudify-ctx/scripts/tmpbTJBZw-mkfs.sh
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 45663) -> ('100.100.150.38', 22) -> ('localhost', 47740)
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 45664) -> ('100.100.150.38', 22) -> ('localhost', 47740)
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 45665) -> ('100.100.150.38', 22) -> ('localhost', 47740)
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 45666) -> ('100.100.150.38', 22) -> ('localhost', 47740)
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 45667) -> ('100.100.150.38', 22) -> ('localhost', 47740)
2015-09-04 09:22:19 LOG <manager> [manager_data_18981.configure] INFO: Creating ext4 file system using mkfs.ext4
[100.100.150.38] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out: mke2fs 1.42.9 (4-Feb-2014)
[100.100.150.38] out: Filesystem label=
[100.100.150.38] out: OS type: Linux
[100.100.150.38] out: Block size=4096 (log=2)
[100.100.150.38] out: Fragment size=4096 (log=2)
[100.100.150.38] out: Stride=0 blocks, Stripe width=0 blocks
[100.100.150.38] out: 655360 inodes, 2621184 blocks
[100.100.150.38] out: 131059 blocks (5.00%) reserved for the super user
[100.100.150.38] out: First data block=0
[100.100.150.38] out: Maximum filesystem blocks=2684354560
[100.100.150.38] out: 80 block groups
[100.100.150.38] out: 32768 blocks per group, 32768 fragments per group
[100.100.150.38] out: 8192 inodes per group
[100.100.150.38] out: Superblock backups stored on blocks:
[100.100.150.38] out:   32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
[100.100.150.38] out:
[100.100.150.38] out: Allocating group tables: done
[100.100.150.38] out: Writing inode tables: done
[100.100.150.38] out: Creating journal (32768 blocks): done
[100.100.150.38] out: Writing superblocks and filesystem accounting information: done
[100.100.150.38] out:
[100.100.150.38] out: [100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 45668) -> ('100.100.150.38', 22) -> ('localhost', 47740)
2015-09-04 09:22:27 LOG <manager> [manager_data_18981.configure] INFO: Marking this instance as created
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 45669) -> ('100.100.150.38', 22) -> ('localhost', 47740)


2015-09-04 09:22:29 CFY <manager> [manager_data_18981.configure] Task succeeded 'fabric_plugin.tasks.run_script'
2015-09-04 09:22:29 CFY <manager> [manager_data_18981] Starting node
2015-09-04 09:22:29 CFY <manager> [manager_data_18981->manager_server_67026|establish] Sending task 'fabric_plugin.tasks.run_script'
2015-09-04 09:22:29 CFY <manager> [manager_data_18981->manager_server_67026|establish] Task started 'fabric_plugin.tasks.run_script'
2015-09-04 09:22:31 LOG <manager> [manager_data_18981->manager_server_67026|establish] INFO: preparing fabric environment...
2015-09-04 09:22:31 LOG <manager> [manager_data_18981->manager_server_67026|establish] INFO: environment prepared successfully
[100.100.150.38] put: /tmp/tmpubqL8Y-mount-docker.sh -> /tmp/cloudify-ctx/scripts/tmpubqL8Y-mount-docker.sh
[100.100.150.38] put: <file obj> -> /tmp/cloudify-ctx/scripts/env-tmpubqL8Y-mount-docker.sh
[100.100.150.38] run: source /tmp/cloudify-ctx/scripts/env-tmpubqL8Y-mount-docker.sh && /tmp/cloudify-ctx/scripts/tmpubqL8Y-mount-docker.sh
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 41119) -> ('100.100.150.38', 22) -> ('localhost', 33805)
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 41120) -> ('100.100.150.38', 22) -> ('localhost', 33805)
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 41121) -> ('100.100.150.38', 22) -> ('localhost', 33805)
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 41122) -> ('100.100.150.38', 22) -> ('localhost', 33805)
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 41123) -> ('100.100.150.38', 22) -> ('localhost', 33805)
2015-09-04 09:22:37 LOG <manager> [manager_data_18981->manager_server_67026|establish] INFO: Checking whether docker is installed
[100.100.150.38] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out: [100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 41124) -> ('100.100.150.38', 22) -> ('localhost', 33805)
2015-09-04 09:22:38 LOG <manager> [manager_data_18981->manager_server_67026|establish] INFO: Mounting file system /dev/vdb1 on /var/lib/docker
sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out: [100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 41125) -> ('100.100.150.38', 22) -> ('localhost', 33805)
2015-09-04 09:22:39 LOG <manager> [manager_data_18981->manager_server_67026|establish] INFO: Adding mount point /var/lib/docker to file system table
sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out: [100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 41126) -> ('100.100.150.38', 22) -> ('localhost', 33805)
2015-09-04 09:22:40 LOG <manager> [manager_data_18981->manager_server_67026|establish] INFO: Marking this instance as mounted
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 41127) -> ('100.100.150.38', 22) -> ('localhost', 33805)


2015-09-04 09:22:41 CFY <manager> [manager_data_18981->manager_server_67026|establish] Task succeeded 'fabric_plugin.tasks.run_script'
2015-09-04 09:22:42 CFY <manager> [manager_39a55] Creating node
2015-09-04 09:22:42 CFY <manager> [manager_39a55] Configuring node
2015-09-04 09:22:42 CFY <manager> [manager_39a55.configure] Sending task 'fabric_plugin.tasks.run_task'
2015-09-04 09:22:42 CFY <manager> [manager_39a55.configure] Task started 'fabric_plugin.tasks.run_task'
2015-09-04 09:22:42 LOG <manager> [manager_39a55.configure] INFO: running task: configure from scripts/configure.py
2015-09-04 09:22:42 LOG <manager> [manager_39a55.configure] INFO: preparing fabric environment...
2015-09-04 09:22:42 LOG <manager> [manager_39a55.configure] INFO: environment prepared successfully
[100.100.150.38] put: /tmp/tmphjyeP4 -> /home/ubuntu/openstack_config.json
2015-09-04 09:22:42 CFY <manager> [manager_39a55.configure] Task succeeded 'fabric_plugin.tasks.run_task'
2015-09-04 09:22:43 CFY <manager> [manager_39a55] Starting node
2015-09-04 09:22:43 CFY <manager> [manager_39a55.start] Sending task 'fabric_plugin.tasks.run_module_task'
2015-09-04 09:22:43 CFY <manager> [manager_39a55.start] Task started 'fabric_plugin.tasks.run_module_task'
2015-09-04 09:22:43 LOG <manager> [manager_39a55.start] INFO: running task: cloudify_cli.bootstrap.tasks.bootstrap_docker
2015-09-04 09:22:43 LOG <manager> [manager_39a55.start] INFO: preparing fabric environment...
2015-09-04 09:22:43 LOG <manager> [manager_39a55.start] INFO: environment prepared successfully
2015-09-04 09:22:43 LOG <manager> [manager_39a55.start] INFO: initializing manager on the machine at 100.100.150.38
[100.100.150.38] run: mkdir -p ~/cloudify
[100.100.150.38] run: sudo which docker
[100.100.150.38] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out:


Fatal error: run() received nonzero return code 1 while executing!

Requested: sudo which docker
Executed: /bin/bash -l -c "sudo which docker"

Aborting.
[100.100.150.38] run: python -c "import platform, json, sys; sys.stdout.write('{0}\n'.format(json.dumps(platform.dist())))"
[100.100.150.38] out: ["Ubuntu", "14.04", "trusty"]
[100.100.150.38] out:

2015-09-04 09:22:45 LOG <manager> [manager_39a55.start] INFO: installing Docker
[100.100.150.38] run: curl -sSL https://get.docker.com/ubuntu/ | sudo sh
[100.100.150.38] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out: curl: (6) Could not resolve host: get.docker.com
[100.100.150.38] out:

[100.100.150.38] run: sudo docker inspect data
[100.100.150.38] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out: sudo: docker: command not found
[100.100.150.38] out:


Fatal error: run() received nonzero return code 1 while executing!

Requested: sudo docker inspect data
Executed: /bin/bash -l -c "sudo docker inspect data"

Aborting.
[100.100.150.38] run: sudo docker inspect cfy
[100.100.150.38] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out: sudo: docker: command not found
[100.100.150.38] out:


Fatal error: run() received nonzero return code 1 while executing!

Requested: sudo docker inspect cfy
Executed: /bin/bash -l -c "sudo docker inspect cfy"

Aborting.
2015-09-04 09:22:47 LOG <manager> [manager_39a55.start] INFO: importing cloudify-manager docker image from http://gigaspaces-repository-eu.s3.amazonaws.com/org/cloudify3/3.2.0/ga-RELEASE/cloudify-docker_3.2.0-ga-b200.tar
[100.100.150.38] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out: sudo: docker: command not found
[100.100.150.38] out:


Fatal error: run() received nonzero return code 1 while executing!


Aborting.
2015-09-04 09:22:48 LOG <manager> [manager_39a55.start] ERROR: failed importing cloudify docker image from http://gigaspaces-repository-eu.s3.amazonaws.com/org/cloudify3/3.2.0/ga-RELEASE/cloudify-docker_3.2.0-ga-b200.tar. reason:run() received nonzero return code 1 while executing!

2015-09-04 09:22:48 LOG <manager> [manager_39a55.start] ERROR: Exception raised on operation [fabric_plugin.tasks.run_module_task] invocation
Traceback (most recent call last):
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/cloudify/decorators.py", line 125, in wrapper
    result = func(*args, **kwargs)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric_plugin/tasks.py", line 89, in run_module_task
    return _run_task(task, task_properties, fabric_env)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric_plugin/tasks.py", line 95, in _run_task
    return task(**task_properties)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/cloudify_cli/bootstrap/tasks.py", line 277, in bootstrap_docker
    raise NonRecoverableError(err)
NonRecoverableError: failed importing cloudify docker image from http://gigaspaces-repository-eu.s3.amazonaws.com/org/cloudify3/3.2.0/ga-RELEASE/cloudify-docker_3.2.0-ga-b200.tar. reason:run() received nonzero return code 1 while executing!

2015-09-04 09:22:48 CFY <manager> [manager_39a55.start] Task failed 'fabric_plugin.tasks.run_module_task' -> failed importing cloudify docker image from http://gigaspaces-repository-eu.s3.amazonaws.com/org/cloudify3/3.2.0/ga-RELEASE/cloudify-docker_3.2.0-ga-b200.tar. reason:run() received nonzero return code 1 while executing!

2015-09-04 09:22:48 CFY <manager> 'install' workflow execution failed: Workflow failed: Task failed 'fabric_plugin.tasks.run_module_task' -> failed importing cloudify docker image from http://gigaspaces-repository-eu.s3.amazonaws.com/org/cloudify3/3.2.0/ga-RELEASE/cloudify-docker_3.2.0-ga-b200.tar. reason:run() received nonzero return code 1 while executing!

bootstrap failed!
executing teardown due to failed bootstrap
2015-09-04 09:22:48 CFY <manager> Starting 'uninstall' workflow execution
2015-09-04 09:22:48 CFY <manager> [agent_keypair_0ca08] Stopping node
2015-09-04 09:22:48 CFY <manager> [openstack_configuration_1a0fa] Stopping node
2015-09-04 09:22:48 CFY <manager> [agents_security_group_793c7] Stopping node
2015-09-04 09:22:48 CFY <manager> [manager_39a55] Stopping node
2015-09-04 09:22:48 CFY <manager> [manager_39a55.stop] Sending task 'fabric_plugin.tasks.run_module_task'
2015-09-04 09:22:48 CFY <manager> [manager_39a55.stop] Task started 'fabric_plugin.tasks.run_module_task'
2015-09-04 09:22:48 LOG <manager> [manager_39a55.stop] INFO: running task: cloudify_cli.bootstrap.tasks.stop_manager_container
2015-09-04 09:22:48 LOG <manager> [manager_39a55.stop] INFO: preparing fabric environment...
2015-09-04 09:22:48 LOG <manager> [manager_39a55.stop] INFO: environment prepared successfully
[100.100.150.38] run: sudo docker stop cfy
[100.100.150.38] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out: sudo: docker: command not found
[100.100.150.38] out:


Fatal error: run() received nonzero return code 1 while executing!

Requested: sudo docker stop cfy
Executed: /bin/bash -l -c "sudo docker stop cfy"

Aborting.
2015-09-04 09:22:49 LOG <manager> [manager_39a55.stop] ERROR: Exception raised on operation [fabric_plugin.tasks.run_module_task] invocation
Traceback (most recent call last):
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/cloudify/decorators.py", line 125, in wrapper
    result = func(*args, **kwargs)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric_plugin/tasks.py", line 89, in run_module_task
    return _run_task(task, task_properties, fabric_env)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric_plugin/tasks.py", line 95, in _run_task
    return task(**task_properties)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/cloudify_cli/bootstrap/tasks.py", line 88, in stop_manager_container
    _run_command(command)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/cloudify_cli/bootstrap/tasks.py", line 660, in _run_command
    return fabric.api.run(command, shell_escape=shell_escape)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric/network.py", line 641, in host_prompting_wrapper
    return func(*args, **kwargs)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric/operations.py", line 1042, in run
    shell_escape=shell_escape)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric/operations.py", line 932, in _run_command
    error(message=msg, stdout=out, stderr=err)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric/utils.py", line 327, in error
    return func(message)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric/utils.py", line 32, in abort
    raise env.abort_exception(msg)
FabricTaskError: run() received nonzero return code 1 while executing!

Requested: sudo docker stop cfy
Executed: /bin/bash -l -c "sudo docker stop cfy"
2015-09-04 09:22:49 CFY <manager> [manager_39a55.stop] Task failed 'fabric_plugin.tasks.run_module_task' -> run() received nonzero return code 1 while executing!

Requested: sudo docker stop cfy
Executed: /bin/bash -l -c "sudo docker stop cfy" [attempt 1/6]
2015-09-04 09:22:49 CFY <manager> [openstack_configuration_1a0fa] Deleting node
2015-09-04 09:22:49 CFY <manager> [agents_security_group_793c7] Deleting node
2015-09-04 09:22:49 CFY <manager> [agent_keypair_0ca08] Deleting node
2015-09-04 09:22:49 CFY <manager> [agent_keypair_0ca08.delete] Sending task 'nova_plugin.keypair.delete'
2015-09-04 09:22:49 CFY <manager> [agents_security_group_793c7.delete] Sending task 'neutron_plugin.security_group.delete'
2015-09-04 09:22:49 CFY <manager> [agent_keypair_0ca08.delete] Task started 'nova_plugin.keypair.delete'
2015-09-04 09:22:49 LOG <manager> [agent_keypair_0ca08.delete] INFO: deleting keypair
2015-09-04 09:22:49 CFY <manager> [agent_keypair_0ca08.delete] Task succeeded 'nova_plugin.keypair.delete'
2015-09-04 09:22:49 CFY <manager> [agents_security_group_793c7.delete] Task started 'neutron_plugin.security_group.delete'
2015-09-04 09:22:49 LOG <manager> [agents_security_group_793c7.delete] INFO: deleting security_group
2015-09-04 09:22:49 CFY <manager> [agents_security_group_793c7.delete] Task succeeded 'neutron_plugin.security_group.delete'
2015-09-04 09:22:49 CFY <manager> [manager_39a55] Deleting node
2015-09-04 09:22:50 CFY <manager> [manager_39a55.delete] Sending task 'fabric_plugin.tasks.run_module_task'
2015-09-04 09:22:50 CFY <manager> [manager_39a55.delete] Task started 'fabric_plugin.tasks.run_module_task'
2015-09-04 09:22:50 LOG <manager> [manager_39a55.delete] INFO: running task: cloudify_cli.bootstrap.tasks.stop_docker_service
2015-09-04 09:22:50 LOG <manager> [manager_39a55.delete] INFO: preparing fabric environment...
2015-09-04 09:22:50 LOG <manager> [manager_39a55.delete] INFO: environment prepared successfully
[100.100.150.38] run: sudo service docker stop
[100.100.150.38] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out: docker: unrecognized service
[100.100.150.38] out:


Fatal error: run() received nonzero return code 1 while executing!

Requested: sudo service docker stop
Executed: /bin/bash -l -c "sudo service docker stop"

Aborting.
2015-09-04 09:22:51 LOG <manager> [manager_39a55.delete] ERROR: Exception raised on operation [fabric_plugin.tasks.run_module_task] invocation
Traceback (most recent call last):
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/cloudify/decorators.py", line 125, in wrapper
    result = func(*args, **kwargs)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric_plugin/tasks.py", line 89, in run_module_task
    return _run_task(task, task_properties, fabric_env)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric_plugin/tasks.py", line 95, in _run_task
    return task(**task_properties)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/cloudify_cli/bootstrap/tasks.py", line 102, in stop_docker_service
    _run_command(docker_service_stop_command)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/cloudify_cli/bootstrap/tasks.py", line 660, in _run_command
    return fabric.api.run(command, shell_escape=shell_escape)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric/network.py", line 641, in host_prompting_wrapper
    return func(*args, **kwargs)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric/operations.py", line 1042, in run
    shell_escape=shell_escape)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric/operations.py", line 932, in _run_command
    error(message=msg, stdout=out, stderr=err)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric/utils.py", line 327, in error
    return func(message)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric/utils.py", line 32, in abort
    raise env.abort_exception(msg)
FabricTaskError: run() received nonzero return code 1 while executing!

Requested: sudo service docker stop
Executed: /bin/bash -l -c "sudo service docker stop"
2015-09-04 09:22:51 CFY <manager> [manager_39a55.delete] Task failed 'fabric_plugin.tasks.run_module_task' -> run() received nonzero return code 1 while executing!

Requested: sudo service docker stop
Executed: /bin/bash -l -c "sudo service docker stop" [attempt 1/6]
2015-09-04 09:22:51 CFY <manager> [manager_data_18981] Stopping node
2015-09-04 09:22:51 CFY <manager> [manager_data_18981->manager_server_67026|unlink] Sending task 'fabric_plugin.tasks.run_script'
2015-09-04 09:22:51 CFY <manager> [manager_data_18981->manager_server_67026|unlink] Task started 'fabric_plugin.tasks.run_script'
2015-09-04 09:22:58 LOG <manager> [manager_data_18981->manager_server_67026|unlink] INFO: preparing fabric environment...
2015-09-04 09:22:58 LOG <manager> [manager_data_18981->manager_server_67026|unlink] INFO: environment prepared successfully
[100.100.150.38] put: /tmp/tmpdPCN3I-unmount.sh -> /tmp/cloudify-ctx/scripts/tmpdPCN3I-unmount.sh
[100.100.150.38] put: <file obj> -> /tmp/cloudify-ctx/scripts/env-tmpdPCN3I-unmount.sh
[100.100.150.38] run: source /tmp/cloudify-ctx/scripts/env-tmpdPCN3I-unmount.sh && /tmp/cloudify-ctx/scripts/tmpdPCN3I-unmount.sh
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 56338) -> ('100.100.150.38', 22) -> ('localhost', 47225)
[100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 56339) -> ('100.100.150.38', 22) -> ('localhost', 47225)
2015-09-04 09:23:01 LOG <manager> [manager_data_18981->manager_server_67026|unlink] INFO: Unmounting file system on /var/lib/docker
[100.100.150.38] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out: [100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 56340) -> ('100.100.150.38', 22) -> ('localhost', 47225)
2015-09-04 09:23:03 LOG <manager> [manager_data_18981->manager_server_67026|unlink] INFO: Removing /var/lib/docker directory
sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out: [100.100.150.38] rtunnel: opened reverse tunnel: ('127.0.0.1', 56341) -> ('100.100.150.38', 22) -> ('localhost', 47225)
2015-09-04 09:23:04 LOG <manager> [manager_data_18981->manager_server_67026|unlink] INFO: Removing mount point /var/lib/docker from file system table
sudo: unable to resolve host cloudify-manager-server
[100.100.150.38] out:

2015-09-04 09:23:05 CFY <manager> [manager_data_18981->manager_server_67026|unlink] Task succeeded 'fabric_plugin.tasks.run_script'
2015-09-04 09:23:05 CFY <manager> [manager_data_18981] Deleting node
2015-09-04 09:23:05 CFY <manager> [volume_3d100] Stopping node
2015-09-04 09:23:06 CFY <manager> [volume_3d100->manager_server_67026|unlink] Sending task 'nova_plugin.server.detach_volume'
2015-09-04 09:23:06 CFY <manager> [volume_3d100->manager_server_67026|unlink] Task started 'nova_plugin.server.detach_volume'
2015-09-04 09:23:08 CFY <manager> [volume_3d100->manager_server_67026|unlink] Task succeeded 'nova_plugin.server.detach_volume'
2015-09-04 09:23:08 CFY <manager> [volume_3d100] Deleting node
2015-09-04 09:23:08 CFY <manager> [volume_3d100.delete] Sending task 'cinder_plugin.volume.delete'
2015-09-04 09:23:08 CFY <manager> [volume_3d100.delete] Task started 'cinder_plugin.volume.delete'
2015-09-04 09:23:08 LOG <manager> [volume_3d100.delete] INFO: deleting volume
2015-09-04 09:23:09 CFY <manager> [volume_3d100.delete] Task succeeded 'cinder_plugin.volume.delete'
2015-09-04 09:23:09 CFY <manager> [manager_server_67026] Stopping node
2015-09-04 09:23:09 CFY <manager> [manager_server_67026.stop] Sending task 'nova_plugin.server.stop'
2015-09-04 09:23:09 CFY <manager> [manager_server_67026.stop] Task started 'nova_plugin.server.stop'
2015-09-04 09:23:10 CFY <manager> [manager_server_67026.stop] Task succeeded 'nova_plugin.server.stop'
2015-09-04 09:23:10 CFY <manager> [manager_server_67026->manager_port_ddf7b|unlink] Sending task 'neutron_plugin.port.detach'
2015-09-04 09:23:10 CFY <manager> [manager_server_67026->manager_server_ip_8611d|unlink] Sending task 'nova_plugin.server.disconnect_floatingip'
2015-09-04 09:23:10 CFY <manager> [manager_server_67026->manager_port_ddf7b|unlink] Task started 'neutron_plugin.port.detach'
2015-09-04 09:23:12 CFY <manager> [manager_server_67026->manager_port_ddf7b|unlink] Task rescheduled 'neutron_plugin.port.detach' -> Waiting for the floating ip 100.100.150.38 to detach from server 8efc4160-ba17-4047-b257-5567404f3f54.. [retry_after=10] [attempt 1/6]
2015-09-04 09:23:12 CFY <manager> [manager_server_67026->manager_server_ip_8611d|unlink] Task started 'nova_plugin.server.disconnect_floatingip'
2015-09-04 09:23:14 CFY <manager> [manager_server_67026->manager_server_ip_8611d|unlink] Task succeeded 'nova_plugin.server.disconnect_floatingip'
2015-09-04 09:23:22 CFY <manager> [manager_server_67026->manager_port_ddf7b|unlink] Sending task 'neutron_plugin.port.detach' [attempt 2/6]
2015-09-04 09:23:22 CFY <manager> [manager_server_67026->manager_port_ddf7b|unlink] Task started 'neutron_plugin.port.detach' [attempt 2/6]

2015-09-04 09:23:23 LOG <manager> [manager_server_67026->manager_port_ddf7b|unlink] INFO: Detaching port a92d8a11-b06d-463a-aa86-54b07ef3fb15...
2015-09-04 09:23:23 LOG <manager> [manager_server_67026->manager_port_ddf7b|unlink] INFO: Successfully detached port a92d8a11-b06d-463a-aa86-54b07ef3fb15
2015-09-04 09:23:23 CFY <manager> [manager_server_67026->manager_port_ddf7b|unlink] Task succeeded 'neutron_plugin.port.detach' [attempt 2/6]
2015-09-04 09:23:23 CFY <manager> [manager_server_67026] Deleting node
2015-09-04 09:23:23 CFY <manager> [manager_server_67026.delete] Sending task 'nova_plugin.server.delete'
2015-09-04 09:23:23 CFY <manager> [manager_server_67026.delete] Task started 'nova_plugin.server.delete'
2015-09-04 09:23:23 LOG <manager> [manager_server_67026.delete] INFO: deleting server
2015-09-04 09:23:58 CFY <manager> [manager_server_67026.delete] Task succeeded 'nova_plugin.server.delete'
2015-09-04 09:23:58 CFY <manager> [management_keypair_37a13] Stopping node
2015-09-04 09:23:58 CFY <manager> [manager_port_ddf7b] Stopping node
2015-09-04 09:23:58 CFY <manager> [manager_server_ip_8611d] Stopping node
2015-09-04 09:23:58 CFY <manager> [manager_server_ip_8611d] Deleting node
2015-09-04 09:23:58 CFY <manager> [management_keypair_37a13] Deleting node
2015-09-04 09:23:58 CFY <manager> [management_keypair_37a13.delete] Sending task 'nova_plugin.keypair.delete'
2015-09-04 09:23:58 CFY <manager> [manager_server_ip_8611d.delete] Sending task 'neutron_plugin.floatingip.delete'
2015-09-04 09:23:58 CFY <manager> [management_keypair_37a13.delete] Task started 'nova_plugin.keypair.delete'
2015-09-04 09:23:58 LOG <manager> [management_keypair_37a13.delete] INFO: deleting keypair
2015-09-04 09:24:01 CFY <manager> [management_keypair_37a13.delete] Task succeeded 'nova_plugin.keypair.delete'
2015-09-04 09:24:01 CFY <manager> [manager_port_ddf7b] Deleting node
2015-09-04 09:24:01 CFY <manager> [manager_server_ip_8611d.delete] Task started 'neutron_plugin.floatingip.delete'
2015-09-04 09:24:01 LOG <manager> [manager_server_ip_8611d.delete] INFO: deleting floatingip
2015-09-04 09:24:01 CFY <manager> [manager_port_ddf7b.delete] Sending task 'neutron_plugin.port.delete'
2015-09-04 09:24:02 CFY <manager> [manager_server_ip_8611d.delete] Task succeeded 'neutron_plugin.floatingip.delete'
2015-09-04 09:24:02 CFY <manager> [manager_port_ddf7b.delete] Task started 'neutron_plugin.port.delete'
2015-09-04 09:24:02 LOG <manager> [manager_port_ddf7b.delete] INFO: deleting port
2015-09-04 09:24:03 CFY <manager> [manager_port_ddf7b.delete] Task succeeded 'neutron_plugin.port.delete'
2015-09-04 09:24:04 CFY <manager> [management_security_group_9c3b0] Stopping node
2015-09-04 09:24:04 CFY <manager> [management_subnet_2c45d] Stopping node
2015-09-04 09:24:04 CFY <manager> [management_subnet_2c45d->router_5484d|unlink] Sending task 'neutron_plugin.router.disconnect_subnet'
2015-09-04 09:24:04 CFY <manager> [management_subnet_2c45d->router_5484d|unlink] Task started 'neutron_plugin.router.disconnect_subnet'
2015-09-04 09:24:06 CFY <manager> [management_subnet_2c45d->router_5484d|unlink] Task succeeded 'neutron_plugin.router.disconnect_subnet'
2015-09-04 09:24:06 CFY <manager> [management_security_group_9c3b0] Deleting node
2015-09-04 09:24:06 CFY <manager> [management_security_group_9c3b0.delete] Sending task 'neutron_plugin.security_group.delete'
2015-09-04 09:24:06 CFY <manager> [management_security_group_9c3b0.delete] Task started 'neutron_plugin.security_group.delete'
2015-09-04 09:24:06 LOG <manager> [management_security_group_9c3b0.delete] INFO: deleting security_group
2015-09-04 09:24:06 CFY <manager> [management_security_group_9c3b0.delete] Task succeeded 'neutron_plugin.security_group.delete'
2015-09-04 09:24:06 CFY <manager> [management_subnet_2c45d] Deleting node
2015-09-04 09:24:06 CFY <manager> [management_subnet_2c45d.delete] Sending task 'neutron_plugin.subnet.delete'
2015-09-04 09:24:06 CFY <manager> [management_subnet_2c45d.delete] Task started 'neutron_plugin.subnet.delete'
2015-09-04 09:24:06 LOG <manager> [management_subnet_2c45d.delete] INFO: deleting subnet
2015-09-04 09:24:10 CFY <manager> [management_subnet_2c45d.delete] Task succeeded 'neutron_plugin.subnet.delete'
2015-09-04 09:24:10 CFY <manager> [router_5484d] Stopping node
2015-09-04 09:24:10 CFY <manager> [management_network_88e00] Stopping node
2015-09-04 09:24:10 CFY <manager> [management_network_88e00] Deleting node
2015-09-04 09:24:10 CFY <manager> [management_network_88e00.delete] Sending task 'neutron_plugin.network.delete'
2015-09-04 09:24:10 CFY <manager> [router_5484d] Deleting node
2015-09-04 09:24:10 CFY <manager> [management_network_88e00.delete] Task started 'neutron_plugin.network.delete'
2015-09-04 09:24:10 LOG <manager> [management_network_88e00.delete] INFO: deleting network
2015-09-04 09:24:11 CFY <manager> [router_5484d.delete] Sending task 'neutron_plugin.router.delete'
2015-09-04 09:24:13 CFY <manager> [management_network_88e00.delete] Task succeeded 'neutron_plugin.network.delete'
2015-09-04 09:24:13 CFY <manager> [router_5484d.delete] Task started 'neutron_plugin.router.delete'
2015-09-04 09:24:13 LOG <manager> [router_5484d.delete] INFO: deleting router
2015-09-04 09:24:17 CFY <manager> [router_5484d.delete] Task succeeded 'neutron_plugin.router.delete'
2015-09-04 09:24:17 CFY <manager> [external_network_e3104] Stopping node
2015-09-04 09:24:17 CFY <manager> [external_network_e3104] Deleting node
2015-09-04 09:24:17 CFY <manager> [external_network_e3104.delete] Sending task 'neutron_plugin.network.delete'
2015-09-04 09:24:17 CFY <manager> [external_network_e3104.delete] Task started 'neutron_plugin.network.delete'
2015-09-04 09:24:17 LOG <manager> [external_network_e3104.delete] INFO: not deleting network since an external network is being used
2015-09-04 09:24:17 CFY <manager> [external_network_e3104.delete] Task succeeded 'neutron_plugin.network.delete'
2015-09-04 09:24:18 CFY <manager> 'uninstall' workflow execution succeeded
Workflow failed: Task failed 'fabric_plugin.tasks.run_module_task' -> failed importing cloudify docker image from http://gigaspaces-repository-eu.s3.amazonaws.com/org/cloudify3/3.2.0/ga-RELEASE/cloudify-docker_3.2.0-ga-b200.tar. reason:run() received nonzero return code 1 while executing!

(cloudify)root@controller:~/manik/cloudify-manager-blueprints-3.2/openstack#
(cloudify)root@controller:~/manik/cloudify-manager-blueprints-3.2/openstack#


------------------------------------------------------------------------End------------------------------------------------------------------------------------------------

tram...@gigaspaces.com

unread,
Sep 6, 2015, 3:54:23 AM
to cloudify-users
This is a bug in OpenStack where the server's hostname is not added to /etc/hosts when the server is created, so sudo cannot resolve it. However, it is only a warning and shouldn't affect any Cloudify executions.
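If you want to silence the warning, one common workaround (generic to Ubuntu, not Cloudify-specific) is to map the VM's hostname to the loopback address on the manager VM, e.g.:

  echo "127.0.1.1 $(hostname)" | sudo tee -a /etc/hosts

After that, sudo should stop printing "unable to resolve host".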



Manik Sidana

unread,
Sep 8, 2015, 5:40:54 AM
to cloudify-users
Hi,

Thanks for your reply.
After struggling with several side issues (MTU, proxy environment variables, nameservers, etc.), I have managed to reach a certain stage.
Now, when I try to bootstrap Cloudify, I get the error below:


2015-09-08 20:08:01 CFY <manager> [manager_data_43075->manager_server_7163b|establish] Task succeeded 'fabric_plugin.tasks.run_script'
2015-09-08 20:08:01 CFY <manager> [manager_281ae] Creating node
2015-09-08 20:08:01 CFY <manager> [manager_281ae] Configuring node
2015-09-08 20:08:01 CFY <manager> [manager_281ae.configure] Sending task 'fabric_plugin.tasks.run_task'
2015-09-08 20:08:01 CFY <manager> [manager_281ae.configure] Task started 'fabric_plugin.tasks.run_task'
2015-09-08 20:08:01 LOG <manager> [manager_281ae.configure] INFO: running task: configure from scripts/configure.py
2015-09-08 20:08:01 LOG <manager> [manager_281ae.configure] INFO: preparing fabric environment...
2015-09-08 20:08:01 LOG <manager> [manager_281ae.configure] INFO: environment prepared successfully
[100.100.150.25] put: /tmp/tmp7hWM4R -> /home/ubuntu/openstack_config.json
2015-09-08 20:08:01 CFY <manager> [manager_281ae.configure] Task succeeded 'fabric_plugin.tasks.run_task'
2015-09-08 20:08:02 CFY <manager> [manager_281ae] Starting node
2015-09-08 20:08:02 CFY <manager> [manager_281ae.start] Sending task 'fabric_plugin.tasks.run_module_task'
2015-09-08 20:08:02 CFY <manager> [manager_281ae.start] Task started 'fabric_plugin.tasks.run_module_task'
2015-09-08 20:08:02 LOG <manager> [manager_281ae.start] INFO: running task: cloudify_cli.bootstrap.tasks.bootstrap_docker
2015-09-08 20:08:02 LOG <manager> [manager_281ae.start] INFO: preparing fabric environment...
2015-09-08 20:08:02 LOG <manager> [manager_281ae.start] INFO: environment prepared successfully
2015-09-08 20:08:02 LOG <manager> [manager_281ae.start] INFO: initializing manager on the machine at 100.100.150.25
[100.100.150.25] run: mkdir -p ~/cloudify
[100.100.150.25] run: sudo which docker
[100.100.150.25] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.25] out: /usr/bin/docker
[100.100.150.25] out:

[100.100.150.25] run: sudo docker info
[100.100.150.25] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.25] out: Containers: 0
[100.100.150.25] out: Images: 0
[100.100.150.25] out: Storage Driver: aufs
[100.100.150.25] out:  Root Dir: /var/lib/docker/aufs
[100.100.150.25] out:  Backing Filesystem: extfs
[100.100.150.25] out:  Dirs: 0
[100.100.150.25] out:  Dirperm1 Supported: false
[100.100.150.25] out: Execution Driver: native-0.2
[100.100.150.25] out: Logging Driver: json-file
[100.100.150.25] out: Kernel Version: 3.13.0-63-generic
[100.100.150.25] out: Operating System: Ubuntu 14.04.3 LTS
[100.100.150.25] out: CPUs: 1
[100.100.150.25] out: Total Memory: 1.955 GiB
[100.100.150.25] out: Name: cloudify-manager-server
[100.100.150.25] out: ID: HAHY:6Q5B:GW43:I76T:5UHY:2K7E:QXKI:5HGS:TGU2:6BWV:CCV6:5DGH
[100.100.150.25] out: WARNING: No swap limit support
[100.100.150.25] out:

[100.100.150.25] run: sudo docker inspect data
[100.100.150.25] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.25] out: Error: No such image or container: data
[100.100.150.25] out: []
[100.100.150.25] out:


Fatal error: run() received nonzero return code 1 while executing!

Requested: sudo docker inspect data
Executed: /bin/bash -l -c "sudo docker inspect data"

Aborting.
[100.100.150.25] run: sudo docker inspect cfy
[100.100.150.25] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.25] out: Error: No such image or container: cfy
[100.100.150.25] out: []
[100.100.150.25] out:


Fatal error: run() received nonzero return code 1 while executing!

Requested: sudo docker inspect cfy
Executed: /bin/bash -l -c "sudo docker inspect cfy"

Aborting.
2015-09-08 20:08:13 LOG <manager> [manager_281ae.start] INFO: importing cloudify-manager docker image from http://gigaspaces-repository-eu.s3.amazonaws.com/org/cloudify3/3.2.0/ga-RELEASE/cloudify-docker_3.2.0-ga-b200.tar
[100.100.150.25] out: sudo: unable to resolve host cloudify-manager-server

[100.100.150.25] out: Importing [>                                                  ] 545.8 kB/1.159 GB


Currently, this step is still in progress (downloading packages), but I am concerned about the error below that I got earlier:

[100.100.150.25] out: Error: No such image or container: data

I have around 20 GB available on a separate Cinder volume, of which 10 GB is taken up by cloudify-manager-server, so I hope space is not an issue.
I have been struggling with this bootstrap process for a few days now due to one error or another.
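I assume a quick check on the manager VM would confirm whether space is actually the problem, e.g.:

  df -h /var/lib/docker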

Hoping to get your help on this.
It would be great if fellow user group members could guide me and help me get started with Cloudify.

Manik

Manik Sidana

unread,
Sep 8, 2015, 6:31:13 AM9/8/15
to cloudify-users
After all the packages downloaded, I got a failure in what seems to be the final step.
The bootstrapping involves around 1 GB of downloads. Is there a way to avoid re-downloading everything on the next bootstrap, since the downloads take a lot of time?
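For example, could I download the docker image tar once and serve it from a machine on my LAN, something like the sketch below (assuming the manager blueprint lets me override the image URL — I haven't checked the exact input name)?

  wget http://gigaspaces-repository-eu.s3.amazonaws.com/org/cloudify3/3.2.0/ga-RELEASE/cloudify-docker_3.2.0-ga-b200.tar
  python -m SimpleHTTPServer 8000   # serve the download directory over HTTP
  # then point the blueprint's docker image URL input at http://<my-lan-ip>:8000/cloudify-docker_3.2.0-ga-b200.tar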

Any hints on the error below (as well as on my previous post in this thread) would be appreciated.

2015-09-08 20:59:37 LOG <manager> [manager_281ae.start] INFO: starting a new cloudify mgmt docker services container
[100.100.150.25] run: sudo docker run --name cfy -t --volumes-from data --privileged=False -p 80:80 -p 5555:5555 -p 5672:5672 -p 53229:53229 -p 8100:8100 -p 8101:8101 -p 9200:9200 -p 8086:8086 -e MANAGEMENT_IP=10.67.79.2 -e MANAGER_REST_SECURITY_CONFIG_PATH=/root/rest-security-config.json --restart=always -d cloudify /sbin/my_init
[100.100.150.25] out: sudo: unable to resolve host cloudify-manager-server
[100.100.150.25] out: 18fb16df50ad93fb48164fee6ada34a76d66dd5de59de10bb53c47b4996dae87
[100.100.150.25] out:

2015-09-08 20:59:43 LOG <manager> [manager_281ae.start] INFO: waiting for cloudify management services to start on port 80
2015-09-08 20:59:43 LOG <manager> [manager_281ae.start] INFO: waiting for url http://100.100.150.25:80/version to become available
2015-09-08 21:03:03 LOG <manager> [manager_281ae.start] INFO: failed waiting for cloudify management services to start.
2015-09-08 21:03:03 LOG <manager> [manager_281ae.start] ERROR: Exception raised on operation [fabric_plugin.tasks.run_module_task] invocation
Traceback (most recent call last):
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/cloudify/decorators.py", line 125, in wrapper
    result = func(*args, **kwargs)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric_plugin/tasks.py", line 89, in run_module_task
    return _run_task(task, task_properties, fabric_env)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/fabric_plugin/tasks.py", line 95, in _run_task
    return task(**task_properties)
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/cloudify_cli/bootstrap/tasks.py", line 374, in bootstrap_docker
    return post_bootstrap_actions()
  File "/root/manik/cloudify/local/lib/python2.7/site-packages/cloudify_cli/bootstrap/tasks.py", line 237, in post_bootstrap_actions
    raise NonRecoverableError(err)
NonRecoverableError: failed waiting for cloudify management services to start.
2015-09-08 21:03:03 CFY <manager> [manager_281ae.start] Task failed 'fabric_plugin.tasks.run_module_task' -> failed waiting for cloudify management services to start. [attempt 1/6]
2015-09-08 21:03:03 CFY <manager> 'install' workflow execution failed: Workflow failed: Task failed 'fabric_plugin.tasks.run_module_task' -> failed waiting for cloudify management services to start.
bootstrap failed!
executing teardown due to failed bootstrap

Manik Sidana

unread,
Sep 9, 2015, 6:04:43 AM
to cloudify-users
Hi,

Thanks for the help and support on this group.

I have been able to bootstrap cloudify manager :)

This post can be marked as CLOSED.

Thanks

Jason

unread,
Nov 24, 2015, 10:14:29 AM
to cloudify-users
Hi Manik,

How did you fix the "failed waiting for cloudify management services to start" problem? I am having the same problem here with Cloudify 3.2.1 and OpenStack Kilo.

Thank you

Manik Sidana

unread,
Dec 23, 2015, 2:32:10 AM
to cloudify-users
Hi Jason,

Apologies for the very late reply.
I did not get this error.
Once the Docker image was downloaded and installed, I got the logs below:


......
[100.100.150.27] out: cloudify-windows-agent 3.2.0 installation completed successfully!
[100.100.150.27] out:
[100.100.150.27] out:
[100.100.150.27] out:
[100.100.150.27] out: Data-only container
[100.100.150.27] out:
[100.100.150.27] out:

2015-09-09 16:51:35 LOG <manager> [manager_e2ce3.start] INFO: starting a new cloudify mgmt docker services container
[100.100.150.27] run: sudo docker run --name cfy -t --volumes-from data --privileged=False -p 80:80 -p 5555:5555 -p 5672:5672 -p 53229:53229 -p 8100:8100 -p 8101:8101 -p 9200:9200 -p 8086:8086 -e MANAGEMENT_IP=10.67.79.2 -e MANAGER_REST_SECURITY_CONFIG_PATH=/root/rest-security-config.json --restart=always -d cloudify /sbin/my_init
[100.100.150.27] out: sudo: unable to resolve host cloudify-manager-server1
[100.100.150.27] out: 47b0de6eeef0b5fd0cd4917c32c4cab19b1e6819db3fe5a48cf51a78bd018839
[100.100.150.27] out:

2015-09-09 16:51:40 LOG <manager> [manager_e2ce3.start] INFO: waiting for cloudify management services to start on port 80
2015-09-09 16:51:40 LOG <manager> [manager_e2ce3.start] INFO: waiting for url http://<Floating IP>:80/version to become available
2015-09-09 16:51:48 LOG <manager> [manager_e2ce3.start] INFO: updating provider context on management server...
[100.100.150.27] put: <file obj> -> /home/ubuntu/provider-context.json
[100.100.150.27] run: sudo docker exec -t cfy curl --fail -XPOST localhost:8101/provider/context -H "Content-Type: application/json" -d @/tmp/home/provider-context.json


Can you check whether the http://<Cloudify-Floating-IP>:80/version URL works for you?
It gives me the output below:

{"date": "", "commit": "", "version": "3.2.0", "build": "85"}


Thanks
Manik

Idan Moyal

unread,
Dec 24, 2015, 7:47:03 AM
to cloudify-users
Hi Jason,

What is the spec of the VM you're trying to bootstrap on?

You can add the --keep-up-on-failure flag to the bootstrap command so that the VM will remain up.
I suspect it is related to nginx on the manager not starting up properly.
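With the blueprint and inputs used earlier in this thread, the bootstrap command with that flag would look something like:

  cfy bootstrap --install-plugins -p openstack-manager-blueprint.yaml -i inputs.yaml --keep-up-on-failure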

You can check the nginx logs from within the container (see the combined commands below):
1. SSH to the VM after the bootstrap fails.
2. docker exec -it cfy bash
3. Look for the nginx logs in /var/log/nginx
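Putting those steps together (assuming the image's default ubuntu user and the keypair from your bootstrap inputs):

  ssh -i <your-keypair.pem> ubuntu@<manager-floating-ip>
  sudo docker exec -it cfy bash      # sudo is needed on the VM, as the logs above show
  tail -n 100 /var/log/nginx/*.log   # run inside the container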

In any case, please consider trying Cloudify 3.3, which was recently released.


Regards,
Idan