CLOUD_FOUNDRY_DEPLOYMENT_ON_vSPHERE


Parthiban Annadurai

Mar 30, 2015, 7:49:45 AM
to vcap...@cloudfoundry.org
Hi Devs,
             I have deployed CF on vSphere. My bosh vms output shows the following:

bosh vms cloudfoundry
Deployment `cloudfoundry'

Director task 551

Task 551 done

+------------------------------------+--------------------+---------------+----------------+
| Job/index                          | State              | Resource Pool | IPs            |
+------------------------------------+--------------------+---------------+----------------+
| unknown/unknown                    | unresponsive agent |               |                |
| unknown/unknown                    | unresponsive agent |               |                |
| unknown/unknown                    | running            | small_z1      | 192.168.33.19  |
| api_worker_z1/0                    | running            | small_z1      | 192.168.33.24  |
| api_worker_z2/0                    | running            | small_z2      | 192.168.33.16  |
| api_z1/0                           | running            | large_z1      | 192.168.33.22  |
| api_z2/0                           | running            | large_z2      | 192.168.33.15  |
| clock_global/0                     | running            | medium_z1     | 192.168.33.23  |
| etcd_z1/0                          | running            | medium_z1     | 192.168.33.65  |
| etcd_z1/1                          | running            | medium_z1     | 192.168.33.66  |
| etcd_z2/0                          | running            | medium_z2     | 192.168.33.154 |
| ha_proxy_z1/0                      | running            | router_z1     | 192.168.33.54  |
| hm9000_z1/0                        | running            | medium_z1     | 192.168.33.25  |
| hm9000_z2/0                        | running            | medium_z2     | 192.168.33.17  |
| loggregator_trafficcontroller_z1/0 | running            | small_z1      | 192.168.33.28  |
| loggregator_z1/0                   | running            | medium_z1     | 192.168.33.27  |
| login_z1/0                         | running            | medium_z1     | 192.168.33.21  |
| login_z2/0                         | running            | medium_z2     | 192.168.33.14  |
| nats_z1/0                          | running            | medium_z1     | 192.168.33.52  |
| nats_z2/0                          | running            | medium_z2     | 192.168.33.142 |
| nfs_z1/0                           | running            | medium_z1     | 192.168.33.53  |
| postgres_z1/0                      | running            | medium_z1     | 192.168.33.63  |
| router_z1/0                        | running            | router_z1     | 192.168.33.56  |
| router_z2/0                        | running            | router_z2     | 192.168.33.146 |
| runner_z1/0                        | running            | runner_z1     | 192.168.33.26  |
| runner_z2/0                        | running            | runner_z2     | 192.168.33.18  |
| uaa_z2/0                           | running            | medium_z2     | 192.168.33.13  |
+------------------------------------+--------------------+---------------+----------------+

VMs total: 27

Now, when I try to target the API with the cf api command, it shows the following:


Setting api endpoint to https://api.192.168.33.54.xip.io...
FAILED
Error performing request: Get https://api.192.168.33.54.xip.io/v2/info: dial tcp 192.168.33.54:443: connection refused

My question is: to target the API, do we need to open ports 443 and 80 or not?? Or please give me some other suggestions if I have followed the wrong methodology..

FYI, I am using CF CLI v6..

CF version 202

BOSH version 150 for the CF deployment using BOSH..

Thanks in advance..

Parthiban Annadurai

Mar 30, 2015, 8:56:12 AM
to vcap...@cloudfoundry.org
Hi All,
         My system domain in the manifest is the following:

system_domain: 192.168.33.54.xip.io

and when I set the API endpoint, I get:

Setting api endpoint to 192.168.33.54.xip.io...
FAILED
Error performing request: Get http://192.168.33.54.xip.io/v2/info: dial tcp 192.168.33.54:80: connection refused

Thanks..

Johannes Hiemer

Mar 30, 2015, 9:08:22 AM
to vcap...@cloudfoundry.org
Looks like a port issue: are 80 and 443 open on that IP?
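
For what it's worth, a quick way to check from the machine running the cf CLI whether anything is answering on those ports at all (a rough sketch; the IP is the HAProxy address from the bosh vms output above):

nc -zv 192.168.33.54 80
nc -zv 192.168.33.54 443
curl -vk https://api.192.168.33.54.xip.io/v2/info

A "connection refused" from nc means nothing is listening (or something is rejecting the connection), while a successful TCP connect followed by a failing curl points at the HAProxy/router configuration rather than the network.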

Parthiban Annadurai

Mar 30, 2015, 9:13:39 AM
to vcap...@cloudfoundry.org
@Johannes.. I think they might well be closed.. Do we need to open them to proceed further?

Thanks..


Johannes Hiemer

Mar 30, 2015, 9:16:05 AM
to vcap...@cloudfoundry.org
Hi,
yes, you need to open the ports, and afterwards you need to connect to:

cf api api.192.168.33.54.xip.io and not cf api 192.168.33.54.xip.io.

Parthiban Annadurai

Mar 30, 2015, 9:20:05 AM
to vcap...@cloudfoundry.org
Thanks Johannes for your lightning replies.. Let me try it first and let you know..

Once again Thanks..

Parthiban Annadurai

Mar 31, 2015, 1:21:31 AM
to vcap...@cloudfoundry.org
@Johannes.. My ports are indeed open.. Now, after setting a proxy, I am getting the following:

Setting api endpoint to api.192.168.33.54.xip.io...
FAILED
Server error, status code: 403, error code: 0, message:

Can you please help?

Thanks..

Johannes Hiemer

Mar 31, 2015, 1:53:48 AM
to vcap...@cloudfoundry.org
Parthiban, please do a:

CF_TRACE=true cf api api.192.168.33.54.xip.io

But I suspect you are having trouble with a proxy in between. You are getting a 403 (Forbidden), which I would always attribute to something sitting between your client and the endpoint you are trying to reach.
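
For context: the Go-based cf CLI honours the http_proxy/https_proxy environment variables, and an intercepting proxy is a common source of otherwise unexplained 403s. A quick check, plus an exemption for the internal domain if a proxy really is required for other traffic, might look like this on a Linux client (a sketch; adjust to your environment):

env | grep -i proxy
export no_proxy=192.168.33.54,.xip.io
CF_TRACE=true cf api api.192.168.33.54.xip.io

Unsetting http_proxy and https_proxy entirely is the simplest test when the CF endpoint is on the local network.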

Parthiban Annadurai

Mar 31, 2015, 2:04:58 AM
to vcap...@cloudfoundry.org
@Johannes.. The command CF_TRACE=true cf api api.192.168.33.54.xip.io throws the following:

CF_TRACE=true cf api api.192.168.33.54.xip.io

VERSION:
6.10.0-b78bf10

Setting api endpoint to api.192.168.33.54.xip.io...

REQUEST: [2015-03-31T06:17:20Z]
GET /v2/info HTTP/1.1
Accept: application/json
Content-Type: application/json
User-Agent: go-cli 6.10.0-b78bf10 / linux



REQUEST: [2015-03-31T06:17:20Z]
GET /v2/info HTTP/1.1
Accept: application/json
Content-Type: application/json
User-Agent: go-cli 6.10.0-b78bf10 / linux


FAILED
Error performing request: Get http://api.192.168.33.54.xip.io/v2/info: dial tcp 192.168.33.54:80: connection refused
FAILED
Error performing request: Get http://api.192.168.33.54.xip.io/v2/info: dial tcp 192.168.33.54:80: connection refused


Thanks..

Johannes Hiemer

Mar 31, 2015, 2:12:19 AM
to vcap...@cloudfoundry.org
So it is exactly what I said: either a proxy or a firewall is in between. Take this trace to your networking people... they should be able to identify what is in the path.

Parthiban Annadurai

Mar 31, 2015, 2:19:37 AM
to vcap...@cloudfoundry.org
@Johannes.. Let me check with them and try.. But there is no firewall, of that I am sure..

Thanks..

Johannes Hiemer

Mar 31, 2015, 2:24:23 AM
to vcap...@cloudfoundry.org
But it is the only possible explanation. Your `bosh vms` shows exactly 

ha_proxy_z1/0                      | running            | router_z1     | 192.168.33.54 

as the right API endpoint. So that should work.

Parthiban Annadurai

Mar 31, 2015, 2:27:41 AM
to vcap...@cloudfoundry.org
@Johannes.. Thanks for your reply.. Let me check first..

Parthiban Annadurai

Mar 31, 2015, 2:40:17 AM
to vcap...@cloudfoundry.org
@Johannes.. 

FYI, there is no proxy or firewall, since we are on the same lab network, and I am able to ping that IP as well..

I have spoken with the network people too..

Your valuable input is needed here..

Thanks in advance..

Johannes Hiemer

Mar 31, 2015, 2:46:24 AM
to vcap...@cloudfoundry.org
Hm and 192.168.33.54 is up and running? 

Parthiban Annadurai

Mar 31, 2015, 2:50:40 AM
to vcap...@cloudfoundry.org
@Johannes.. Still its Up and Running..

Thanks..

Parthiban Annadurai

Mar 31, 2015, 5:48:22 AM
to vcap...@cloudfoundry.org
@Johannes.. What do I need to do now to solve the above issue and proceed further??

I need your kind help on this..

Thanks in advance..

Noburou Taniguchi

Mar 31, 2015, 6:54:59 AM
to vcap...@cloudfoundry.org
Hi,

I'm not Johannes, but I have two suggestions for you.

1. Log in to the HAProxy VM and confirm that HAProxy is listening on port 443:
bosh ssh ha_proxy_z1/0
You will be asked for the admin password. After logging in successfully,
sudo lsof -i:443
will show which process is listening on port 443.
If nothing is shown, then no process is listening on 443, and your deployment may be wrong.

2. If #1 is OK, check connectivity with telnet.
On the machine where you run the cf CLI, type:
telnet 192.168.33.54 443
If the connection is refused, it is a network problem, not a CF problem.
You should ask your network administrator to solve it.

On Tuesday, March 31, 2015 at 6:48:22 PM UTC+9, Parthiban Annadurai wrote:

Parthiban Annadurai

Mar 31, 2015, 7:13:15 AM
to vcap...@cloudfoundry.org
@Noburou Taniguchi.. Thanks for your kind reply.. When I executed the command sudo lsof -i:443, it returned nothing.. My question is: the bosh vms command shows that all the VMs are running, yet you are saying my deployment may be wrong.. So how can I solve this issue??

Thanks in advance..

Parthiban Annadurai

Mar 31, 2015, 7:16:27 AM
to vcap...@cloudfoundry.org
@Noburou Taniguchi.. Also, could you tell me how to get port 443 listening??

Thanks..

Johannes Hiemer

Mar 31, 2015, 7:24:32 AM
to vcap...@cloudfoundry.org
Parthiban,
this is what you should see:

vcap@f6eb2eba-219c-49d1-accc-1a1386b5f65a:~$ sudo lsof -i:443
[sudo] password for vcap: 
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
haproxy 1569 vcap    5u  IPv4  24183      0t0  TCP *:https (LISTEN)

after connecting via SSH to 192.168.33.54. 

>@Noburou Taniguchi.. Also, could you tell me how to get port 443 listening??
This is not something you can sensibly do by hand. When your deployment is correct, this port will be listening and reachable.
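
As a practical follow-up: if lsof shows nothing on 443, it is worth confirming that the haproxy job itself is running on that VM. On a BOSH-managed VM the jobs are supervised by monit and log under /var/vcap/sys/log, so a rough check (standard BOSH paths, not anything specific to this deployment) would be:

bosh ssh ha_proxy_z1/0
sudo /var/vcap/bosh/bin/monit summary
sudo ls /var/vcap/sys/log/haproxy/ && sudo tail -n 50 /var/vcap/sys/log/haproxy/*.log

A job shown as "not monitored" or "Execution failed" in the monit summary, or errors in its log, points at a failed job deployment (for example a bad ha_proxy.ssl_pem value) rather than a network problem.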

Parthiban Annadurai

Mar 31, 2015, 7:48:43 AM
to vcap...@cloudfoundry.org
@Johannes and @Noburou Taniguchi.. One question: do I need to specify port 443 in my manifest file for it to be listened on, or is it implicit?? Because in both my BOSH and CF manifests I do not see port 443..

Thanks..

Johannes Hiemer

Mar 31, 2015, 7:52:45 AM
to vcap...@cloudfoundry.org
No, you don't. It is implicit, as 443 is the port for HTTPS, which is configured automatically during the BOSH deployment.

Parthiban Annadurai

Mar 31, 2015, 7:56:45 AM
to vcap...@cloudfoundry.org
@Johannes..

FYI, my bosh deployments output looks like the following:


bosh deployments

+------+------------+-----------------------------------------------+
| Name | Release(s) | Stemcell(s)                                   |
+------+------------+-----------------------------------------------+
| bosh | bosh/150   | bosh-vsphere-esxi-ubuntu-trusty-go_agent/2865 |
+------+------------+-----------------------------------------------+

Deployments total: 1
[root@ie1aul0414 ~]# bosh status
Config
             /root/.bosh_config

Director
  Name       microbosh
  URL        https://10.255.237.40:25555
  Version    1.2865.0 (00000000)
  User       admin
  UUID       cb4ef9f0-3ed6-499e-9b41-b57a33376cff
  CPI        vsphere
  dns        enabled (domain_name: microbosh)
  compiled_package_cache disabled
  snapshots  disabled

Deployment
  Manifest   /root/oss/bosh.yml
[root@ie1aul0414 ~]# bosh vms
Deployment `bosh'

Director task 64

Task 64 done

+------------------+---------+---------------+----------------+
| Job/index        | State   | Resource Pool | IPs            |
+------------------+---------+---------------+----------------+
| blobstore/0      | running | small         | 192.168.33.125 |
| director/0       | running | medium        | 192.168.33.124 |
| health_monitor/0 | running | small         | 192.168.33.126 |
| nats/0           | running | small         | 192.168.33.121 |
| postgres/0       | running | small         | 192.168.33.122 |
| redis/0          | running | small         | 192.168.33.123 |
+------------------+---------+---------------+----------------+

VMs total: 6


Could you please verify whether this is correct or not?? You said this should be taken care of during the BOSH deployment..

Thanks..

Johannes Hiemer

Mar 31, 2015, 8:03:38 AM
to vcap...@cloudfoundry.org
Parthiban, you are making it really hard to keep helping. It would be really helpful if you would do some debugging on your own, instead of asking about every little step of this installation. Remote debugging via mailing lists is neither effective nor fun. You can't seriously be trying to connect to api.192.168.33.54.xip.io when your deployment with bosh vms

+------------------+---------+---------------+----------------+
| Job/index        | State   | Resource Pool | IPs            |
+------------------+---------+---------------+----------------+
| blobstore/0      | running | small         | 192.168.33.125 |
| director/0       | running | medium        | 192.168.33.124 |
| health_monitor/0 | running | small         | 192.168.33.126 |
| nats/0           | running | small         | 192.168.33.121 |
| postgres/0       | running | small         | 192.168.33.122 |
| redis/0          | running | small         | 192.168.33.123 |
+------------------+---------+---------------+----------------+

does not contain a single VM with that IP. Or does it? I say it again, as in the previous thread: it is important that you understand the ideas and concepts and also think on your own.
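
To make the distinction concrete: the director in this thread knows about two deployments, the BOSH deployment shown above and the cloudfoundry deployment that actually contains HAProxy. With the Ruby CLI you can list a specific deployment's VMs by name, e.g.:

bosh deployments
bosh vms cloudfoundry

Only the cloudfoundry listing should contain ha_proxy_z1/0 with the 192.168.33.54 address that api.192.168.33.54.xip.io resolves to.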

Parthiban Annadurai

Mar 31, 2015, 8:08:35 AM
to vcap...@cloudfoundry.org
@Johannes.. Sorry for the confusion. You said that the port configuration happens during the BOSH deployment, so I just shared my bosh status, that's all. I am also not trying to get remote debugging done for me; only if I have been stuck on a problem for a couple of days do I post here, otherwise I do not..

Thanks, and once again sorry for the confusion..

FYI, the following is the bosh vms output,

+------------------------------------+--------------------+---------------+----------------+
| Job/index                          | State              | Resource Pool | IPs            |
+------------------------------------+--------------------+---------------+----------------+
| unknown/unknown                    | unresponsive agent |               |                |
| unknown/unknown                    | running            | medium_z1     | 192.168.33.20  |
Thanks.. 

Channing Benson

Apr 1, 2015, 4:59:54 PM
to vcap...@cloudfoundry.org
This is certainly a problem with your network configuration.

Can you ping api.192.168.33.54.xip.io?

If not, you may need to add a route to get traffic from the subnet where you're running the cf command to 192.168.33.x. 
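
For example, on a Linux client on another subnet, that could mean adding a static route toward the CF network (the gateway address below is purely illustrative; use whichever router actually reaches 192.168.33.0/24):

ip route add 192.168.33.0/24 via 10.0.0.1
ping -c 3 192.168.33.54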

Also, I'd recommend that you always use https when setting the api, so unless you have configured ha_proxy with a "real" SSL certificate, you will want to use the --skip-ssl-validation option, so your command will look like

               cf api --skip-ssl-validation https://api.192.168.33.54.xip.io/

But fix your network first.


Parthiban Annadurai

Apr 1, 2015, 11:49:15 PM
to vcap...@cloudfoundry.org
@Channing Benson.. First, thanks for your valuable reply..

I am able to ping the IP and also api.192.168.33.54.xip.io..

Thanks in advance..


Parthiban Annadurai

Apr 2, 2015, 12:56:31 AM
to vcap...@cloudfoundry.org
Hi All,
         Has anybody successfully deployed CF v202 on vSphere using BOSH?? If so, could you please share your manifest here as well as the "bosh vms" output? That would be helpful for people like me..

Thanks in advance..

Parthiban Annadurai

Apr 2, 2015, 1:48:43 AM
to vcap...@cloudfoundry.org
Hi All,
         Since I am not able to recover from the above errors, I tried to redeploy again, and it throws the following:

bosh deploy

Processing deployment manifest
------------------------------
Getting deployment properties from director...
Compiling deployment manifest...
Please review all changes carefully

Deploying
---------
Deployment name: `cf-deployment.yml'
Director name: `bosh'
Are you sure you want to deploy? (type 'yes' to continue): yes

Director task 814
  Started preparing deployment
  Started preparing deployment > Binding deployment. Done (00:00:00)
  Started preparing deployment > Binding releases. Done (00:00:00)
  Started preparing deployment > Binding existing deployment. Failed: Timed out sending `get_state' to e9d3d9a5-1580-4ff1-a184-c790afa1cdb9 after 45 seconds (00:02:15)

Error 450002: Timed out sending `get_state' to e9d3d9a5-1580-4ff1-a184-c790afa1cdb9 after 45 seconds

Task 814 error

For a more detailed error report, run: bosh task 814 --debug


Any help is appreciated..

Thanks..

Channing Benson

Apr 2, 2015, 5:20:12 PM
to vcap...@cloudfoundry.org
Did you run "bosh task 814 --debug"? The output from that command probably has slightly more detail.
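
As a side note, the `get_state' timeout means the director could not reach the BOSH agent on one of the existing VMs, which matches the two "unresponsive agent" rows in your earlier bosh vms output. With the Ruby CLI, cloud check is the usual tool for repairing those before retrying the deploy; a rough sequence, assuming the deployment is already targeted, is:

bosh deployment cf-deployment.yml
bosh cloudcheck
(choose "Recreate VM" or "Reboot VM" for the instances reported with unresponsive agents)
bosh deploy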

Parthiban Annadurai

Apr 2, 2015, 11:42:03 PM
to vcap...@cloudfoundry.org
@Channing Benson.. the command bosh task 814 --debug shows the following,

D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(b7903729-67af-4011-b0d6-e9603b94b37e)] DEBUG -- DirectorJobRunner: Processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(b7903729-67af-4011-b0d6-e9603b94b37e)] DEBUG -- DirectorJobRunner: Binding instance VM
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(b7903729-67af-4011-b0d6-e9603b94b37e)] DEBUG -- DirectorJobRunner: Found existing job instance `clock_global/0'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(b7903729-67af-4011-b0d6-e9603b94b37e)] DEBUG -- DirectorJobRunner: Copying job instance `clock_global/0' network reservation {type=dynamic, ip="192.168.33.23"}
I, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(b7903729-67af-4011-b0d6-e9603b94b37e)]  INFO -- DirectorJobRunner: ResourcePool `medium_z1' - Adding allocated VM (index=7)
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(b7903729-67af-4011-b0d6-e9603b94b37e)] DEBUG -- DirectorJobRunner: Found VM 'vm-d09828db-5ed3-4b47-9eb3-ed604f95a8c8' running job instance 'clock_global/0' in resource pool `medium_z1' with reservation '{type=dynamic, ip="192.168.33.23"}'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(b7903729-67af-4011-b0d6-e9603b94b37e)] DEBUG -- DirectorJobRunner: Finished processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [] DEBUG -- DirectorJobRunner: Thread is no longer needed, cleaning up
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(95c6641e-c438-41e5-bace-a28ecd34770c)] DEBUG -- DirectorJobRunner: (0.001032s) SELECT * FROM "persistent_disks" WHERE ("persistent_disks"."instance_id" = 22)
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(66bbb443-8519-4b39-b22a-7fc3df648dc5)] DEBUG -- DirectorJobRunner: Processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(66bbb443-8519-4b39-b22a-7fc3df648dc5)] DEBUG -- DirectorJobRunner: Binding instance VM
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(66bbb443-8519-4b39-b22a-7fc3df648dc5)] DEBUG -- DirectorJobRunner: Found existing job instance `loggregator_trafficcontroller_z1/0'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(66bbb443-8519-4b39-b22a-7fc3df648dc5)] DEBUG -- DirectorJobRunner: Copying job instance `loggregator_trafficcontroller_z1/0' network reservation {type=dynamic, ip="192.168.33.28"}
I, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(66bbb443-8519-4b39-b22a-7fc3df648dc5)]  INFO -- DirectorJobRunner: ResourcePool `small_z1' - Adding allocated VM (index=2)
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(66bbb443-8519-4b39-b22a-7fc3df648dc5)] DEBUG -- DirectorJobRunner: Found VM 'vm-a8a48c57-221b-401c-bc63-fb6b8ace6585' running job instance 'loggregator_trafficcontroller_z1/0' in resource pool `small_z1' with reservation '{type=dynamic, ip="192.168.33.28"}'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(66bbb443-8519-4b39-b22a-7fc3df648dc5)] DEBUG -- DirectorJobRunner: Finished processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [] DEBUG -- DirectorJobRunner: Thread is no longer needed, cleaning up
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(940d126c-1109-4e4b-87fc-2694ca3d0279)] DEBUG -- DirectorJobRunner: Verified VM state
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(95c6641e-c438-41e5-bace-a28ecd34770c)] DEBUG -- DirectorJobRunner: Processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(95c6641e-c438-41e5-bace-a28ecd34770c)] DEBUG -- DirectorJobRunner: Binding instance VM
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(95c6641e-c438-41e5-bace-a28ecd34770c)] DEBUG -- DirectorJobRunner: Found existing job instance `login_z2/0'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(95c6641e-c438-41e5-bace-a28ecd34770c)] DEBUG -- DirectorJobRunner: Copying job instance `login_z2/0' network reservation {type=dynamic, ip="192.168.33.14"}
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(d43f6e55-4d98-459e-bf7d-940e8570d934)] DEBUG -- DirectorJobRunner: (0.001012s) SELECT * FROM "instances" WHERE ("instances"."vm_id" = 118) LIMIT 1
I, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(95c6641e-c438-41e5-bace-a28ecd34770c)]  INFO -- DirectorJobRunner: ResourcePool `medium_z2' - Adding allocated VM (index=4)
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(e4214b0b-286a-4a02-be0e-50cbb23a5e4d)] DEBUG -- DirectorJobRunner: (0.000672s) SELECT * FROM "instances" WHERE ("instances"."vm_id" = 120) LIMIT 1
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(d43f6e55-4d98-459e-bf7d-940e8570d934)] DEBUG -- DirectorJobRunner: Verified VM state
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(7a1ee437-a178-4119-8c17-28c3bb4a516f)] DEBUG -- DirectorJobRunner: (0.000872s) SELECT * FROM "instances" WHERE ("instances"."vm_id" = 130) LIMIT 1
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(95c6641e-c438-41e5-bace-a28ecd34770c)] DEBUG -- DirectorJobRunner: Found VM 'vm-d62de441-2d83-4af8-a666-3c5b383e7b19' running job instance 'login_z2/0' in resource pool `medium_z2' with reservation '{type=dynamic, ip="192.168.33.14"}'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(95c6641e-c438-41e5-bace-a28ecd34770c)] DEBUG -- DirectorJobRunner: Finished processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [] DEBUG -- DirectorJobRunner: Thread is no longer needed, cleaning up
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(7a1ee437-a178-4119-8c17-28c3bb4a516f)] DEBUG -- DirectorJobRunner: Verified VM state
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(d43f6e55-4d98-459e-bf7d-940e8570d934)] DEBUG -- DirectorJobRunner: (0.001004s) SELECT * FROM "persistent_disks" WHERE ("persistent_disks"."instance_id" = 18)
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(7a1ee437-a178-4119-8c17-28c3bb4a516f)] DEBUG -- DirectorJobRunner: (0.000267s) SELECT * FROM "persistent_disks" WHERE ("persistent_disks"."instance_id" = 29)
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(d43f6e55-4d98-459e-bf7d-940e8570d934)] DEBUG -- DirectorJobRunner: Processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(d43f6e55-4d98-459e-bf7d-940e8570d934)] DEBUG -- DirectorJobRunner: Binding instance VM
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(d43f6e55-4d98-459e-bf7d-940e8570d934)] DEBUG -- DirectorJobRunner: Found existing job instance `postgres_z1/0'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(d43f6e55-4d98-459e-bf7d-940e8570d934)] DEBUG -- DirectorJobRunner: Copying job instance `postgres_z1/0' network reservation {type=static, ip="192.168.33.63"}
I, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(d43f6e55-4d98-459e-bf7d-940e8570d934)]  INFO -- DirectorJobRunner: ResourcePool `medium_z1' - Adding allocated VM (index=8)
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(d43f6e55-4d98-459e-bf7d-940e8570d934)] DEBUG -- DirectorJobRunner: Found VM 'vm-a44ca927-f89e-401f-9f44-87237ff5eeb0' running job instance 'postgres_z1/0' in resource pool `medium_z1' with reservation '{type=static, ip="192.168.33.63"}'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(d43f6e55-4d98-459e-bf7d-940e8570d934)] DEBUG -- DirectorJobRunner: Finished processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(7a1ee437-a178-4119-8c17-28c3bb4a516f)] DEBUG -- DirectorJobRunner: Processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(7a1ee437-a178-4119-8c17-28c3bb4a516f)] DEBUG -- DirectorJobRunner: Binding instance VM
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(7a1ee437-a178-4119-8c17-28c3bb4a516f)] DEBUG -- DirectorJobRunner: Found existing job instance `hm9000_z2/0'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(7a1ee437-a178-4119-8c17-28c3bb4a516f)] DEBUG -- DirectorJobRunner: Copying job instance `hm9000_z2/0' network reservation {type=dynamic, ip="192.168.33.17"}
I, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(7a1ee437-a178-4119-8c17-28c3bb4a516f)]  INFO -- DirectorJobRunner: ResourcePool `medium_z2' - Adding allocated VM (index=5)
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(7a1ee437-a178-4119-8c17-28c3bb4a516f)] DEBUG -- DirectorJobRunner: Found VM 'vm-68dc3436-a958-4872-a62c-caf264f3bb9a' running job instance 'hm9000_z2/0' in resource pool `medium_z2' with reservation '{type=dynamic, ip="192.168.33.17"}'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(7a1ee437-a178-4119-8c17-28c3bb4a516f)] DEBUG -- DirectorJobRunner: Finished processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [] DEBUG -- DirectorJobRunner: Thread is no longer needed, cleaning up
D, [2015-04-02 05:17:54 #19560] [] DEBUG -- DirectorJobRunner: Thread is no longer needed, cleaning up
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(e4214b0b-286a-4a02-be0e-50cbb23a5e4d)] DEBUG -- DirectorJobRunner: Verified VM state
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(940d126c-1109-4e4b-87fc-2694ca3d0279)] DEBUG -- DirectorJobRunner: (0.002900s) SELECT * FROM "persistent_disks" WHERE ("persistent_disks"."instance_id" = 10)
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(940d126c-1109-4e4b-87fc-2694ca3d0279)] DEBUG -- DirectorJobRunner: Processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(940d126c-1109-4e4b-87fc-2694ca3d0279)] DEBUG -- DirectorJobRunner: Binding instance VM
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(940d126c-1109-4e4b-87fc-2694ca3d0279)] DEBUG -- DirectorJobRunner: Found existing job instance `ha_proxy_z1/0'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(940d126c-1109-4e4b-87fc-2694ca3d0279)] DEBUG -- DirectorJobRunner: Copying job instance `ha_proxy_z1/0' network reservation {type=static, ip="192.168.33.54"}
I, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(940d126c-1109-4e4b-87fc-2694ca3d0279)]  INFO -- DirectorJobRunner: ResourcePool `router_z1' - Adding allocated VM (index=0)
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(940d126c-1109-4e4b-87fc-2694ca3d0279)] DEBUG -- DirectorJobRunner: Found VM 'vm-bc0cd151-69e9-48e4-b81b-377220593968' running job instance 'ha_proxy_z1/0' in resource pool `router_z1' with reservation '{type=static, ip="192.168.33.54"}'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(940d126c-1109-4e4b-87fc-2694ca3d0279)] DEBUG -- DirectorJobRunner: Finished processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [] DEBUG -- DirectorJobRunner: Thread is no longer needed, cleaning up
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(e4214b0b-286a-4a02-be0e-50cbb23a5e4d)] DEBUG -- DirectorJobRunner: (0.000868s) SELECT * FROM "persistent_disks" WHERE ("persistent_disks"."instance_id" = 36)
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(e4214b0b-286a-4a02-be0e-50cbb23a5e4d)] DEBUG -- DirectorJobRunner: Processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(e4214b0b-286a-4a02-be0e-50cbb23a5e4d)] DEBUG -- DirectorJobRunner: Binding instance VM
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(e4214b0b-286a-4a02-be0e-50cbb23a5e4d)] DEBUG -- DirectorJobRunner: Found existing job instance `router_z1/0'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(e4214b0b-286a-4a02-be0e-50cbb23a5e4d)] DEBUG -- DirectorJobRunner: Copying job instance `router_z1/0' network reservation {type=static, ip="192.168.33.56"}
I, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(e4214b0b-286a-4a02-be0e-50cbb23a5e4d)]  INFO -- DirectorJobRunner: ResourcePool `router_z1' - Adding allocated VM (index=1)
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(e4214b0b-286a-4a02-be0e-50cbb23a5e4d)] DEBUG -- DirectorJobRunner: Found VM 'vm-9917f1af-93cb-4c1e-a835-5b4f698d78dc' running job instance 'router_z1/0' in resource pool `router_z1' with reservation '{type=static, ip="192.168.33.56"}'
D, [2015-04-02 05:17:54 #19560] [bind_existing_deployment(e4214b0b-286a-4a02-be0e-50cbb23a5e4d)] DEBUG -- DirectorJobRunner: Finished processing VM network reservations
D, [2015-04-02 05:17:54 #19560] [] DEBUG -- DirectorJobRunner: Thread is no longer needed, cleaning up
D, [2015-04-02 05:17:59 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:18:04 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:18:09 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:18:14 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:18:19 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:18:24 #19560] [task:814-checkpoint] DEBUG -- DirectorJobRunner: (0.000483s) BEGIN
D, [2015-04-02 05:18:24 #19560] [task:814-checkpoint] DEBUG -- DirectorJobRunner: (0.001113s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-02 05:18:24.038530+0000' WHERE ("id" = 814)
D, [2015-04-02 05:18:24 #19560] [task:814-checkpoint] DEBUG -- DirectorJobRunner: (0.001360s) COMMIT
D, [2015-04-02 05:18:24 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:18:29 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:18:34 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:18:39 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:18:39 #19560] [bind_existing_deployment(e9d3d9a5-1580-4ff1-a184-c790afa1cdb9)] DEBUG -- DirectorJobRunner: SENT: agent.e9d3d9a5-1580-4ff1-a184-c790afa1cdb9 {"method":"get_state","arguments":[],"reply_to":"director.72cbbf79-cab0-4f08-8d3a-45621b7fd10e.1d488ffc-6f88-4edb-8a04-ff8edae0c6e4"}
D, [2015-04-02 05:18:39 #19560] [bind_existing_deployment(6668b5ec-2dc4-4790-b499-3b2affd3a7b4)] DEBUG -- DirectorJobRunner: SENT: agent.6668b5ec-2dc4-4790-b499-3b2affd3a7b4 {"method":"get_state","arguments":[],"reply_to":"director.72cbbf79-cab0-4f08-8d3a-45621b7fd10e.65af4874-a029-419f-94fc-7dc4905c861a"}
D, [2015-04-02 05:18:44 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:18:49 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:18:54 #19560] [task:814-checkpoint] DEBUG -- DirectorJobRunner: (0.000431s) BEGIN
D, [2015-04-02 05:18:54 #19560] [task:814-checkpoint] DEBUG -- DirectorJobRunner: (0.000489s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-02 05:18:54.044278+0000' WHERE ("id" = 814)
D, [2015-04-02 05:18:54 #19560] [task:814-checkpoint] DEBUG -- DirectorJobRunner: (0.001075s) COMMIT
D, [2015-04-02 05:18:54 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:18:59 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:19:04 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:19:09 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:19:14 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:19:19 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:19:24 #19560] [task:814-checkpoint] DEBUG -- DirectorJobRunner: (0.000569s) BEGIN
D, [2015-04-02 05:19:24 #19560] [task:814-checkpoint] DEBUG -- DirectorJobRunner: (0.000783s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-02 05:19:24.048401+0000' WHERE ("id" = 814)
D, [2015-04-02 05:19:24 #19560] [task:814-checkpoint] DEBUG -- DirectorJobRunner: (0.001329s) COMMIT
D, [2015-04-02 05:19:24 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:19:24 #19560] [bind_existing_deployment(e9d3d9a5-1580-4ff1-a184-c790afa1cdb9)] DEBUG -- DirectorJobRunner: SENT: agent.e9d3d9a5-1580-4ff1-a184-c790afa1cdb9 {"method":"get_state","arguments":[],"reply_to":"director.72cbbf79-cab0-4f08-8d3a-45621b7fd10e.93bc25d4-fcec-44f6-b3cf-94a36a46fcad"}
D, [2015-04-02 05:19:24 #19560] [bind_existing_deployment(6668b5ec-2dc4-4790-b499-3b2affd3a7b4)] DEBUG -- DirectorJobRunner: SENT: agent.6668b5ec-2dc4-4790-b499-3b2affd3a7b4 {"method":"get_state","arguments":[],"reply_to":"director.72cbbf79-cab0-4f08-8d3a-45621b7fd10e.9d30b543-b186-473e-98c7-7dee6b09e4f9"}
D, [2015-04-02 05:19:29 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:19:34 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:19:39 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:19:44 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:19:49 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:19:54 #19560] [task:814-checkpoint] DEBUG -- DirectorJobRunner: (0.000557s) BEGIN
D, [2015-04-02 05:19:54 #19560] [task:814-checkpoint] DEBUG -- DirectorJobRunner: (0.000718s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-02 05:19:54.052859+0000' WHERE ("id" = 814)
D, [2015-04-02 05:19:54 #19560] [task:814-checkpoint] DEBUG -- DirectorJobRunner: (0.001057s) COMMIT
D, [2015-04-02 05:19:54 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:19:59 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:20:04 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:20:09 #19560] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:20:09 #19560] [] DEBUG -- DirectorJobRunner: Worker thread raised exception: Timed out sending `get_state' to e9d3d9a5-1580-4ff1-a184-c790afa1cdb9 after 45 seconds - /var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/agent_client.rb:178:in `block in handle_method'
/var/vcap/packages/ruby/lib/ruby/2.1.0/monitor.rb:211:in `mon_synchronize'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/agent_client.rb:173:in `handle_method'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/agent_client.rb:242:in `send_message'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/agent_client.rb:61:in `block (2 levels) in <class:AgentClient>'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/deployment_plan/assembler.rb:162:in `get_state'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/deployment_plan/assembler.rb:52:in `bind_existing_vm'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/deployment_plan/assembler.rb:41:in `block (4 levels) in bind_existing_deployment'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/deployment_plan/assembler.rb:40:in `block (3 levels) in bind_existing_deployment'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:77:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:63:in `loop'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:63:in `block in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'
D, [2015-04-02 05:20:09 #19560] [] DEBUG -- DirectorJobRunner: Thread is no longer needed, cleaning up
D, [2015-04-02 05:20:09 #19560] [task:814] DEBUG -- DirectorJobRunner: Shutting down pool
D, [2015-04-02 05:20:09 #19560] [] DEBUG -- DirectorJobRunner: Worker thread raised exception: Timed out sending `get_state' to 6668b5ec-2dc4-4790-b499-3b2affd3a7b4 after 45 seconds - /var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/agent_client.rb:178:in `block in handle_method'
/var/vcap/packages/ruby/lib/ruby/2.1.0/monitor.rb:211:in `mon_synchronize'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/agent_client.rb:173:in `handle_method'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/agent_client.rb:242:in `send_message'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/agent_client.rb:61:in `block (2 levels) in <class:AgentClient>'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/deployment_plan/assembler.rb:162:in `get_state'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/deployment_plan/assembler.rb:52:in `bind_existing_vm'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/deployment_plan/assembler.rb:41:in `block (4 levels) in bind_existing_deployment'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/deployment_plan/assembler.rb:40:in `block (3 levels) in bind_existing_deployment'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:77:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:63:in `loop'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:63:in `block in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'
D, [2015-04-02 05:20:09 #19560] [] DEBUG -- DirectorJobRunner: Thread is no longer needed, cleaning up
D, [2015-04-02 05:20:09 #19560] [task:814] DEBUG -- DirectorJobRunner: Deleting lock: lock:deployment:cloudfoundry
D, [2015-04-02 05:20:09 #19560] [] DEBUG -- DirectorJobRunner: Lock renewal thread exiting
D, [2015-04-02 05:20:09 #19560] [task:814] DEBUG -- DirectorJobRunner: Deleted lock: lock:deployment:cloudfoundry
I, [2015-04-02 05:20:09 #19560] [task:814]  INFO -- DirectorJobRunner: sending update deployment error event
D, [2015-04-02 05:20:09 #19560] [task:814] DEBUG -- DirectorJobRunner: SENT: hm.director.alert {"id":"4257d797-0756-46c7-839b-186114037b1f","severity":3,"title":"director - error during update deployment","summary":"Error during update deployment for cloudfoundry against Director 68c7f9d5-1d93-43d8-808e-5a66c52e4fb8: #<Bosh::Director::RpcTimeout: Timed out sending `get_state' to e9d3d9a5-1580-4ff1-a184-c790afa1cdb9 after 45 seconds>","created_at":1427952009}
E, [2015-04-02 05:20:09 #19560] [task:814] ERROR -- DirectorJobRunner: Timed out sending `get_state' to e9d3d9a5-1580-4ff1-a184-c790afa1cdb9 after 45 seconds
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/agent_client.rb:178:in `block in handle_method'
/var/vcap/packages/ruby/lib/ruby/2.1.0/monitor.rb:211:in `mon_synchronize'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/agent_client.rb:173:in `handle_method'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/agent_client.rb:242:in `send_message'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/agent_client.rb:61:in `block (2 levels) in <class:AgentClient>'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/deployment_plan/assembler.rb:162:in `get_state'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/deployment_plan/assembler.rb:52:in `bind_existing_vm'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/deployment_plan/assembler.rb:41:in `block (4 levels) in bind_existing_deployment'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/deployment_plan/assembler.rb:40:in `block (3 levels) in bind_existing_deployment'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:77:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:63:in `loop'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:63:in `block in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'
D, [2015-04-02 05:20:09 #19560] [task:814] DEBUG -- DirectorJobRunner: (0.000284s) BEGIN
D, [2015-04-02 05:20:09 #19560] [task:814] DEBUG -- DirectorJobRunner: (0.000551s) UPDATE "tasks" SET "state" = 'error', "timestamp" = '2015-04-02 05:20:09.178966+0000', "description" = 'create deployment', "result" = 'Timed out sending `get_state'' to e9d3d9a5-1580-4ff1-a184-c790afa1cdb9 after 45 seconds', "output" = '/var/vcap/store/director/tasks/814', "checkpoint_time" = '2015-04-02 05:19:54.052859+0000', "type" = 'update_deployment', "username" = 'admin' WHERE ("id" = 814)
D, [2015-04-02 05:20:09 #19560] [task:814] DEBUG -- DirectorJobRunner: (0.001083s) COMMIT
I, [2015-04-02 05:20:09 #19560] []  INFO -- DirectorJobRunner: Task took 2 minutes 17.01530861500001 seconds to process.

Task 814 error


I have not been able to find anything in it so far; if you spot anything, please share it with me, as it would be very helpful..

Thanks in advance..

Parthiban Annadurai

unread,
Apr 3, 2015, 1:39:21 AM4/3/15
to vcap...@cloudfoundry.org
Hi All,
         After a lot of digging through logs, I think the problem is with the Director allocating IPs to the jobs. Since I am using the SPIFF-generated manifest to deploy CF with BOSH, I added a reserved IP block to my network properties to proceed further. Now, when I execute the bosh deploy --recreate command, it throws the following:


bosh deploy --recreate


Processing deployment manifest
------------------------------
Getting deployment properties from director...
Compiling deployment manifest...
Please review all changes carefully

Deploying
---------
Deployment name: `cf-deployment.yml'
Director name: `bosh'
Are you sure you want to deploy? (type 'yes' to continue): yes

Director task 1068

  Started preparing deployment
  Started preparing deployment > Binding deployment. Done (00:00:00)
  Started preparing deployment > Binding releases. Done (00:00:00)
  Started preparing deployment > Binding existing deployment. Done (00:01:30)
  Started preparing deployment > Binding resource pools. Done (00:00:00)
  Started preparing deployment > Binding stemcells. Done (00:00:00)
  Started preparing deployment > Binding templates. Done (00:00:00)
  Started preparing deployment > Binding properties. Done (00:00:00)
  Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
  Started preparing deployment > Binding instance networks. Done (00:00:00)
     Done preparing deployment (00:01:30)

  Started preparing package compilation > Finding packages to compile. Done (00:00:00)

  Started preparing dns > Binding DNS. Done (00:00:00)

  Started preparing configuration > Binding configuration. Failed: Error filling in template `haproxy.config.erb' for `ha_proxy_z1/0' (line 28: Can't find property `["ha_proxy.ssl_pem"]') (00:00:00)

Error 100: Error filling in template `haproxy.config.erb' for `ha_proxy_z1/0' (line 28: Can't find property `["ha_proxy.ssl_pem"]')

Task 1068 error

For a more detailed error report, run: bosh task 1068 --debug


Any help is greatly appreciated..

Thanks..

Johannes Hiemer

Apr 3, 2015, 2:04:19 AM
to vcap...@cloudfoundry.org
Parthiban,
please take a look at your haproxy job. It should look like this, or similar:


There you can see the missing property ha_proxy.ssl_pem.

Glad you found the issue with the IPs!
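
For reference, the property the template is complaining about lives under the ha_proxy job's properties in the CF deployment manifest. A minimal sketch of the shape it needs (the certificate and key below are placeholders; a self-signed pair is fine for a lab):

properties:
  ha_proxy:
    ssl_pem: |
      -----BEGIN CERTIFICATE-----
      ...certificate...
      -----END CERTIFICATE-----
      -----BEGIN RSA PRIVATE KEY-----
      ...private key...
      -----END RSA PRIVATE KEY-----

A self-signed pair can be produced and concatenated roughly like this:

openssl req -x509 -nodes -newkey rsa:2048 -days 365 -keyout cf.key -out cf.crt
cat cf.crt cf.key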

Parthiban Annadurai

Apr 5, 2015, 4:47:54 AM
to vcap...@cloudfoundry.org
@Johannes.. After modifying the ha_proxy job, I am now trying to delete the partially deployed CF and redeploy it using the newly generated manifest. It throws the following during deletion, and also when I run any other bosh deploy related command:

bosh deployments

+--------------+------------+-----------------------------------------------+
| Name         | Release(s) | Stemcell(s)                                   |
+--------------+------------+-----------------------------------------------+
| cloudfoundry | cf/202     | bosh-vsphere-esxi-ubuntu-trusty-go_agent/2865 |
+--------------+------------+-----------------------------------------------+

Deployments total: 1
[root@ie1aul0414 ~]# bosh delete deployment cloudfoundry

You are going to delete deployment `cloudfoundry'.

THIS IS A VERY DESTRUCTIVE OPERATION AND IT CANNOT BE UNDONE!

Are you sure? (type 'yes' to continue): yes
HTTP 500:

How can I delete the deployment completely and start a fresh deploy??

Thanks..

Any help is appreciated..



Parthiban Annadurai

Apr 7, 2015, 2:51:58 AM
to vcap...@cloudfoundry.org
Hi Devs,
            Since I was not able to recover from the previous errors, I freshly deployed MicroBOSH and BOSH. Now I am in the process of deploying CF. I have carefully checked my CF manifest file, which is generated by SPIFF, before deploying, and I also ran the bosh diff command, which reports no difference. I am sharing my CF manifest file with you all; if anyone finds any conflicts, let me know..

Any help is greatly appreciated..

Thanks in advance..
cf-deployment.txt

Parthiban Annadurai

Apr 7, 2015, 2:58:31 AM
to vcap...@cloudfoundry.org
Sorry for the previous mail; here is my modified manifest file with just one change, adding a reserved IP range.. Ignore the previous one..

Thanks in advance..
cf-deployment.txt

Parthiban Annadurai

Apr 7, 2015, 4:49:03 AM
to vcap...@cloudfoundry.org
Hi All,
         I tried to proceed with the current manifest file. It throws the following,

bosh deployment cf-deployment.yml
Deployment set to `/root/oss/cf-release/cf-deployment.yml'
[root@ie1aul0414 cf-release]# bosh deploy


Processing deployment manifest
------------------------------
Getting deployment properties from director...
Unable to get properties list from director, trying without it...
Compiling deployment manifest...
Cannot get current deployment information from director, possibly a new deployment

Please review all changes carefully

Deploying
---------
Deployment name: `cf-deployment.yml'
Director name: `bosh'
Are you sure you want to deploy? (type 'yes' to continue): yes

Director task 5
Error 40001: Required property `networks' was not specified in object ({"default_networks"=>[{"default"=>["dns", "gateway"], "name"=>"cf1", "static_ips"=>["192.168.33.51"]}], "instances"=>1, "name"=>"ha_proxy_z1", "properties"=>{"ha_proxy"=>{"disable_http"=>true, "ssl_ciphers"=>nil, "ssl_pem"=>"-----BEGIN CERTIFICATE-----\nMIIEzTCCA7WgAwIBAgIJAIC+7wJB9dYLMA0GCSqGSIb3DQEBBQUAMIGfMQswCQYD\nVQQGEwJERTE.....YI4irYSCzOCSsWReXsfYlE8dTMPDHPPzZxRz53+0dtgn\nKTshzxzbOzXLkAZES4lSx5dulB1Hc11I+38WhJAA211Y\n-----END CERTIFICATE-----\n-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEA9jdlg94GFpLGLiukAJDQo4XMmSYdyWD2TAQB4QPElWOWfyAE\npytFkmIlfcF+2....XtTx7dzfzUVCei7TCnNrM4yEPgjkO5NUvIaoTPJr\nIw29AnWi626iDqEno6dpNj31zxVWnqSSZUUh7hcJpSpFgmItc2iBBQ==\n-----END RSA PRIVATE KEY-----\n"}, "metron_agent"=>{"zone"=>"z1"}, "networks"=>{"apps"=>"cf1"}, "router"=>{"servers"=>{"z1"=>["192.168.33.56"], "z2"=>["192.168.33.146"]}}}, "resource_pool"=>"router_z1", "templates"=>[{"name"=>"haproxy", "release"=>"cf"}, {"name"=>"metron_agent", "release"=>"cf"}], "update"=>{}})

Task 5 error

For a more detailed error report, run: bosh task 5 --debug


Could anyone help on this??

Thanks..

Johannes Hiemer

Apr 7, 2015, 4:59:53 AM
to vcap...@cloudfoundry.org
Could you please remove:

- default_networks:
  - name: cf1
    static_ips: null

and run again? Don't forget to add a '-' in front of instances: 1 for the ha_proxy.

...
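
For what it's worth, the error itself ("Required property `networks' was not specified") suggests the ha_proxy_z1 job ended up with a default_networks block but no networks block. A sketch of how that job's networking is typically declared in these manifests (the names and static IP are taken from earlier in this thread; treat it as an illustration, not a drop-in):

- name: ha_proxy_z1
  instances: 1
  resource_pool: router_z1
  networks:
  - name: cf1
    static_ips:
    - 192.168.33.54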

Parthiban Annadurai

Apr 7, 2015, 5:29:05 AM
to vcap...@cloudfoundry.org
@Johannes.. Thanks for your kind help.. After removing that part and adding '-' in front of instances, it now shows the following:

bosh deployment cf-deployment.yml

[WARNING] cannot access director, trying 4 more times...
[WARNING] cannot access director, trying 3 more times...
[WARNING] cannot access director, trying 2 more times...
[WARNING] cannot access director, trying 1 more times...
[WARNING] cannot access director, trying 4 more times...
[WARNING] cannot access director, trying 3 more times...
HTTP 500:

Seriously, I cannot make anything of this error, because after it fails it does not even print a "bosh task xx --debug" hint..

Thanks in advance..



Johannes Hiemer

Apr 7, 2015, 5:31:02 AM
to vcap...@cloudfoundry.org
Are you able to log in via

bosh login?

Otherwise, get into the director via ssh vcap@ip-of-director (password c1oudc0w) and reboot the VM.
...
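
When the CLI keeps reporting "cannot access director", it can also help to check whether the director process itself is healthy before rebooting; roughly (using the director IP from this thread, 192.168.33.124, and the default vcap password unless your manifest changed it):

ssh vcap@192.168.33.124
sudo /var/vcap/bosh/bin/monit summary
curl -k https://192.168.33.124:25555/info

The /info endpoint answers without authentication, so if that curl fails the director process itself is down, not your credentials.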

Parthiban Annadurai

Apr 7, 2015, 6:06:37 AM
to vcap...@cloudfoundry.org
@Johannes.. I am able to log in and things are proceeding further now.. The bosh deploy command has been running continuously for quite some time without finishing.. Should I wait for the process to end or cancel the task??

Thanks..


Parthiban Annadurai

Apr 7, 2015, 7:26:31 AM
to vcap...@cloudfoundry.org
@Johannes.. Now, bosh deploy command throws the following,

bosh deploy

Processing deployment manifest
------------------------------
Getting deployment properties from director...
Compiling deployment manifest...
Please review all changes carefully

Deploying
---------
Deployment name: `cf-deployment.yml'
Director name: `bosh'
Are you sure you want to deploy? (type 'yes' to continue): yes

Director task 12
Error 40001: Required property `name' was not specified in object ({"default_networks"=>[{"name"=>"cf1", "static_ips"=>nil}]})

Task 12 error

For a more detailed error report, run: bosh task 12 --debug

I think the network information is missing somewhere, but when I check the manifest everything looks fine..

Could you help on this??

Thanks..

Johannes Hiemer

Apr 7, 2015, 7:53:11 AM
to vcap...@cloudfoundry.org
Could you please attach your latest deployment document?
...

Parthiban Annadurai

Apr 7, 2015, 8:16:30 AM
to vcap...@cloudfoundry.org
@Johannes.. I had missed a '-' symbol somewhere; I added it, and now it shows the following:

bosh deployment cf-deployment.yml
WARNING! Your target has been changed to `https://192.168.33.124:25555'!

Deployment set to `/root/oss/cf-release/cf-deployment.yml'
[root@ie1aul0414 cf-release]# bosh deploy

Processing deployment manifest
------------------------------
Getting deployment properties from director...
Compiling deployment manifest...
Please review all changes carefully

Deploying
---------
Deployment name: `cf-deployment.yml'
Director name: `bosh'
Are you sure you want to deploy? (type 'yes' to continue): yes
[WARNING] cannot access director, trying 4 more times...
[WARNING] cannot access director, trying 3 more times...
[WARNING] cannot access director, trying 2 more times...
[WARNING] cannot access director, trying 1 more times...
cannot access director (Connection refused - connect(2) for "192.168.33.124" port 25555 (https://192.168.33.124:25555))

For your reference am attaching my manifest..

Thanks..

cf-deployment.txt

Johannes Hiemer

Apr 7, 2015, 8:27:03 AM
to vcap...@cloudfoundry.org
Parthiban, please take a look at my previous answers. Either try bosh login, or reboot the director VM.
...

Parthiban Annadurai

Apr 8, 2015, 2:48:21 AM
to vcap...@cloudfoundry.org
@Johannes.. I am getting a weird response from BOSH. After modifying my manifest with the changes, I tried to set the deployment, but it throws the following:

bosh target
Current target is https://192.168.33.124:25555 (bosh) (BOSH_DIRECTOR IP)
[root@ie1aul0414 oss]# bosh deployment /root/oss/cf-release/cf-deployment.yml
WARNING! Your target has been changed to `https://10.255.237.40:25555'!(MICROBOSH_IP)
Deployment set to `/root/oss/cf-release/cf-deployment.yml'

Why is BOSH switching to the MicroBOSH IP for the deployment, when I am using BOSH to deploy CF??

Is this the correct flow or not??

Could you please advise??

Thanks..



Johannes Hiemer

Apr 8, 2015, 2:49:45 AM
to vcap...@cloudfoundry.org
Perhaps a different director UUID in your manifest?
...
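
To expand on that: the warning usually comes from the director_uuid field at the top of the deployment manifest. The Ruby CLI retargets itself to whatever director that UUID belongs to, so a SPIFF-generated manifest still carrying the MicroBOSH's UUID would flip the target to 10.255.237.40. A rough way to compare the two:

bosh target https://192.168.33.124:25555
bosh status --uuid
grep director_uuid /root/oss/cf-release/cf-deployment.yml

If they differ, set director_uuid in cf-deployment.yml to the UUID printed by bosh status --uuid for the director you actually want to deploy with.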

Parthiban Annadurai

Apr 8, 2015, 2:53:32 AM
to vcap...@cloudfoundry.org
@Johannes.. During my BOSH deployment I specified the IP 192.168.33.124 for BOSH. For the CF deployment I used the manifest generated by SPIFF, so I did not set anything there myself.. And for the MicroBOSH deployment I used the IP 10.255.237.40.. So where do I need to make the change??

Thanks..


Parthiban Annadurai

unread,
Apr 8, 2015, 3:03:55 AM4/8/15
to vcap...@cloudfoundry.org
@Johannes..

The problem is that it's taking the MicroBOSH IP (10.255.237.40) for deploying CF, even though I am using BOSH to deploy CF..

This CF installation is getting ridiculously hard..

Could you help??

Thanks..

Johannes Hiemer

unread,
Apr 8, 2015, 3:09:30 AM4/8/15
to vcap...@cloudfoundry.org
Sorry Parthiban,
but it is quite difficult to follow right now. Are you using microbosh to deploy bosh? Or are you deploying your bosh environment locally?
...

Parthiban Annadurai

unread,
Apr 8, 2015, 3:11:02 AM4/8/15
to vcap...@cloudfoundry.org
@Johannes.. I used MicroBOSH to deploy BOSH, and now I am using BOSH to deploy CF.. I just followed docs.cloudfoundry.org

Thanks..


Johannes Hiemer

unread,
Apr 8, 2015, 3:13:28 AM4/8/15
to vcap...@cloudfoundry.org
Actually that makes no sense. Either you

- deploy microbosh
- deploy bosh with microbosh

or you are following Dr Nic's suggestions

- deploy bosh from your jumphost/local machine.

So which way are you using?
...

Parthiban Annadurai

unread,
Apr 8, 2015, 3:18:06 AM4/8/15
to vcap...@cloudfoundry.org
@Johannes.. I am not getting your point exactly.. Could you please elaborate??
I only followed the conventional steps, nothing else..

Thanks..


Johannes Hiemer

unread,
Apr 8, 2015, 3:20:28 AM4/8/15
to vcap...@cloudfoundry.org
Sorry Parthiban, this is getting quite time consuming for me. Please try to set up your environment properly. I can help you with your manifest, but not with debugging your environment remotely. There are things you need to figure out yourself. Without having a look at the environment there is no way to do the proper debugging that is needed in such cases.
...

Parthiban Annadurai

unread,
Apr 8, 2015, 3:23:35 AM4/8/15
to vcap...@cloudfoundry.org
@Johannes..

FYI,

Step 1: Deployed MicroBOSH (Success)
Step 2: Deployed BOSH using MicroBOSH (Success)
Step 3: Deploying CF using BOSH (In Progress)..

These are the steps I followed from the standard Cloud Foundry documentation..

My problem is not with the environment; it is why it is taking the MicroBOSH IP for deploying CF while I am using the BOSH Director..

Thanks..


Parthiban Annadurai

unread,
Apr 8, 2015, 3:31:06 AM4/8/15
to vcap...@cloudfoundry.org
@Johannes.. Thanks for your suggestions.. I just rebooted the Director VM, and now the issue is gone..

Once again Thanks..

Johannes Hiemer

unread,
Apr 8, 2015, 3:31:35 AM4/8/15
to vcap...@cloudfoundry.org
Parthiban, sorry, I can't help more here. The way is as follows:

deploy your microbosh
target your microbosh (director)
deploy cloudfoundry via microbosh

See http://docs.cloudfoundry.org/deploying/openstack/ steps 4 and 5, which outline nearly the same steps you need to take on vSphere, except for the parts that are OpenStack-specific, like Security Groups, etc.

So deploy microbosh and target that instance. Then use microbosh to deploy cloudfoundry. Please do this, and come back with questions.
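In practice (old-style BOSH CLI commands; the default microbosh credentials were admin/admin unless changed) the sequence looks roughly like:

bosh target https://<microbosh-ip>:25555    # point the CLI at the MicroBOSH director
bosh login                                  # authenticate against the director
bosh deployment /path/to/cf-deployment.yml  # select the CF manifest
bosh deploy                                 # deploy Cloud Foundry via that director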
...

Parthiban Annadurai

unread,
Apr 8, 2015, 3:34:56 AM4/8/15
to vcap...@cloudfoundry.org
@Johannes.. Thanks.. I will let you know after trying the steps you provided..


Parthiban Annadurai

unread,
Apr 8, 2015, 7:32:50 AM4/8/15
to vcap...@cloudfoundry.org
@Johannes.. Following your valuable suggestion, I just tried to install CF with the MicroBOSH instance, and it throws the following,

bosh deployment /root/oss/cf-release/cf-deployment.yml
Deployment set to `/root/oss/cf-release/cf-deployment.yml'
[root@ie1aul0414 microbosh1]# bosh deploy


Processing deployment manifest
------------------------------
Getting deployment properties from director...
Compiling deployment manifest...
Please review all changes carefully

Deploying
---------
Deployment name: `cf-deployment.yml'
Director name: `microbosh'

Are you sure you want to deploy? (type 'yes' to continue): yes

Director task 12
  Started preparing deployment
  Started preparing deployment > Binding deployment. Done (00:00:00)
  Started preparing deployment > Binding releases. Done (00:00:00)
  Started preparing deployment > Binding existing deployment. Failed: Timed out sending `get_state' to cfe8ca82-58ca-4813-a0c6-fec063c01c98 after 45 seconds (00:02:15)

Error 450002: Timed out sending `get_state' to cfe8ca82-58ca-4813-a0c6-fec063c01c98 after 45 seconds


Task 12 error

For a more detailed error report, run: bosh task 12 --debug

And, could you please tell me how to locate the compilation VMs on vSphere during bosh deploy execution??

Thanks in Well Advance..

Johannes Hiemer

unread,
Apr 8, 2015, 7:35:27 AM4/8/15
to vcap...@cloudfoundry.org
Parthiban, 
you had this error before, if I remember correctly. What we were thinking about was some network/firewall issue, but I am not sure how you got around it.

Normally you should be able to see the compilation VMs in your vSphere Environment. They should be located in the folder you specified in your deployment manifest.
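If it helps, the folder in question is the one configured in the vSphere settings of the director/MicroBOSH manifest; from memory the keys look roughly like this (names are examples, exact keys depend on the CPI version):

cloud:
  properties:
    vcenters:
    - host: vcenter.example.local
      datacenters:
      - name: DC1
        vm_folder: cf_vms             # running VMs, including temporary compilation VMs, appear here
        template_folder: cf_templates
        disk_path: cf_disks

Compilation VMs only exist while packages are being compiled and are deleted again afterwards.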
...

Parthiban Annadurai

unread,
Apr 8, 2015, 8:19:04 AM4/8/15
to vcap...@cloudfoundry.org
@Johannes.. Thanks a lot for all your kind replies and for showing such patience in helping me.. I faced this error last time as well, but back then I started fresh and proceeded further.. This time I have not got past this error, so what I am thinking is that it might be because of the hardware requirements.

Could you please share the hardware requirements you used to deploy CF on vSphere using MicroBOSH, since you have deployed it successfully??
It could be helpful to everyone. I checked my own requirements against the docs.cloudfoundry.org documentation..

Thanks in Well Advance..


Parthiban Annadurai

unread,
Apr 9, 2015, 2:10:34 AM4/9/15
to vcap...@cloudfoundry.org, Johannes Hiemer, Dmitriy Kalinin, Dr Nic Williams, Mark Watson
Hi All,
        Somehow I got past that error for the first time. Now my question is: the command bosh ssh shows the following,

[root@ie1aul0414 ~]# bosh ssh

Processing deployment manifest
------------------------------
1. ha_proxy_z1/0
2. nats_z1/0
3. nats_z2/0
4. etcd_z1/0
5. etcd_z1/1
6. etcd_z2/0
7. stats_z1/0
8. nfs_z1/0
9. postgres_z1/0
10. uaa_z1/0
11. uaa_z2/0
12. login_z1/0
13. login_z2/0
14. api_z1/0
15. api_z2/0
16. clock_global/0
17. api_worker_z1/0
18. api_worker_z2/0
19. hm9000_z1/0
20. hm9000_z2/0
21. runner_z1/0
22. runner_z2/0
23. loggregator_z1/0
24. loggregator_z2/0
25. loggregator_trafficcontroller_z1/0
26. loggregator_trafficcontroller_z2/0
27. router_z1/0
28. router_z2/0
29. acceptance_tests/0
30. smoke_tests/0


Then, should bosh vms show 30 VMs, or how many??

Sorry, maybe this is very basic knowledge, but I am not able to interpret it, since I am not an expert..

Could anyone please help??

Thanks in Well Advance..

Johannes Hiemer

unread,
Apr 9, 2015, 2:25:44 AM4/9/15
to vcap...@cloudfoundry.org, jvhi...@gmail.com, dkal...@pivotal.io, drnicw...@gmail.com, watso...@gmail.com
Parthiban,
first of all, congratulations on getting further. What would be interesting is: how did you solve the timeout error?

Looking at the list of VMs, this looks rather complete. You can double-check it against your jobs by counting the number of instances you defined in your deployment manifest.
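A rough sanity check (generic commands; adjust the file and deployment names to yours):

grep -c 'instances:' cf-deployment.yml     # approximate count of instance definitions in the manifest
bosh vms cloudfoundry | grep -c running    # VMs the director currently reports as running

(Note that jobs with lifecycle: errand, such as acceptance_tests and smoke_tests, only get a VM while the errand actually runs, so they may not show up in bosh vms.)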

Regards,
Johannes
...

Parthiban Annadurai

unread,
Apr 9, 2015, 2:35:14 AM4/9/15
to vcap...@cloudfoundry.org, Mark Watson
@Johannes.. The way I resolved it was by allocating the static IP range in the manifest. The Director was trying to allocate already-used IPs for the rest of the deployment, so I just changed the range; a warm thanks to Mark Watson for this valuable suggestion.. Also, Johannes, in my manifest the instance count is 30, so as you said it should show 30 VMs after a successful deployment, right??
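For anyone hitting the same thing: the range in question is the static block of the network subnet in the CF manifest, roughly like this (addresses are examples only):

networks:
- name: cf1
  subnets:
  - range: 192.168.33.0/24
    gateway: 192.168.33.1
    reserved:
    - 192.168.33.2 - 192.168.33.49
    static:
    - 192.168.33.50 - 192.168.33.80   # must be wide enough for every job that needs a static IP
    cloud_properties:
      name: CF-33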

Thanks..


Johannes Hiemer

unread,
Apr 9, 2015, 2:38:52 AM4/9/15
to vcap...@cloudfoundry.org, watso...@gmail.com
Yes of course. Defined 30, deployed 30. Easy math. :-)
...

Parthiban Annadurai

unread,
Apr 9, 2015, 3:00:23 AM4/9/15
to vcap...@cloudfoundry.org, Mark Watson
@Johannes.. But the problem now is that the same error is showing up in some other phase.. Not in the previous phase, but in a different one..

Any inputs..

Thanks..


Johannes Hiemer

unread,
Apr 9, 2015, 3:09:19 AM4/9/15
to vcap...@cloudfoundry.org, watso...@gmail.com
Parthiban, how should I understand what you describe here? :-)

Please provide more, and better structured, information.
...

Parthiban Annadurai

unread,
Apr 9, 2015, 3:16:31 AM4/9/15
to vcap...@cloudfoundry.org
Johannes, sorry for the confusion.. Now my bosh deploy command shows the error at the following phase:

Started binding instance vms
  Started binding instance vms > ha_proxy_z1/0
  Started binding instance vms > nats_z2/0
  Started binding instance vms > nats_z1/0
  Started binding instance vms > etcd_z1/0
  Started binding instance vms > etcd_z1/1
  Started binding instance vms > etcd_z2/0
  Started binding instance vms > stats_z1/0
  Started binding instance vms > nfs_z1/0
  Started binding instance vms > postgres_z1/0
  Started binding instance vms > uaa_z1/0
  Started binding instance vms > uaa_z2/0
  Started binding instance vms > login_z1/0
  Started binding instance vms > login_z2/0
  Started binding instance vms > api_z1/0
     Done binding instance vms > ha_proxy_z1/0 (00:00:00)
  Started binding instance vms > api_z2/0
     Done binding instance vms > nats_z1/0 (00:00:00)
  Started binding instance vms > clock_global/0
     Done binding instance vms > etcd_z2/0 (00:00:00)
     Done binding instance vms > etcd_z1/1 (00:00:00)
  Started binding instance vms > api_worker_z1/0
     Done binding instance vms > nats_z2/0 (00:00:00)
  Started binding instance vms > api_worker_z2/0
     Done binding instance vms > postgres_z1/0 (00:00:00)
  Started binding instance vms > hm9000_z1/0
  Started binding instance vms > hm9000_z2/0
     Done binding instance vms > etcd_z1/0 (00:00:00)
  Started binding instance vms > runner_z1/0
     Done binding instance vms > stats_z1/0 (00:00:00)
  Started binding instance vms > runner_z2/0
  Started binding instance vms > loggregator_z1/0
     Done binding instance vms > uaa_z1/0 (00:00:00)
  Started binding instance vms > loggregator_z2/0
  Started binding instance vms > loggregator_trafficcontroller_z1/0
  Started binding instance vms > loggregator_trafficcontroller_z2/0
  Started binding instance vms > router_z1/0
  Started binding instance vms > router_z2/0
     Done binding instance vms > api_z1/0 (00:00:00)
     Done binding instance vms > api_worker_z1/0 (00:00:00)
     Done binding instance vms > clock_global/0 (00:00:00)
     Done binding instance vms > nfs_z1/0 (00:00:00)
     Done binding instance vms > api_z2/0 (00:00:00)
     Done binding instance vms > runner_z1/0 (00:00:00)
     Done binding instance vms > uaa_z2/0 (00:00:00)
     Done binding instance vms > login_z1/0 (00:00:00)
     Done binding instance vms > loggregator_z2/0 (00:00:00)
     Done binding instance vms > hm9000_z2/0 (00:00:00)
     Done binding instance vms > loggregator_z1/0 (00:00:01)
     Done binding instance vms > loggregator_trafficcontroller_z2/0 (00:00:01)
     Done binding instance vms > runner_z2/0 (00:00:01)
     Done binding instance vms > hm9000_z1/0 (00:00:01)
     Done binding instance vms > loggregator_trafficcontroller_z1/0 (00:00:01)
     Done binding instance vms > router_z2/0 (00:00:01)
     Done binding instance vms > router_z1/0 (00:00:01)
   Failed binding instance vms > login_z2/0: Timed out sending `apply' to e2ddc127-d592-420b-9d0b-393671c34af8 after 45 seconds (00:00:45)
   Failed binding instance vms > api_worker_z2/0: Timed out sending `apply' to bd6c6394-ef92-4879-a4eb-6bd1c9831e36 after 45 seconds (00:00:45)
   Failed binding instance vms (00:00:45)

Error 450002: Timed out sending `apply' to e2ddc127-d592-420b-9d0b-393671c34af8 after 45 seconds

Task 62 error

I think I have now provided the detailed information..

Thanks..


Johannes Hiemer

unread,
Apr 9, 2015, 3:21:21 AM4/9/15
to vcap...@cloudfoundry.org
Perhaps the same issue you described before, which Mark Watson mentioned? As I said, debugging those things remotely, without the interactivity of a shell, is pure hell.
...

Parthiban Annadurai

unread,
Apr 9, 2015, 3:23:21 AM4/9/15
to vcap...@cloudfoundry.org
Thanks Johannes.. I will try on my own and let you know.. Just shared for your reference..


Parthiban Annadurai

unread,
Apr 9, 2015, 7:31:59 AM4/9/15
to vcap...@cloudfoundry.org
Hi All,
         Could anyone please tell me from which file (i.e., which Ruby file or other source) the error status below is coming??

Error 450002: Timed out sending `get_state' to e2ddc127-d592-420b-9d0b-393671c34af8 after 45 seconds

Seriously, I am not able to trace the error in the logs or anywhere else..
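(If anyone else needs to trace it: the message is raised by the BOSH director itself rather than by the failing VM, so a plain grep over the director's gems on the director VM should locate the source, e.g.

grep -rn "Timed out sending" /var/vcap/packages/director/gem_home/

The path is the same gem_home that appears in the director's own stack traces.)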

Thanks..


Parthiban Annadurai

unread,
Apr 9, 2015, 7:49:05 AM4/9/15
to vcap...@cloudfoundry.org
Hi All,
         If I make any changes to the CF manifest file, how can I redeploy all the deployed VMs?? I have tried bosh deploy --recreate as well as bosh deploy --redact-diff, but nothing is reflected.. I have changed some IPs of the jobs, but that is not reflected; it still shows the jobs with the same IPs..

Could anyone help??

Thanks..

Johannes Hiemer

unread,
Apr 9, 2015, 7:52:07 AM4/9/15
to vcap...@cloudfoundry.org
Did you execute Spiff before? The way is normally:

1) Change manifest
2) Run spiff
3) bosh deploy 
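
For example (the helper script below is the one shipped with cf-release as far as I remember; adjust the paths to your checkout):

cd ~/oss/cf-release
./scripts/generate_deployment_manifest vsphere /path/to/cf-stub.yml > cf-deployment.yml
bosh deployment cf-deployment.yml
bosh deploy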

Regards,
Johannes
...

Parthiban Annadurai

unread,
Apr 9, 2015, 7:54:14 AM4/9/15
to vcap...@cloudfoundry.org
@Johannes.. FYI, I changed the generated manifest file directly.. Is that not allowed??

Thanks..


Johannes Hiemer

unread,
Apr 9, 2015, 7:56:12 AM4/9/15
to vcap...@cloudfoundry.org
I am not sure if it is allowed or not. Personally I don't like these giant yml files, and for that reason I split those deployment manifests up into areas, like you can see here:

...

Parthiban Annadurai

unread,
Apr 9, 2015, 8:00:45 AM4/9/15
to vcap...@cloudfoundry.org
@Johannes.. Generally, what I do is edit the cf-stub.yml file at cf-release/spec/fixture/vsphere/cf-stub.yml, generate the manifest using SPIFF, and after that change the stemcell and release names and versions. Then I try to deploy..

Thanks..


Parthiban Annadurai

unread,
Apr 13, 2015, 3:40:13 AM4/13/15
to Guruprakash S, vcap...@cloudfoundry.org
Hi All,
         Seriously, I don't know what happened to the deployment; it got past the above error. Now I am stuck with the following error,


bosh deployment /root/oss/cf-release/cf-deployment.yml
Deployment set to `/root/oss/cf-release/cf-deployment.yml'
[root@ie1aul0414 ~]# bosh deploy


Processing deployment manifest
------------------------------
Getting deployment properties from director...
Compiling deployment manifest...
Please review all changes carefully

Deploying
---------
Deployment name: `cf-deployment.yml'
Director name: `microbosh'
Are you sure you want to deploy? (type 'yes' to continue): yes

Director task 382

  Started preparing deployment
  Started preparing deployment > Binding deployment. Done (00:00:00)
  Started preparing deployment > Binding releases. Done (00:00:00)
  Started preparing deployment > Binding existing deployment. Done (00:00:00)

  Started preparing deployment > Binding resource pools. Done (00:00:00)
  Started preparing deployment > Binding stemcells. Done (00:00:00)
  Started preparing deployment > Binding templates. Done (00:00:00)
  Started preparing deployment > Binding properties. Done (00:00:00)
  Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
  Started preparing deployment > Binding instance networks. Done (00:00:00)
     Done preparing deployment (00:00:00)

  Started preparing package compilation > Finding packages to compile. Done (00:00:00)

  Started preparing dns > Binding DNS. Done (00:00:00)

  Started creating bound missing vms
  Started creating bound missing vms > small_z1/0
  Started creating bound missing vms > medium_z1/0
   Failed creating bound missing vms > small_z1/0: execution expired (00:02:08)
   Failed creating bound missing vms > medium_z1/0: execution expired (00:02:08)
   Failed creating bound missing vms (00:02:08)

Error 100: execution expired

Task 382 error

For a more detailed error report, run: bosh task 382 --debug


Has anyone come across this error before??

Could anyone please help with this issue??

Thanks in Well Advance..

On 13 April 2015 at 09:45, Parthiban Annadurai <senji...@gmail.com> wrote:
Hi GuruPrakash,
                        Have you found the issue I am facing by taking a look at the manifest file which I shared with you??

Thanks in Well Advance..

On 10 April 2015 at 16:02, Parthiban Annadurai <senji...@gmail.com> wrote:
Hi GuruPrakash. Let me try to explain what I am trying to do.. Please take a look and respond whenever you can..

See, in the SPIFF-generated manifest there is no IP allocated for the acceptance and smoke tests; it tries to allocate one during bosh deploy, and in my case that job is not showing up at all.

- instances: 1
  lifecycle: errand
  name: acceptance_tests
  networks:
  - name: cf1
  resource_pool: small_errand
  templates:
  - name: acceptance-tests
    release: cf
- instances: 1
  lifecycle: errand
  name: smoke_tests
  networks:
  - name: cf1
  properties:
    networks:
      apps: cf1
  resource_pool: small_errand
  templates:
  - name: smoke-tests
    release: cf

So now what I am planning is: why can't I allocate the IPs myself, like below, from my IP range, before the deploy proceeds,

- instances: 1
  lifecycle: errand
  name: acceptance_tests
  networks:
  - name: cf1
    static_ips:
    - 192.168.33.70
  resource_pool: small_errand
  templates:
  - name: acceptance-tests
    release: cf
- instances: 1
  lifecycle: errand
  name: smoke_tests
  networks:
  - name: cf1
    static_ips:
    - 192.168.33.71
  properties:
    networks:
      apps: cf1
  resource_pool: small_errand
  templates:
  - name: smoke-tests
    release: cf


Are you able to get my point??

Hope you may help on this..

Thanks..

On 10 April 2015 at 12:36, Parthiban Annadurai <senji...@gmail.com> wrote:
And GuruPrakash, can we allocate the IPs ourselves for the jobs directly in the manifest?? Is that acceptable??

Thanks..

On 10 April 2015 at 12:35, Parthiban Annadurai <senji...@gmail.com> wrote:
First, a warm thanks for helping me out on this, GuruPrakash.. With this mail I am attaching my cf-deployment.yml file for your reference.. Please find it below and let me know if you find any conflicts at any time, since I have been stuck on this for the past 3 weeks..

Once again Thanks..

On 10 April 2015 at 12:24, Guruprakash S <prakas...@gmail.com> wrote:
You can share your manifest with all the confidential info removed. I shall try to take a look tomorrow morning and will update you. Thanks


On Thursday, April 9, 2015, Parthiban Annadurai <senji...@gmail.com> wrote:
GuruPrakash, that is the problem. I have 30 jobs in my manifest, so I gave about 20 IPs to cf1 and 16 to cf2, I guess. Now, my question is how many we should give (i.e., how did you specify the range in your manifest) for a successful deployment??

Could you please help me solve this issue??

Thanks in Well Advance..

On 10 April 2015 at 12:11, Guruprakash S <prakas...@gmail.com> wrote:
Parthi,

Can you assign more static IPs in the static IPs section of the networking part of your yaml file? This is because workers & canaries, if mentioned at all in your yaml file, need some extra IPs apart from the ones needed for CF components like router, etcd, hm, etc.

After adding more static IPs, re-run the deployment through bosh and see if it goes through.

Thanks,
Guruprakash


On Thursday, April 9, 2015, Parthiban Annadurai <senji...@gmail.com> wrote:
@GuruPrakash.. First, a very warm thanks for your valuable reply.. But the problem is that I am not able to find the timed-out VM in the vSphere console, since it has created around 28 to 30 VMs.. Is there any way to find it exactly??

Thanks in Well Advance..

On 9 April 2015 at 23:09, Guruprakash S <prakas...@gmail.com> wrote:
Hey Parthi,

Try this document and see if it helps with the timeout issue that you are facing.

http://docs.cloudfoundry.org/running/troubleshooting.html

Parthiban Annadurai

unread,
Apr 13, 2015, 9:08:28 AM4/13/15
to Guruprakash S, vcap...@cloudfoundry.org
Hi All,
              I have got past the above errors by increasing the resources on vSphere. Thanks all.. Now I am stuck with the following.


bosh deployment /root/oss/cf-release/cf-deployment.yml
Deployment set to `/root/oss/cf-release/cf-deployment.yml'
[root@ie1aul0414 ~]# bosh deploy

Processing deployment manifest
------------------------------
Getting deployment properties from director...
Compiling deployment manifest...
Please review all changes carefully

Deploying
---------
Deployment name: `cf-deployment.yml'
Director name: `microbosh'
Are you sure you want to deploy? (type 'yes' to continue): yes

Director task 395

  Started preparing deployment
  Started preparing deployment > Binding deployment. Done (00:00:00)
  Started preparing deployment > Binding releases. Done (00:00:00)
  Started preparing deployment > Binding existing deployment. Done (00:00:00)
  Started preparing deployment > Binding resource pools. Done (00:00:00)
  Started preparing deployment > Binding stemcells. Done (00:00:00)
  Started preparing deployment > Binding templates. Done (00:00:00)
  Started preparing deployment > Binding properties. Done (00:00:01)

  Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
  Started preparing deployment > Binding instance networks. Done (00:00:00)
     Done preparing deployment (00:00:01)


  Started preparing package compilation > Finding packages to compile. Done (00:00:00)

  Started preparing dns > Binding DNS. Done (00:00:00)

  Started creating bound missing vms
  Started creating bound missing vms > small_z1/0
  Started creating bound missing vms > medium_z1/0
     Done creating bound missing vms > small_z1/0 (00:00:44)
     Done creating bound missing vms > medium_z1/0 (00:00:46)
     Done creating bound missing vms (00:00:46)

  Started binding instance vms

  Started binding instance vms > loggregator_trafficcontroller_z1/0
  Started binding instance vms > postgres_z1/0. Done (00:00:00)
     Done binding instance vms > loggregator_trafficcontroller_z1/0 (00:00:00)
     Done binding instance vms (00:00:00)

  Started preparing configuration > Binding configuration. Done (00:00:05)

  Started updating job ha_proxy_z1 > ha_proxy_z1/0 (canary). Failed: `ha_proxy_z1/0' is not running after update (00:10:06)

Error 400007: `ha_proxy_z1/0' is not running after update

Task 395 error

For a more detailed error report, run: bosh task 395 --debug


Hope someone can help with this. I have attached my manifest to this mail. Please find it below.

Thanks in Well Advance..
cf-deployment.txt

Johannes Hiemer

unread,
Apr 13, 2015, 9:13:20 AM4/13/15
to vcap...@cloudfoundry.org, prakas...@gmail.com
Hi,
please provide the debug log of the task, and afterwards SSH into the instance to take a look into

/var/vcap/sys/logs.
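For example (job name and task number taken from this thread):

bosh task 395 --debug        # full debug log of the failing deploy task
bosh ssh ha_proxy_z1 0       # open a shell on the failing instance
ls -l /var/vcap/sys/log/     # per-job logs are collected under this directory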
...

Parthiban Annadurai

unread,
Apr 13, 2015, 9:16:59 AM4/13/15
to vcap...@cloudfoundry.org
@Johannes.. FYI,

D, [2015-04-13 12:33:49 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:33:54 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:33:59 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:34:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000165s) BEGIN
D, [2015-04-13 12:34:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000287s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-13 12:34:04.422992+0000' WHERE ("id" = 395)
D, [2015-04-13 12:34:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.001075s) COMMIT
D, [2015-04-13 12:34:04 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:34:09 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:34:14 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:34:19 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:34:24 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:34:29 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:34:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000201s) BEGIN
D, [2015-04-13 12:34:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000411s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-13 12:34:34.425973+0000' WHERE ("id" = 395)
D, [2015-04-13 12:34:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.001259s) COMMIT
D, [2015-04-13 12:34:34 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:34:39 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:34:44 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
I, [2015-04-13 12:34:48 #31034] [canary_update(ha_proxy_z1/0)]  INFO -- DirectorJobRunner: Checking if ha_proxy_z1/0 has been updated after 63.333333333333336 seconds
D, [2015-04-13 12:34:48 #31034] [canary_update(ha_proxy_z1/0)] DEBUG -- DirectorJobRunner: SENT: agent.07477f12-af4d-47fc-bb2e-d3ff95f9bb5b {"method":"get_state","arguments":[],"reply_to":"director.2502dd46-ce40-4772-b10f-386482f5d7eb.8fb20b94-0776-42de-8319-fe14b8b166e1"}
D, [2015-04-13 12:34:48 #31034] [] DEBUG -- DirectorJobRunner: RECEIVED: director.2502dd46-ce40-4772-b10f-386482f5d7eb.8fb20b94-0776-42de-8319-fe14b8b166e1 {"value":{"properties":{"logging":{"max_log_file_size":""}},"job":{"name":"ha_proxy_z1","release":"","template":"haproxy","version":"7bd402abf9d8fe86c1adf649c791fed92a03653c","sha1":"88a89bd6cd01c9b1390a2f0d27443c83b3a697a9","blobstore_id":"5fed26eb-e946-4216-952a-235dc99fd0a1","templates":[{"name":"haproxy","version":"7bd402abf9d8fe86c1adf649c791fed92a03653c","sha1":"88a89bd6cd01c9b1390a2f0d27443c83b3a697a9","blobstore_id":"5fed26eb-e946-4216-952a-235dc99fd0a1"},{"name":"metron_agent","version":"4cf0a43aa50c72ea4dd4538f7289aa97f68de3cd","sha1":"304b93276d9df64042ba994bbca9601e66db2256","blobstore_id":"9b3e412b-98ce-438d-9652-4d7de52fbf67"}]},"packages":{"common":{"name":"common","version":"43595236d1ce5f9a6120198108c226c07ab17012.1","sha1":"c002fcd08d711baf15707f6e0824324ecc8d96ce","blobstore_id":"584fc8bd-c835-4b8e-7b18-98fed900c30c"},"haproxy":{"name":"haproxy","version":"630ad6d6e1d3cab4547ce104f3019b483f354613.1","sha1":"5e7b97d56c0f76ad009366fb2a2faad96e8abe25","blobstore_id":"19670058-d247-48d6-658e-26768b5823f9"},"metron_agent":{"name":"metron_agent","version":"b6b63ba2f186801009546f6d190516452b4ce7b0.1","sha1":"1021acfd05f8a6481120fd3d259900094436cd84","blobstore_id":"48a32300-6250-4bbe-6966-d360cae6e9fc"}},"configuration_hash":"2b1c3ce6fdb95d1f73348e240052833a7eaeacf9","networks":{"cf1":{"cloud_properties":{"name":"CF-33"},"default":["dns","gateway"],"dns":["10.255.232.50","10.255.237.40"],"dns_record_name":"0.ha-proxy-z1.cf1.cloudfoundry.microbosh","gateway":"192.168.33.1","ip":"192.168.33.51","netmask":"255.255.255.0"}},"resource_pool":{"cloud_properties":{"cpu":1,"disk":2048,"ram":1024},"name":"router_z1","stemcell":{"name":"bosh-vsphere-esxi-ubuntu-trusty-go_agent","version":"2865"}},"deployment":"cloudfoundry","index":0,"persistent_disk":0,"persistent_disk_pool":null,"rendered_templates_archive":{"sha1":"427abe50d93f952292b4153a8db4947322a87042","blobstore_id":"23728c62-9164-45e5-b37f-d94de04e01e9"},"agent_id":"07477f12-af4d-47fc-bb2e-d3ff95f9bb5b","bosh_protocol":"1","job_state":"failing","vm":{"name":"vm-6a77bb07-1f72-485e-b0d4-074c43a1004e"},"ntp":{"message":"bad ntp server"}}}
I, [2015-04-13 12:34:48 #31034] [canary_update(ha_proxy_z1/0)]  INFO -- DirectorJobRunner: Waiting for 63.333333333333336 seconds to check ha_proxy_z1/0 status
D, [2015-04-13 12:34:49 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:34:54 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:34:59 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:35:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000194s) BEGIN
D, [2015-04-13 12:35:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000369s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-13 12:35:04.429838+0000' WHERE ("id" = 395)
D, [2015-04-13 12:35:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.001314s) COMMIT
D, [2015-04-13 12:35:04 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:35:09 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:35:14 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:35:19 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:35:24 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:35:29 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:35:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000204s) BEGIN
D, [2015-04-13 12:35:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000370s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-13 12:35:34.433544+0000' WHERE ("id" = 395)
D, [2015-04-13 12:35:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.001347s) COMMIT
D, [2015-04-13 12:35:34 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:35:39 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:35:44 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:35:49 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
I, [2015-04-13 12:35:52 #31034] [canary_update(ha_proxy_z1/0)]  INFO -- DirectorJobRunner: Checking if ha_proxy_z1/0 has been updated after 63.333333333333336 seconds
D, [2015-04-13 12:35:52 #31034] [canary_update(ha_proxy_z1/0)] DEBUG -- DirectorJobRunner: SENT: agent.07477f12-af4d-47fc-bb2e-d3ff95f9bb5b {"method":"get_state","arguments":[],"reply_to":"director.2502dd46-ce40-4772-b10f-386482f5d7eb.186653d1-6e37-44f6-af95-1a7fdc2b0b51"}
D, [2015-04-13 12:35:52 #31034] [] DEBUG -- DirectorJobRunner: RECEIVED: director.2502dd46-ce40-4772-b10f-386482f5d7eb.186653d1-6e37-44f6-af95-1a7fdc2b0b51 {"value":{"properties":{"logging":{"max_log_file_size":""}},"job":{"name":"ha_proxy_z1","release":"","template":"haproxy","version":"7bd402abf9d8fe86c1adf649c791fed92a03653c","sha1":"88a89bd6cd01c9b1390a2f0d27443c83b3a697a9","blobstore_id":"5fed26eb-e946-4216-952a-235dc99fd0a1","templates":[{"name":"haproxy","version":"7bd402abf9d8fe86c1adf649c791fed92a03653c","sha1":"88a89bd6cd01c9b1390a2f0d27443c83b3a697a9","blobstore_id":"5fed26eb-e946-4216-952a-235dc99fd0a1"},{"name":"metron_agent","version":"4cf0a43aa50c72ea4dd4538f7289aa97f68de3cd","sha1":"304b93276d9df64042ba994bbca9601e66db2256","blobstore_id":"9b3e412b-98ce-438d-9652-4d7de52fbf67"}]},"packages":{"common":{"name":"common","version":"43595236d1ce5f9a6120198108c226c07ab17012.1","sha1":"c002fcd08d711baf15707f6e0824324ecc8d96ce","blobstore_id":"584fc8bd-c835-4b8e-7b18-98fed900c30c"},"haproxy":{"name":"haproxy","version":"630ad6d6e1d3cab4547ce104f3019b483f354613.1","sha1":"5e7b97d56c0f76ad009366fb2a2faad96e8abe25","blobstore_id":"19670058-d247-48d6-658e-26768b5823f9"},"metron_agent":{"name":"metron_agent","version":"b6b63ba2f186801009546f6d190516452b4ce7b0.1","sha1":"1021acfd05f8a6481120fd3d259900094436cd84","blobstore_id":"48a32300-6250-4bbe-6966-d360cae6e9fc"}},"configuration_hash":"2b1c3ce6fdb95d1f73348e240052833a7eaeacf9","networks":{"cf1":{"cloud_properties":{"name":"CF-33"},"default":["dns","gateway"],"dns":["10.255.232.50","10.255.237.40"],"dns_record_name":"0.ha-proxy-z1.cf1.cloudfoundry.microbosh","gateway":"192.168.33.1","ip":"192.168.33.51","netmask":"255.255.255.0"}},"resource_pool":{"cloud_properties":{"cpu":1,"disk":2048,"ram":1024},"name":"router_z1","stemcell":{"name":"bosh-vsphere-esxi-ubuntu-trusty-go_agent","version":"2865"}},"deployment":"cloudfoundry","index":0,"persistent_disk":0,"persistent_disk_pool":null,"rendered_templates_archive":{"sha1":"427abe50d93f952292b4153a8db4947322a87042","blobstore_id":"23728c62-9164-45e5-b37f-d94de04e01e9"},"agent_id":"07477f12-af4d-47fc-bb2e-d3ff95f9bb5b","bosh_protocol":"1","job_state":"failing","vm":{"name":"vm-6a77bb07-1f72-485e-b0d4-074c43a1004e"},"ntp":{"message":"bad ntp server"}}}
I, [2015-04-13 12:35:52 #31034] [canary_update(ha_proxy_z1/0)]  INFO -- DirectorJobRunner: Waiting for 63.333333333333336 seconds to check ha_proxy_z1/0 status
D, [2015-04-13 12:35:54 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:35:59 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:36:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000171s) BEGIN
D, [2015-04-13 12:36:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000379s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-13 12:36:04.437429+0000' WHERE ("id" = 395)
D, [2015-04-13 12:36:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.001124s) COMMIT
D, [2015-04-13 12:36:04 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:36:09 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:36:14 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:36:19 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:36:24 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:36:29 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:36:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000199s) BEGIN
D, [2015-04-13 12:36:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000396s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-13 12:36:34.440954+0000' WHERE ("id" = 395)
D, [2015-04-13 12:36:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000960s) COMMIT
D, [2015-04-13 12:36:34 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:36:39 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:36:44 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:36:49 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:36:54 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
I, [2015-04-13 12:36:55 #31034] [canary_update(ha_proxy_z1/0)]  INFO -- DirectorJobRunner: Checking if ha_proxy_z1/0 has been updated after 63.333333333333336 seconds
D, [2015-04-13 12:36:55 #31034] [canary_update(ha_proxy_z1/0)] DEBUG -- DirectorJobRunner: SENT: agent.07477f12-af4d-47fc-bb2e-d3ff95f9bb5b {"method":"get_state","arguments":[],"reply_to":"director.2502dd46-ce40-4772-b10f-386482f5d7eb.0ffa28a8-934a-464c-a039-c862fce4645d"}
D, [2015-04-13 12:36:55 #31034] [] DEBUG -- DirectorJobRunner: RECEIVED: director.2502dd46-ce40-4772-b10f-386482f5d7eb.0ffa28a8-934a-464c-a039-c862fce4645d {"value":{"properties":{"logging":{"max_log_file_size":""}},"job":{"name":"ha_proxy_z1","release":"","template":"haproxy","version":"7bd402abf9d8fe86c1adf649c791fed92a03653c","sha1":"88a89bd6cd01c9b1390a2f0d27443c83b3a697a9","blobstore_id":"5fed26eb-e946-4216-952a-235dc99fd0a1","templates":[{"name":"haproxy","version":"7bd402abf9d8fe86c1adf649c791fed92a03653c","sha1":"88a89bd6cd01c9b1390a2f0d27443c83b3a697a9","blobstore_id":"5fed26eb-e946-4216-952a-235dc99fd0a1"},{"name":"metron_agent","version":"4cf0a43aa50c72ea4dd4538f7289aa97f68de3cd","sha1":"304b93276d9df64042ba994bbca9601e66db2256","blobstore_id":"9b3e412b-98ce-438d-9652-4d7de52fbf67"}]},"packages":{"common":{"name":"common","version":"43595236d1ce5f9a6120198108c226c07ab17012.1","sha1":"c002fcd08d711baf15707f6e0824324ecc8d96ce","blobstore_id":"584fc8bd-c835-4b8e-7b18-98fed900c30c"},"haproxy":{"name":"haproxy","version":"630ad6d6e1d3cab4547ce104f3019b483f354613.1","sha1":"5e7b97d56c0f76ad009366fb2a2faad96e8abe25","blobstore_id":"19670058-d247-48d6-658e-26768b5823f9"},"metron_agent":{"name":"metron_agent","version":"b6b63ba2f186801009546f6d190516452b4ce7b0.1","sha1":"1021acfd05f8a6481120fd3d259900094436cd84","blobstore_id":"48a32300-6250-4bbe-6966-d360cae6e9fc"}},"configuration_hash":"2b1c3ce6fdb95d1f73348e240052833a7eaeacf9","networks":{"cf1":{"cloud_properties":{"name":"CF-33"},"default":["dns","gateway"],"dns":["10.255.232.50","10.255.237.40"],"dns_record_name":"0.ha-proxy-z1.cf1.cloudfoundry.microbosh","gateway":"192.168.33.1","ip":"192.168.33.51","netmask":"255.255.255.0"}},"resource_pool":{"cloud_properties":{"cpu":1,"disk":2048,"ram":1024},"name":"router_z1","stemcell":{"name":"bosh-vsphere-esxi-ubuntu-trusty-go_agent","version":"2865"}},"deployment":"cloudfoundry","index":0,"persistent_disk":0,"persistent_disk_pool":null,"rendered_templates_archive":{"sha1":"427abe50d93f952292b4153a8db4947322a87042","blobstore_id":"23728c62-9164-45e5-b37f-d94de04e01e9"},"agent_id":"07477f12-af4d-47fc-bb2e-d3ff95f9bb5b","bosh_protocol":"1","job_state":"failing","vm":{"name":"vm-6a77bb07-1f72-485e-b0d4-074c43a1004e"},"ntp":{"message":"bad ntp server"}}}
I, [2015-04-13 12:36:55 #31034] [canary_update(ha_proxy_z1/0)]  INFO -- DirectorJobRunner: Waiting for 63.333333333333336 seconds to check ha_proxy_z1/0 status
D, [2015-04-13 12:36:59 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:37:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000184s) BEGIN
D, [2015-04-13 12:37:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000390s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-13 12:37:04.444270+0000' WHERE ("id" = 395)
D, [2015-04-13 12:37:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.001203s) COMMIT
D, [2015-04-13 12:37:04 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:37:09 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:37:14 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:37:19 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:37:24 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:37:29 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:37:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000262s) BEGIN
D, [2015-04-13 12:37:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000397s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-13 12:37:34.447769+0000' WHERE ("id" = 395)
D, [2015-04-13 12:37:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.001159s) COMMIT
D, [2015-04-13 12:37:34 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:37:39 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:37:44 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:37:49 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:37:54 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
I, [2015-04-13 12:37:58 #31034] [canary_update(ha_proxy_z1/0)]  INFO -- DirectorJobRunner: Checking if ha_proxy_z1/0 has been updated after 63.333333333333336 seconds
D, [2015-04-13 12:37:58 #31034] [canary_update(ha_proxy_z1/0)] DEBUG -- DirectorJobRunner: SENT: agent.07477f12-af4d-47fc-bb2e-d3ff95f9bb5b {"method":"get_state","arguments":[],"reply_to":"director.2502dd46-ce40-4772-b10f-386482f5d7eb.70951e70-8df4-48d2-8325-6c853d7c9a1e"}
D, [2015-04-13 12:37:58 #31034] [] DEBUG -- DirectorJobRunner: RECEIVED: director.2502dd46-ce40-4772-b10f-386482f5d7eb.70951e70-8df4-48d2-8325-6c853d7c9a1e {"value":{"properties":{"logging":{"max_log_file_size":""}},"job":{"name":"ha_proxy_z1","release":"","template":"haproxy","version":"7bd402abf9d8fe86c1adf649c791fed92a03653c","sha1":"88a89bd6cd01c9b1390a2f0d27443c83b3a697a9","blobstore_id":"5fed26eb-e946-4216-952a-235dc99fd0a1","templates":[{"name":"haproxy","version":"7bd402abf9d8fe86c1adf649c791fed92a03653c","sha1":"88a89bd6cd01c9b1390a2f0d27443c83b3a697a9","blobstore_id":"5fed26eb-e946-4216-952a-235dc99fd0a1"},{"name":"metron_agent","version":"4cf0a43aa50c72ea4dd4538f7289aa97f68de3cd","sha1":"304b93276d9df64042ba994bbca9601e66db2256","blobstore_id":"9b3e412b-98ce-438d-9652-4d7de52fbf67"}]},"packages":{"common":{"name":"common","version":"43595236d1ce5f9a6120198108c226c07ab17012.1","sha1":"c002fcd08d711baf15707f6e0824324ecc8d96ce","blobstore_id":"584fc8bd-c835-4b8e-7b18-98fed900c30c"},"haproxy":{"name":"haproxy","version":"630ad6d6e1d3cab4547ce104f3019b483f354613.1","sha1":"5e7b97d56c0f76ad009366fb2a2faad96e8abe25","blobstore_id":"19670058-d247-48d6-658e-26768b5823f9"},"metron_agent":{"name":"metron_agent","version":"b6b63ba2f186801009546f6d190516452b4ce7b0.1","sha1":"1021acfd05f8a6481120fd3d259900094436cd84","blobstore_id":"48a32300-6250-4bbe-6966-d360cae6e9fc"}},"configuration_hash":"2b1c3ce6fdb95d1f73348e240052833a7eaeacf9","networks":{"cf1":{"cloud_properties":{"name":"CF-33"},"default":["dns","gateway"],"dns":["10.255.232.50","10.255.237.40"],"dns_record_name":"0.ha-proxy-z1.cf1.cloudfoundry.microbosh","gateway":"192.168.33.1","ip":"192.168.33.51","netmask":"255.255.255.0"}},"resource_pool":{"cloud_properties":{"cpu":1,"disk":2048,"ram":1024},"name":"router_z1","stemcell":{"name":"bosh-vsphere-esxi-ubuntu-trusty-go_agent","version":"2865"}},"deployment":"cloudfoundry","index":0,"persistent_disk":0,"persistent_disk_pool":null,"rendered_templates_archive":{"sha1":"427abe50d93f952292b4153a8db4947322a87042","blobstore_id":"23728c62-9164-45e5-b37f-d94de04e01e9"},"agent_id":"07477f12-af4d-47fc-bb2e-d3ff95f9bb5b","bosh_protocol":"1","job_state":"failing","vm":{"name":"vm-6a77bb07-1f72-485e-b0d4-074c43a1004e"},"ntp":{"message":"bad ntp server"}}}
I, [2015-04-13 12:37:58 #31034] [canary_update(ha_proxy_z1/0)]  INFO -- DirectorJobRunner: Waiting for 63.333333333333336 seconds to check ha_proxy_z1/0 status
D, [2015-04-13 12:37:59 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:38:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000192s) BEGIN
D, [2015-04-13 12:38:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000572s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-13 12:38:04.451525+0000' WHERE ("id" = 395)
D, [2015-04-13 12:38:04 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000769s) COMMIT
D, [2015-04-13 12:38:04 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:38:09 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:38:14 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:38:19 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:38:24 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:38:29 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:38:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000211s) BEGIN
D, [2015-04-13 12:38:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.000395s) UPDATE "tasks" SET "checkpoint_time" = '2015-04-13 12:38:34.455148+0000' WHERE ("id" = 395)
D, [2015-04-13 12:38:34 #31034] [task:395-checkpoint] DEBUG -- DirectorJobRunner: (0.001106s) COMMIT
D, [2015-04-13 12:38:34 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:38:39 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:38:44 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:38:49 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:38:54 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:38:59 #31034] [] DEBUG -- DirectorJobRunner: Renewing lock: lock:deployment:cloudfoundry
I, [2015-04-13 12:39:02 #31034] [canary_update(ha_proxy_z1/0)]  INFO -- DirectorJobRunner: Checking if ha_proxy_z1/0 has been updated after 63.333333333333336 seconds
D, [2015-04-13 12:39:02 #31034] [canary_update(ha_proxy_z1/0)] DEBUG -- DirectorJobRunner: SENT: agent.07477f12-af4d-47fc-bb2e-d3ff95f9bb5b {"method":"get_state","arguments":[],"reply_to":"director.2502dd46-ce40-4772-b10f-386482f5d7eb.7729b595-6a49-4513-94c1-ca7465166a9f"}
D, [2015-04-13 12:39:02 #31034] [] DEBUG -- DirectorJobRunner: RECEIVED: director.2502dd46-ce40-4772-b10f-386482f5d7eb.7729b595-6a49-4513-94c1-ca7465166a9f {"value":{"properties":{"logging":{"max_log_file_size":""}},"job":{"name":"ha_proxy_z1","release":"","template":"haproxy","version":"7bd402abf9d8fe86c1adf649c791fed92a03653c","sha1":"88a89bd6cd01c9b1390a2f0d27443c83b3a697a9","blobstore_id":"5fed26eb-e946-4216-952a-235dc99fd0a1","templates":[{"name":"haproxy","version":"7bd402abf9d8fe86c1adf649c791fed92a03653c","sha1":"88a89bd6cd01c9b1390a2f0d27443c83b3a697a9","blobstore_id":"5fed26eb-e946-4216-952a-235dc99fd0a1"},{"name":"metron_agent","version":"4cf0a43aa50c72ea4dd4538f7289aa97f68de3cd","sha1":"304b93276d9df64042ba994bbca9601e66db2256","blobstore_id":"9b3e412b-98ce-438d-9652-4d7de52fbf67"}]},"packages":{"common":{"name":"common","version":"43595236d1ce5f9a6120198108c226c07ab17012.1","sha1":"c002fcd08d711baf15707f6e0824324ecc8d96ce","blobstore_id":"584fc8bd-c835-4b8e-7b18-98fed900c30c"},"haproxy":{"name":"haproxy","version":"630ad6d6e1d3cab4547ce104f3019b483f354613.1","sha1":"5e7b97d56c0f76ad009366fb2a2faad96e8abe25","blobstore_id":"19670058-d247-48d6-658e-26768b5823f9"},"metron_agent":{"name":"metron_agent","version":"b6b63ba2f186801009546f6d190516452b4ce7b0.1","sha1":"1021acfd05f8a6481120fd3d259900094436cd84","blobstore_id":"48a32300-6250-4bbe-6966-d360cae6e9fc"}},"configuration_hash":"2b1c3ce6fdb95d1f73348e240052833a7eaeacf9","networks":{"cf1":{"cloud_properties":{"name":"CF-33"},"default":["dns","gateway"],"dns":["10.255.232.50","10.255.237.40"],"dns_record_name":"0.ha-proxy-z1.cf1.cloudfoundry.microbosh","gateway":"192.168.33.1","ip":"192.168.33.51","netmask":"255.255.255.0"}},"resource_pool":{"cloud_properties":{"cpu":1,"disk":2048,"ram":1024},"name":"router_z1","stemcell":{"name":"bosh-vsphere-esxi-ubuntu-trusty-go_agent","version":"2865"}},"deployment":"cloudfoundry","index":0,"persistent_disk":0,"persistent_disk_pool":null,"rendered_templates_archive":{"sha1":"427abe50d93f952292b4153a8db4947322a87042","blobstore_id":"23728c62-9164-45e5-b37f-d94de04e01e9"},"agent_id":"07477f12-af4d-47fc-bb2e-d3ff95f9bb5b","bosh_protocol":"1","job_state":"failing","vm":{"name":"vm-6a77bb07-1f72-485e-b0d4-074c43a1004e"},"ntp":{"message":"bad ntp server"}}}
E, [2015-04-13 12:39:02 #31034] [canary_update(ha_proxy_z1/0)] ERROR -- DirectorJobRunner: Error updating canary instance: #<Bosh::Director::AgentJobNotRunning: `ha_proxy_z1/0' is not running after update>
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/instance_updater.rb:85:in `update'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/job_updater.rb:74:in `block (2 levels) in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/job_updater.rb:72:in `block in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/event_log.rb:97:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/event_log.rb:97:in `advance_and_track'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/job_updater.rb:71:in `update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/job_updater.rb:65:in `block (2 levels) in update_canaries'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:77:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:63:in `loop'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:63:in `block in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'
D, [2015-04-13 12:39:02 #31034] [] DEBUG -- DirectorJobRunner: Worker thread raised exception: `ha_proxy_z1/0' is not running after update - /var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/instance_updater.rb:85:in `update'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/job_updater.rb:74:in `block (2 levels) in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/job_updater.rb:72:in `block in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/event_log.rb:97:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/event_log.rb:97:in `advance_and_track'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/job_updater.rb:71:in `update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/job_updater.rb:65:in `block (2 levels) in update_canaries'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:77:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:63:in `loop'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:63:in `block in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'
D, [2015-04-13 12:39:02 #31034] [] DEBUG -- DirectorJobRunner: Thread is no longer needed, cleaning up
D, [2015-04-13 12:39:02 #31034] [task:395] DEBUG -- DirectorJobRunner: Shutting down pool
D, [2015-04-13 12:39:02 #31034] [task:395] DEBUG -- DirectorJobRunner: (0.000745s) SELECT "stemcells".* FROM "stemcells" INNER JOIN "deployments_stemcells" ON (("deployments_stemcells"."stemcell_id" = "stemcells"."id") AND ("deployments_stemcells"."deployment_id" = 3))
D, [2015-04-13 12:39:02 #31034] [task:395] DEBUG -- DirectorJobRunner: Deleting lock: lock:deployment:cloudfoundry
D, [2015-04-13 12:39:02 #31034] [] DEBUG -- DirectorJobRunner: Lock renewal thread exiting
D, [2015-04-13 12:39:02 #31034] [task:395] DEBUG -- DirectorJobRunner: Deleted lock: lock:deployment:cloudfoundry
I, [2015-04-13 12:39:02 #31034] [task:395]  INFO -- DirectorJobRunner: sending update deployment error event
D, [2015-04-13 12:39:02 #31034] [task:395] DEBUG -- DirectorJobRunner: SENT: hm.director.alert {"id":"2116decd-a809-409c-9671-bd9fca3260e2","severity":3,"title":"director - error during update deployment","summary":"Error during update deployment for cloudfoundry against Director d88bce25-25f1-4873-9e7b-5c554e2deabe: #<Bosh::Director::AgentJobNotRunning: `ha_proxy_z1/0' is not running after update>","created_at":1428928742}
E, [2015-04-13 12:39:02 #31034] [task:395] ERROR -- DirectorJobRunner: `ha_proxy_z1/0' is not running after update
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/instance_updater.rb:85:in `update'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/job_updater.rb:74:in `block (2 levels) in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/job_updater.rb:72:in `block in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/event_log.rb:97:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/event_log.rb:97:in `advance_and_track'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/job_updater.rb:71:in `update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2865.0/lib/bosh/director/job_updater.rb:65:in `block (2 levels) in update_canaries'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:77:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:63:in `loop'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2865.0/lib/common/thread_pool.rb:63:in `block in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'
D, [2015-04-13 12:39:02 #31034] [task:395] DEBUG -- DirectorJobRunner: (0.000102s) BEGIN
D, [2015-04-13 12:39:02 #31034] [task:395] DEBUG -- DirectorJobRunner: (0.000457s) UPDATE "tasks" SET "state" = 'error', "timestamp" = '2015-04-13 12:39:02.309854+0000', "description" = 'create deployment', "result" = '`ha_proxy_z1/0'' is not running after update', "output" = '/var/vcap/store/director/tasks/395', "checkpoint_time" = '2015-04-13 12:38:34.455148+0000', "type" = 'update_deployment', "username" = 'admin' WHERE ("id" = 395)
D, [2015-04-13 12:39:02 #31034] [task:395] DEBUG -- DirectorJobRunner: (0.001249s) COMMIT
I, [2015-04-13 12:39:02 #31034] []  INFO -- DirectorJobRunner: Task took 10 minutes 59.805384995000054 seconds to process.

Task 395 error


Thanks..


Johannes Hiemer

unread,
Apr 13, 2015, 9:20:05 AM4/13/15
to vcap...@cloudfoundry.org
This is not enough. Please share the logs from the VM as well.
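
If it helps, the job logs can be pulled off the failing instance with the old-style BOSH CLI used in this thread; a quick sketch (--job limits the tarball to the job logs rather than the agent logs):

# fetch the ha_proxy job logs into a local tarball
bosh logs ha_proxy_z1 0 --job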


On Monday, April 13, 2015 at 3:16:59 PM UTC+2, Parthiban Annadurai wrote:
@Johannes.. FYI,

...

Parthiban Annadurai

unread,
Apr 13, 2015, 9:35:21 AM4/13/15
to vcap...@cloudfoundry.org
@Johannes.. FYI, I have just copied all the files under /var/vcap/sys/log on the ha_proxy VM.. Please find them below.. If you spot anything wrong in them, let me know..

Thanks in advance..

metron_agent.stderr.log
metron_agent.stdout.log
metron_agent_ctl.err.log
metron_agent_ctl.log

Johannes Hiemer

unread,
Apr 13, 2015, 9:47:08 AM4/13/15
to vcap...@cloudfoundry.org
So there seems to be an error with the syslog. Could you please use v205 instead of v202? If I remember correctly, there were some issues with v202. Afterwards, run the deployment again.
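
Switching releases could look roughly like this; a sketch using the old CLI, assuming v205 is fetched from bosh.io (the download URL is an assumption about where the release is published) and the releases: section of the manifest is bumped to match:

# upload the newer cf-release to the director
bosh upload release https://bosh.io/d/github.com/cloudfoundry/cf-release?v=205
# point the manifest's releases: entry at version 205, then redeploy
bosh deployment /root/oss/cf-release/cf-deployment.yml
bosh deploy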

Regards,
Johannes
...

Parthiban Annadurai

unread,
Apr 13, 2015, 9:53:47 AM4/13/15
to vcap...@cloudfoundry.org
@Johannes.. But the file named metron_agent.stdout.log shows the following:

{"timestamp":1428929311.434787273,"process_id":5980,"source":"metron","log_level":"warn","message":"Failed to create client: Could not connect to NATS: nats: No servers available for connection","data":null,"file":"/var/vcap/data/compile/metron_agent/loggregator/src/github.com/cloudfoundry/loggregatorlib/cfcomponent/registrars/collectorregistrar/collector_registrar.go","line":41,"method":"github.com/cloudfoundry/loggregatorlib/cfcomponent/registrars/collectorregistrar.(*CollectorRegistrar).Run"}

Is it okay??

Thanks..
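
That warning means the metron agent cannot reach any NATS server. A quick reachability check from the ha_proxy VM, as a sketch (192.168.33.52 and 192.168.33.142 are the nats_z1/nats_z2 addresses from the earlier bosh vms listing, and 4222 is the default NATS client port):

# verify the NATS servers answer on the client port
nc -zv 192.168.33.52 4222
nc -zv 192.168.33.142 4222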


Parthiban Annadurai

unread,
Apr 14, 2015, 1:28:26 AM4/14/15
to vcap...@cloudfoundry.org, Johannes Hiemer
Hi All,
          In my deployment the ha_proxy_z1 VM keeps failing, so I tried to start it manually and it shows the following:

root@07477f12-af4d-47fc-bb2e-d3ff95f9bb5b:/var/vcap/jobs/haproxy/bin# ./haproxy_ctl start
Starting HAProxy: Tue Apr 14 05:38:05 UTC 2015: Starting HAProxy
[ALERT] 103/053805 (31226) : parsing [/var/vcap/jobs/haproxy/config/haproxy.config:23] : 'bind :443' : unable to load SSL private key from PEM file '/var/vcap/jobs/haproxy/config/cert.pem'.
[ALERT] 103/053805 (31226) : parsing [/var/vcap/jobs/haproxy/config/haproxy.config:32] : 'bind :4443' : unable to load SSL private key from PEM file '/var/vcap/jobs/haproxy/config/cert.pem'.
[ALERT] 103/053805 (31226) : Error(s) found in configuration file : /var/vcap/jobs/haproxy/config/haproxy.config
[ALERT] 103/053805 (31226) : Proxy 'https-in': no SSL certificate specified for bind ':443' at [/var/vcap/jobs/haproxy/config/haproxy.config:23] (use 'crt').
[ALERT] 103/053805 (31226) : Proxy 'ssl-in': no SSL certificate specified for bind ':4443' at [/var/vcap/jobs/haproxy/config/haproxy.config:32] (use 'crt').
[ALERT] 103/053805 (31226) : Fatal errors found in configuration.
Tue Apr 14 05:38:05 UTC 2015: Errored starting HAProxy
Tue Apr 14 05:38:05 UTC 2015: Errored starting HAProxy
FAILED - check logs
haproxy.
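
Those alerts suggest /var/vcap/jobs/haproxy/config/cert.pem does not contain a usable certificate/key pair. A quick check on the VM, as a sketch (assuming an RSA key; HAProxy expects certificate and private key concatenated in the same file):

# does the PEM contain a certificate?
openssl x509 -in /var/vcap/jobs/haproxy/config/cert.pem -noout -subject -dates
# does it also contain a valid private key?
openssl rsa -in /var/vcap/jobs/haproxy/config/cert.pem -check -noout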


Could anyone help with this??

Thanks in advance..

Parthiban Annadurai

unread,
Apr 14, 2015, 4:21:37 AM4/14/15
to vcap...@cloudfoundry.org, Johannes Hiemer
Hi All,
          It was a problem with the SSL PEM.. I just replaced the certificates and that got past the error..

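For reference, a minimal sketch of generating a self-signed certificate for the system domain; the file names and the wildcard CN are only examples, and the resulting PEM is what the haproxy job's ha_proxy.ssl_pem manifest property expects (certificate and private key concatenated):

# generate a throw-away self-signed certificate and key
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=*.192.168.33.54.xip.io" \
  -keyout cf.key -out cf.crt
# HAProxy wants cert and key in a single PEM file
cat cf.crt cf.key > cf.pem

With a valid certificate in place, the deployment now gets further, but it throws the following:
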
[root@ie1aul0414 ~]# bosh deployment /root/oss/cf-release/cf-deployment.yml

Deployment set to `/root/oss/cf-release/cf-deployment.yml'
[root@ie1aul0414 ~]# bosh deploy

Processing deployment manifest
------------------------------
Getting deployment properties from director...
Compiling deployment manifest...
Please review all changes carefully

Deploying
---------
Deployment name: `cf-deployment.yml'
Director name: `microbosh'
Are you sure you want to deploy? (type 'yes' to continue): yes

Director task 420

  Started preparing deployment
  Started preparing deployment > Binding deployment. Done (00:00:00)
  Started preparing deployment > Binding releases. Done (00:00:00)
  Started preparing deployment > Binding existing deployment. Done (00:00:01)

  Started preparing deployment > Binding resource pools. Done (00:00:00)
  Started preparing deployment > Binding stemcells. Done (00:00:00)
  Started preparing deployment > Binding templates. Done (00:00:00)
  Started preparing deployment > Binding properties. Done (00:00:00)
  Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
  Started preparing deployment > Binding instance networks. Done (00:00:00)
     Done preparing deployment (00:00:01)

  Started preparing package compilation > Finding packages to compile. Done (00:00:00)

  Started preparing dns > Binding DNS. Done (00:00:00)

  Started preparing configuration > Binding configuration. Done (00:00:02)

  Started updating job ha_proxy_z1 > ha_proxy_z1/0 (canary). Done (00:01:05)
  Started updating job nats_z1 > nats_z1/0 (canary). Done (00:01:47)
  Started updating job nats_z2 > nats_z2/0 (canary). Done (00:00:42)
  Started updating job etcd_z1
  Started updating job etcd_z1 > etcd_z1/0 (canary). Done (00:01:02)
  Started updating job etcd_z1 > etcd_z1/1. Done (00:01:37)
     Done updating job etcd_z1 (00:02:39)
  Started updating job etcd_z2 > etcd_z2/0 (canary). Done (00:00:56)
  Started updating job stats_z1 > stats_z1/0 (canary). Done (00:01:48)
  Started updating job nfs_z1 > nfs_z1/0 (canary). Done (00:01:55)
  Started updating job postgres_z1 > postgres_z1/0 (canary). Done (00:00:58)
  Started updating job uaa_z1 > uaa_z1/0 (canary). Done (00:00:49)
  Started updating job uaa_z2 > uaa_z2/0 (canary). Done (00:00:50)
  Started updating job login_z1 > login_z1/0 (canary). Done (00:01:57)
  Started updating job login_z2 > login_z2/0 (canary). Done (00:01:53)
  Started updating job api_z1 > api_z1/0 (canary). Failed: `api_z1/0' is not running after update (00:16:35)

Error 400007: `api_z1/0' is not running after update

Task 420 error

For a more detailed error report, run: bosh task 420 --debug
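
Beyond the task debug output, it usually helps to look at the failing instance directly. A sketch with the old CLI (the cloud_controller_ng log path is an assumption about which process on api_z1 is unhappy):

# open a shell on the failing instance
bosh ssh api_z1 0
# on the VM, see which processes monit reports as not running
sudo /var/vcap/bosh/bin/monit summary
# then inspect the relevant job logs, e.g. the Cloud Controller
sudo tail -n 100 /var/vcap/sys/log/cloud_controller_ng/*.log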


Could anyone help??

Thanks..

Parthiban Annadurai

unread,
Apr 14, 2015, 7:37:02 AM4/14/15
to vcap...@cloudfoundry.org, Johannes Hiemer, Dr Nic Williams, Alvise Dorigo, Guruprakash S, Stanley Kao, Mark Watson, Dmitriy Kalinin
Hi All,
         Finally, I have successfully deployed CF on vSphere using MicroBOSH, with all of your help.. Thanks a ton, all..

Regards

Parthiban A

Johannes Hiemer

unread,
Apr 14, 2015, 7:40:21 AM4/14/15
to vcap...@cloudfoundry.org, jvhi...@gmail.com, drnicw...@gmail.com, alvi...@gmail.com, prakas...@gmail.com, stanle...@gmail.com, watso...@gmail.com, dkal...@pivotal.io
Great Parthiban, congrats! :-)
...

Dr Nic Williams

unread,
Apr 14, 2015, 10:18:58 AM4/14/15
to Johannes Hiemer, vcap...@cloudfoundry.org, alvi...@gmail.com, dkal...@pivotal.io, jvhi...@gmail.com, prakas...@gmail.com, stanle...@gmail.com, watso...@gmail.com
Congrats! 