Error 400007: `api_z1/0' is not running after update


Arunava Basu

Apr 15, 2015, 10:05:40 AM
to bosh-...@cloudfoundry.org
I am getting 'Error 400007: `api_z1/0' is not running after update' while trying to deploy CF v202 on vCloud Director.

bosh deploy error:
 Started updating job api_z1 > api_z1/0 (canary). Failed: `api_z1/0' is not running after update (00:23:39)

Error 400007: `api_z1/0' is not running after update

bosh task --debug:
[2015-04-15 13:46:57 #18081] [canary_update(api_z1/0)] ERROR -- DirectorJobRunner: Error updating canary instance: #<Bosh::Director::AgentJobNotRunning: `api_z1/0' is not running after update>
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/instance_updater.rb:85:in `update'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/job_updater.rb:74:in `block (2 levels) in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/job_updater.rb:72:in `block in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/event_log.rb:97:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/event_log.rb:97:in `advance_and_track'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/job_updater.rb:71:in `update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/job_updater.rb:65:in `block (2 levels) in update_canaries'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_pool.rb:77:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_pool.rb:63:in `loop'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_pool.rb:63:in `block in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'
D, [2015-04-15 13:46:57 #18081] [] DEBUG -- DirectorJobRunner: Worker thread raised exception: `api_z1/0' is not running after update - /var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/instance_updater.rb:85:in `update'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/job_updater.rb:74:in `block (2 levels) in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/job_updater.rb:72:in `block in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/event_log.rb:97:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/event_log.rb:97:in `advance_and_track'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/job_updater.rb:71:in `update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/job_updater.rb:65:in `block (2 levels) in update_canaries'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_pool.rb:77:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_pool.rb:63:in `loop'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_pool.rb:63:in `block in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'
D, [2015-04-15 13:46:57 #18081] [] DEBUG -- DirectorJobRunner: Thread is no longer needed, cleaning up
D, [2015-04-15 13:46:57 #18081] [task:56] DEBUG -- DirectorJobRunner: Shutting down pool
D, [2015-04-15 13:46:57 #18081] [task:56] DEBUG -- DirectorJobRunner: (0.000977s) SELECT "stemcells".* FROM "stemcells" INNER JOIN "deployments_stemcells" ON (("deployments_stemcells"."stemcell_id" = "stemcells"."id") AND ("deployments_stemcells"."deployment_id" = 1))
D, [2015-04-15 13:46:57 #18081] [task:56] DEBUG -- DirectorJobRunner: Deleting lock: lock:deployment:canopy-ocf-tai-dev
D, [2015-04-15 13:46:57 #18081] [task:56] DEBUG -- DirectorJobRunner: Deleted lock: lock:deployment:canopy-ocf-tai-dev
I, [2015-04-15 13:46:57 #18081] [task:56]  INFO -- DirectorJobRunner: sending update deployment error event
D, [2015-04-15 13:46:57 #18081] [task:56] DEBUG -- DirectorJobRunner: SENT: hm.director.alert {"id":"c713e889-e28e-4308-a684-aac338a8a1b3","severity":3,"title":"director - error during update deployment","summary":"Error during update deployment for canopy-ocf-tai-dev against Director bc0b1140-b484-4cff-bd8b-276dcc3bb461: #<Bosh::Director::AgentJobNotRunning: `api_z1/0' is not running after update>","created_at":1429105617}
E, [2015-04-15 13:46:57 #18081] [task:56] ERROR -- DirectorJobRunner: `api_z1/0' is not running after update
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/instance_updater.rb:85:in `update'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/job_updater.rb:74:in `block (2 levels) in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/job_updater.rb:72:in `block in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/event_log.rb:97:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/event_log.rb:97:in `advance_and_track'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/job_updater.rb:71:in `update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2859.0/lib/bosh/director/job_updater.rb:65:in `block (2 levels) in update_canaries'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_pool.rb:77:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_pool.rb:63:in `loop'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2859.0/lib/common/thread_pool.rb:63:in `block in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'
D, [2015-04-15 13:46:57 #18081] [task:56] DEBUG -- DirectorJobRunner: (0.000144s) BEGIN
D, [2015-04-15 13:46:57 #18081] [task:56] DEBUG -- DirectorJobRunner: (0.000627s) UPDATE "tasks" SET "state" = 'error', "timestamp" = '2015-04-15 13:46:57.597221+0000', "description" = 'create deployment', "result" = '`api_z1/0'' is not running after update', "output" = '/var/vcap/store/director/tasks/56', "checkpoint_time" = '2015-04-15 13:46:45.275188+0000', "type" = 'update_deployment', "username" = 'admin' WHERE ("id" = 56)
D, [2015-04-15 13:46:57 #18081] [] DEBUG -- DirectorJobRunner: Lock renewal thread exiting
D, [2015-04-15 13:46:57 #18081] [task:56] DEBUG -- DirectorJobRunner: (0.001253s) COMMIT
I, [2015-04-15 13:46:57 #18081] []  INFO -- DirectorJobRunner: Task took 23 minutes 43.567434737999974 seconds to process.


After `bosh ssh api_z1`, I got the following output.

root@c997731f-d1ea-4532-88e0-ce54b032847f:/var/vcap/sys/log# monit summary
The Monit daemon 5.2.4 uptime: 13m

Process 'cloud_controller_ng'       Connection failed
Process 'cloud_controller_worker_local_1' not monitored
Process 'cloud_controller_worker_local_2' not monitored
Process 'nginx_cc'                  initializing
Process 'metron_agent'              running
File 'nfs_mounter'                  accessible
System 'system_c997731f-d1ea-4532-88e0-ce54b032847f' running

root@c997731f-d1ea-4532-88e0-ce54b032847f:/var/vcap/sys/log# tail -f *.log
==> cloud_controller_ng_ctl.err.log <==
[2015-04-15 13:38:01+0000] ------------ STARTING cloud_controller_ng_ctl at Wed Apr 15 13:36:55 UTC 2015 --------------
[2015-04-15 13:41:49+0000] ------------ STARTING cloud_controller_ng_ctl at Wed Apr 15 13:38:02 UTC 2015 --------------
[2015-04-15 13:42:54+0000] ------------ STARTING cloud_controller_ng_ctl at Wed Apr 15 13:41:50 UTC 2015 --------------
[2015-04-15 13:43:52+0000] ------------ STARTING cloud_controller_ng_ctl at Wed Apr 15 13:42:55 UTC 2015 --------------
[2015-04-15 13:45:01+0000] ------------ STARTING cloud_controller_ng_ctl at Wed Apr 15 13:43:53 UTC 2015 --------------
[2015-04-15 13:46:32+0000] ------------ STARTING cloud_controller_ng_ctl at Wed Apr 15 13:45:02 UTC 2015 --------------
[2015-04-15 13:47:56+0000] ------------ STARTING cloud_controller_ng_ctl at Wed Apr 15 13:46:33 UTC 2015 --------------

==> cloud_controller_ng_ctl.log <==
[2015-04-15 13:46:32+0000] Deprecated: Use -s or --insert-seed flag
[2015-04-15 13:46:32+0000] Killing /var/vcap/sys/run/cloud_controller_ng/cloud_controller_ng.pid: 6127
[2015-04-15 13:46:32+0000] .Stopped
[2015-04-15 13:47:56+0000] ------------ STARTING cloud_controller_ng_ctl at Wed Apr 15 13:46:33 UTC 2015 --------------
[2015-04-15 13:47:56+0000] Preparing local package directory
[2015-04-15 13:47:56+0000] Preparing local resource_pool directory
[2015-04-15 13:47:56+0000] Preparing local droplet directory
[2015-04-15 13:47:56+0000] Deprecated: Use -s or --insert-seed flag
[2015-04-15 13:47:56+0000] Killing /var/vcap/sys/run/cloud_controller_ng/cloud_controller_ng.pid: 7035
[2015-04-15 13:47:56+0000] .Stopped

==> cloud_controller_worker_ctl.err.log <==
[2015-04-15 13:42:47+0000] ------------ STARTING cloud_controller_worker_ctl at Wed Apr 15 13:42:03 UTC 2015 --------------
[2015-04-15 13:42:53+0000] ------------ STARTING cloud_controller_worker_ctl at Wed Apr 15 13:42:04 UTC 2015 --------------
[2015-04-15 13:43:51+0000] ------------ STARTING cloud_controller_worker_ctl at Wed Apr 15 13:43:06 UTC 2015 --------------
[2015-04-15 13:43:51+0000] ------------ STARTING cloud_controller_worker_ctl at Wed Apr 15 13:43:07 UTC 2015 --------------
[2015-04-15 13:44:51+0000] ------------ STARTING cloud_controller_worker_ctl at Wed Apr 15 13:44:04 UTC 2015 --------------
[2015-04-15 13:45:00+0000] ------------ STARTING cloud_controller_worker_ctl at Wed Apr 15 13:44:05 UTC 2015 --------------
[2015-04-15 13:46:19+0000] ------------ STARTING cloud_controller_worker_ctl at Wed Apr 15 13:45:13 UTC 2015 --------------
[2015-04-15 13:46:31+0000] ------------ STARTING cloud_controller_worker_ctl at Wed Apr 15 13:45:14 UTC 2015 --------------
[2015-04-15 13:47:42+0000] ------------ STARTING cloud_controller_worker_ctl at Wed Apr 15 13:46:45 UTC 2015 --------------
[2015-04-15 13:47:55+0000] ------------ STARTING cloud_controller_worker_ctl at Wed Apr 15 13:46:46 UTC 2015 --------------

==> cloud_controller_worker_ctl.log <==
[2015-04-15 13:47:55+0000] [Worker(cc_api_worker.api_z1.0.2)] Job VCAP::CloudController::Jobs::ExceptionCatchingJob (id=11987) RUNNING
[2015-04-15 13:47:55+0000] [Worker(cc_api_worker.api_z1.0.2)] Job VCAP::CloudController::Jobs::ExceptionCatchingJob (id=11987) COMPLETED after 15.1268
[2015-04-15 13:47:55+0000] [Worker(cc_api_worker.api_z1.0.2)] Job VCAP::CloudController::Jobs::ExceptionCatchingJob (id=11989) RUNNING
[2015-04-15 13:47:55+0000] [Worker(cc_api_worker.api_z1.0.2)] Job VCAP::CloudController::Jobs::ExceptionCatchingJob (id=11989) COMPLETED after 0.0284
[2015-04-15 13:47:55+0000] [Worker(cc_api_worker.api_z1.0.2)] Job VCAP::CloudController::Jobs::ExceptionCatchingJob (id=11990) RUNNING
[2015-04-15 13:47:55+0000] [Worker(cc_api_worker.api_z1.0.2)] Exiting...
[2015-04-15 13:47:55+0000] [Worker(cc_api_worker.api_z1.0.2)] Job VCAP::CloudController::Jobs::ExceptionCatchingJob (id=11990) COMPLETED after 41.1403
[2015-04-15 13:47:55+0000] [Worker(cc_api_worker.api_z1.0.2)] 3 jobs processed at 0.0532 j/s, 0 failed
[2015-04-15 13:47:55+0000] Killing /var/vcap/sys/run/cloud_controller_ng/cloud_controller_worker_2.pid: 7330
[2015-04-15 13:47:55+0000] .............Stopped

==> metron_agent_ctl.err.log <==
[2015-04-15 13:36:55+0000] ------------ STARTING metron_agent_ctl at Wed Apr 15 13:36:54 UTC 2015 --------------

==> metron_agent_ctl.log <==
[2015-04-15 13:36:55+0000] ------------ STARTING metron_agent_ctl at Wed Apr 15 13:36:54 UTC 2015 --------------
[2015-04-15 13:36:55+0000] rsyslog stop/waiting
[2015-04-15 13:36:55+0000] rsyslog start/running, process 1615

==> nfs_mounter_ctl.err.log <==
[2015-04-15 13:45:02+0000] mount.nfs: trying 192.168.0.17 prog 100003 vers 3 prot TCP port 2049
[2015-04-15 13:45:02+0000] mount.nfs: trying 192.168.0.17 prog 100005 vers 3 prot UDP port 38168
[2015-04-15 13:46:33+0000] stop: Unknown instance:
[2015-04-15 13:46:33+0000] mount.nfs: mount(2): No such file or directory
[2015-04-15 13:46:33+0000] mount.nfs: trying 192.168.0.17 prog 100003 vers 3 prot TCP port 2049
[2015-04-15 13:46:33+0000] mount.nfs: trying 192.168.0.17 prog 100005 vers 3 prot UDP port 38168
[2015-04-15 13:47:57+0000] stop: Unknown instance:
[2015-04-15 13:47:57+0000] mount.nfs: mount(2): No such file or directory
[2015-04-15 13:47:57+0000] mount.nfs: trying 192.168.0.17 prog 100003 vers 3 prot TCP port 2049
[2015-04-15 13:47:57+0000] mount.nfs: trying 192.168.0.17 prog 100005 vers 3 prot UDP port 38168

==> nfs_mounter_ctl.log <==
[2015-04-15 13:47:57+0000] idmapd start/post-stop, process 7965
[2015-04-15 13:47:57+0000] Found NFS mount, unmounting...
[2015-04-15 13:47:57+0000] NFS unmounted
[2015-04-15 13:47:57+0000] Mounting NFS...
[2015-04-15 13:47:57+0000] mount.nfs: timeout set for Wed Apr 15 13:49:57 2015
[2015-04-15 13:47:57+0000] mount.nfs: trying text-based options 'timeo=10,intr,lookupcache=positive,vers=4,addr=192.168.0.17,clientaddr=192.168.0.86'
[2015-04-15 13:47:57+0000] mount.nfs: trying text-based options 'timeo=10,intr,lookupcache=positive,addr=192.168.0.17'
[2015-04-15 13:47:57+0000] mount.nfs: prog 100003, trying vers=3, prot=6
[2015-04-15 13:47:57+0000] mount.nfs: prog 100005, trying vers=3, prot=17
[2015-04-15 13:47:57+0000] NFS mounted

==> nginx_ctl.err.log <==
[2015-04-15 13:43:08+0000] ------------ STARTING nginx_ctl at Wed Apr 15 13:43:08 UTC 2015 --------------
[2015-04-15 13:43:08+0000] nginx: [emerg] duplicate location /admin/ in /var/vcap/jobs/cloud_controller_ng/config/nginx.conf:88
[2015-04-15 13:44:06+0000] ------------ STARTING nginx_ctl at Wed Apr 15 13:44:06 UTC 2015 --------------
[2015-04-15 13:44:06+0000] nginx: [emerg] duplicate location /admin/ in /var/vcap/jobs/cloud_controller_ng/config/nginx.conf:88
[2015-04-15 13:45:15+0000] ------------ STARTING nginx_ctl at Wed Apr 15 13:45:15 UTC 2015 --------------
[2015-04-15 13:45:15+0000] nginx: [emerg] duplicate location /admin/ in /var/vcap/jobs/cloud_controller_ng/config/nginx.conf:88
[2015-04-15 13:46:47+0000] ------------ STARTING nginx_ctl at Wed Apr 15 13:46:47 UTC 2015 --------------
[2015-04-15 13:46:47+0000] nginx: [emerg] duplicate location /admin/ in /var/vcap/jobs/cloud_controller_ng/config/nginx.conf:88
[2015-04-15 13:48:11+0000] ------------ STARTING nginx_ctl at Wed Apr 15 13:48:11 UTC 2015 --------------
[2015-04-15 13:48:11+0000] nginx: [emerg] duplicate location /admin/ in /var/vcap/jobs/cloud_controller_ng/config/nginx.conf:88

==> nginx_ctl.log <==
[2015-04-15 13:43:08+0000] ------------ STARTING nginx_ctl at Wed Apr 15 13:43:08 UTC 2015 --------------
[2015-04-15 13:43:08+0000] Removing stale pidfile...
[2015-04-15 13:44:06+0000] ------------ STARTING nginx_ctl at Wed Apr 15 13:44:06 UTC 2015 --------------
[2015-04-15 13:44:06+0000] Removing stale pidfile...
[2015-04-15 13:45:15+0000] ------------ STARTING nginx_ctl at Wed Apr 15 13:45:15 UTC 2015 --------------
[2015-04-15 13:45:15+0000] Removing stale pidfile...
[2015-04-15 13:46:47+0000] ------------ STARTING nginx_ctl at Wed Apr 15 13:46:47 UTC 2015 --------------
[2015-04-15 13:46:47+0000] Removing stale pidfile...
[2015-04-15 13:48:11+0000] ------------ STARTING nginx_ctl at Wed Apr 15 13:48:11 UTC 2015 --------------
[2015-04-15 13:48:11+0000] Removing stale pidfile...

Attaching my deployment manifest file (cf-tai-deploy.yml)
Could anyone please help me with this issue?

Arunava Basu

Apr 15, 2015, 10:13:14 AM
to bosh-...@cloudfoundry.org
- bosh gem versions 
bosh-core (1.2915.0, 1.2905.0)
bosh-director-core (1.2915.0, 1.2905.0)
bosh-registry (1.2915.0, 1.2905.0)
bosh-stemcell (1.2915.0, 1.2905.0)
bosh-template (1.2915.0, 1.2905.0)
bosh_aws_cpi (1.2915.0, 1.2905.0)
bosh_cli (1.2915.0, 1.2905.0)
bosh_cli_plugin_micro (1.2915.0, 1.2905.0)
bosh_common (1.2915.0, 1.2905.0)
bosh_cpi (1.2915.0, 1.2905.0)
bosh_openstack_cpi (1.2915.0, 1.2905.0)
bosh_vcloud_cpi (0.7.3, 0.7.2)
bosh_vsphere_cpi (1.2915.0, 1.2905.0)

- bosh status
Config
             /home/basu/.bosh_config

Director
  Name       microbosh-ocf
  URL        https://192.168.0.11:25555
  Version    1.2859.0 (00000000)
  User       admin
  UUID       bc0b1140-b484-4cff-bd8b-276dcc3bb461
  CPI        vcloud
  dns        enabled (domain_name: microbosh)
  compiled_package_cache disabled
  snapshots  disabled

Deployment
  Manifest   /home/basu/cf-release/cf-tai-deploy.yml

- stemcell version(s) you are using 
bosh stemcells

+------------------------------------------+---------+-------------------------------------------------------------+
| Name                                     | Version | CID                                                         |
+------------------------------------------+---------+-------------------------------------------------------------+
| bosh-vsphere-esxi-ubuntu-trusty-go_agent | 2859*   | urn:vcloud:catalogitem:efa648e1-c4c7-443c-a8a7-98e24d5632fd |
+------------------------------------------+---------+-------------------------------------------------------------+

- the release you are trying to deploy 
cf-202.yml

Stanley Kao

Apr 15, 2015, 11:13:33 AM
to bosh-...@cloudfoundry.org
Maybe you can try setting buildpack_directory_key and droplet_directory_key to different values, for example buildpack_admin and droplet_admin. They are the folder names created on the nfs_server. That would presumably also explain the `duplicate location /admin/' error in your nginx_ctl.err.log: the generated nginx config gets one location block per directory key, so identical values collide.
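
For reference, a minimal sketch of what that change might look like in the cc section of the deployment manifest. The exact property paths (cc.buildpacks.*, cc.droplets.*) can differ by release, so verify them against the cf-202 job specs:

properties:
  cc:
    buildpacks:
      buildpack_directory_key: buildpack_admin  # give each key its own value
    droplets:
      droplet_directory_key: droplet_admin      # distinct keys mean distinct NFS folders
                                                # (and distinct nginx location blocks)

Then run `bosh deploy` again so the cloud_controller_ng job templates are re-rendered.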

Stanley


Parthiban Annadurai

Apr 15, 2015, 11:22:35 AM
to bosh-...@cloudfoundry.org
@Arunava Basu: try the option @Stanley specified; it worked for me.

Arunava Basu

Apr 16, 2015, 10:06:10 AM
to bosh-...@cloudfoundry.org
@Stanley @Parthiban, thanks a lot.

Now I am able to deploy Cloud Foundry. But when I try to log in, it returns: Server error, status code: 500, error code: , message:

Do I need to change anything in the manifest file to fix this issue?

Arunava Basu

Apr 16, 2015, 12:00:59 PM
to bosh-...@cloudfoundry.org
Fixed this issue in the manifest by changing the login and UAA URLs to https.
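
Roughly, the change looks like this (property names assumed from cf-release; YOUR-SYSTEM-DOMAIN is a placeholder, adjust to your own manifest):

properties:
  uaa:
    url: https://uaa.YOUR-SYSTEM-DOMAIN    # was http://
  login:
    url: https://login.YOUR-SYSTEM-DOMAIN  # was http://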