Micro BOSH deployment doesn't work for DHCP-based networks on OpenStack Icehouse


raunak lakhwani

Jul 2, 2014, 9:29:35 AM
to vcap...@cloudfoundry.org, sudipto....@gmail.com
Hi,

I have been trying to deploy Micro BOSH on OpenStack Icehouse with DHCP-based networks.
The following is the network section of my micro_bosh.yml file:

network:
  type: dynamic
  cloud_properties:
    net_id: <dhcp_net_id_from_neutron>

When I execute

bosh micro deployment micro_bosh

I get the following error:

/usr/local/rvm/rubies/ruby-1.9.3-p547/lib/ruby/1.9.1/uri/generic.rb:213:in `initialize': the scheme https does not accept registry part: :25555 (or bad hostname?) (URI::InvalidURIError)

However, this error does not occur when I specify an IP address in the network section, like this:

network:
  type: dynamic
  ip: x.y.z.f
  cloud_properties:
    net_id: <dhcp_net_id_from_neutron>

I am sure that a DHCP-based network should not require us to specify an IP address.
Can someone help me figure out what might be going wrong here?

Thanks,
Ronak

Gowri LN

Aug 8, 2014, 5:25:54 AM
to vcap...@cloudfoundry.org, sudipto....@gmail.com
Hi,

I am facing a similar issue. Were you able to resolve it?

sekha...@gmail.com

Aug 12, 2014, 5:28:46 AM
to vcap...@cloudfoundry.org, sudipto....@gmail.com
Hello there -

Could you please take a look at this and advise a solution if possible?

I am currently facing exactly the same issue you reported. However, when I change the type to manual and specify an IP address from the OpenStack fixed-ip-v4 pool, along with the UUID of that network (you can get the UUID with the command nova-manage network list), I don't get any error while setting the deployment target. However, when I deploy Micro BOSH, it succeeds halfway and then waits endlessly at the "Waiting for agent" step. When I looked at bosh_micro.log, I saw a "No route to host" error, repeating over and over.

My micro_bosh.yml is as follows:

name: microbosh-openstack

logging:
  level: DEBUG

network:
  type: manual
#  vip: # Optional
  ip: 10.154.0.254
  cloud_properties:
    net_id: f472ce54-5f9c-4e79-8626-389e2b50220e


resources:
  persistent_disk: 6000
  cloud_properties:
    instance_type: m1.small

cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://10.154.12.89:5000/v2.0/
      username: admin
      api_key: nimbus360
      tenant: admin
      region: default
      default_security_groups: ["ssh", "bosh", "cf-public", "cf-private", "cf"]
      default_key_name: microbosh
      private_key: /home/nimbus1

apply_spec:
  properties:
    director:
      max_threads: 3
    hm:
      resurrector_enabled: true
    ntp:
      - 0.north-america.pool.ntp.org
      - 1.north-america.pool.ntp.org

Thanks,
Sekhar H.



dkal...@pivotal.io

Aug 14, 2014, 2:07:21 AM
to vcap...@cloudfoundry.org, sudipto....@gmail.com
The 'no route to host' error means that the `bosh micro deploy` command cannot reach that IP from the deploying machine.

Are you deploying from outside of the cluster? If so, you need to be on a machine that can route to the deployed VM.

Some people call such a machine a jump box or an inception VM: it is made accessible from outside of the cluster and can be SSHed into to run commands like `bosh micro deploy`.
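A quick way to check from the deploying machine (plain shell; 10.154.0.254 here is just the IP from your manifest):

    # can we reach the target VM at all?
    ping -c 3 10.154.0.254

    # which route, if any, the kernel would use to get there
    ip route get 10.154.0.254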

Sekhar Hari

Aug 14, 2014, 2:41:36 AM
to vcap...@cloudfoundry.org, sudipto....@gmail.com, dkal...@pivotal.io
Thanks. I managed to sort out this problem by adding the "gateway" parameter to the network section of micro_bosh.yml; the section that worked is sketched below. Micro BOSH is now successfully deployed.
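Roughly like this (the gateway value below is only an example; substitute your subnet's actual gateway):

    network:
      type: manual
      ip: 10.154.0.254
      gateway: 10.154.0.1  # example value; use your subnet's gateway
      cloud_properties:
        net_id: f472ce54-5f9c-4e79-8626-389e2b50220e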

However, since yesterday I have been seeing another problem when trying to upload a CF release (v170) with "bosh upload release releases/cf-v170.yml". The packages and the jobs download successfully, and the stemcell uploads successfully as well. The process then enters "Director - Task 1", where it builds the packages; however, when it reaches the packaging step for a package called "buildpack_cache", the process fails after exactly 2 minutes with the following error:

<Bosh::Blobstore::BlobstoreError: Failed to create object, underlying error: #<HTTPClient::SendTimeoutError: execution expired>
/var/vcap/bosh/lib/ruby/gems/1.9.1/gems/httpclient-2.2.4/lib/httpclient/http.rb:555:in `write'

After searching various websites and Google Groups, I found that this happens because the "SendTimeout" variable in the file /var/vcap/bosh/lib/ruby/gems/1.9.1/gems/httpclient-2.2.4/lib/httpclient/session.rb is set to 120 seconds. I changed this to a much larger value, 3600 seconds; however, when I retried, the process still failed at the same step after exactly 2 minutes.
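From what I can tell, the httpclient gem also exposes these timeouts as plain accessors, so a runtime override might look like this (an untested sketch, not BOSH-specific):

    require 'httpclient'

    client = HTTPClient.new
    client.send_timeout    = 3600  # seconds allowed for sending the request body (the gem default is 120, matching the 2-minute failures)
    client.receive_timeout = 3600  # seconds allowed for reading the response
    client.connect_timeout = 60    # seconds allowed for establishing the TCP connection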

The question is: do I need to restart the Micro BOSH VM, or restart the Ruby services, for the session.rb change to take effect? Also, are there any other files where I should increase the timeout value?

Your kind help is highly appreciated.

Many thanks,
Sekhar H.

Sekhar Hari

Aug 18, 2014, 1:23:23 AM
to vcap...@cloudfoundry.org, sudipto....@gmail.com, dkal...@pivotal.io
This is still a problem, and I have had no luck getting it to work. "bosh upload release releases/cf-v170.yml" always fails at exactly the point of creating the package for "buildpack_cache", with the same "SendTimeout" error described in my earlier post.

Can somebody kindly help to solve this?

Many thanks,
Sekhar H.

dkal...@pivotal.io

Aug 18, 2014, 1:58:10 AM
to vcap...@cloudfoundry.org, sudipto....@gmail.com, dkal...@pivotal.io
Could you paste in the full CLI output?

I'm guessing you are running out of space on the Micro BOSH VM. Run `df -h` on the box, and possibly give it a bigger ephemeral/persistent disk.

Sekhar Hari

Aug 18, 2014, 2:52:16 AM
to vcap...@cloudfoundry.org, sudipto....@gmail.com, dkal...@pivotal.io
I defined an ephemeral disk of size 20GB.

df -h gives me the following output:

root@bm-2653ca75-a8c2-48a1-a6b2-d0c63e46d82e:/home/vcap# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1             9.4G  1.2G  7.8G  13% /
none                  999M  176K  998M   1% /dev
none                 1005M     0 1005M   0% /dev/shm
none                 1005M   56K 1005M   1% /var/run
none                 1005M     0 1005M   0% /var/lock
none                 1005M     0 1005M   0% /lib/init/rw
/dev/vdb2              18G  591M   17G   4% /var/vcap/data
/dev/loop0            124M  5.6M  118M   5% /tmp
/dev/vdc1              16G  386M   15G   3% /var/vcap/store
root@bm-2653ca75-a8c2-48a1-a6b2-d0c63e46d82e:/home/vcap#

The following is the CLI output I see when executing "bosh upload release releases/cf-v170.yml":

Uploading release
release.tgz: 100% |oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo| 1.3GB 1.4MB/s Time: 00:15:49

Director task 4
Started extracting release > Extracting release. Done (00:02:37)

Started verifying manifest > Verifying manifest. Done (00:00:00)

Started resolving package dependencies > Resolving package dependencies. Done (00:00:00)

Started creating new packages
Started creating new packages > buildpack_cache/8. Failed: Failed to create object, underlying error: #<HTTPClient::SendTimeoutError: execution expired>
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient/http.rb:555:in `write'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient/http.rb:555:in `<<'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient/http.rb:555:in `dump_file'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient/http.rb:484:in `dump'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient/http.rb:889:in `dump'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient/session.rb:600:in `block in query'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient/session.rb:598:in `query'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient/session.rb:161:in `query'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient.rb:1060:in `do_get_block'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient.rb:869:in `block in do_request'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient.rb:961:in `rescue in protect_keep_alive_disconnected'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient.rb:955:in `protect_keep_alive_disconnected'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient.rb:868:in `do_request'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient.rb:756:in `request'
/var/vcap/packages/director/gem_home/gems/httpclient-2.2.4/lib/httpclient.rb:671:in `put'
/var/vcap/packages/director/gem_home/gems/blobstore_client-1.2427.0/lib/blobstore_client/dav_blobstore_client.rb:39:in `create_file'
/var/vcap/packages/director/gem_home/gems/blobstore_client-1.2427.0/lib/blobstore_client/base.rb:27:in `create'
/var/vcap/packages/ruby/lib/ruby/1.9.1/forwardable.rb:201:in `create'
/var/vcap/packages/ruby/lib/ruby/1.9.1/forwardable.rb:201:in `create'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/blob_util.rb:8:in `block in create_blob'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/blob_util.rb:8:in `open'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/blob_util.rb:8:in `create_blob'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/jobs/update_release.rb:368:in `create_package'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/jobs/update_release.rb:287:in `block (2 levels) in create_packages'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/event_log.rb:83:in `call'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/event_log.rb:83:in `advance_and_track'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/event_log.rb:36:in `track'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/jobs/update_release.rb:285:in `block in create_packages'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/jobs/update_release.rb:283:in `each'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/jobs/update_release.rb:283:in `create_packages'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/jobs/update_release.rb:269:in `process_packages'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/jobs/update_release.rb:140:in `process_release'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/jobs/update_release.rb:55:in `block in perform'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/lock_helper.rb:47:in `block in with_release_lock'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/lock.rb:58:in `lock'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/lock_helper.rb:47:in `with_release_lock'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/jobs/update_release.rb:55:in `perform'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/job_runner.rb:98:in `perform_job'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/job_runner.rb:29:in `block in run'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.2427.0/lib/common/thread_formatter.rb:46:in `with_thread_name'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/job_runner.rb:29:in `run'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/lib/bosh/director/jobs/base_job.rb:10:in `perform'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/job.rb:125:in `perform'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:186:in `perform'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:149:in `block in work'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:128:in `loop'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:128:in `work'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2427.0/bin/bosh-director-worker:76:in'
/var/vcap/packages/director/bin/bosh-director-worker:23:in `load'
/var/vcap/packages/director/bin/bosh-director-worker:23:in' (00:03:48)

Error 100: Failed to create object, underlying error: #<HTTPClient::SendTimeoutError: execution expired>
(stack trace identical to the one above)

Regards,
Sekhar H.

Dmitriy Kalinin

Aug 18, 2014, 3:06:16 AM
to Sekhar Hari, vcap...@cloudfoundry.org, sudipto....@gmail.com
You can run 'watch df -h' on microbosh vm while uploading release to see which mount point fills up - ephemeral (/var/vcap/data) or persistent (/var/vcap/store).

Sekhar Hari

Aug 18, 2014, 3:18:14 AM
to vcap...@cloudfoundry.org, sekha...@gmail.com, sudipto....@gmail.com, dkal...@pivotal.io
I uploaded the latest stemcell just now using the command - bosh upload stemcell bosh-stemcell-latest-openstack-kvm-ubuntu.tgz

The upload was successful. However, when I ran "df -h" immediately after this, the output was as follows:


root@bm-2653ca75-a8c2-48a1-a6b2-d0c63e46d82e:/home/vcap# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1             9.4G  1.2G  7.8G  13% /
none                  999M  176K  998M   1% /dev
none                 1005M     0 1005M   0% /dev/shm
none                 1005M   56K 1005M   1% /var/run
none                 1005M     0 1005M   0% /var/lock
none                 1005M     0 1005M   0% /lib/init/rw
/dev/vdb2              18G  945M   16G   4% /var/vcap/data

/dev/loop0            124M  5.6M  118M   5% /tmp
/dev/vdc1              16G  386M   15G   3% /var/vcap/store

Then I ran "df -h" once again; now the output is as follows:


root@bm-2653ca75-a8c2-48a1-a6b2-d0c63e46d82e:/home/vcap# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1             9.4G  1.2G  7.8G  13% /
none                  999M  176K  998M   1% /dev
none                 1005M     0 1005M   0% /dev/shm
none                 1005M   56K 1005M   1% /var/run
none                 1005M     0 1005M   0% /var/lock
none                 1005M     0 1005M   0% /lib/init/rw
/dev/vdb2              18G  593M   17G   4% /var/vcap/data

/dev/loop0            124M  5.6M  118M   5% /tmp
/dev/vdc1              16G  386M   15G   3% /var/vcap/store

I don't know what is going wrong. The stemcell doesn't seem to be present anywhere in the microbosh VM.

Thanks,
Sekhar H.

Dmitriy Kalinin

Aug 18, 2014, 3:20:59 AM
to Sekhar Hari, vcap...@cloudfoundry.org, sudipto....@gmail.com
Stemcells are saved as an infrastructure resource (an image in OpenStack). Releases are unpacked into a BOSH-managed blobstore, which is either local to some VM or off-site like S3.

You can only see local fs changes *while* you upload, since a temporary file is created (in /var/vcap/data/...).
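If you want to confirm the stemcell actually reached OpenStack, it should show up as an image; something like this should list it (assuming the standard OpenStack clients, with your credentials loaded):

    glance image-list | grep -i bosh
    # or, via the nova client:
    nova image-list | grep -i bosh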

Sekhar Hari

Aug 18, 2014, 3:22:31 AM
to vcap...@cloudfoundry.org, sekha...@gmail.com, sudipto....@gmail.com, dkal...@pivotal.io
'watch df -h' shows that the stemcell is being uploaded to /dev/vdb2.

Thanks,
Sekhar H.

Sekhar Hari

Aug 18, 2014, 3:29:26 AM
to vcap...@cloudfoundry.org, sekha...@gmail.com, sudipto....@gmail.com, dkal...@pivotal.io
So I think there is enough space available on the Micro BOSH VM for both the ephemeral disk (17 GB free) and the persistent disk (15 GB free). I am not sure why I am unable to upload the cf-v170 release. This is really frustrating: I have now executed the upload release command three times, and it fails each time at precisely the point of creating a package for buildpack_cache, with a SendTimeoutError. Is this a reported (or unreported) bug? If not, is there a solution to this problem?

Thanks,
Sekhar H.

Sekhar Hari

Aug 18, 2014, 3:46:30 AM
to vcap...@cloudfoundry.org, sekha...@gmail.com, sudipto....@gmail.com, dkal...@pivotal.io
Also, even after uploading the stemcell, the `bosh stemcells` command says "No stemcells". This is even more troubling. So where is the stemcell I just uploaded? Is BOSH really uploading it?

Thanks,
Sekhar H.

Dmitriy Kalinin

Aug 18, 2014, 4:12:45 AM
to Sekhar Hari, vcap...@cloudfoundry.org, Sudipto Biswas
If `bosh stemcells` does not show a stemcell that means that BOSH did not successfully upload a stemcell. Please include output from `bosh upload stemcell ...` command. It should contain an error.

Sekhar Hari

Aug 18, 2014, 5:34:16 AM
to vcap...@cloudfoundry.org, sekha...@gmail.com, sudipto....@gmail.com, dkal...@pivotal.io
Please find below the CLI output of this command:

root@Nimbus360-OS-Ctrl:~/bosh-workspace/stemcells# bosh upload stemcell bosh-stemcell-latest-openstack-kvm-ubuntu.tgz

Verifying stemcell...
File exists and readable                                     OK
Verifying tarball...
Read tarball                                                 OK
Manifest exists                                              OK
Stemcell image file                                          OK
Stemcell properties                                          OK

Stemcell info
-------------
Name:    bosh-openstack-kvm-ubuntu
Version: 2427

Checking if stemcell already exists...
No

Uploading stemcell...

bosh-stemcell: 100% |oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo| 353.5MB 459.1KB/s Time: 00:13:08
root@Nimbus360-OS-Ctrl:~/bosh-workspace/stemcells# bosh stemcells
No stemcells
root@Nimbus360-OS-Ctrl:~/bosh-workspace/stemcells#

As per the document http://docs.cloudfoundry.org/deploying/openstack/uploading_bosh_stemcell.html, after the "Uploading stemcell" step this should show "save stemcell" and then "done", and finally "Stemcell uploaded and created".

But these messages did not show up in the output when I executed the command.

Thanks,
Sekhar H.

Dmitriy Kalinin

Aug 18, 2014, 12:48:29 PM
to Sekhar Hari, vcap...@cloudfoundry.org, Sudipto Biswas
It appears that `bosh upload stemcell` abruptly exits. Can you tar up and attach all logs from /var/vcap/sys/log? (Make sure the logs are sanitized, or you can share them directly with me - dkal...@pivotal.io.)
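Something like this, run on the Micro BOSH VM, should do it:

    # bundle all BOSH logs into a single archive
    tar -czf /tmp/bosh-logs.tgz /var/vcap/sys/log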

Sekhar Hari

Aug 19, 2014, 1:05:51 AM
to vcap...@cloudfoundry.org, sekha...@gmail.com, sudipto....@gmail.com, dkal...@pivotal.io
I sent you the logs as a tar archive to your email address. Kindly check and help as soon as possible; my entire project team is dependent on this installation to push some applications. I appreciate your assistance so far. I tried uploading the stemcell once again, and the error that I see is as follows:

================================================================================================================================
root@Nimbus360-OS-Ctrl:~/bosh-workspace/stemcells# bosh upload stemcell http://bosh-jenkins-artifacts.s3.amazonaws.com/bosh-stemcell/openstack/bosh-stemcell-latest-openstack-kvm-ubuntu.tgz

Using remote stemcell `http://bosh-jenkins-artifacts.s3.amazonaws.com/bosh-stemcell/openstack/bosh-stemcell-latest-openstack-kvm-ubuntu.tgz'

Director task 1
  Started update stemcell
  Started update stemcell > Downloading remote stemcell. Done (00:19:31)
  Started update stemcell > Extracting stemcell archive. Done (00:04:22)
  Started update stemcell > Verifying stemcell manifest. Done (00:00:01)
  Started update stemcell > Checking if this stemcell already exists. Done (00:00:02)
  Started update stemcell > Uploading stemcell bosh-openstack-kvm-ubuntu/2427 to the cloud. Failed: Task 1 cancelled (03:55:08)

Error 10001: Task 1 cancelled

================================================================================================================================

Thanks,
Sekhar H.

Dmitriy Kalinin

Aug 19, 2014, 1:46:07 AM
to Sekhar Hari, vcap...@cloudfoundry.org, Sudipto Biswas
- Multiple processes timed out connecting to Redis. That explains why `bosh upload stemcell` did not continue, because the BOSH Director uses Resque internally to schedule long-running tasks. However, Redis itself did not produce any error logs, which leads me to believe it was just slow to start.

- The above output shows 'task 1 cancelled'. Auto-cancellation happens when a task does not checkpoint often enough, which in a lot of cases means the BOSH Director processes are running very slowly.

I just noticed that your Micro BOSH manifest specifies the VM as m1.small. I think bumping it to a larger instance should give it more headroom, which should fix the problems listed above.
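e.g. in your micro_bosh.yml, a sketch based on the manifest you posted earlier:

    resources:
      persistent_disk: 6000
      cloud_properties:
        instance_type: m1.medium  # was m1.small; a larger flavor gives the Director processes more headroom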

Sekhar Hari

Aug 19, 2014, 2:07:55 AM
to vcap...@cloudfoundry.org, sekha...@gmail.com, sudipto....@gmail.com, dkal...@pivotal.io
Thanks. Actually, I have only 4 GB of RAM in each of my OpenStack Controller and Compute nodes; this is why I chose m1.small for the Micro BOSH instance. I will now try m1.large and share the result.

Regards,
Sekhar H.

Sekhar Hari

Aug 20, 2014, 3:11:39 AM
to vcap...@cloudfoundry.org, sekha...@gmail.com, sudipto....@gmail.com, dkal...@pivotal.io
Many, many thanks for your assistance. All your comments and suggestions were very apt.

I increased the RAM to 8 GB and specified m1.medium in micro_bosh.yml. Everything succeeded this time, and the whole process was very fast.

The outputs are as follows:

root@Nimbus360-OS-Ctrl:~/bosh-workspace/deployments/cf# bosh releases

+------+----------+-------------+
| Name | Versions | Commit Hash |
+------+----------+-------------+
| cf   | 170      | 0c0c72c3+   |
+------+----------+-------------+
(+) Uncommitted changes

Releases total: 1
root@Nimbus360-OS-Ctrl:~/bosh-workspace/deployments/cf# bosh stemcells

+---------------------------+---------+--------------------------------------+
| Name                      | Version | CID                                  |
+---------------------------+---------+--------------------------------------+
| bosh-openstack-kvm-ubuntu | 2427    | a363b39d-fde6-44b1-a8db-b0a0046a379d |
+---------------------------+---------+--------------------------------------+

(*) Currently in-use

Stemcells total: 1

The `bosh releases` command shows that there are "Uncommitted changes". Do I need to do something about this, or can I ignore it? Kindly let me know.

I am now trying to work my way toward creating a CF deployment manifest. Would you be kind enough to provide a working sample for cf release 170? After the deployment is complete, I need to push a Java application running on Tomcat with MySQL 5.6 as its backend, so please advise a suitable manifest for pushing this application. Since I have only a maximum of 8 GB of memory available in my OpenStack Compute and Controller nodes, I will need only two VMs (m1.small, m1.tiny) to be created through the deployment manifest.

Regards,
Sekhar H.

Dmitriy Kalinin

Aug 20, 2014, 2:06:44 PM
to Sekhar Hari, vcap...@cloudfoundry.org, Sudipto Biswas
Nothing to do about 'Uncommitted changes'. I'll look into why it happens for cf releases.

The cf-release repository [1] contains a script, ./generate_deployment_manifest, that generates a manifest for a specific infrastructure with a bunch of settings; a rough usage sketch is below. After the manifest is generated, you can collocate multiple jobs onto a single VM so that you only end up with 2 VMs.
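Roughly like this (a sketch; my-stub.yml is just a placeholder for a stub file supplying your environment-specific values):

    git clone https://github.com/cloudfoundry/cf-release
    cd cf-release
    ./generate_deployment_manifest openstack my-stub.yml > cf-deployment.yml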

You will most likely see lots of timeout problems and general slowness, since 8 GB is usually not enough for deploying CF on OpenStack.

Sekhar Hari

Aug 21, 2014, 4:17:40 AM
to vcap...@cloudfoundry.org, sekha...@gmail.com, sudipto....@gmail.com, dkal...@pivotal.io
Thanks once again. I am wondering how I can configure Micro BOSH to spawn the application VMs on the OpenStack Compute node. Micro BOSH is currently spawning application VMs on the OpenStack Controller node rather than the Compute node. This is causing serious memory contention, because I have only 8 GB available on the Controller node, of which 4 GB has been taken by Micro BOSH and the rest by OpenStack and Ubuntu.

My current infrastructure is as follows:

We are doing a PoC of OpenStack/Cloud Foundry on two Dell laptops. One laptop runs the OpenStack Controller, and the other is the OpenStack Compute rig. Each has 8 GB of RAM and a 350 GB HDD. Currently, BOSH has created a Micro BOSH VM instance on the Controller node; this VM has taken 4 GB of RAM.

Regards,
Sekhar  H.

Sekhar Hari

Aug 27, 2014, 7:05:53 AM
to vcap...@cloudfoundry.org, sekha...@gmail.com, sudipto....@gmail.com, dkal...@pivotal.io

I cloned https://github.com/cloudfoundry/cf-release, and when I run `./generate_deployment_manifest openstack`, I get the following error:

root@Nimbus360-OS-Ctrl:~/cf-release# ./generate_deployment_manifest openstack
2014/08/27 16:23:18 error generating manifest: unresolved nodes:
    (( meta.floating_static_ips ))    in dynaml    jobs.[0].networks.[0].static_ips
    (( static_ips(0) ))    in ./templates/cf-infrastructure-openstack.yml    jobs.[0].networks.[1].static_ips
    (( static_ips(1) ))    in ./templates/cf-infrastructure-openstack.yml    jobs.[1].networks.[0].static_ips
    (( static_ips(2) ))    in ./templates/cf-infrastructure-openstack.yml    jobs.[2].networks.[0].static_ips
    (( static_ips(3) ))    in ./templates/cf-infrastructure-openstack.yml    jobs.[3].networks.[0].static_ips
    (( static_ips(4) ))    in ./templates/cf-infrastructure-openstack.yml    jobs.[4].networks.[0].static_ips
    (( static_ips(5) ))    in ./templates/cf-infrastructure-openstack.yml    jobs.[5].networks.[0].static_ips
    (( static_ips(6) ))    in ./templates/cf-infrastructure-openstack.yml    jobs.[6].networks.[0].static_ips
    (( static_ips(7) ))    in ./templates/cf-infrastructure-openstack.yml    jobs.[7].networks.[0].static_ips
    (( static_ips(8, 9, 10) ))    in ./templates/cf-infrastructure-openstack.yml    jobs.[8].networks.[0].static_ips
    (( merge ))    in ./templates/cf-infrastructure-openstack.yml    meta.floating_static_ips
    (( merge ))    in ./templates/cf-infrastructure-openstack.yml    meta.openstack
    (( merge ))    in ./templates/cf-infrastructure-openstack.yml    networks
    (( merge ))    in ./templates/cf-infrastructure-openstack.yml    properties.cc
    (( jobs.postgres_z1.networks.cf1.static_ips.[0] ))    in dynaml    properties.databases.address
    (( merge ))    in ./templates/cf-infrastructure-openstack.yml    properties.databases.roles.[0].password
    (( merge ))    in ./templates/cf-infrastructure-openstack.yml    properties.databases.roles.[1].password
    (( jobs.nats_z1.networks.cf1.static_ips.[0] ))    in dynaml    properties.nats.address
    (( jobs.postgres_z1.networks.cf1.static_ips.[0] ))    in dynaml    properties.ccdb.address
    (( merge ))    in ./templates/cf-infrastructure-openstack.yml    properties.ccdb.roles.[0].password
    (( merge ))    in ./templates/cf-infrastructure-openstack.yml    properties.uaa.clients
    (( jobs.postgres_z1.networks.cf1.static_ips.[0] ))    in dynaml    properties.uaadb.address
    (( merge ))    in ./templates/cf-infrastructure-openstack.yml    properties.uaadb.roles.[0].password

Can you please advise what is going wrong? I have not set up any floating IP address pool, as I don't have a public IP available that I can use. However, I do have fixed static IPv4 addresses set up.

Thanks,
Sekhar H.
