|Pushing ruby app to V2 cloud - Error, no such file ruby_versions.yml||Cornelia Davis||3/20/13 12:21 PM|
I have a hand-crafted V2 installation to which I am trying to push a Ruby app. It is correctly identified as a Ruby app, as it reports "Installing ruby.", but then the following is thrown:
/usr/lib/ruby/1.9.1/psych.rb:297:in `initialize': No such file or directory - ruby_versions.yml (Errno::ENOENT)
from /usr/lib/ruby/1.9.1/psych.rb:297:in `open'
from /usr/lib/ruby/1.9.1/psych.rb:297:in `load_file'
from /home/cdavisafc/cloud-fabric/dea_ng/buildpacks/vendor/ruby/lib/language_pack/ruby.rb:192:in `block (2 levels) in ruby_versions'
from /home/cdavisafc/cloud-fabric/dea_ng/buildpacks/vendor/ruby/lib/language_pack/ruby.rb:190:in `chdir'
from /home/cdavisafc/cloud-fabric/dea_ng/buildpacks/vendor/ruby/lib/language_pack/ruby.rb:190:in `block in ruby_versions'
from /usr/lib/ruby/1.9.1/tmpdir.rb:83:in `mktmpdir'
from /home/cdavisafc/cloud-fabric/dea_ng/buildpacks/vendor/ruby/lib/language_pack/ruby.rb:189:in `ruby_versions'
from /home/cdavisafc/cloud-fabric/dea_ng/buildpacks/vendor/ruby/lib/language_pack/ruby.rb:235:in `install_ruby'
from /home/cdavisafc/cloud-fabric/dea_ng/buildpacks/vendor/ruby/lib/language_pack/ruby.rb:77:in `compile'
from /home/cdavisafc/cloud-fabric/dea_ng/buildpacks/vendor/ruby/bin/compile:11:in `block in <main>'
from /home/cdavisafc/cloud-fabric/dea_ng/buildpacks/vendor/ruby/lib/language_pack/base.rb:84:in `log'
from /home/cdavisafc/cloud-fabric/dea_ng/buildpacks/vendor/ruby/bin/compile:10:in `<main>'
/home/cdavisafc/cloud-fabric/dea_ng/buildpacks/lib/installer.rb:17:in `compile': Buildpack compilation step failed: (RuntimeError)
from /home/cdavisafc/cloud-fabric/dea_ng/buildpacks/lib/buildpack.rb:15:in `block in stage_application'
from /home/cdavisafc/cloud-fabric/dea_ng/buildpacks/lib/buildpack.rb:11:in `chdir'
from /home/cdavisafc/cloud-fabric/dea_ng/buildpacks/lib/buildpack.rb:11:in `stage_application'
from /home/cdavisafc/cloud-fabric/dea_ng/buildpacks/bin/run:10:in `<main>'
A bit deeper into the debugging, I find that after looking in the buildpack cache and in the blobstore it tries to curl https://s3.amazonaws.com/heroku-buildpack-ruby/ruby_versions.yml. I think the S3 bucket is accessible, as prior attempts to get bundler-1.3.2.tgz at that URL seem to have been successful.
|Re: Pushing ruby app to V2 cloud - Error, no such file ruby_versions.yml||James Bayer||3/25/13 6:54 AM|
I missed this post last week; I'll have the team working on buildpacks take a look.
It looks like we require a ruby_versions.yml file to be in the blob store and perhaps your blob store does not have one?
Until we get an answer, one thing you could do to troubleshoot is fork this buildpack and make some mods for additional debugging and logging, or hardcode the ruby binaries S3 URL you want to use.
|Re: Pushing ruby app to V2 cloud - Error, no such file ruby_versions.yml||Matthew Boedicker||3/25/13 5:08 PM|
In lib/language_pack/package_fetcher.rb it tries the buildpack cache, the blobstore, and then S3, in that order. If you could put some debug output in those methods and figure out where it fails, it might be a bug or an error that could be handled better.
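The fallback order Matthew describes can be sketched roughly as follows. This is an illustrative sketch only; the method names and lambda sources here are stand-ins, not the actual package_fetcher.rb code:

```ruby
# Sketch of a cache -> blobstore -> S3 fallback with logging at each step,
# the kind of debug output that would show where the lookup fails.
def fetch_package(name, sources)
  sources.each do |label, fetch|
    result = begin
      fetch.call(name)
    rescue StandardError => e
      puts "#{label}: error fetching #{name}: #{e.message}"
      nil
    end
    if result
      puts "#{label}: found #{name}"
      return result
    end
    puts "#{label}: #{name} not found, trying next source"
  end
  raise "#{name} not available from any source"
end

# Example with stubbed sources: only the last one succeeds.
sources = {
  "buildpack cache" => ->(n) { nil },
  "blobstore"       => ->(n) { nil },
  "s3"              => ->(n) { "contents of #{n}" },
}
puts fetch_package("ruby_versions.yml", sources)
```

Running this prints a log line per source, which is the sort of trace that would pinpoint whether the failure is in the cache lookup, the blobstore lookup, or the final S3 fetch.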
|Cornelia Davis||4/15/13 9:43 PM||<This message has been deleted.>|
|Re: Pushing ruby app to V2 cloud - Error, no such file ruby_versions.yml||Cornelia Davis||4/15/13 9:49 PM|
And here I am a few weeks later, after a bit of vaca and then this challenge (https://groups.google.com/a/cloudfoundry.org/forum/#!topic/vcap-dev/8GoLbq90xeY), still struggling with this. I was able to find the code where it looks in the cache, the blobstore, and then on S3, and it doesn't find the file in any of those locations. And yes, I'm thinking about putting it into the blobstore or my own S3 location, but I believe the real issue is that I don't seem to be getting any DNS resolution from within the warden container.
Is there some configuration that I need to set to allow the warden container access to dns resolution?
|Re: [vcap-dev] Re: Pushing ruby app to V2 cloud - Error, no such file ruby_versions.yml||Matthew Boedicker||4/16/13 8:47 AM|
What does /etc/resolv.conf inside the container look like? The container should get the same resolv.conf as the host it is running on. Does DNS resolution work from the host?
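Both halves of that check can be done from inside the container with a short Ruby script. This is just a diagnostic sketch; the commented-out lines are the parts that need a real container (and network) to run:

```ruby
require "resolv"

# Pull the nameserver entries out of a resolv.conf body.
def nameservers(resolv_conf)
  resolv_conf.scan(/^nameserver\s+(\S+)/).flatten
end

# Inside the container this would read the real file:
#   puts nameservers(File.read("/etc/resolv.conf")).inspect
sample = <<~CONF
  # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
  nameserver 127.0.0.1
CONF
puts nameservers(sample).inspect   # => ["127.0.0.1"]

# A direct resolution test; raises Resolv::ResolvError if DNS is broken:
#   Resolv.getaddress("s3.amazonaws.com")
```

Comparing the output of the first check inside the container against the host's, and then trying the second check in both places, separates "wrong resolv.conf" from "resolv.conf is fine but the nameserver is unreachable".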
|Re: [vcap-dev] Re: Pushing ruby app to V2 cloud - Error, no such file ruby_versions.yml||Cornelia Davis||4/16/13 10:54 AM|
I think the issue is in the way my VM is configured.
So yes, the resolv.conf in the container exactly matches that of the host (the host being a VMware Workstation guest), as follows:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
And then I found this post, which indicates that if the host has its nameserver set to localhost, the warden container's resolv.conf nameserver should be set to $network_host_ip. What I don't understand now is: does that network host need to be set up to proxy to a DNS server? Cloud Foundry doesn't stand up a nameserver, does it?
And, finally, I suspect that the configuration of the virtual machine on which I am running all of this is having an impact. As you can see above, my VM guest, which is my warden host, is set up with nameserver 127.0.0.1, which I'm betting means the VM guest proxies to an actual DNS server via VMware Workstation. Is there something I need to configure a particular way in VMware Workstation or the VM guest? I'm trying different settings now (bridged/NAT, with different options) but no joy yet.
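The rewrite described in that post (substitute the host's network IP when the host resolves via loopback) amounts to something like the following. The function name and the 10.0.2.2 address are illustrative; the real warden setup scripts may implement this differently:

```ruby
# Sketch of the resolv.conf rewrite for containers: a 127.0.0.1 nameserver on
# the host (e.g. a local caching proxy) is unreachable from inside a container,
# so replace it with the host's network-facing IP. For this to actually work,
# something on the host must be listening on port 53 at that address.
def container_resolv_conf(host_resolv_conf, network_host_ip)
  host_resolv_conf.gsub(/^(nameserver\s+)127\.0\.0\.1$/, "\\1#{network_host_ip}")
end

host_conf = "nameserver 127.0.0.1\n"
puts container_resolv_conf(host_conf, "10.0.2.2")
# => nameserver 10.0.2.2
```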
|Re: [vcap-dev] Re: Pushing ruby app to V2 cloud - Error, no such file ruby_versions.yml||Cornelia Davis||4/22/13 2:10 PM|
In a hallway conversation I had with Jesse Zhang, he suggested that the reason for the DNS behavior I describe here was that I was running Ubuntu DESKTOP instead of Server, and that the desktop Network Manager is responsible for proxying the DNS requests... well, he was absolutely right! I just redid my minimal V2 Cloud Foundry install on Ubuntu 12.04 SERVER, and the resolv.conf does not have 127.0.0.1 as the nameserver; instead it has a non-local IP address. The warden containers, which inherit the resolv.conf from the machine running the DEA, can then do name resolution as needed during the staging process, and I'm all set.