Two Errors After Installation


Soheil Eizadi

Feb 11, 2015, 1:13:22 AM
to openc...@googlegroups.com
I have two issues after the initial installation:
1) An error with DNS servers: I see that Dns-Domain is set but Dns-Servers is not.
2) An error about SSL verification being turned off on the Chef client: I see that the client.rb file is missing the line. I assume I can ignore this message for testing; it matters more for production.
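
For reference, the Chef client warning in issue 2 appears to refer to the explicit ssl_verify_mode setting in client.rb; a hedged sketch (the path and the choice of mode are assumptions, not from this thread):

```ruby
# /etc/chef/client.rb
# Pick one explicitly; :verify_none silences the warning for test setups,
# :verify_peer is what you want in production (with a trusted cert installed).
ssl_verify_mode :verify_none
# ssl_verify_mode :verify_peer
```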

At this point the installation guide says I should have all green check marks, so I am not sure what went wrong with my installation.

More detailed logs are below.

Is there a section on how DNS is going to work with OpenCrowbar? It looks like there is a recipe to run a BIND server.
-Soheil

Available Attributes    Value
Dns servers             Not set
Dns-domain              test.acme.com

RuntimeError: dns-service not available
Backtrace:
/opt/opencrowbar/core/rails/app/models/service.rb:64:in `internal_do_transition'
/opt/opencrowbar/core/rails/app/models/barclamp_dns/service.rb:19:in `do_transition'
/opt/opencrowbar/core/rails/app/models/barclamp_crowbar/role_provided_jig.rb:23:in `run'
/opt/opencrowbar/core/rails/app/models/jig.rb:155:in `block in run_job'
/opt/opencrowbar/core/rails/app/models/jig.rb:148:in `loop'
/opt/opencrowbar/core/rails/app/models/jig.rb:148:in `run_job'
/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/performable_method.rb:30:in `perform'
/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/backend/base.rb:94:in `block in invoke_job'
/var/cache/crowbar/gems/ruby/2.1.0/

Rob Hirschfeld

Feb 11, 2015, 9:24:08 AM
to openc...@googlegroups.com
RE #2) I assume you mean w/ Chef Provision after Crowbar sets up the node.  I saw this same issue and created a node-role to deal with it tied to the "Chef Ready" milestone.  I wrote some documentation about it here: https://github.com/ravolt/chef-provisioning-crowbar/#chef-ready-barclamp

Basically, you can put your Chef server's cert into the /etc/tftpboot/files area and Crowbar will populate it into the client's trusted-certs area.  I only spent a little time with this, so it likely needs attention.  The good news is that it's very simple Crowbar code.

--
You received this message because you are subscribed to the Google Groups "Crowbar" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencrowbar...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.



--
Rob
____________________________
Rob Hirschfeld, 512-773-7522
RackN CEO/Founder (r...@rackn.com)

I am in CENTRAL (-6) time
http://robhirschfeld.com
twitter: @zehicle, github: cloudedge & ravolt

Rob Hirschfeld

Feb 11, 2015, 9:25:59 AM
to openc...@googlegroups.com
RE #1) Can you check the Consul service (http://[crowbar admin]:8500) to see if the dns-service is registered there?

On Wed, Feb 11, 2015 at 12:13 AM, Soheil Eizadi <s.ei...@gmail.com> wrote:


Greg Althaus

Feb 11, 2015, 9:27:59 AM
to openc...@googlegroups.com
This error occurs because the dns-service node role, which watches Consul for the dns-server to show up, didn't see it appear in a timely manner.  There may be additional messages in /var/log/crowbar/production.log.

At one point, you mentioned that you modified the bind9 cookbook to get dns to work.  Is that still true?

I need to update the docs about the new services layout.  The system wants to set up a DNS server through OpenCrowbar; this runs on the admin node.  Completing that step is supposed to register the service with Consul, which should wake the spinning dns-service to make progress, and that in turn should let the dns-client and other blocked roles continue.  As Rob just pointed out, you can check Consul through its UI.

Thanks,
Greg

On Wed, Feb 11, 2015 at 12:13 AM, Soheil Eizadi <s.ei...@gmail.com> wrote:

Soheil Eizadi

Feb 11, 2015, 5:08:18 PM
to openc...@googlegroups.com
Hi Greg/Rob,
The DNS service is not running. I checked Consul (http://[crowbar admin]:8500) and it confirms what I see in OpenCrowbar (http://[crowbar admin]:3000): the DNS service did not come up.

I have not modified any cookbooks; I am just trying to run the standard build w/ the --develop option.

I looked at the production log and I see where the service is flagged as not running, but I don't know where to look for why it is not running. I searched for errors in the log and posted the network-related errors and the DNS service error below.
-Soheil

[2015-02-08T21:51:51-08:00] ERROR: Running exception handlers
[2015-02-08T21:51:51-08:00] ERROR: Exception handlers complete
[2015-02-08T21:51:51-08:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2015-02-08T21:51:51-08:00] INFO: Sending resource update report (run-id: 0d5dd332-eafd-40e8-84e2-9769f6f0d56c)
[2015-02-08T21:51:51-08:00] ERROR: Cannot resolve conduit 1g0 with known interfaces ["eth0", "eth1"]
[2015-02-08T21:51:51-08:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)

^[[0;37m2015-02-08 22:33:01.235^[[0m [29784] [^[[0;37mDEBUG^[[0m] ^[[1m^[[36mSQL (0.8ms)^[[0m  ^[[1mUPDATE "node_roles" SET "runlog" = $1, "updated_at" = $2 WHERE "node_roles"."id" = 1^[[0m  [["runlog", "RuntimeError: dns-service not available\nBacktrace:\n/opt/opencrowbar/core/rails/app/models/service.rb:64:in `internal_do_transition'\n/opt/opencrowbar/core/rails/app/models/barclamp_dns/service.rb:19:in `do_transition'\n/opt/opencrowbar/core/rails/app/models/barclamp_crowbar/role_provided_jig.rb:23:in `run'\n/opt/opencrowbar/core/rails/app/models/jig.rb:155:in `block in run_job'\n/opt/opencrowbar/core/rails/app/models/jig.rb:148:in `loop'\n/opt/opencrowbar/core/rails/app/models/jig.rb:148:in `run_job'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/performable_method.rb:30:in `perform'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/backend/base.rb:94:in `block in invoke_job'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:61:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:61:in `block in initialize'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:66:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:66:in `execute'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:40:in `run_callbacks'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/backend/base.rb:91:in `invoke_job'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:204:in `block (2 levels) in run'\n/usr/lib64/ruby/2.1.0/timeout.rb:91:in `block in timeout'\n/usr/lib64/ruby/2.1.0/timeout.rb:101:in `call'\n/usr/lib64/ruby/2.1.0/timeout.rb:101:in `timeout'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:204:in `block in run'\n/usr/lib64/ruby/2.1.0/benchmark.rb:294:in 
`realtime'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:203:in `run'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:280:in `block in reserve_and_run_one_job'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:61:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:61:in `block in initialize'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:66:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:66:in `execute'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:40:in `run_callbacks'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:280:in `reserve_and_run_one_job'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:187:in `block in work_off'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:186:in `times'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:186:in `work_off'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:150:in `block (4 levels) in start'\n/usr/lib64/ruby/2.1.0/benchmark.rb:294:in `realtime'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:149:in `block (3 levels) in start'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:61:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:61:in `block in initialize'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:66:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:66:in `execute'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:40:in 
`run_callbacks'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:148:in `block (2 levels) in start'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:147:in `loop'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:147:in `block in start'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/plugins/clear_locks.rb:7:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/plugins/clear_locks.rb:7:in `block (2 levels) in <class:ClearLocks>'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:79:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:79:in `block (2 levels) in add'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:61:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:61:in `block in initialize'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:79:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:79:in `block in add'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:66:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:66:in `execute'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/lifecycle.rb:40:in `run_callbacks'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/worker.rb:146:in `start'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/command.rb:124:in `run'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/command.rb:112:in `block in run_process'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons/application.rb:255:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons/application.rb:255:in `block 
in start_proc'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons/daemonize.rb:82:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons/daemonize.rb:82:in `call_as_daemon'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons/application.rb:259:in `start_proc'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons/application.rb:296:in `start'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons/controller.rb:70:in `run'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons.rb:197:in `block in run_proc'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons/cmdline.rb:109:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons/cmdline.rb:109:in `catch_exceptions'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons.rb:196:in `run_proc'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/command.rb:110:in `run_process'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/command.rb:91:in `block in daemonize'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/command.rb:89:in `times'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/command.rb:89:in `daemonize'\nscript/delayed_job:5:in `<main>'"], ["updated_at", Mon, 09 Feb 2015 06:33:01 UTC +00:00]]
2015-02-08 22:33:01.236 [29784] [DEBUG] NodeRole Load (0.6ms)  SELECT "node_roles".* FROM "node_roles" WHERE "node_roles"."id" = $1 LIMIT 1  [["id", 1]]
2015-02-08 22:33:01.239 [29784] [DEBUG] SQL (1.5ms)  UPDATE "node_roles" SET "state" = $1, "updated_at" = $2 WHERE "node_roles"."id" = 1  [["state", -1], ["updated_at", Mon, 09 Feb 2015 06:33:01 UTC +00:00]]
2015-02-08 22:33:01.256 [29784] [DEBUG] SQL (2.6ms)  UPDATE "node_roles" SET "state" = 3 WHERE "node_roles"."id" IN (SELECT "node_roles"."id" FROM "node_roles" INNER JOIN "node_role_all_pcms" ON "node_roles"."id" = "node_role_all_pcms"."child_id" WHERE "node_role_all_pcms"."parent_id" = $1 AND (state NOT IN(4,2)) ORDER BY cohort ASC)  [["parent_id", 1]]
2015-02-08 22:33:01.260 [29784] [DEBUG]  (3.4ms)  COMMIT
2015-02-08 22:33:01.260 [29784] [DEBUG]  (0.1ms)  BEGIN
2015-02-08 22:33:01.262 [29784] [DEBUG] Role Load (0.5ms)  SELECT "roles".* FROM "roles" WHERE "roles"."id" = $1 LIMIT 1  [["id", 35]]
2015-02-08 22:33:01.263 [29784] [DEBUG]  (0.2ms)  COMMIT
2015-02-08 22:33:01.264 [29784] [DEBUG] Deployment Load (0.4ms)  SELECT "deployments".* FROM "deployments" WHERE "deployments"."id" = $1 LIMIT 1  [["id", 1]]
2015-02-08 22:33:01.266 [29784] [DEBUG] Node Load (0.5ms)  SELECT "nodes".* FROM "nodes" WHERE "nodes"."id" = $1 LIMIT 1  [["id", 1]]
2015-02-08 22:33:01.267 [29784] [DEBUG] NodeRole system: system-phantom.internal.local: dns-service: Calling on_error hook.
2015-02-08 22:33:01.267 [29784] [DEBUG] No override for BarclampDns::Service.on_error event: dns-service on system-phantom.internal.local
2015-02-08 22:33:01.267 [29784] [DEBUG] Run: Finished job 1 for system: system-phantom.internal.local: dns-service, exceptions raised.
2015-02-08 22:33:01.267 [29784] [ERROR] RuntimeError: dns-service not available

Greg Althaus

Feb 11, 2015, 5:16:54 PM
to openc...@googlegroups.com
This is the line that worries me:

[2015-02-08T21:51:51-08:00] ERROR: Cannot resolve conduit 1g0 with known interfaces ["eth0", "eth1"]

Can you describe the hardware you are running on?  It seems like the system has two interfaces, but neither is a 1G NIC.  That can currently cause issues with the services coming up, but it is fixable, e.g. by changing the 1g0 in the crowbar-config.sh file to ?1g0.
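
A hedged sketch of that edit: change the admin conduit from "1g0" (exactly 1 Gb) to "?1g0" (1 Gb or faster). It is demonstrated on a scratch file; the quoted line is a stand-in, not the real contents of crowbar-config.sh.

```shell
# Demonstrate the conduit rename on a scratch copy (stand-in content).
scratch=$(mktemp)
echo 'network admin conduit "1g0"' > "$scratch"
# The same in-place edit you would run against crowbar-config.sh:
sed -i 's/"1g0"/"?1g0"/g' "$scratch"
cat "$scratch"   # now reads: network admin conduit "?1g0"
```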

Thanks,
Greg

Soheil Eizadi

Feb 11, 2015, 5:41:36 PM
to openc...@googlegroups.com
Hi Greg,
It is an HP BladeServer; I am running OpenCrowbar on one of the blades. I am hoping that discovery will also detect ILO/power-management attributes about the BladeServer environment.

The details about the blade are shown below for the test setup. For the next test setup I will be migrating away from this HP BladeServer to a Dell environment using Dell iDRAC7 and Dell R420 and R720 (R430) servers. That system is being built out; meanwhile I was hoping to get a test setup going with the HP BladeServer. The BIOS is pretty old; I could upgrade it if you think that could be a cause.

-Soheil


Model:     BL460c G7
Name:      bl02
IP:        x.x.x.x
Mgmt Name: yyyyy
Mgmt IP:   x.x.x.x
Part No:   637391-B21
SN:        MXQ2520PXZ
BIOS:      I27 12/03/2012
CPU:       Intel(R) Xeon(R) CPU E5649 @ 2.53GHz (6 Cores)
Memory:    24GB (2x8,4,2x2)
HDD:       2x300GB



# ethtool eth0
Settings for eth0:
    Supported ports: [ Backplane ]
    Supported link modes:   10000baseKR/Full
    Supported pause frame use: Symmetric
    Supports auto-negotiation: Yes
    Advertised link modes:  10000baseKR/Full
    Advertised pause frame use: Symmetric
    Advertised auto-negotiation: Yes
    Speed: 1000Mb/s
    Duplex: Full
    Port: Other
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: g
    Wake-on: g
    Current message level: 0x00002000 (8192)
                   hw
    Link detected: yes

Greg Althaus

Feb 11, 2015, 5:53:47 PM
to openc...@googlegroups.com
The first problem is that it only has 10G NICs. :-)  You need to find the 1g0 strings in crowbar-config.sh and change them to ?1g0.  This will allow the system to use the 10G NICs as the admin networks.  It also requires rerunning production.sh, which means you should probably start over.  We can try to walk you through changing it in place, but I'm not sure that will work cleanly.  It should, but I suspect we may have bugs in that path.  To try it, you would go into the admin network page and change the conduit from 1g0 to ?1g0, then rerun all the node roles from the network-admin node role on.  There may not be that many, so it might just work.  In fact, try that first.

1. Click Networks, click admin, change 1g0 to ?1g0, and click the first Update.
2. Click the OpenCrowbar logo and click the network-admin node role in the lower-right layer-cake piece.  Click the Retry button.  I think that should "go".

You may hit another problem with IPMI configuration under iLO.  I saw some bizarre interactions with some HP hardware, but couldn't believe it was standard.  We may need to add a quirk to deal with it in the IPMI hardware code, and we'd love to work with you on that.  To check it out, if you are comfortable with it, can you send the output from "crowbar nodes list" to gr...@rackn.com?  I think that will tell me what hardware you have, and you may be far enough along to have that info.  I'm basically looking for the output of "ipmitool mc info".

Thanks,
Greg

Soheil Eizadi

Feb 11, 2015, 9:19:22 PM
to openc...@googlegroups.com
I tried changing it in the web GUI and hitting Retry. I got an error again, but noted that the log display on the screen did not change; it was showing the previous timestamp and log. In any case, I am reloading CentOS and doing a fresh install with the suggested change to crowbar-config.sh. I will let it run overnight and converge.
-Soheil

Soheil Eizadi

Feb 12, 2015, 2:01:45 AM
to openc...@googlegroups.com
OK, I got further this time: in the Consul web UI I see the Crowbar database, DNS, and NTP services registered. This time it is failing on the network-lldpd service. This seems to be a problem with the DNS service coming up and killing DNS lookups on the server.

I am not sure how this is supposed to work.

I looked at the BIND configuration files, and it looks like DNS forwarders can be set up there. It looks like the previous DNS server that was in resolv.conf was replaced with the local BIND server, and that DNS server's IP was not moved to a forwarder in the BIND configuration for recursive lookups.
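
For reference, forwarding in BIND is configured with a forwarders clause in the options block of named.conf; a minimal hedged sketch (the IP shown is the upstream resolver that appears later in this thread; substitute your own):

```
options {
    # hypothetical: forward unresolvable queries to the old resolv.conf server
    forwarders { 10.14.20.50; };
    forward first;
};
```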

-Soheil

# ping www.google.com
ping: unknown host www.google.com
RuntimeError: Chef jig run for system: bl02.atg.inca.infoblox.com: network-lldpd failed
Out: [2015-02-11T10:48:06-08:00] INFO: Forking chef instance to converge...
[2015-02-11T10:48:06-08:00] WARN: 
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
.....
[2015-02-11T10:48:17-08:00] INFO: Processing directory[/etc/chef/ohai_plugins] action create (ohai::default line 23)
[2015-02-11T10:48:17-08:00] INFO: Processing remote_directory[/etc/chef/ohai_plugins] action create (ohai::default line 32)
[2015-02-11T10:48:17-08:00] INFO: Processing cookbook_file[/etc/chef/ohai_plugins/crowbar.rb] action create (dynamically defined)
[2015-02-11T10:48:17-08:00] INFO: Processing cookbook_file[/etc/chef/ohai_plugins/README] action create (dynamically defined)
[2015-02-11T10:48:17-08:00] WARN: [DEPRECATION] Plugin at /etc/chef/ohai_plugins/crowbar.rb is a version 6 plugin. Version 6 plugins will not be supported in future releases of Ohai. Please upgrade your plugin to version 7 plugin syntax. For more information visit here: docs.opscode.com/ohai_custom.html
[2015-02-11T10:48:28-08:00] INFO: Processing directory[/etc/chef/ohai_plugins] action nothing (ohai::default line 23)
[2015-02-11T10:48:28-08:00] INFO: Processing remote_directory[/etc/chef/ohai_plugins] action nothing (ohai::default line 32)
[2015-02-11T10:48:28-08:00] INFO: Processing log[running on OS:[centos] on ProLiant BL460c G7 hardware ] action write (utils::default line 28)
[2015-02-11T10:48:28-08:00] INFO: running on OS:[centos] on ProLiant BL460c G7 hardware 
[2015-02-11T10:48:28-08:00] INFO: Processing package[lldpd] action install (network::lldpd line 20)
[2015-02-11T11:03:45-08:00] ERROR: /opt/chef/embedded/lib/ruby/gems/1.9.1/gems/chef-11.16.4/lib/chef/provider/package/yum-dump.py exceeded timeout 900
================================================================================
Error executing action `install` on resource 'package[lldpd]'
================================================================================

Greg Althaus

Feb 12, 2015, 8:42:19 AM
to openc...@googlegroups.com
I'm checking it.  Let me get you the exact stuff to add to the crowbar-config.sh file.  I'm not sure why we don't see this problem in other places; I have a guess, but I'm not sure.  Does your admin node have only one interface configured?  Does the second interface get configured through DHCP?

Thanks,
Greg

Greg Althaus

Feb 12, 2015, 9:19:03 AM
to openc...@googlegroups.com
I'm trying to wrap all of this into a GUI piece at some point.  Just not there yet. Sorry.

In crowbar-config.sh, you will need to find this line:
crowbar roles bind dns-database to "$FQDN"

Right after it add the following and substitute appropriately.
----
ROLE_ID=`crowbar roles show dns-server | grep '"id"'`   # grab the line containing the role id
ROLE_ID=${ROLE_ID##*:}                                  # keep everything after the ':'
ROLE_ID=${ROLE_ID%,}                                    # drop the trailing comma
NODE_ROLE_ID=`crowbar noderoles list | grep -B2 -A2 "\"role_id\":$ROLE_ID" | grep -B3 -A2 '"node_id": 2' | grep \"id\"`   # find the matching noderole on node 2
NODE_ROLE_ID=${NODE_ROLE_ID##*:}
NODE_ROLE_ID=${NODE_ROLE_ID%,}
crowbar noderoles set $NODE_ROLE_ID attrib dns-forwarders to "{ \"value\": [ \"<IP OF DNS TO FORWARD TO>\" ] }"
---

Replace <IP OF DNS TO FORWARD TO> with the IP of your DNS server.  You could try running the little script above as root on the admin node, then retrying the dns-server role, then retrying the lldpd role.  Otherwise, you get to start over. :-(
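
A standalone sketch of the ID-extraction idiom the script uses, run against a mock line of `crowbar roles show` output (the value 42 is made up for the demo):

```shell
# Mock of the line that `grep '"id"'` pulls out of the JSON output.
LINE='  "id": 42,'
ROLE_ID=${LINE##*:}    # drop everything through the last ':'  -> " 42,"
ROLE_ID=${ROLE_ID%,}   # drop the trailing comma               -> " 42"
ROLE_ID=${ROLE_ID# }   # drop the leading space                -> "42"
echo "$ROLE_ID"        # prints: 42
```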

My current guess is that most of our admin nodes are getting their DNS resolver injected (and reset) by DHCP or Docker.  If you don't have a DHCP-based internet connection, or you have a routed admin network, then you will probably hit this problem.

Thanks,
Greg

Soheil Eizadi

Feb 12, 2015, 1:08:31 PM
to openc...@googlegroups.com
Thanks Greg,
If I were to start over, where would I set this up? Also, I don't want OpenCrowbar to use the domain associated with the host that was given to it as the OpenCrowbar server. If the domain was acme.com, I want it to create a new sub-zone off the one given with the server, e.g. crowbar.acme.com. The current behavior breaks existing zones and how they are supposed to work. I am trying to follow the scripts in ./production.sh and crowbar-config.sh that set this up.
-Soheil

Greg Althaus

Feb 12, 2015, 1:16:32 PM
to openc...@googlegroups.com
We don't have the forwarders in a place to easily set on start over yet.

So, the code snippet I sent earlier would have to be added to the crowbar-config.sh script to make forwarders work.  I'm working on a patch, but it is a little while out.

Hmm, the domain name is trickier.  I see where you are headed, and I think it is a good idea.  It will require some code changes; let me check them out.  We currently assume that the admin node is named into the same domain as the managed nodes.  It is a simplifying assumption, and it should be changed to disassociate the two.  Let me look at both.

Thanks,
Greg

Greg Althaus

Feb 12, 2015, 1:26:12 PM
to openc...@googlegroups.com
Okay, so I checked it.  It is easy to change, but the side effects may prevent it from working.

You can change the domain name clients use by changing the dns-domain attribute-setting command in the crowbar-config.sh script.  That currently just uses the domain name of the admin node as passed in, and it could be changed to whatever you want, so I think that works for new nodes.  The problem is that currently the admin node gets an admin-network address, and that address will be mapped in DNS.  The assumption for Chef is that the IP and name of the admin node match.  It might create the parent domain from the admin node name and conflict again (I'm not sure), or it might create a name/IP mismatch, in which case Chef certificates will start failing.  I'll have to try this case.  I'm also working on a separate piece that allows the separation of the DHCP server and the provisioner; that work is starting to get into this space as well, so I'm already thinking about it.

Thanks,
Greg

Soheil Eizadi

Feb 12, 2015, 3:06:37 PM
to openc...@googlegroups.com
I tried the patch and hit Retry; the dns-server task hung in the "To Do" state for a long time. I then decided just to reboot the server rather than debug why it was hanging (I assume a dependency). After I rebooted, the server is hanging :( I thought it was due to /etc/resolv.conf, as it points to the local BIND server and nothing external would get resolved; I changed that and rebooted the server again, but still the same behavior. Maybe I will send you a partial log to see if you can get ideas of where to look.
-Soheil

Greg Althaus

Feb 12, 2015, 3:42:52 PM
to openc...@googlegroups.com
I'll have to try it myself and see.   Let me try and recap the environment to make sure I try and get something close.

Admin node is a physical blade.
Centos-6.6 installed using the --develop install script.
The node had 2 10G ports.
This part I'm not clear on.  Either:
interface 1 is the admin network, and
interface 2 is the internet network (this one is static).
Or:
interface 1 is the admin network with a real router out to the internet.
 - this has other potential issues.  I'm going to assume the first case.

You have an external DNS server. with domain xyz.com
You want your cluster to have cluster.xyz.com.
Your admin node has a public IP of Z.Z.Z.Z and name admin.xyz.com on the second interface.
Your admin node will/has an admin IP of 192.168.124.10 on the first interface.

You run production.sh with my changes to set the domainname to cluster.xyz.com and the conduit change to enable 10G networking.  You pass admin.xyz.com to the production.sh script.

I think that is what you've tried to get to this point.  It points out that what we want to enable is the hostname admin.xyz.com for IP Z.Z.Z.Z, while allowing the node to also be known as admin.cluster.xyz.com for IP 192.168.124.10.  This may be much easier to do.

Thanks,
Greg


Soheil Eizadi

Feb 12, 2015, 5:07:18 PM
to openc...@googlegroups.com
Hi Greg,
The server has two NICs, but I only have one port active at this time; the BMC and admin network are on the same net, in different ranges. I would use the term "public" loosely: it is visible in the corporate network, but our whole corporate network is in private/NAT space with respect to the public internet. The single admin/BMC network means that there is always network connectivity; for production we don't plan to run it this way.

I don't have DHCP Server running on this network, static IP for OpenCrowbar Server, external DNS to resolve Host FQDN admin.xyz.com.
There is an external Cisco Router for the BMC/Admin to corporate networks (from there routes to public internet).

I made the change from 1g0 to ?1g0, which got me to the DNS forwarder issue, which I patched with the small script you sent.

I have not made the change to the domain name; I plan to reimage the server and do another run with that change, and I will let you know how far I get. In that run I hope to also patch the forwarder while running the production script; I need to figure out how to put it in as an interim measure until we have a permanent solution.
-Soheil

Greg Althaus

Feb 12, 2015, 5:33:59 PM
to openc...@googlegroups.com
Okay, pretty much my worst case.  :-) 

It may get you further along.  Sorry this is messing with you so much.  OpenCrowbar comes from a pretty opinionated reference architecture.  We are trying to loosen those requirements, and this is one of the areas where we need some help.

Thanks,
Greg

Soheil

Feb 12, 2015, 5:57:44 PM
to openc...@googlegroups.com
Sorry, would putting the BMC on its own net, but still with one NIC, help?

-Soheil

Greg Althaus

Feb 12, 2015, 6:08:15 PM
to openc...@googlegroups.com
No, that shouldn't matter.  We won't necessarily be able to manage the admin node through IPMI (it depends upon how the port is wired), but we don't really do that at all.  The rest of the nodes should be fine.

Thanks,
greg

Soheil Eizadi

Feb 13, 2015, 8:58:34 AM
to openc...@googlegroups.com
I tried to put the patch for the secondary DNS in the crowbar-config script, but it looks like the dns-forwarders attribute is not created at that point yet, so the script failed. Without that fix it breaks again at the lldpd package installation. I need to look at it some more today; another thought was to take the BIND server completely out of the deployment and make it external.
-Soheil

ROLE_ID=${ROLE_ID##*:}
ROLE_ID=${ROLE_ID%,}
NODE_ROLE_ID=`crowbar noderoles list | grep -B2 -A2 "\"role_id\":$ROLE_ID" | grep -B3 -A2 '"node_id": 2' | grep \"id\"`
NODE_ROLE_ID=${NODE_ROLE_ID##*:}
NODE_ROLE_ID=${NODE_ROLE_ID%,}
crowbar noderoles set $NODE_ROLE_ID attrib dns-forwarders to "{ \"value\": [ \"10.14.20.50\" ] }"

Greg Althaus

Feb 13, 2015, 9:24:14 AM
to openc...@googlegroups.com
Hmmm -- sigh.  I'll try it again and see what is happening. 

Can you show me the lldpd error?

Thanks,
Greg

Soheil Eizadi

Feb 13, 2015, 11:42:21 AM
to openc...@googlegroups.com
Here is the lldpd error, same as earlier. -Soheil

---------- Forwarded message ----------
From: Soheil Eizadi <s.ei...@gmail.com>
Date: Wed, Feb 11, 2015 at 11:01 PM
Subject: Re: Two Errors After installation
To: openc...@googlegroups.com


....

# ping www.google.com
ping: unknown host www.google.com
RuntimeError: Chef jig run for system: bl02.atg.inca.infoblox.com: network-lldpd failed
Out: [2015-02-11T10:48:06-08:00] INFO: Forking chef instance to converge...
[2015-02-11T10:48:06-08:00] WARN: 
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
.....
[2015-02-11T10:48:17-08:00] INFO: Processing directory[/etc/chef/ohai_plugins] action create (ohai::default line 23)
[2015-02-11T10:48:17-08:00] INFO: Processing remote_directory[/etc/chef/ohai_plugins] action create (ohai::default line 32)
[2015-02-11T10:48:17-08:00] INFO: Processing cookbook_file[/etc/chef/ohai_plugins/crowbar.rb] action create (dynamically defined)
[2015-02-11T10:48:17-08:00] INFO: Processing cookbook_file[/etc/chef/ohai_plugins/README] action create (dynamically defined)
[2015-02-11T10:48:17-08:00] WARN: [DEPRECATION] Plugin at /etc/chef/ohai_plugins/crowbar.rb is a version 6 plugin. Version 6 plugins will not be supported in future releases of Ohai. Please upgrade your plugin to version 7 plugin syntax. For more information visit here: docs.opscode.com/ohai_custom.html
[2015-02-11T10:48:28-08:00] INFO: Processing directory[/etc/chef/ohai_plugins] action nothing (ohai::default line 23)
[2015-02-11T10:48:28-08:00] INFO: Processing remote_directory[/etc/chef/ohai_plugins] action nothing (ohai::default line 32)
[2015-02-11T10:48:28-08:00] INFO: Processing log[running on OS:[centos] on ProLiant BL460c G7 hardware ] action write (utils::default line 28)
[2015-02-11T10:48:28-08:00] INFO: running on OS:[centos] on ProLiant BL460c G7 hardware 
[2015-02-11T10:48:28-08:00] INFO: Processing package[lldpd] action install (network::lldpd line 20)
[2015-02-11T11:03:45-08:00] ERROR: /opt/chef/embedded/lib/ruby/gems/1.9.1/gems/chef-11.16.4/lib/chef/provider/package/yum-dump.py exceeded timeout 900
================================================================================
Error executing action `install` on resource 'package[lldpd]'
================================================================================

Soheil Eizadi

unread,
Feb 13, 2015, 2:27:32 PM2/13/15
to openc...@googlegroups.com
I tried to manually fix the forwarders again and rerun the failed lldpd and dns-server tasks. The dns-server task died while installing bind9! The problem looks different now. I tracked it down to the following line in yum.conf;
I don't know how it got put in there, and it is not an IP address I specified anywhere.

From yum.conf
proxy=http://10.49.5.20:8123

# env | fgrep -i proxy
http_proxy=http://10.49.5.20:8123
https_proxy=http://10.49.5.20:8123

Where does this come from? I removed them and everything is working again.
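For reference, that cleanup can be scripted. This is a minimal sketch that operates on a scratch copy of the config (on a real admin node the file would be /etc/yum.conf, and the proxy value is the one from the logs above):

```shell
# Sketch: remove a stray proxy from a yum.conf-style file and the environment.
# Works on a scratch copy, so it is safe to run anywhere.
YUM_CONF=$(mktemp)
cat > "$YUM_CONF" <<'EOF'
[main]
keepcache=0
proxy=http://10.49.5.20:8123
EOF

sed -i '/^proxy=/d' "$YUM_CONF"   # drop the proxy line yum was honoring
unset http_proxy https_proxy      # clear what wget/curl/gem would honor

! grep -q '^proxy=' "$YUM_CONF" && echo "proxy removed"
```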

-Soheil


# yum install bind9
Freeing read locks for locker 0x20a: 30627/140527800747776
Freeing read locks for locker 0x20c: 30627/140527800747776
Loaded plugins: downloadonly, fastestmirror, priorities, security
Setting up Install Process
Loading mirror speeds from cached hostfile
Could not get metalink https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=x86_64 error was
14: PYCURL ERROR 56 - "Proxy CONNECT aborted"
 * base: mirror.supremebytes.com
 * centosplus: centos.sonn.com
 * contrib: mirrors.xmission.com
 * epel: linux.mirrors.es.net
 * extras: linux.mirrors.es.net
 * updates: mirrors.usc.edu
http://mirror.supremebytes.com/centos/6.6/os/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://mirror.supremebytes.com/centos/6.6/os/x86_64/repodata/repomd.xml: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
Trying other mirror.
http://dallas.tx.mirror.xygenhosting.com/CentOS/6.6/os/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://dallas.tx.mirror.xygenhosting.com/CentOS/6.6/os/x86_64/repodata/repomd.xml: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
Trying other mirror.
http://repos.lax.quadranet.com/centos/6.6/os/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://repos.lax.quadranet.com/centos/6.6/os/x86_64/repodata/repomd.xml: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')

Greg Althaus

unread,
Feb 13, 2015, 2:54:34 PM2/13/15
to openc...@googlegroups.com
Ummm - what is that address in your environment?  We try to set up a proxy.  Is that the admin node's default address?

That is interesting.

Thanks,
Greg

Soheil Eizadi

unread,
Feb 13, 2015, 3:14:47 PM2/13/15
to openc...@googlegroups.com
Now I am getting stuck on the provisioner. If I go into a shell and run wget on that file, I can download it without any problem, so I am not sure why the recipe is failing. After I downloaded it manually, I was able to get past this point. You would blame this on the internet, but after the proxy problems, which I initially blamed on the internet, I don't know!
-Soheil

RuntimeError: Chef jig run for system: bl02.atg.inca.infoblox.com: provisioner-base-images failed
Out: [2015-02-13T03:54:06-08:00] INFO: Forking chef instance to converge...
[2015-02-13T03:54:06-08:00] WARN: 
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
SSL validation of HTTPS requests is disabled. HTTPS connections are still
encrypted, but chef is not able to detect forged replies or man in the middle
attacks.

To fix this issue add an entry like this to your configuration file:

```
  # Verify all HTTPS connections (recommended)
  ssl_verify_mode :verify_peer

  # OR, Verify only connections to chef-server
  verify_api_cert true
```

To check your SSL configuration, or troubleshoot errors, you can use the
`knife ssl check` command like so:

```
  knife ssl check -c /etc/chef/client.rb
```

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

[2015-02-13T03:54:06-08:00] INFO: *** Chef 11.16.4 ***
[2015-02-13T03:54:06-08:00] INFO: Chef-client pid: 9884
[2015-02-13T03:54:07-08:00] INFO: Run List is [role[crowbar-bl02_atg_inca_infoblox_com]]
[2015-02-13T03:54:07-08:00] INFO: Run List expands to [barclamp, ohai, utils, provisioner::setup_base_images]
[2015-02-13T03:54:07-08:00] INFO: Starting Chef Run for bl02.atg.inca.infoblox.com
[2015-02-13T03:54:07-08:00] INFO: Running start handlers
[2015-02-13T03:54:07-08:00] INFO: Start handlers complete.
[2015-02-13T03:54:07-08:00] INFO: Loading cookbooks [barc...@0.9.5, crowba...@0.1.0, oh...@0.9.0, provi...@1.0.0, ut...@0.9.5]
[2015-02-13T03:54:07-08:00] INFO: ohai plugins will be at: /etc/chef/ohai_plugins
[2015-02-13T03:54:07-08:00] INFO: Processing directory[/etc/chef/ohai_plugins] action create (ohai::default line 23)
[2015-02-13T03:54:07-08:00] INFO: Processing remote_directory[/etc/chef/ohai_plugins] action create (ohai::default line 32)
[2015-02-13T03:54:07-08:00] INFO: Processing cookbook_file[/etc/chef/ohai_plugins/crowbar.rb] action create (dynamically defined)
[2015-02-13T03:54:07-08:00] INFO: Processing cookbook_file[/etc/chef/ohai_plugins/README] action create (dynamically defined)
[2015-02-13T03:54:07-08:00] WARN: [DEPRECATION] Plugin at /etc/chef/ohai_plugins/crowbar.rb is a version 6 plugin. Version 6 plugins will not be supported in future releases of Ohai. Please upgrade your plugin to version 7 plugin syntax. For more information visit here: docs.opscode.com/ohai_custom.html
[2015-02-13T03:54:08-08:00] INFO: Provisioner: raw server data {"access_keys"=>{"admin-1"=>"ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAt44xWVojn5PY87hivBkuZuF+7x6W/8tc0FB2qCRKXfk9fEOoLI/CJIFrw4KUrx6nxKcliqIXOu0ooePLLt3jtCWLWklDpQH/BMw7CWSMTHd7mFaE6iZlq0+gklIgBvZVsEewW/FgrJ45LGIg20+HpFCH4cQtMXst/QeJPGniSHiX0a7Q8N/JAZnEdcAAWz5NVo19fCKP0LNLemZ2mFyWkZXrtrcSiTyAlnwzafOWSQZd00Kn0EGZ4TJoZTS51QA5Pv6lY+ihRQjiuYo2DB/cYFXjVRRS8jx0lNzgBlDuRWzwjqUSXRn7X4FzLtKFRxWwbGwZxrIP/bHvauw9vQFtUQ== cro...@bl02.atg.inca.infoblox.com", "admin-2"=>"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCdcuQhH9LCvp/XSzA1B1IYPMDXGMGqVay+KUDOryEeDPO2GieOjWe/VO+rdNIm8WGGC7TVLjmM3+ZS8Z5SJ9t7NEV8eiSQrlkl5TVmmS8zc4SqAIUMG6ev9JNc2FczX2jozh7z43FDQ2L0vdQSyWABXTr7e3PjV6CwCyx3Mha/uYKDfnhuGZfTKW70fLZJxVSDkORhssiNvohj+h/XZOXnqo7HqkkhFfWJdnej3eDuG05z2yojlsWYTHp02RY0r7c7DDcjd/0oWOvaDgRNDyiS5iJ7yoQiqQ57Ma0Fz4CoY834lg1S3wMZVn22PlaQskZ7gJw5FGlBcUBX7uu36sXP sei...@infoblox.com"}, "default_os"=>"ubuntu-12.04", "default_password_hash"=>"$1$BDC3UwFr$/VqOWN1Wi6oM0jiMOjaPb.", "default_user"=>"crowbar", "name"=>"bl02.atg.inca.infoblox.com", "online"=>true, "proxy"=>"10.49.5.20:8123", "root"=>"/tftpboot", "supported_oses"=>{"centos-6.5"=>{"append"=>"method=%os_install_site%", "initrd"=>"images/pxeboot/initrd.img", "iso_file"=>"CentOS-6.5-x86_64-bin-DVD1.iso", "kernel"=>"images/pxeboot/vmlinuz", "online_mirror"=>"http://mirrors.kernel.org/centos/6/"}, "centos-6.6"=>{"append"=>"method=%os_install_site%", "initrd"=>"images/pxeboot/initrd.img", "iso_file"=>"CentOS-6.6-x86_64-bin-DVD1.iso", "kernel"=>"images/pxeboot/vmlinuz", "online_mirror"=>"http://mirrors.kernel.org/centos/6/"}, "centos-7.0.1406"=>{"append"=>"method=%os_install_site% inst.geoloc=0", "initrd"=>"images/pxeboot/initrd.img", "iso_file"=>"CentOS-7.0-1406-x86_64-DVD.iso", "kernel"=>"images/pxeboot/vmlinuz", "online_mirror"=>"http://mirrors.kernel.org/centos/7/"}, "fedora-20"=>{"append"=>"method=%os_install_site% inst.geoloc=0", "initrd"=>"images/pxeboot/initrd.img", 
"iso_file"=>"Fedora-20-x86_64-DVD.iso", "kernel"=>"images/pxeboot/vmlinuz", "online_mirror"=>"http://mirrors.kernel.org/fedora/releases/20/Fedora/x86_64/os/"}, "redhat-6.5"=>{"append"=>"method=%os_install_site%", "initrd"=>"images/pxeboot/initrd.img", "iso_file"=>"RHEL6.5-20131111.0-Server-x86_64-DVD1.iso", "kernel"=>"images/pxeboot/vmlinuz"}, "redhat-7.0"=>{"append"=>"method=%os_install_site% inst.geoloc=0", "initrd"=>"images/pxeboot/initrd.img", "iso_file"=>"rhel-server-7.0-x86_64-dvd.iso", "kernel"=>"images/pxeboot/vmlinuz"}, "suse-11.2"=>{"append"=>"install=%os_install_site%", "initrd"=>"boot/x86_64/loader/initrd", "kernel"=>"boot/x86_64/loader/linux"}, "suse-12.2"=>{"append"=>"install=%os_install_site%", "initrd"=>"boot/x86_64/loader/initrd", "kernel"=>"boot/x86_64/loader/linux"}, "ubuntu-12.04"=>{"append"=>"debian-installer/locale=en_US.utf8 console-setup/layoutcode=us keyboard-configuration/layoutcode=us netcfg/dhcp_timeout=120 netcfg/choose_interface=auto root=/dev/ram rw quiet --", "codename"=>"precise", "initrd"=>"install/netboot/ubuntu-installer/amd64/initrd.gz", "iso_file"=>"ubuntu-12.04.5-server-amd64.iso", "kernel"=>"install/netboot/ubuntu-installer/amd64/linux", "online_mirror"=>"http://us.archive.ubuntu.com/ubuntu/"}, "ubuntu-14.04"=>{"append"=>"debian-installer/locale=en_US.utf8 console-setup/layoutcode=us keyboard-configuration/layoutcode=us netcfg/dhcp_timeout=120 netcfg/choose_interface=auto root=/dev/ram rw quiet --", "codename"=>"trusty", "initrd"=>"install/netboot/ubuntu-installer/amd64/initrd.gz", "iso_file"=>"ubuntu-14.04.1-server-amd64.iso", "kernel"=>"install/netboot/ubuntu-installer/amd64/linux", "online_mirror"=>"http://us.archive.ubuntu.com/ubuntu/"}}, "upstream_proxy"=>nil, "use_local_security"=>true, "use_serial_console"=>false, "v4addr"=>"10.49.5.20", "v6addr"=>"fca0:5212:8b1b:1:d926:3fce:3944:e7a2", "web_port"=>8091, "webserver"=>"http://10.49.5.20:8091"}
[2015-02-13T03:54:08-08:00] INFO: Processing directory[/etc/chef/ohai_plugins] action nothing (ohai::default line 23)
[2015-02-13T03:54:08-08:00] INFO: Processing remote_directory[/etc/chef/ohai_plugins] action nothing (ohai::default line 32)
[2015-02-13T03:54:08-08:00] INFO: Processing log[running on OS:[centos] on ProLiant BL460c G7 hardware ] action write (utils::default line 28)
[2015-02-13T03:54:08-08:00] INFO: running on OS:[centos] on ProLiant BL460c G7 hardware 
[2015-02-13T03:54:08-08:00] INFO: Processing bash[Set up selinux contexts for /tftpboot] action run (provisioner::setup_base_images line 63)
[2015-02-13T03:54:21-08:00] INFO: bash[Set up selinux contexts for /tftpboot] ran successfully
[2015-02-13T03:54:21-08:00] INFO: Processing directory[/tftpboot/nodes] action create (provisioner::setup_base_images line 81)
[2015-02-13T03:54:21-08:00] INFO: Processing cookbook_file[/tftpboot/nodes/start-up.sh] action create (provisioner::setup_base_images line 86)
[2015-02-13T03:54:21-08:00] INFO: Processing template[/tftpboot/boot/grub/grub.cfg] action create (provisioner::setup_base_images line 91)
[2015-02-13T03:54:21-08:00] INFO: Processing bash[Extract CentOS-6.6-x86_64-bin-DVD1.iso] action run (provisioner::setup_base_images line 122)
[2015-02-13T03:54:21-08:00] INFO: Processing bash[Rewrite package repo metadata for CentOS-6.6-x86_64-bin-DVD1.iso] action run (provisioner::setup_base_images line 138)
[2015-02-13T03:54:21-08:00] INFO: Processing directory[/tftpboot/centos-6.6/crowbar-extra/raw_pkgs] action create (provisioner::setup_base_images line 176)
[2015-02-13T03:54:21-08:00] INFO: Processing bash[Delete /tftpboot/centos-6.6/crowbar-extra/raw_pkgs/gen_meta] action nothing (provisioner::setup_base_images line 181)
[2015-02-13T03:54:21-08:00] INFO: Processing bash[Update package metadata in /tftpboot/centos-6.6/crowbar-extra/raw_pkgs] action nothing (provisioner::setup_base_images line 186)
[2015-02-13T03:54:21-08:00] INFO: Processing file[/tftpboot/centos-6.6/crowbar-extra/raw_pkgs/gen_meta] action nothing (provisioner::setup_base_images line 197)
[2015-02-13T03:54:21-08:00] INFO: Processing bash[/tftpboot/centos-6.6/crowbar-extra/raw_pkgs: Fetch http://opscode-omnibus-packages.s3.amazonaws.com/el/6/x86_64/chef-11.16.4-1.el6.x86_64.rpm] action run (provisioner::setup_base_images line 204)
[2015-02-13T03:54:21-08:00] INFO: Processing ruby_block[Index the current local package repositories for centos-6.6] action run (provisioner::setup_base_images line 215)
[2015-02-13T03:54:21-08:00] INFO: ruby_block[Index the current local package repositories for centos-6.6] called
[2015-02-13T03:54:21-08:00] INFO: Processing ruby_block[Set up local base OS install repos for centos-6.6] action run (provisioner::setup_base_images line 274)
[2015-02-13T03:54:21-08:00] INFO: ruby_block[Set up local base OS install repos for centos-6.6] called
[2015-02-13T03:54:21-08:00] INFO: Processing bash[Extract CentOS-7.0-1406-x86_64-DVD.iso] action run (provisioner::setup_base_images line 122)
[2015-02-13T03:54:21-08:00] INFO: Processing bash[Rewrite package repo metadata for CentOS-7.0-1406-x86_64-DVD.iso] action run (provisioner::setup_base_images line 138)
[2015-02-13T03:54:21-08:00] INFO: Processing directory[/tftpboot/centos-7.0.1406/crowbar-extra/raw_pkgs] action create (provisioner::setup_base_images line 176)
[2015-02-13T03:54:21-08:00] INFO: Processing bash[Delete /tftpboot/centos-7.0.1406/crowbar-extra/raw_pkgs/gen_meta] action nothing (provisioner::setup_base_images line 181)
[2015-02-13T03:54:21-08:00] INFO: Processing bash[Update package metadata in /tftpboot/centos-7.0.1406/crowbar-extra/raw_pkgs] action nothing (provisioner::setup_base_images line 186)
[2015-02-13T03:54:21-08:00] INFO: Processing file[/tftpboot/centos-7.0.1406/crowbar-extra/raw_pkgs/gen_meta] action nothing (provisioner::setup_base_images line 197)
[2015-02-13T03:54:21-08:00] INFO: Processing bash[/tftpboot/centos-7.0.1406/crowbar-extra/raw_pkgs: Fetch http://opscode-omnibus-packages.s3.amazonaws.com/el/6/x86_64/chef-11.16.4-1.el6.x86_64.rpm] action run (provisioner::setup_base_images line 204)
================================================================================
Error executing action `run` on resource 'bash[/tftpboot/centos-7.0.1406/crowbar-extra/raw_pkgs: Fetch http://opscode-omnibus-packages.s3.amazonaws.com/el/6/x86_64/chef-11.16.4-1.el6.x86_64.rpm]'
================================================================================

Mixlib::ShellOut::ShellCommandFailed

# wget http://opscode-omnibus-packages.s3.amazonaws.com/el/6/x86_64/chef-11.16.4-1.el6.x86_64.rpm
--2015-02-13 04:02:19--  http://opscode-omnibus-packages.s3.amazonaws.com/el/6/x86_64/chef-11.16.4-1.el6.x86_64.rpm
Resolving opscode-omnibus-packages.s3.amazonaws.com... 54.231.0.105
Connecting to opscode-omnibus-packages.s3.amazonaws.com|54.231.0.105|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 31866545 (30M) [application/x-redhat-package-manager]
Saving to: “chef-11.16.4-1.el6.x86_64.rpm”

100%[=================================================================================================================================>] 31,866,545  2.35M/s   in 19s    

2015-02-13 04:02:40 (1.58 MB/s) - “chef-11.16.4-1.el6.x86_64.rpm” saved [31866545/31866545]

Soheil Eizadi

unread,
Feb 13, 2015, 3:22:23 PM2/13/15
to openc...@googlegroups.com
The admin network range is .20 to .21, but the server I brought up is .21. I don't know if you have HA support yet, but I left .20 for another admin server.
-Soheil

Soheil Eizadi

unread,
Feb 13, 2015, 3:28:22 PM2/13/15
to openc...@googlegroups.com
I looked at the OpenCrowbar GUI, and the admin address for the admin node is .20; if I do ifconfig it shows up as .21, and if I do a dig on the FQDN I get .21, so I am not sure where this came from! How do I change it to the right value?
-Soheil

Soheil Eizadi

unread,
Feb 13, 2015, 3:41:50 PM2/13/15
to openc...@googlegroups.com
OK, more on the trail of this: it looks like my ifcfg-eth0 was modified to .20, and there is also a .1 address from the BMC, which I found unexpected. Fortunately, it looks like the network service was not restarted (which is odd; it seems like the recipe would do that after changing ifcfg-eth0). It is a good thing it was not restarted, otherwise nothing would work. Looks like I need to revisit my single-network configuration; there also seems to be an implicit assumption that the first address in the range is allocated to OpenCrowbar.
-Soheil

# Managed by Crowbar.
# Do not edit.
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.49.5.1
PREFIX=22
IPADDR2=10.49.5.20
PREFIX2=22
IPV6ADDR=fca0:5212:8b1b:1:d926:3fce:3944:e7a2/64

GATEWAY=10.49.4.1
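Listing every IPADDRn key makes the double allocation easy to spot; a minimal sketch, run here against a scratch copy of the file above:

```shell
# Sketch: list all IPv4 addresses defined in an ifcfg-style file.
IFCFG=$(mktemp)
cat > "$IFCFG" <<'EOF'
DEVICE=eth0
IPADDR=10.49.5.1
PREFIX=22
IPADDR2=10.49.5.20
PREFIX2=22
EOF

# Each IPADDR/IPADDR2/... key is one address the interface will carry.
grep -E '^IPADDR[0-9]*=' "$IFCFG" | cut -d= -f2   # prints 10.49.5.1 and 10.49.5.20
```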


Greg Althaus

unread,
Feb 13, 2015, 4:03:52 PM2/13/15
to openc...@googlegroups.com
Actually, I think this is getting to the core of the problems you've been seeing.  We should review this and see what is going on.  What networking changes did you make in crowbar-config?

Thanks,
Greg

Soheil Eizadi

unread,
Feb 13, 2015, 11:04:43 PM2/13/15
to openc...@googlegroups.com
Looks like there are two problems with my networking. The first was that OpenCrowbar allocates addresses for itself at the top of the range, so if the admin range was 10.x.x.20-10.x.x.21 it picked 10.x.x.20, which caused a conflict because I had programmed the server for 10.x.x.21. The other problem, which I ran into on another run after fixing the first, was that I was trying to share admin and bmc on the same network with different ranges. This won't work, as multiple IP addresses for the same network get created in ifcfg-eth0 and cause problems. So in the end I reconfigured my lab to create separate networks for bmc and admin, sharing the same eth0 physical interface.

That is what I was getting ready to test, but it looks like something else is broken now before I get to that point; the more detailed log about the problem is below. I can do a gem install of net-http-digest_auth from a bash prompt, so I am not sure what changed to cause this problem.
-Soheil

ArgumentError: Gem sources must be absolute. You provided 'build.net-http-digest_auth/'.
An error occurred while installing net-http-digest_auth (1.4), and Bundler
cannot continue.
Make sure that `gem install net-http-digest_auth -v '1.4'` succeeds before
bundling.


https://gist.github.com/seizadi/79e7900c8181b27ed660

Soheil Eizadi

unread,
Feb 14, 2015, 4:15:12 PM2/14/15
to openc...@googlegroups.com
The problem with the gem install is due to bundler; see:
https://github.com/bundler/bundler/issues/3398

To get around it, downgrade to v1.8.0. I did this and can now make progress with my install:

# gem uninstall bundler
Remove executables:
    bundle, bundler

in addition to the gem? [Yn]  y
Removing bundle
Removing bundler
Successfully uninstalled bundler-1.8.1

# gem install bundler -v '1.8.0'
Fetching: bundler-1.8.0.gem (100%)
Successfully installed bundler-1.8.0
1 gem installed

To test fix:
# su - crowbar
$ cd /opt/opencrowbar/core/rails
$ bundle install

I restarted the ./production.sh install again and now it fails somewhere else :( I will come back and debug this problem...

Error executing action `run` on resource 'bash[Install sqitch for database management]'

See this for more detail:
https://gist.github.com/seizadi/3d92efce020814f6a10b

Greg Althaus

unread,
Feb 14, 2015, 5:29:47 PM2/14/15
to openc...@googlegroups.com
You are a tenacious beast!  It shouldn't be this hard. The bundler thing is bad, but thankfully it seems to already have a patch.

Thanks,
Greg

Soheil Eizadi

unread,
Feb 15, 2015, 4:05:25 PM2/15/15
to openc...@googlegroups.com
I don't think I have ever been called that before :)

The problem with cpanminus I did not debug in detail; it looked like a transient problem. I removed the /root/.cpanm directory, reran it, and it worked.

What part of the system is using Perl?

I am happy to report I finally made it to all green check marks.

DNS forwarders still don't work. I ran the script (below) and restarted the DNS workflow, but it does not seem to be working properly.

I have changed /etc/resolv.conf for now to get things working; I will debug the DNS/Bind9 workflow later. More detailed logs on this are below.
The good news is that the crowbar sub-zone is created and the Bind server responds to it.
-Soheil

Script to fix DNS Forwarders:

ROLE_ID=`crowbar roles show dns-server | grep '"id"'`
ROLE_ID=${ROLE_ID##*:}
ROLE_ID=${ROLE_ID%,}
NODE_ROLE_ID=`crowbar noderoles list | grep -B2 -A2 "\"role_id\":$ROLE_ID" | grep -B3 -A2 '"node_id": 2' | grep \"id\"`
NODE_ROLE_ID=${NODE_ROLE_ID##*:}
NODE_ROLE_ID=${NODE_ROLE_ID%,}
crowbar noderoles set $NODE_ROLE_ID attrib dns-forwarders to "{ \"value\": [ \"10.49.2.5\" ] }"
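The ID extraction in the script is plain shell parameter expansion over the CLI's JSON output; here is the same trick on a made-up sample line:

```shell
# Sketch: pull the numeric id out of a JSON-ish line such as  "role_id":5,
LINE='"role_id":5,'
ID=${LINE##*:}   # strip everything through the last colon -> 5,
ID=${ID%,}       # strip the trailing comma               -> 5
echo "$ID"       # prints 5
```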


# dig @10.49.5.21 crowbar.acme.com

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.30.rc1.el6_6.1 <<>> @10.49.5.21 crowbar.acme.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 63871
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;crowbar.acme.com.    IN    A

;; AUTHORITY SECTION:
acme.com.    300    IN    SOA    bl02.acme.com. support.localhost.localdomain.acme.com. 1 86400 7200 2419200 300

;; Query time: 0 msec
;; SERVER: 10.49.5.21#53(10.49.5.21)
;; WHEN: Sun Feb 15 04:14:20 2015
;; MSG SIZE  rcvd: 118

[root@bl02 core]# dig @10.49.5.21 www.google.com

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.30.rc1.el6_6.1 <<>> @10.49.5.21 www.google.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

# dig  www.google.com

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.30.rc1.el6_6.1 <<>> www.google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 812
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.google.com.            IN    A

;; ANSWER SECTION:
www.google.com.        300    IN    A    74.125.28.99
www.google.com.        300    IN    A    74.125.28.147
www.google.com.        300    IN    A    74.125.28.104
www.google.com.        300    IN    A    74.125.28.103
www.google.com.        300    IN    A    74.125.28.105
www.google.com.        300    IN    A    74.125.28.106

;; Query time: 25 msec
;; SERVER: 10.49.2.5#53(10.49.2.5)
;; WHEN: Sun Feb 15 04:23:23 2015
;; MSG SIZE  rcvd: 128

Greg Althaus

unread,
Feb 15, 2015, 5:26:51 PM2/15/15
to openc...@googlegroups.com
Awesome!  I meant it as a compliment.  :-)

I hope to get some time to play with the forwarders here shortly.

Thanks,
greg

Greg Althaus

unread,
Feb 16, 2015, 10:33:07 AM2/16/15
to openc...@googlegroups.com
The script I sent you for the forwarder is busted; I mistyped something. It needs to have dns-domain changed to dns-forwarders: this is for the forwarder section, not the dns-domain section. A cut-and-paste issue on my part. I am testing it now and will update the pull request/code in the tree shortly.

Thanks,
Greg

Soheil Eizadi

unread,
Feb 16, 2015, 10:44:10 AM2/16/15
to openc...@googlegroups.com
Hi Greg,
Thanks for the update. I verified that the /etc/named.conf forwarders is empty. When I manually add the server there it does work.
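For reference, a populated forwarders stanza in /etc/named.conf looks roughly like this (a sketch, not the generated file; 10.49.2.5 is the forwarder from the script earlier in the thread):

```
options {
        // ...other options unchanged...
        forwarders {
                10.49.2.5;
        };
};
```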
-Soheil

Greg Althaus

unread,
Feb 16, 2015, 10:46:54 AM2/16/15
to openc...@googlegroups.com
I just typo'ed what I sent you and what I checked in.  I just tested it and it works fine after that.  Sigh.  Sorry for the runaround; I am trying to help too many people too quickly.

Thanks,
Greg

Soheil Eizadi

unread,
Feb 16, 2015, 11:25:40 AM2/16/15
to openc...@googlegroups.com
I powered up my first blade; it gets an IP address from the DHCP server but then hits an error and drops to grub rescue mode. Error:
"error: timeout: could not resolve hardware address."
-Soheil

Greg Althaus

unread,
Feb 16, 2015, 11:30:20 AM2/16/15
to openc...@googlegroups.com
Are your blades 10GbE only?  We are just now seeing some issues with 10GbE NICs and grub.  We may need to revert the grub changes or make them configurable.  We REALLY want to use grub, but it may not be best for all environments.

Thanks,
Greg

Soheil Eizadi

unread,
Feb 16, 2015, 11:33:13 AM2/16/15
to openc...@googlegroups.com
Yes, all the blades are the same as the one I sent you the config for earlier: Emulex 10GbE.
-Soheil

Greg Althaus

unread,
Feb 16, 2015, 11:37:09 AM2/16/15
to openc...@googlegroups.com
Well, you didn't crash.  :-)  On the one I was trying, grub crashed and the system went into a reboot cycle.

It will take me some time to unwind it a bit.  Sorry. 

Again, thanks for fighting through these issues.  They've helped OpenCrowbar.  I hope you are getting use out of this soon.

Thanks,
Greg

Soheil Eizadi

unread,
Feb 16, 2015, 11:40:53 AM2/16/15
to openc...@googlegroups.com
Thanks Greg, -Soheil


Soheil Eizadi

unread,
Feb 17, 2015, 5:01:34 PM2/17/15
to openc...@googlegroups.com
I have a Dell C6100 (XS23-TY3), a 2U / 4-node box with Intel Xeon CPUs, Intel 82576 GE NIC ports, and a 100M management port. I was trying to figure out whether this would be a better alternative for testing OpenCrowbar. (It looks like a no-go with the HP BladeServer.)

I was going to run OpenCrowbar on one node and use the other three for staging.
-Soheil




Greg Althaus

unread,
Feb 18, 2015, 8:38:35 AM2/18/15
to openc...@googlegroups.com
That piece of gear is better known to crowbar.  Use the onboard LOM for the admin network to start with.

Thanks,
Greg


Soheil Eizadi

unread,
Feb 18, 2015, 12:53:35 PM2/18/15
to openc...@googlegroups.com
I am confused: doesn't the LOM port go on the BMC network, with eth0 on the admin network?
-Soheil

`run_process'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/command.rb:91:in `block in daemonize'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/command.rb:89:in `times'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/command.rb:89:in `daemonize'\nscript/delayed_job:5:in `<main>'"], ["updated_at", Mon, 09 Feb 2015 06:33:01 UTC +00:00]]
2015-02-08 22:33:01.236 [29784] [DEBUG] NodeRole Load (0.6ms)  SELECT "node_roles".* FROM "node_roles" WHERE "node_roles"."id" = $1 LIMIT 1  [["id", 1]]
2015-02-08 22:33:01.239 [29784] [DEBUG] SQL (1.5ms)  UPDATE "node_roles" SET "state" = $1, "updated_at" = $2 WHERE "node_roles"."id" = 1  [["state", -1], ["updated_at", Mon, 09 Feb 2015 06:33:01 UTC +00:00]]
2015-02-08 22:33:01.256 [29784] [DEBUG] SQL (2.6ms)  UPDATE "node_roles" SET "state" = 3 WHERE "node_roles"."id" IN (SELECT "node_roles"."id" FROM "node_roles" INNER JOIN "node_role_all_pcms" ON "node_roles"."id" = "node_role_all_pcms"."child_id" WHERE "node_role_all_pcms"."parent_id" = $1 AND (state NOT IN(4,2)) ORDER BY cohort ASC)  [["parent_id", 1]]
2015-02-08 22:33:01.260 [29784] [DEBUG] (3.4ms)  COMMIT
2015-02-08 22:33:01.260 [29784] [DEBUG] (0.1ms)  BEGIN
2015-02-08 22:33:01.262 [29784] [DEBUG] Role Load (0.5ms)  SELECT "roles".* FROM "roles" WHERE "roles"."id" = $1 LIMIT 1  [["id", 35]]
2015-02-08 22:33:01.263 [29784] [DEBUG] (0.2ms)  COMMIT
2015-02-08 22:33:01.264 [29784] [DEBUG] Deployment Load (0.4ms)  SELECT "deployments".* FROM "deployments" WHERE "deployments"."id" = $1 LIMIT 1  [["id", 1]]
2015-02-08 22:33:01.266 [29784] [DEBUG] Node Load (0.5ms)  SELECT "nodes".* FROM "nodes" WHERE "nodes"."id" = $1 LIMIT 1  [["id", 1]]
2015-02-08 22:33:01.267 [29784] [DEBUG] NodeRole system: system-phantom.internal.local: dns-service: Calling on_error hook.
2015-02-08 22:33:01.267 [29784] [DEBUG] No override for BarclampDns::Service.on_error event: dns-service on system-phantom.internal.local
2015-02-08 22:33:01.267 [29784] [DEBUG] Run: Finished job 1 for system: system-phantom.internal.local: dns-service, exceptions raised.
2015-02-08 22:33:01.267 [29784] [ERROR] RuntimeError: dns-service not available
On Wed, Feb 11, 2015 at 6:27 AM, Greg Althaus <galtha...@gmail.com> wrote:

This error is because the dns-service node role that watches consul for the dns-server to show up didn't see it in a timely manner.  There may be additional messages in /var/log/crowbar/production.log.  At one point, you mentioned that you modified the bind9 cookbook to get dns to work.  Is that still true?

I need to update the docs about the new services layout.  The system wants to set up a dns-server through OpenCrowbar.  This runs on the admin node.  The completion of this step is supposed to register the service with consul.  This should wake the spinning dns-service up to make progress.  This should enable the dns-client and other blocked roles to continue.  As Rob just pointed out, you can check consul by checking its UI.

Thanks,
Greg

--
You received this message because you are subscribed to the Google Groups "Crowbar" group. To unsubscribe from this group and stop receiving emails from it, send an email to opencrowbar...@googlegroups.com. For more options, visit https://groups.google.com/d/optout.
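Checking consul does not have to go through the UI; the same question can be asked of its HTTP API. A minimal Ruby sketch, assuming consul's standard catalog endpoint on the default port 8500 and that the role is waiting on a registration named `dns-service` (the exact service name is an assumption here; list `/v1/catalog/services` first to confirm what actually registered):

```ruby
require 'json'
require 'net/http'

# Fetch the raw catalog JSON for one service from the local consul agent
# (default HTTP API port 8500).
def catalog_json(service, consul = 'http://127.0.0.1:8500')
  Net::HTTP.get(URI("#{consul}/v1/catalog/service/#{service}"))
end

# Consul answers with a JSON array of registrations; an empty array means
# the service never registered, which is what leaves dns-service spinning.
def registered?(json)
  !JSON.parse(json).empty?
end

# The two shapes of answer you can get back:
registered?('[]')                                             # => false
registered?('[{"Node":"admin","ServiceName":"dns-service"}]') # => true
```

If `registered?` comes back false after the admin node converges, the dns-server step never reported in, which matches the spinning dns-service node role described above.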

Greg Althaus

unread,
Feb 18, 2015, 1:19:56 PM2/18/15
to openc...@googlegroups.com
Usually the LOM and the BMC are (or were) separate.  I don't know for sure anymore, though.

Thanks,
Greg

Greg Althaus

unread,
Feb 18, 2015, 1:22:43 PM2/18/15
to openc...@googlegroups.com
For a sled in a 6100, it used to have at least 2 NIC ports on the sled.  One was a 1Gb LOM port and the other was a 100Mb management port.  There could have been a third port for a second LOM, but I don't remember completely.

I know for the original 6100s those two functions were separate.  The system didn't have a DRAC originally.  I'm not sure what the FX2 platforms do.

Thanks,
Greg

Hi Greg,

It is an HP BladeServer; I am running OpenCrowbar on one of the blades. I am hoping that it will also detect ILO/Power-Management attributes about the BladeServer environment when doing discovery.

The details about the blade are shown below for the test setup. For the next test setup I will be migrating away from this HP BladeServer to a Dell environment using Dell iDRAC7 and Dell R420 and R720 (R430) servers. That system is being built out; I was hoping to get a test setup with the HP BladeServer in the meantime. The BIOS is pretty old; I could upgrade it if you think that could be a cause.

-Soheil

Model: BL460c G7
Name: bl02
IP: x.x.x.x
Mgmt Name: yyyyy
Mgmt IP: x.x.x.x
Part No: 637391-B21
SN: MXQ2520PXZ
BIOS: I27 12/03/2012
CPU: Intel(R) Xeon(R) CPU E5649 @ 2.53GHz (6 Cores)
Memory: 24GB (2x8,4,2x2)
HDD: 2x300GB

# ethtool eth0
Settings for eth0:
    Supported ports: [ Backplane ]
    Supported link modes:   10000baseKR/Full
    Supported pause frame use: Symmetric
    Supports auto-negotiation: Yes
    Advertised link modes:  10000baseKR/Full
    Advertised pause frame use: Symmetric
    Advertised auto-negotiation: Yes
    Speed: 1000Mb/s
    Duplex: Full
    Port: Other
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: g
    Wake-on: g
    Current message level: 0x00002000 (8192)
                   hw
    Link detected: yes

On Wed, Feb 11, 2015 at 2:16 PM, Greg Althaus <galtha...@gmail.com> wrote:
This line is the worrying one to me:

[2015-02-08T21:51:51-08:00] ERROR: Cannot resolve conduit 1g0 with known interfaces ["eth0", "eth1"]

Can you describe the hardware you are running on?  It seems like the system has two interfaces, but neither are 1G nics.  That can currently cause issues with the services coming up.  That is fixable, e.g. by changing the 1g0 in the crowbar-config.sh file to ?1g0.

Thanks,
Greg
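For context on that conduit error: Crowbar resolves abstract conduit names like 1g0 (roughly, the first 1Gb-class NIC) against the interfaces it detected, and the ? prefix relaxes the speed from a requirement to a preference. The sketch below is purely illustrative of that idea, not the real resolver (which lives in the Crowbar network code); the spec format, regex, and fallback rule here are assumptions:

```ruby
# Illustrative only: "<speed><ordinal>" picks the Nth NIC of a speed class;
# a leading "?" means fall back to whatever NICs exist when there is no
# exact speed match (the real Crowbar resolver is more elaborate).
def resolve_conduit(spec, nics)
  optional = spec.start_with?('?')
  speed, ordinal = spec.sub(/\A\?/, '').match(/\A(\d+[mg])(\d+)\z/).captures
  matches = nics.select { |n| n[:speed] == speed }
  pool = (matches.empty? && optional) ? nics : matches
  pool[ordinal.to_i] && pool[ordinal.to_i][:name]
end

nics = [{ name: 'eth0', speed: '10g' }, { name: 'eth1', speed: '10g' }]
resolve_conduit('1g0', nics)   # => nil, i.e. "Cannot resolve conduit 1g0"
resolve_conduit('?1g0', nics)  # => "eth0", falls back to a 10G NIC
```

This is why a strict 1g0 spec fails outright on a blade that only exposes 10G interfaces, while ?1g0 can still bind the admin network to one of them.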
On Wed, Feb 11, 2015 at 4:08 PM, Soheil Eizadi <s.ei...@gmail.com> wrote:

Hi Greg/Rob,

The DNS Service is not running. I checked Consul (http://[crowbar admin]:8500) and it confirms what I see in OpenCrowbar (http://[crowbar admin]:3000): the DNS Service did not come up.

I have not modified any cookbooks; I am just trying to run the standard build w/ the --develop option.

I looked at the production log and I see where the service is flagged as not running, but I don't know where to look for why it is not running. I searched for errors in the log and posted the network-related errors and the DNS Service error.

-Soheil

[2015-02-08T21:51:51-08:00] ERROR: Running exception handlers
[2015-02-08T21:51:51-08:00] ERROR: Exception handlers complete
[2015-02-08T21:51:51-08:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2015-02-08T21:51:51-08:00] INFO: Sending resource update report (run-id: 0d5dd332-eafd-40e8-84e2-9769f6f0d56c)
[2015-02-08T21:51:51-08:00] ERROR: Cannot resolve conduit 1g0 with known interfaces ["eth0", "eth1"]
[2015-02-08T21:51:51-08:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
2015-02-08 22:33:01.267 [29784] [ERROR] RuntimeError: dns-service not available

Soheil Eizadi

unread,
Feb 18, 2015, 1:31:09 PM2/18/15
to openc...@googlegroups.com
There is a dedicated LOM, Eth0/1 and another daughter card slot (I have 4 more GE ports). Right now I have wired up LOM, Eth0/1 to 3 separate networks.
-Soheil

You can see a diagram of the box I/O on page 16:
http://downloads.dell.com/Manuals/all-products/esuprt_ser_stor_net/esuprt_cloud_products/poweredge-c6100_Owner%27s%20Manual_en-us.pdf


The first problem is that it only has 10G nics. :-)  You need to find the 1g0 strings in crowbar-config.sh and change them to ?1g0.  This will allow the system to use the 10G nics as the admin networks.  This will require you to rerun production.sh, which means you should probably start over.

We can try and walk you through changing it, but I'm not sure that will work cleanly.  It should, but I suspect we may have bugs in that path.  To try that, you would go into the admin network page and change the conduit from 1g0 to ?1g0.  You would then need to rerun all the node roles from the network-admin node role on.  There may not be that many, so it might just work.  In fact, try that first.

1. Click Networks, click admin, change 1g0 to ?1g0, and click the first update.
2. Click the OpenCrowbar logo and click the network-admin node role in the lower right layer cake piece.  Click the retry button.  I think that should "go".

You may hit another problem with IPMI configuration under iLO.  I saw some bizarre interactions with some HP hardware, but couldn't believe it was standard.  We may need to add a quirk to deal with it in the IPMI hardware code.  We'd love to work with you on that.  In fact, to check that out, can you, if you are comfortable with it, send me the output from "crowbar nodes list" to gr...@rackn.com?  I think that will tell me what hardware you have.  I think you may be far enough along to have that info.  I'm basically looking for the output from "ipmitool mcinfo".

Thanks,
Greg

On Wed, Feb 11, 2015 at 4:41 PM, Soheil Eizadi <s.ei...@gmail.com> wrote:
`start'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons/controller.rb:70:in `run'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons.rb:197:in `block in run_proc'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons/cmdline.rb:109:in `call'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons/cmdline.rb:109:in `catch_exceptions'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/daemons-1.1.9/lib/daemons.rb:196:in `run_proc'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/command.rb:110:in `run_process'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/command.rb:91:in `block in daemonize'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/command.rb:89:in `times'\n/var/cache/crowbar/gems/ruby/2.1.0/gems/delayed_job-4.0.6/lib/delayed/command.rb:89:in `daemonize'\nscript/delayed_job:5:in `<main>'"], ["updated_at", Mon, 09 Feb 2015 06:33:01 UTC +00:00]]^[[0;37m2015-02-08 22:33:01.236^[[0m [29784] [^[[0;37mDEBUG^[[0m] ^[[1m^[[35mNodeRole Load (0.6ms)^[[0m  SELECT "node_roles".* FROM "node_roles" WHERE "node_roles"."id" = $1 LIMIT 1  [["id", 1]]^[[0;37m2015-02-08 22:33:01.239^[[0m [29784] [^[[0;37mDEBUG^[[0m] ^[[1m^[[36mSQL (1.5ms)^[[0m  ^[[1mUPDATE "node_roles" SET "state" = $1, "updated_at" = $2 WHERE "node_roles"."id" = 1^[[0m  [["state", -1], ["updated_at", Mon, 09 Feb 2015 06:33:01 UTC +00:00]]^[[0;37m2015-02-08 22:33:01.256^[[0m [29784] [^[[0;37mDEBUG^[[0m] ^[[1m^[[35mSQL (2.6ms)^[[0m  UPDATE "node_roles" SET "state" = 3 WHERE "node_roles"."id" IN (SELECT "node_roles"."id" FROM "node_roles" INNER JOIN "node_role_all_pcms" ON "node_roles"."id" = "node_role_all_pcms"."child_id" WHERE "node_role_all_pcms"."parent_id" = $1 AND (state NOT IN(4,2)) ORDER BY cohort ASC)  [["parent_id", 1]]^[[0;37m2015-02-08 22:33:01.260^[[0m [29784] [^[[0;37mDEBUG^[[0m] ^[[1m^[[36m (3.4ms)^[[0m  ^[[1mCOMMIT^[[0m^[[0;37m2015-02-08 22:33:01.260^[[0m [29784] 
[^[[0;37mDEBUG^[[0m] ^[[1m^[[35m (0.1ms)^[[0m  BEGIN^[[0;37m2015-02-08 22:33:01.262^[[0m [29784] [^[[0;37mDEBUG^[[0m] ^[[1m^[[36mRole Load (0.5ms)^[[0m  ^[[1mSELECT "roles".* FROM "roles" WHERE "roles"."id" = $1 LIMIT 1^[[0m  [["id", 35]]^[[0;37m2015-02-08 22:33:01.263^[[0m [29784] [^[[0;37mDEBUG^[[0m] ^[[1m^[[35m (0.2ms)^[[0m  COMMIT^[[0;37m2015-02-08 22:33:01.264^[[0m [29784] [^[[0;37mDEBUG^[[0m] ^[[1m^[[36mDeployment Load (0.4ms)^[[0m  ^[[1mSELECT "deployments".* FROM "deployments" WHERE "deployments"."id" = $1 LIMIT 1^[[0m  [["id", 1]]^[[0;37m2015-02-08 22:33:01.266^[[0m [29784] [^[[0;37mDEBUG^[[0m] ^[[1m^[[35mNode Load (0.5ms)^[[0m  SELECT "nodes".* FROM "nodes" WHERE "nodes"."id" = $1 LIMIT 1  [["id", 1]]^[[0;37m2015-02-08 22:33:01.267^[[0m [29784] [^[[0;37mDEBUG^[[0m] NodeRole system: system-phantom.internal.local: dns-service: Calling on_error hook.^[[0;37m2015-02-08 22:33:01.267^[[0m [29784] [^[[0;37mDEBUG^[[0m] No override for BarclampDns::Service.on_error event: dns-service on system-phantom.internal.local^[[0;37m2015-02-08 22:33:01.267^[[0m [29784] [^[[0;37mDEBUG^[[0m] Run: Finished job 1 for system: system-phantom.internal.local: dns-service, exceptions raised.^[[0;37m2015-02-08 22:33:01.267^[[0m [29784] [^[[31mERROR^[[0m] RuntimeError: dns-service not availableOn Wed, Feb 11, 2015 at 6:27 AM, Greg Althaus <galtha...@gmail.com> wrote:This error is because the dns-service node role that watches consul for the dns-server to show up didn't in a timely manner.  There may be additional messages in /var/log/crowbar/production.log.  At one point, you mentioned that you modified the bind9 cookbook to get dns to work.  Is that still true?I need to update the docs about the new services layout.  The system wants to setup a dns-server through OpenCrowbar.  This runs on the admin node.  The completion of this step is supposed to register the service with consul.  This should wake the spinning dns-service up to make progress.  
This should enable the dns-client and other blocked roles to continue. As Rob just pointed out, you can check consul through its UI.

Thanks,
Greg

--
You received this message because you are subscribed to the Google Groups "Crowbar" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencrowbar...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
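For anyone hitting the same conduit error, Greg's ?1g0 suggestion can be sketched as a one-line sed edit. This is a minimal demo against a stand-in file: the `conduit: "1g0"` line and the /tmp path are placeholders, not the actual crowbar-config.sh syntax, so confirm how the conduit is written in your copy before editing it in place.

```shell
# Demo of making the 1g0 conduit optional by prefixing it with '?'.
# Works on a throwaway copy; point $demo at your real crowbar-config.sh instead.
demo=/tmp/crowbar-config-demo.sh
echo 'conduit: "1g0"' > "$demo"     # stand-in for the line naming the conduit
sed -i 's/"1g0"/"?1g0"/' "$demo"    # the '?' prefix marks the conduit as optional
grep '?1g0' "$demo"                 # prints: conduit: "?1g0"
```

Back up the real file first; if the edit goes wrong, the admin node's network roles will fail in less obvious ways than the original error.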
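On checking whether the dns-server ever registered with consul: besides the UI on port 8500, Consul exposes an HTTP catalog API you can query from the admin node. A hedged sketch follows; the service name "dns-service" and the localhost address are assumptions (list all services first to confirm what OpenCrowbar actually registers). The demo below runs offline against a sample response body so it doesn't require a live agent.

```shell
# On the admin node, a live check would look like:
#   curl -s http://127.0.0.1:8500/v1/catalog/services
# Offline demo: grep a sample /v1/catalog/services JSON body for the service.
sample='{"consul":[],"dns-service":["bind"]}'   # stand-in API response
if printf '%s' "$sample" | grep -q '"dns-service"'; then
  echo "dns-service registered"
else
  echo "dns-service missing"
fi
```

If the name never shows up in the catalog, the bind9 role on the admin node likely failed before registration, which matches the spinning dns-service node role described above.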
