Hello,
I have installed the Puppi module on both my Puppet Enterprise master instance and a client instance. I am able to invoke the puppi command and deploy test application WAR files.
However, I have not been able to get mc-puppi installed. I have tried a manifest with both:
include puppi::mcollective::client
include puppi::mcollective::server
Using the puppet apply command:
[root@ip-10-0-10-193 puppet]# puppet apply install_puppi.pp
Notice: Compiled catalog for ip-10-0-10-193.ec2.internal in environment production in 0.24 seconds
Error: Could not find dependency Class[Mcollective] for File[/usr/libexec/mcollective/mcollective/agent/puppi.ddl] at /etc/puppetlabs/puppet/modules/puppi/manifests/mcollective/server.pp:23
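For reference, install_puppi.pp contains nothing beyond the two includes, so no Class['mcollective'] is ever declared in the catalog, which I assume is the dependency server.pp is complaining about (as far as I can tell, PE manages mcollective itself through its pe_mcollective module rather than the Example42 mcollective class):
# install_puppi.pp -- the complete manifest I'm applying (sketch)
# Note: no mcollective class of my own is declared here
include puppi::mcollective::client
include puppi::mcollective::server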
I did notice that the params.pp file has a comment referencing Puppet Enterprise, so I made the change below and tried again, with no luck.
The edit is in modules/puppi/manifests/params.pp, changing the default mcollective path to the Puppet Enterprise location:
# Mcollective paths
# TODO: Add Paths for Puppet Enterprise:
# /opt/puppet/libexec/mcollective/mcollective/
$mcollective = $::operatingsystem ? {
  debian  => '/usr/share/mcollective/plugins/mcollective',
  ubuntu  => '/usr/share/mcollective/plugins/mcollective',
  centos  => '/usr/libexec/mcollective/mcollective',
  redhat  => '/usr/libexec/mcollective/mcollective',
  #default => '/usr/libexec/mcollective/mcollective',
  default => '/opt/puppet/libexec/mcollective/mcollective',
}
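Since facter reports operatingsystem => Amazon on these EC2 instances (full facter output below), the lookup falls through to the default branch, which is why I changed the default rather than adding a new key. An untested alternative would be to leave default alone and add an explicit amazon entry, e.g.:
# untested alternative: add an amazon entry instead of repointing default
$mcollective = $::operatingsystem ? {
  debian  => '/usr/share/mcollective/plugins/mcollective',
  ubuntu  => '/usr/share/mcollective/plugins/mcollective',
  centos  => '/usr/libexec/mcollective/mcollective',
  redhat  => '/usr/libexec/mcollective/mcollective',
  # guess for Amazon Linux agents running Puppet Enterprise
  amazon  => '/opt/puppet/libexec/mcollective/mcollective',
  default => '/usr/libexec/mcollective/mcollective',
}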
Am I missing something obvious to get this to run on Puppet Enterprise? The mcollective feature is critical to the approach I'd like to take for deploying apps within our Puppet framework.
Thanks for any help you can provide.
-Daren
Here are the results of a facter run on the client:
architecture => x86_64
augeasversion => 1.1.0
bios_release_date => 06/02/2014
bios_vendor => Xen
bios_version => 4.2.amazon
blockdevice_xvda_size => 8589934592
blockdevices => xvda
concat_basedir => /var/opt/lib/pe-puppet/concat
custom_auth_conf => false
domain => ec2.internal
facterversion => 1.7.5
filesystems => ext4
fqdn => ip-10-0-20-203.ec2.internal
hardwareisa => x86_64
hardwaremodel => x86_64
hostname => ip-10-0-20-203
id => root
interfaces => eth0,lo
ip6tables_version => 1.4.18
ipaddress => 10.0.20.203
ipaddress_eth0 => 10.0.20.203
ipaddress_lo => 127.0.0.1
iptables_version => 1.4.18
is_pe => true
is_virtual => true
kernel => Linux
kernelmajversion => 3.10
kernelrelease => 3.10.42-52.145.amzn1.x86_64
kernelversion => 3.10.42
last_run => Wed Aug 6 14:39:31 UTC 2014
macaddress => 0E:1B:CA:B2:10:1D
macaddress_eth0 => 0E:1B:CA:B2:10:1D
manufacturer => Xen
memoryfree => 1.20 GB
memoryfree_mb => 1226.32
memorysize => 1.96 GB
memorysize_mb => 2004.62
memorytotal => 1.96 GB
mtu_eth0 => 9001
mtu_lo => 65536
netmask => 255.255.255.0
netmask_eth0 => 255.255.255.0
netmask_lo => 255.0.0.0
network_eth0 => 10.0.20.0
network_lo => 127.0.0.0
operatingsystem => Amazon
operatingsystemmajrelease => 3
operatingsystemrelease => 3.10.42-52.145.amzn1.x86_64
osfamily => RedHat
path => /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin:/root/bin
pe_build => 3.3.0
pe_major_version => 3
pe_minor_version => 3
pe_patch_version => 0
pe_postgres_default_version => unknown
pe_version => 3.3.0
physicalprocessorcount => 1
platform_tag => el-3-x86_64
postgres_default_version => unknown
processor0 => Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
processorcount => 1
productname => HVM domU
ps => ps -ef
puppet_vardir => /var/opt/lib/pe-puppet
puppetversion => 3.6.2 (Puppet Enterprise 3.3.0)
puppi_projects => myapp
root_home => /root
rubysitedir => /opt/puppet/lib/ruby/site_ruby/1.9.1
rubyversion => 1.9.3
selinux => false
serialnumber => ec2a8290-8235-f172-632c-dbb3d98bb076
staging_http_get => curl
swapfree => 0.00 MB
swapfree_mb => 0.00
swapsize => 0.00 MB
swapsize_mb => 0.00
timezone => UTC
type => Other
uniqueid => 000acb14
uptime => 4 days
uptime_days => 4
uptime_hours => 117
uptime_seconds => 424174
uuid => EC2A8290-8235-F172-632C-DBB3D98BB076
virtual => xenhvm
$ mco rpc puppi check -I ip-10-0-20-203.ec2.internal
This command produces the following output:
 * [ ==========================================================> ] 1 / 1
ip-10-0-20-203.ec2.internal
"ip-10-0-20-203 check: 10-Connected_Users\e[75G[\e[0;32m OK \e[0;39m]\r\nUSERS OK - 1 users currently logged in |users=1;5;10;0\n\nip-10-0-20-203 check: 10-Disks_Usage\e[75G[\e[0;32m OK \e[0;39m]\r\nDISK OK - free space: / 5858 MB (74% inode=88%); /dev 989 MB (99% inode=99%);| /=1977MB;6346;7139;0;7933 /dev=0MB;791;890;0;989\n\nip-10-0-20-203 check: 10-Local_Mail_Queue\e[75G[\e[0;32m OK \e[0;39m]\r\nOK: mailq is empty|unsent=0;2;5;0\n\nip-10-0-20-203 check: 10-System_Load\e[75G[\e[0;32m OK \e[0;39m]\r\nOK - load average: 0.00, 0.01, 0.05|load1=0.000;15.000;30.000;0; load5=0.010;10.000;25.000;0; load15=0.050;5.000;20.000;0; \n\nip-10-0-20-203 check: 10-Zombie_Processes\e[75G[\e[0;32m OK \e[0;39m]\r\nPROCS OK: 0 processes with STATE = Z\n\nip-10-0-20-203 check: 15-DNS_Resolution\e[75G[\e[0;32m OK \e[0;39m]\r\nDNS OK: 0.004 seconds response time. example.com returns 93.184.216.119|time=0.004467s;;;0.000000\n\nip-10-0-20-203 check: 99-NTP_Sync\e[75G[\e[0;32m OK \e[0;39m]\r\nNTP OK: Offset -0.003214240074 secs|offset=-0.003214s;60.000000;120.000000;\n"
Finished processing 1 / 1 hosts in 5434.91 ms
I can run all of the commands except a puppi deploy:
mco rpc puppi deploy project=petstore -I ip-10-0-20-203.ec2.internal
I can see from /var/log/pe-mcollective/mcollective-audit.log that the mcollective connection to the client was good:
[2014-08-12 14:12:31 UTC] reqid=9c149ad1255257f9aeaa3cb774e9d72d: reqtime=1407852760 caller=cert=peadmin-public@ip-10-0-10-193 agent=puppi action=deploy data={:project=>"petstore", :process_results=>true}
I see the following error in /var/log/pe-mcollective/mcollective.log:
E, [2014-08-12T14:22:31.942927 #22508] ERROR -- : agent.rb:108:in `rescue in handlemsg' puppi#deploy failed: #<Class:0x007f64743fb1c0>: execution expired
E, [2014-08-12T14:22:31.943086 #22508] ERROR -- : agent.rb:109:in `rescue in handlemsg' /opt/puppet/libexec/mcollective/mcollective/agent/puppi.rb:47:in ``'
/opt/puppet/libexec/mcollective/mcollective/agent/puppi.rb:47:in `deploy_action'
/opt/puppet/lib/ruby/site_ruby/1.9.1/mcollective/rpc/agent.rb:86:in `handlemsg'
/opt/puppet/lib/ruby/site_ruby/1.9.1/mcollective/agents.rb:126:in `block (2 levels) in dispatch'
/opt/puppet/lib/ruby/1.9.1/timeout.rb:69:in `timeout'
/opt/puppet/lib/ruby/site_ruby/1.9.1/mcollective/agents.rb:125:in `block in dispatch'
I'm not sure what is timing out. Here is the project config:
more /etc/puppi/projects/petstore/config
# File Managed by Puppet
# This is the base configuration file for project petstore
# During a puppi deploy it's copied into the runtime configuration
# used by the scripts executed by puppi
#
# Do not edit this file. You can modify these variables:
# Permanently: directly on your puppi manifests (When you use the puppi:project:: defines)
# Temporarily: using the puppi option -o to override them.
# example: puppi deploy $name -o "source=http://alt.com/file deploy_root=/var/tmp"
# Common variables for project defines
project="petstore"
e6/1.0/petstoreee6-1.0.war"
deploy_root="/opt/glassfish4/glassfish/domains/domain1/autodeploy/"
user="root"
predeploy_customcommand=""
postdeploy_customcommand=""
init_script=""
disable_services=""
firewall_src_ip=""
firewall_dst_port="0"
report_email="daren....@aurotechcorp.com"
enable="true"
# Variables used by project::files
files_prefix=""
source_baseurl=""
# Variables used by project::maven
document_root=""
config_root=""
# Variables added during runtime puppi operations
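For completeness, the petstore project is set up on the node with one of the puppi::project defines; I may be misremembering the exact define and parameter names, but it looks roughly like this (the source URL is shortened here, just as it is in the config above):
# rough sketch of the project declaration (puppi::project::war assumed)
puppi::project::war { 'petstore':
  source       => 'http://.../petstoreee6-1.0.war',  # full Artifactory URL omitted
  deploy_root  => '/opt/glassfish4/glassfish/domains/domain1/autodeploy/',
  user         => 'root',
  report_email => 'daren....@aurotechcorp.com',
  enable       => 'true',
}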
Is the download of the WAR file from my Artifactory repository timing out?
Thanks again for your help!
Daren