Restarting puppet agent after upgrade


JonY

Oct 2, 2014, 10:07:42 AM10/2/14
to puppet...@googlegroups.com
I've been having problems (documented here) with upgrading my clients from Puppet 3.5.1 and Facter 1.7.5 to Puppet 3.7 and Facter 2.2 respectively. (TL;DR - the client gives a Facter error on every run and is essentially wedged.)

I filed a bug but was told "restarting the puppet service will clear up this error". Ok - that's great. But how? 

I tried adding this line to my puppet agent manifest (in the agent's 'service' definition): restart => '/usr/bin/nohup /etc/init.d/puppet restart &'. My hope was that bouncing the client this way wouldn't interfere with the run of the very agent doing the bouncing. No dice.

Supposedly the client will restart if there is a change to the puppet.conf file - is that true? Do I need to add some bogus values to prod the client into action?

Other suggestions?


jcbollinger

Oct 3, 2014, 10:48:46 AM10/3/14
to puppet...@googlegroups.com


I suppose you are trying to update Puppet and Facter via the Puppet agent itself (otherwise you would use the same mechanism to restart the service that you do to install the update).  If you contemplate something like that, then your manifests should declare a signaling relationship between the package and the service, for instance by use of a chain operator:

    Package['facter'] ~> Service['puppet']
    Package['puppet'] ~> Service['puppet']

Equivalently, you could have the Service 'subscribe' to the Packages, or have the Packages 'notify' the Service.
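Putting the two together, a minimal sketch of the subscribe/notify form might look like this (the resource titles 'puppet' and 'facter' are assumptions - match them to whatever your manifests actually call the packages and the agent service):

```puppet
# Refresh (restart) the puppet service whenever either package changes.
package { ['puppet', 'facter']:
  ensure => latest,
  notify => Service['puppet'],   # equivalent to the ~> chain above
}

service { 'puppet':
  ensure => running,
  enable => true,
}
```

Using 'subscribe => Package['puppet']' on the service instead of 'notify' on the packages declares exactly the same refresh relationship, just from the other end.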

It is unlikely that you need to specify a custom 'restart' command to your Service.  It very likely knows how to do a restart already; it's just not seeing any reason to do one.

If you want to try to trigger a service restart via the conf file, then the first thing to do would be to just 'touch' the file to change its modification timestamp without changing any of its contents.  I have lost track of whether conf changes actually do trigger the agent to restart, but surely that would be easy for you to test.
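As a one-off test of that theory (paths are assumptions for a stock open-source Puppet 3.x install), something like this would bump the timestamp without touching the contents:

```puppet
# Hypothetical one-off probe: update puppet.conf's mtime only, then
# watch whether the agent restarts itself on the next run.
exec { 'touch-puppet-conf':
  command => '/bin/touch /etc/puppet/puppet.conf',
}
```

Note an unguarded exec like this runs on every catalog application, so you would remove it (or gate it with 'onlyif'/'refreshonly') once the experiment is done.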


John

Felix Frank

Oct 28, 2014, 9:03:56 AM10/28/14
to puppet...@googlegroups.com
On 10/03/2014 04:48 PM, jcbollinger wrote:
>
>
> I suppose you are trying to update Puppet and Facter via the Puppet
> agent itself (otherwise you would use the same mechanism to restart the
> service that you do to install the update). If you contemplate
> something like that, then your manifests should declare a signaling
> relationship between the package and the service, for instance by use of
> a chain operator:
>
> Package['facter'] ~> Service['puppet']
> Package['puppet'] ~> Service['puppet']
>
> Equivalently, you could have the Service 'subscribe' to the Packages, or
> have the Packages 'notify' the Service.
>
> It is unlikely that you need to specify a custom 'restart' command to
> your Service. It very likely knows how to do a restart already; it's
> just not seeing any reason to do one.
>
> If you want to try to trigger a service restart via the conf file, then
> the first thing to do would be to just 'touch' the file to change its
> modification timestamp without changing any of its contents. I have
> lost track of whether conf changes actually do trigger the agent to
> restart, but surely that would be easy for you to test.
>
>
> John

Depending on the package in use, this can be ineffective.

If the package's post-installation routine triggers a restart of the
agent process, you will likely end up with a half-configured package and
an unhappy package management system. I know I was bitten by that quite
severely in the past.

Note that any signals emitted by the package resource are of no
consequence in this scenario. Since the process that triggered the
package update (the very puppet agent) is terminated by the postinst
script, *no* other resources can be synchronized.

HTH,
Felix
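One workaround for the problem Felix describes is to defer the restart so it happens after the agent's transaction has finished, rather than during it. A hedged sketch, assuming the 'at' daemon is installed and the init script lives at the usual SysV path:

```puppet
# Hypothetical workaround: schedule the restart out-of-band so the
# agent process that triggered the upgrade is not killed mid-run.
exec { 'deferred-puppet-restart':
  command     => '/bin/echo "/etc/init.d/puppet restart" | /usr/bin/at now + 1 minute',
  refreshonly => true,
  subscribe   => Package['puppet'],
}
```

Because 'at' runs the command in a separate, detached process a minute later, the current agent run can complete and report normally before the service bounces.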

Byron Miller

Oct 28, 2014, 10:55:24 AM10/28/14
to puppet...@googlegroups.com
I used to use MCollective to orchestrate Puppet updates, but I have since drifted away from installing MCO in my environments: it seems fairly inactive and causes its own problems (random nodes dropping out in a large virtual-machine environment). I'm curious how others have solved this with a high success rate.

-byron


