Automatically remove deactivated host from icinga/nagios config


Kai Timmer

Oct 9, 2014, 7:52:48 AM10/9/14
to puppet...@googlegroups.com
Hello,
I'm using this snippet to build my icinga configuration out of my exported facts

  # Collect the nagios_host resources
  Nagios_host <<||>> {
    target  => "/etc/icinga/puppet.d/hosts.cfg",
    require => File["/etc/icinga/puppet.d/hosts.cfg"],
    notify  => Service[icinga],
  }
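For context, each monitored node exports something along these lines (a sketch; the parameters here are illustrative, not my exact manifest):

  @@nagios_host { $::fqdn:
    ensure  => present,
    address => $::ipaddress,
    use     => 'generic-host',
  }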

If I now deactivate a host on my puppetdb with:

puppet node deactivate fqdn.of.host

I would assume that on the next run my hosts.cfg would no longer contain the deactivated host. But this doesn't work: the host stays in the file. I can only remove it if I delete the hosts.cfg file and then let Puppet run again.
Did I miss something or is it not possible to automatically remove the host?

Ken Barber

Oct 9, 2014, 8:10:49 AM10/9/14
to Puppet Users
Nope, it should work in theory. Are you using PuppetDB for this? If so,
you should see a corresponding log entry in puppetdb.log for the
deactivate command for that node. Can you grep your puppetdb.log to see
if this arrives when you send the `puppet node deactivate {foo}`
command?
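Something like the grep below will do; the sample entry has the shape PuppetDB logs for a deactivate, and the path /var/log/puppetdb/puppetdb.log is an assumption (it varies by packaging). The demo here runs against a sample file:

```shell
# Demo against a sample file; on a real install you'd grep
# /var/log/puppetdb/puppetdb.log (path varies by packaging).
printf '%s\n' '2014-10-09 13:31:37,302 INFO  [c.p.p.command] [55a70ff1-3491-4885-9614-589b35756883] [deactivate node] fqdn.of.host' > /tmp/puppetdb-sample.log
grep 'deactivate node' /tmp/puppetdb-sample.log
```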

Also check that no new commands have come in for that node; the log
will tell you if this has happened. We reactivate a node on any new
data, so this is worth checking.

After deactivation you should be able to query the node data with
something like:

curl 'http://localhost:8080/v3/nodes/node_name'

And you should see a date next to 'deactivated' that indicates when it
was deactivated. If it's deactivated, it should not be collected.
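The response for a deactivated node looks roughly like this (field names from the v3 API as I remember them; values are illustrative):

  {
    "name" : "fqdn.of.host",
    "deactivated" : "2014-10-09T11:31:37.300Z",
    "catalog_timestamp" : "2014-10-07T15:11:36.671Z",
    "facts_timestamp" : "2014-10-07T15:11:34.151Z"
  }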

Finally, try using a tool like this to analyze what exports still exist:

https://forge.puppetlabs.com/zack/exports

This will help you understand whether the node you are trying to
deactivate is still exporting this data, or whether it's coming from
another place. This is important: sometimes the data comes from another
node, which is often what users don't expect. Most importantly, check
for typos in the node name; that is the biggest contributing factor to
confusion around this :-).

ken.

Kai Timmer

Oct 9, 2014, 8:51:33 AM10/9/14
to puppet...@googlegroups.com


On Thursday, October 9, 2014 at 14:10:49 UTC+2, Ken Barber wrote:
Nope, it should work in theory. Are you using PuppetDB for this? If so,
you should see a corresponding log entry in puppetdb.log for the
deactivate command for that node. Can you grep your puppetdb.log to see
if this arrives when you send the `puppet node deactivate {foo}`
command?

Yes, I'm using puppetdb and the command arrives at the database:
2014-10-09 13:31:37,302 INFO  [c.p.p.command] [55a70ff1-3491-4885-9614-589b35756883] [deactivate node] fqdn.of.host
 
Also check to make sure no new commands have come in for that node, it
will tell you in the log if this has happened. We reactivate a node on
any new data, so this is worth checking.

After deactivation you should be able to query the node data with
something like:

curl 'http://localhost:8080/v3/nodes/node_name'

And you should see a date next to 'deactivated' that indicates when it
was deactivated. If it's deactivated, it should not be collected.

This should output the same as puppet node status fqdn.of.host, right?

fqdn.of.host
Deactivated at 2014-10-09T11:31:37.300Z
Last catalog: 2014-10-07T15:11:36.671Z
Last facts: 2014-10-07T15:11:34.151Z


What caught my attention here is that the timestamp is different from the timestamp in the log. Maybe something is going wrong there? PuppetDB and the Puppet master are running on the same host, so there should be no time difference.
 
Finally, try using a tool like this to analyze what exports still exist:

https://forge.puppetlabs.com/zack/exports

This will help you understand whether the node you are trying to
deactivate is still exporting this data, or whether it's coming from
another place. This is important: sometimes the data comes from another
node, which is often what users don't expect. Most importantly, check
for typos in the node name; that is the biggest contributing factor to
confusion around this :-).

The node doesn't show up in the output of `puppet node exports`.
But a `puppet agent -t` run on the icinga node still doesn't remove the host from hosts.cfg.

Maybe I should mention that I am using Foreman. But I also deactivated the node in Foreman, so my guess is that I'm good there.

Best regards,
Kai

Ken Barber

Oct 9, 2014, 8:57:19 AM10/9/14
to Puppet Users
>> Nope it should work in theory, are you using PuppetDB for this? If so
>> in the puppetdb.log you should see a corresponding log entry for the
>> deactivate command for that node. Can you grep against your
>> puppetdb.log to see if this arrives when you send the `puppet node
>> deactivate {foo}` command.
>
>
> Yes, I'm using puppetdb and the command arrives at the database:
> 2014-10-09 13:31:37,302 INFO [c.p.p.command]
> [55a70ff1-3491-4885-9614-589b35756883] [deactivate node] fqdn.of.host

Good.

> What caught my attention here is that the timestamp is different from
> the timestamp in the log. Maybe something is going wrong there? PuppetDB
> and the Puppet master are running on the same host, so there should be no
> time difference.

The timestamp is in ISO-8601 format, which means it has a timezone
associated with it, in this case UTC. Could this be the cause of the
confusion?

>> Finally, try using a tool like this to analyze what exports still exist:
>>
>> https://forge.puppetlabs.com/zack/exports
>>
>> This will help you understand if the node you are trying to deactivate
>> is still exporting this data, or if its coming from another place.
>> This is important, sometimes there is data coming from another node,
>> and its often what users don't expect. Make sure you check for typos
>> on the node name most importantly, this is the biggest contributing
>> factor to confusion around this :-).
>
>
> The node doesn't show up with puppet node exports
> But a puppet agent -t run on the icinga node still doesn't remove the node.
>
> Maybe I should say that I am using foreman. But I also deactivated the node
> in foreman. So my guess is that I'm good there.

Wait, are you actually purging the resources somewhere? If a resource
becomes unmanaged, that doesn't mean it gets cleaned up unless you are
also purging.

ken.

Kai Timmer

Oct 9, 2014, 9:07:08 AM10/9/14
to puppet...@googlegroups.com

2014-10-09 14:56 GMT+02:00 Ken Barber <k...@puppetlabs.com>:
Wait, are you actually purging the resources somewhere? If it becomes
unmanaged, that doesn't mean it cleans up after itself unless you are
purging also.

I did this:
[inline image]

I'm not sure I follow you, though.

What happens if I manually add a (fake) host to my hosts.cfg file? The host doesn't exist in PuppetDB, because it was never "alive". On the next Puppet run, Puppet should remove this "false" entry from my hosts.cfg, right?

Best Regards,
-- Kai Timmer / em...@kaitimmer.de

Ken Barber

Oct 9, 2014, 9:20:48 AM10/9/14
to Puppet Users
>> Wait, are you actually purging the resources somewhere? If it becomes
>> unmanaged, that doesn't mean it cleans up after itself unless you are
>> purging also.
>
>
> I did this:
>
> Not sure if I can follow you though?!
>
> What happens if I manually add a (fake) host to my hosts.cfg file. The host doesn't exist in the puppetdb, because it was never "alive". On the next puppet run, puppet should remove this "false" entry in my hosts.cfg, right?

No, not necessarily; you need to enable resource purging for resource
types like nagios_host:

resources { "nagios_host":
  purge => true,
}

ken.

Kai Timmer

Oct 9, 2014, 10:12:13 AM10/9/14
to puppet...@googlegroups.com
2014-10-09 15:20 GMT+02:00 Ken Barber <k...@puppetlabs.com>:
No not necessarily, you need to enable resource purging with resources
like nagios_host:

resources { "nagios_host":
  purge => true,
}

Oh, I just didn't know that. My manifest now looks like this:

  resources { ["nagios_host", "nagios_service"]:
    purge => true,
  }

  # Collect the nagios_host resources
  Nagios_host <<||>> {
    target  => "/etc/icinga/puppet.d/hosts.cfg",
    require => File["/etc/icinga/puppet.d/hosts.cfg"],
    notify  => Service[icinga],
  }

But the entries don't get purged. Looks like I'm still missing something :/

Best regards,
Kai

Jonathan Gazeley

Oct 9, 2014, 11:00:46 AM10/9/14
to puppet...@googlegroups.com
I think you are running into this:

"You can purge Nagios resources using the resources type, but only in the default file locations. This is an architectural limitation."

https://docs.puppetlabs.com/references/latest/type.html#nagioscommand

i.e. if you set the target parameter, you lose the ability to purge.

Ken Barber

Oct 9, 2014, 11:06:50 AM10/9/14
to Puppet Users
>> No not necessarily, you need to enable resource purging with resources
>> like nagios_host:
>>
>> resources { "nagios_host":
>> purge => true,
>> }
>
>
> Oh, I just did not now that. My manifest now looks like this:
>
> resources { ["nagios_host", "nagios_service"]:
> purge => true,
> }
>
> #Collect the nagios_host resources
> Nagios_host <<||>> {
> target => "/etc/icinga/puppet.d/hosts.cfg",
> require => File["/etc/icinga/puppet.d/hosts.cfg"],
> notify => Service[icinga],
> }
>
> But the entries don't get purged. Looks like I'm still missing something :/
>
>
>
> I think you are running into this:
>
> "You can purge Nagios resources using the resources type, but only in the
> default file locations. This is an architectural limitation."
>
> https://docs.puppetlabs.com/references/latest/type.html#nagioscommand
>
> i.e. if you set the target parameter, you lose the ability to purge.

Well spotted Jonathan ... :-).

So Kai, you can fake this with soft links: link the Icinga dir from the
expected Nagios configuration directory, or soft-link the files
themselves; up to you.
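A rough sketch of the per-file variant in Puppet (paths are assumptions; the nagios_host type usually writes to /etc/nagios/nagios_host.cfg by default, adjust to your layout). Drop the target override from the collector so the type writes to its default file, and link that into the Icinga directory:

  file { '/etc/icinga/puppet.d/hosts.cfg':
    ensure => link,
    target => '/etc/nagios/nagios_host.cfg',
    notify => Service['icinga'],
  }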

ken.

Kai Timmer

Oct 9, 2014, 11:32:39 AM10/9/14
to puppet...@googlegroups.com

2014-10-09 17:06 GMT+02:00 Ken Barber <k...@puppetlabs.com>:
So Kai, you can fake this with soft links: link the Icinga dir from the
expected Nagios configuration directory, or soft-link the files
themselves; up to you.

Thank you both a lot.

Now the (fake) host gets removed, but for some reason the Icinga service doesn't get reloaded. Do I need to add something else to the manifest? The notification works fine when I add a host.

Thank you,