> I'm using the Opsviewmonitored module to manage my servers inside Opsview,
> exporting the resource to a central server and making sure all of the
> available servers are monitored and exist in Opsview.
> I have around 30 servers that are being managed by Opsview just fine, but
> there is a specific group of servers which keeps disappearing from puppet
> as a managed resource, causing puppet to constantly try to apply the
> resource and reload my Opsview server endlessly.
This module is pretty basic, and doesn't do any error checking after
sending the config to Opsview. If you specify a non-existent
hostemplate or hostgroup, then the resources will appear to be applied
by puppet, but they won't actually be written to Opsview. As a result,
you'll end up seeing the resources get applied on every puppet run.
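For reference, here's a rough sketch of how I'd declare this as an
exported resource -- the hostgroup and hostemplates values are made up
for illustration, and both must already exist in Opsview, or the write
will silently fail as described above:

```puppet
# Sketch only: 'Proxies' and 'Network - Base' are placeholder names.
# Both the hostgroup and every hostemplate must already exist in
# Opsview, since the module does no error checking on the response.
@@opsviewmonitored { $::fqdn:
  ensure       => present,
  ip           => $::ipaddress,
  hostgroup    => 'Proxies',
  hostemplates => ['Network - Base'],
}
```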
> # puppet resource opsviewmonitored proxy3
> opsviewmonitored { 'proxy3':
>   ensure => 'present',
> }
I would expect the output of "puppet resource" to list the hostgroup,
ip, and hostemplates parameters for this node if they exist. In this
case they're blank, so it's probably not set up properly.
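By way of contrast, here's roughly what I'd expect "puppet resource" to
print for a properly set-up node (the parameter values here are
invented, just to show the shape of healthy output):

```puppet
# puppet resource opsviewmonitored proxy3
opsviewmonitored { 'proxy3':
  ensure       => 'present',
  ip           => '10.0.1.23',
  hostgroup    => 'Proxies',
  hostemplates => ['Network - Base'],
}
```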
So, is the 'proxy3' host actually present in Opsview, with the proper
settings? I'd double-check to make sure that the hosttemplate and
hostgroup you're specifying are actually set up.
Btw - you might want to check out a fork of the opsview module, which
allows you to manage a whole lot more than just nodes (hostgroups,
servicechecks, contacts, etc):
http://forge.puppetlabs.com/cparedes/puppet_opsview
-devon