Avoiding duplicate exported resource


Daniel Urist

Mar 3, 2016, 12:11:59 PM
to Puppet
I've created a module to configure a caching nginx proxy. I am running several of these proxies behind a load balancer. They all proxy the same external address. I'd like to export a nagios host/service for monitoring the external address, which will then be collected on my nagios server. The problem is, since I have several instances of the proxy managed by puppet, and the exported host/service is identical on each, I end up with duplicate resources. I could give the resources unique names (e.g. by appending the proxy's hostname to the resource name), but then I end up with multiple identical hosts/services in nagios, which doesn't work.

The puppet stdlib module has an "ensure_resource" function, but there doesn't seem to be a way to use this on an exported resource collector.

I guess one workaround would be to set a parameter in the proxy module, for example "export_address" and have that default to "false", and only set it to "true" for one node, but that's kind of ugly since one node then needs to be special.
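For reference, that workaround would look roughly like the following sketch. The class, parameter, and address here are all invented for illustration; they are not from any real module:

```puppet
# Hypothetical sketch of the "one special node" workaround.
# All names and the address below are illustrative assumptions.
class profile::nginx_proxy (
  $export_address = false,
) {
  # ... usual proxy configuration ...

  # Only the one node with export_address => true exports the
  # monitoring resource, so no duplicates reach PuppetDB.
  if $export_address {
    @@nagios_host { 'proxy-external':
      ensure  => present,
      address => '203.0.113.10', # shared external address (example)
    }
  }
}
```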

Surely this isn't an uncommon use case; what's the best way to work around it?


Ken Barber

Mar 3, 2016, 12:51:13 PM
to Puppet Users
Exported resources won't handle any resource de-duplication. You can
get around this by not using collectors to gather the data: dalen's
puppetdbquery module will help with this, and in PuppetDB 4.0 we're
introducing a core function for the same purpose. Once you have the
data you can do anything with it, and Puppet's latest iterator support
makes this easier still, since you can reduce the results.

As you mention, the other option is to put your proxied/external
address test somewhere else, in one place, such as on the nagios
machine itself. Then it's not a duplicate. Model-wise, it probably
doesn't belong on _all_ your nodes, as you say, but it shouldn't
belong to just one of the cluster nodes either: if you later remove
that node, it all stops working.

Model-wise, in an ideal world, the proxied/virtual address would be a
'node' of sorts and would own that entry, but if no box exists to
compile that catalog, well, then we're just talking crazy :-).

Whatever your solution, the problem will repeat itself if you have
other virtual addresses, so I'd make sure you're happy with it for
multiple clusters. At least then you have continuity, and people will
know where to go to look into problems.

ken.

Reinhard Vicinus

Mar 4, 2016, 4:53:30 PM
to Puppet Users
We use a wrapper resource to ensure that only one resource is created. It's not perfect, because you need an additional resource, but it works. Here is an example:

Multiple nodes want to ensure that the following resource is created on a node:

file { '/tmp/at_least_one_node_exists.txt':
}

so we create a wrapper resource:

define export_file (
  $filename,
) {
  ensure_resource('file', $filename, {})
}

and export the wrapper resource (here you need different names for all nodes, so we use fqdn in the name):

@@export_file { "${fqdn}_one_node":
  filename => '/tmp/at_least_one_node_exists.txt',
}

and also collect the wrapper resource on the destination node:

Export_file <<| |>>

jcbollinger

Mar 7, 2016, 10:21:11 AM
to Puppet Users


Almost any use of the ensure_resource() function is a kludge.  Its entire purpose is to circumvent Puppet's checks for duplicate resources, which themselves serve an important purpose.  Even in that respect, the function is typically reliable only for resources that are declared solely by that means, because an ordinary declaration of the same resource that is evaluated later will still fail as a duplicate declaration.

Where it really falls down, however, is the case where the various ensure_resource() declarations of a given resource are not all identical.  In that case, no catalog can be built that is consistent with all the declarations, but ensure_resource() prevents the catalog builder from recognizing that.  A catalog is therefore built, based on some random one of the conflicting declarations, and the system state consequently applied by the agent is then likely to be inconsistent.

The fact is that if you have a means to be confident that you have no inconsistent ensure_resource() declarations, then usually that is also a means to avoid ensure_resource() declarations altogether.  Generally, you would do that by factoring out the duplicate declarations to a single (ordinary) declaration somewhere else.

Your particular example, however, happens to find a crevice in which to lodge: as presented, the exported resources involved encode only the titles of the wrapped resources (no resource properties), yet they afford the possibility of wrapping resources with multiple different titles.  If they also encoded properties for the wrapped resources, they would be potentially unsafe; and if they wrapped only resources of a consistent title, there would be good alternatives that do not require ensure_resource().


John

jcbollinger

Mar 7, 2016, 10:27:55 AM
to Puppet Users


On Thursday, March 3, 2016 at 11:51:13 AM UTC-6, Ken Barber wrote:
 
Model-wise, in an ideal world, the proxied/virtual address would be a
'node' of sorts and would own that entry, but if no box exists to
compile that catalog, well, then we're just talking crazy :-).



Well no, if the proxied / virtual address is not a property specific to any individual node, then it is a property of the overall site configuration.  Puppet therefore does not need to determine this from the nodes; instead, it needs to *apply* it to them.  As such, it ought to be recorded in the Hiera data repository from which Puppet is working.  If it's in the data, then it does not need to be communicated between nodes via exported resources.  Rather, Puppet should draw it from the same source for all nodes that need it for any purpose.
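A minimal sketch of that site-data approach (the Hiera key, class name, and address below are invented for illustration, not from any real configuration):

```puppet
# Assumes a Hiera key like the following in common data:
#   site::proxy_external_address: '203.0.113.10'
# hiera() was the lookup function current at the time of this thread.
$external_address = hiera('site::proxy_external_address')

# Any node that needs the address (here, the nagios server) draws it
# from the same source; nothing is exported or collected.
nagios_host { 'proxy-external':
  ensure  => present,
  address => $external_address,
}
```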


John

Ken Barber

Mar 7, 2016, 11:26:22 AM
to Puppet Users
I think you've missed my modelling point, or perspective. I was simply
expressing that, if you could, you would record the intended exported
resource against a virtual node that maps to the virtual address, but
this isn't possible today. Hiera isn't part of the resulting model;
it's just input that creates the graph. This is academic, though,
since it's not possible anyway.

Irrespective of this imaginary world, one could store the data in
hiera to be consumed, if one chose to, or somewhere else; it matters
little for the resulting graph.

ken.

Daniel Urist

Mar 7, 2016, 6:34:51 PM
to puppet-users
I've managed to solve this with query_resources() from puppetdbquery to generate an array of the matching resources, and ensure_resource() to create only a single instance. That seems to be the cleanest way to handle this at the moment.
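For anyone finding this later, that combination looks roughly like the sketch below. The query strings, resource names, and address are guesses for illustration, not Daniel's actual code:

```puppet
# query_resources() comes from dalen's puppetdbquery module;
# ensure_resource() comes from puppetlabs-stdlib.
# All names below are illustrative assumptions.
$proxy_hosts = query_resources('Class[profile::nginx_proxy]',
                               'Nagios_host[proxy-external]')

# However many proxies matched in PuppetDB, declare the monitoring
# resource exactly once on this node.
unless empty($proxy_hosts) {
  ensure_resource('nagios_host', 'proxy-external', {
    ensure  => present,
    address => '203.0.113.10', # shared external address (example)
  })
}
```

Because the data comes from a PuppetDB query rather than an exported-resource collector, no node has to be "special", and the duplicate-resource check never fires.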
