On 12/04/2013 08:22 AM, Felix Frank wrote:
> Hi,
>
> I must be missing an essential piece here.
>
> All three of your puppet stack nodes must be present in each instance,
> no? The production master manages all three masters, normally. To change
> monitoring of either of them, you update the production manifests.
What do you mean by "present in each instance"? Each stack is
self-contained - it has its own ENC, PuppetDB, and Master. Each stack's
master manages itself and its stack, out of a "production" environment
(git master). The logic behind this is that if the production stack
suffers an outage (e.g. hardware failure), the ENC data is imported
into the test stack, and nodes are seamlessly moved over. Yes, I'm
aware of the tradeoff that certain things aren't constrained by
environment, and bad code in one environment on the dev or test
masters could bring down that stack.
I'm not just worried about monitoring the puppet stack itself, I'm also
worried about monitoring the nodes. E.g. the dev stack has a bunch of
VMs that exist to test code, and they need to be monitored in Nagios.
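For context, the pattern I'm using is the usual exported-resources one -
something like this (the tag name and host parameters here are just
illustrative):

```puppet
# On every monitored node: export a host definition.
# $::fqdn and $::ipaddress are standard facts.
@@nagios_host { $::fqdn:
  ensure  => present,
  address => $::ipaddress,
  use     => 'generic-host',   # assumes a 'generic-host' template exists
  tag     => 'monitored',
}

# On the Nagios server: collect everything exported with that tag.
Nagios_host <<| tag == 'monitored' |>>
```

The collection only sees resources stored in the PuppetDB the collecting
master talks to, which is exactly the problem with three separate stacks.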
>
> Of course, if you implement some new monitoring feature on the dev
> master, you must have that node run puppet against its local dev master
> to export resources, then the nagios server also against the dev master
> to import them. But that is just the usual dev workflow, I assume.
Yeah, that's understood. But what about the production monitoring? I'd
need to run all of the nodes in the environment against the production
master to actually export the nagios configs to the nagios server... or
else I'd need (what I'm asking about) some way of exporting the Nagios
configs from the dev and test masters to the Nagios server, either
scoped to a single environment (broken in puppet - exported resources
don't work that way) or only when manually requested...
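To be clear about what "scoped to a single environment" would look like:
if all three stacks shared one PuppetDB, you could approximate it by
tagging exports with the agent's environment and collecting selectively -
a sketch only, since my per-stack PuppetDBs don't support this as-is:

```puppet
# Exporting side: tag the resource with the exporting node's environment.
@@nagios_host { $::fqdn:
  ensure  => present,
  address => $::ipaddress,
  tag     => "env_${environment}",
}

# Collecting side (Nagios server): pick one environment's exports only.
Nagios_host <<| tag == 'env_production' |>>
```

That's the behavior I'd want across masters, which is what exported
resources can't give me today.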