Personally, I refrain from having puppet restart services unless they are quite "safe", i.e. unlikely to stop working.
I am also very disconcerted about the issues involved in setting up new files. You can never, ever, EVER change the mode of a newly installed file without restarting services on all existing machines. That doesn't make any sense.
I don't really understand your scenario. There is a new config file for
service X. It gets installed from a package, presumably the X package
itself. How are service restarts immediately after package installation
problematic?
So the "right" approach here is to ignore the mode in puppet, and adjust
your provisioning process to take care of it.
This idea makes me somewhat uncomfortable. I get the feeling that this change would be a lot more fundamental than one might think.
To puppet, each and every resource has one (more or less complex) state,
and puppet either accepts this state or sees need to change it. If
changed, fire a notify to all subscribers. That's it.
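A minimal sketch of that single-state model (the resource names here are made up for illustration):

```puppet
# Puppet tracks one aggregate state for the file. A change to ANY
# managed property -- content, owner, group, or mode -- produces a
# single "changed" event, and every subscriber is refreshed. The
# event does not record which property actually changed.
file { '/etc/myapp/app.conf':
  ensure  => file,
  content => template('myapp/app.conf.erb'),
  owner   => 'root',
  mode    => '0644',
}

service { 'myapp':
  ensure    => running,
  subscribe => File['/etc/myapp/app.conf'],
}
```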
What you're suggesting is a differentiation that has never existed in
this context (afaik). I'm not sure I feel good about opening this door -
I can easily see it become a gateway for lots of unintended effects to
trip users up.
As for your original problem, I don't see a good way of safeguarding
against this in puppet.
When something changes, the service has to be notified.
When the service should not be restarted, puppet should not be running, or the service's restart parameter should be set to /bin/true.
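For the record, the restart-override trick looks like this (the service name is an assumption for illustration):

```puppet
# Refresh events still fire, but the restart command is a no-op,
# so the running service is never actually bounced.
service { 'myapp':
  ensure  => running,
  restart => '/bin/true',
}
```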
--
You received this message because you are subscribed to the Google Groups "Puppet Users" group.
To post to this group, send email to puppet...@googlegroups.com.
To unsubscribe from this group, send email to puppet-users...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/puppet-users?hl=en.
If finer grained event-handling behavior is desired, then it should be implemented as a general-purpose facility instead of as a one-off special case. For instance, it is conceivable that a future version of Puppet would allow for some kind of filter to be installed on notification and subscription relationships, to control which events are passed through based on which resource properties changed. I don't imagine that could happen before v3.1, however, if then.
No. I'm saying that either you need to manage (outside of puppet) when your services restart OR you don't care when your services restart.
In the first case I'd want to run puppet with --noop for consequence-checking and only run it "hot" in a maintenance window. In the second case the whole discussion is moot anyways.
You could conceivably combine that general idea with tags, so as to apply only changes considered safe on most puppet runs, but allow everything to be applied together in maintenance windows. Getting the tags (only) in the right places could be tricky, and you would need to carefully weigh the consequences, but it should be possible to do what you want this way.
Like most other posters so far, I think that this would be such a fundamental change that it should come in a major version if anything. I wouldn't be opposed to the idea of being able to filter on parameters when doing a subscribe/notify, maybe a filter metaparameter along the lines of filter => ['source', 'owner'], but like most people I feel this adds a lot of complexity for very little gain. I would prefer to simply schedule the puppet run that changes the mode and causes a service restart to occur out of hours, and take the restart downtime. I feel it keeps things simple to retain the existing concept of notify.
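To make the proposal concrete, here is a sketch of what such a filter metaparameter might look like. Note that `filter` is purely hypothetical and does not exist in Puppet:

```puppet
# NOTE: 'filter' is NOT a real Puppet metaparameter; this only
# illustrates the proposed syntax.
file { '/etc/myapp/app.conf':
  ensure => file,
  source => 'puppet:///modules/myapp/app.conf',
  owner  => 'myapp',
  mode   => '0644',
}

service { 'myapp':
  ensure    => running,
  subscribe => File['/etc/myapp/app.conf'],
  # hypothetical: only changes to source or owner would trigger a
  # restart; a mode-only change would be applied silently.
  filter    => ['source', 'owner'],
}
```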
On Jun 15, 2012, at 7:16 AM, jcbollinger wrote:
You could conceivably combine that general idea with tags, so as to apply only changes considered safe on most puppet runs, but allow everything to be applied together in maintenance windows. Getting the tags (only) in the right places could be tricky, and you would need to carefully weigh the consequences, but it should be possible to do what you want this way.

Unfortunately you can't use tags to limit whether or not something happens. I have a bug open on this. I'd love to say "only take this action when a tag is set", but this doesn't work.
The only thing you can do is put *EVERY* tag inside the puppet.conf for every host except the tags you want to use to limit actions. At even this fairly small install I'm working on right now, the total tag list is well over 200 entries and growing 8-10 a day. Maintaining that tag list for puppet.conf isn't impossible, but would be very difficult.
On Jun 15, 2012, at 12:35 AM, David Schmitt wrote:
No. I'm saying that either you need to manage (outside of puppet) when your services restart OR you don't care when your services restart.
I find this odd, since more than 90% of the parameters that puppet provides for configuration management meet the same basic need that you are saying shouldn't be done. I could easily rewrite your statement as: [...]
But your main argument is: "but like most people I feel this adds a lot of complexity for very little gain." It's an odd phenomenon, in that this wouldn't affect anyone not using the filter at all, but because they don't see a need for it they will object to someone else having the functionality.
What you describe would indeed be unreasonably difficult to maintain, but you don't need to do that. You can instead assign some for-purpose tag of your choosing (e.g. "safe") to all resources you consider safe to modify at any time. Then you only have to worry about that one tag. Moreover, you then control what is considered safe from your manifests, instead of from nodes' local config files, and you can even switch safety on and off for specific resources if you should ever need to do so.
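A sketch of that single-tag approach (the tag name "safe" and the resource are illustrative):

```puppet
# Tag only the resources you consider safe to change at any time;
# everything else is left untagged.
file { '/etc/motd':
  ensure  => file,
  content => "Managed by puppet\n",
  tag     => 'safe',
}
```

Routine runs could then be restricted with `puppet agent --test --tags safe`, while maintenance-window runs omit the --tags option and apply everything.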
John
You seem to be interpreting many of the responses as assertions that you shouldn't want what you're asking for. I don't think anyone is saying that, at least not at the level of generality at which you responded to David. On the other hand, several people, myself included, have expressed valid concerns about the specific way you suggest enabling your desired behavior. I cannot speak for the other participants, but so far you have not addressed those concerns to my satisfaction.
I figure I should clarify a little bit. Unless my understanding of Puppet internals is way off, it would be quite a lot of work to add the filter as it stands. A lot of code would have to change internally to make it capable of filtering on parameters. By complexity I meant code-wise, not puppet manifest/syntax-wise. I just meant that it would take a lot of development to get it working consistently and properly as a metaparam throughout the codebase. It would be nice to hear from someone at puppetlabs just how difficult this would be to add based on the 3.0 codebase.
I've looked at the code. It requires changing the triggers to have attributes, which is honestly a fairly trivial change and likely backwards compatible with anything today, too. Look at Puppet::Relationship.match?() -- it accepts the idea of event types, but only has triggers for :ALL_EVENTS or :NONE. That logic could be replaced with specific event types being passed quite easily. Then each object would have to be modified to accept a revised (optional) syntax for the situation where only specific types are desired. It really doesn't appear to be a large change.
As has already been suggested, make the change yourself, submit a patch. You have indicated several times that this isn't that big a change. Further, it's a change that is clearly only of benefit to you at this point, so trying to convince others to do it for you is pointless. If you *really* want Puppetlabs to do the change for you, then buy a support contract and have them do it.
This thread is starting to confuse me; I am no longer sure what you're
suggesting, precisely.
a) Make it possible to nullify notifications under certain circumstances.
b) Make it possible to ignore file owner/mode for files that exist already.
While (b) is rather tasteless to me (as is the whole "replace"
parameter), it is certainly well in line of what's possible today, so I
wouldn't object too much.
(a) is a nightmare I hope you're not invested in.
I'm thinking of sites where you edit a policy and two seconds later someone on a different team "kicks" the host for an entirely different reason. And perhaps they should have used a tag to limit what they kicked, but perhaps they forgot. Or perhaps their module depends on yours, so they added your module as a tag.
This statement in itself is interesting as well. I believe that most
sites, large or small, don't face this particular problem at all,
because most of its incarnations are handled by manifest code control.
If a change goes to production, it better work, otherwise whoever pushed
to production has to answer for the breakage.
If you have a service that should under no circumstances be restarted
unattended, then for crying out loud, do not make it consume resource
notifications from puppet. That's begging for trouble.
I say "running puppet *hot* on a system *when* restarting a service might create a booboo is a bad idea." Emphasis on *hot* and *when*. For both emphases there are solutions (noop, cron, schedules, mcollective, dssh). Using a different CM tool is not likely to solve that unless you're willing to go the build-freeze-scrap route.
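As one sketch of the "schedules" option mentioned above (the window and service name are assumptions for illustration):

```puppet
# Resources that reference this schedule are only eligible to be
# changed between 02:00 and 04:00, at most once per day. Outside
# that window puppet leaves them alone.
schedule { 'maintenance':
  range  => '2 - 4',
  period => daily,
  repeat => 1,
}

service { 'myapp':
  ensure   => running,
  schedule => 'maintenance',
}
```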
The core of this runs into organizational realms like "Change Management", which are not in scope for the puppet master/agent.
At the clients I work for, Rule #1 is "do not push into production." Even some of the outward-facing "test" systems have sensitive times when clients are testing. Developing changes and actually applying them are two VERY separate activities. You might want to look into git-flow to dis-entangle development, teams, and integration.
*boggle* Um, so your configuration management system is not part of your change management implementation? That's what you just said, and it makes no sense.
On Monday, June 18, 2012 at 1:07 PM, Jo Rhett wrote:
*boggle* Um, so your configuration management system is not part of your change management implementation? That's what you just said, and it makes no sense.
Not to speak for David but in general - the point is that they are different components of an interlocking whole.
In the cases you are discussing (conflict between multiple groups working on related systems) you need to implement better change control or CI. While Puppet works quite well with those technologies, we don't yet provide tooling around git/svn/choose your tool VCS, nor do we provide the workflow itself.
In terms of conflicting changes, you really need code review, CI, or preferably both. That's how large organizations handle scale. Trying to force the tool to solve development problems isn't going to work. Look at how development teams solve the same problems. They don't do it by refusing to build a new daily snapshot of parts of the software.
Well, (a) in the service of (b) -- but as a general point, I think that every notify/subscribe should be tunable as to which changes will cause the action to take place.
Not to continue this thread longer than it needs to go, but wouldn't your suggested paradigm permit you to bite yourself in the following scenario:
- change the ownership or mode of a file to the point that the required application could no longer access the file
- exclude this change from notifying or triggering the application that the permissions or ownership of its config file have changed.
This change will go unnoticed until:
o Some random point in time in the future wherein the service "should" restart according to the method you propose.
o Some part of the application needs to re-read its configuration file
o The server reboots
Suddenly things don't work. You don't have a smoking gun for the culprit change as that "clean" deployment happened [hours,days,weeks] ago with some other "unrelated" change by some other team that this service was set to ignore.
Just my $.02, but if a file's ownership shouldn't change, and it belongs to a specific module, and a reason arises to change that ownership without impacting existing modules, does it make sense to create a different module and manage the dissimilar needs via that route?
Same software, same management functions, same configs… just a different permissions change on new installations. Should I really duplicate the entire module, and manage all changes in both places? And forever try to manage which host should be in which generation?
No, it's not the easiest way to break your environment, but it is a direct line to future obfuscated breakage, red herrings, and new and exciting ways to waste lots of engineers' time trying to suss out which of the last N changes broke $package.
Not taking that into account will arguably lead one to making bad design choices. Aren't we supposed to be lazy and try to NOT shoot ourselves in the foot unexpectedly?
Same software, same management functions, same configs… just a different permissions change on new installations. Should I really duplicate the entire module, and manage all changes in both places? And forever try to manage which host should be in which generation?
There are many ways of doing this… A case statement tied to a version number, or some other fact, will get you this.
Aren't you pretty clearly stating that this old generation IS different than the next generation? How is puppet supposed to KNOW the difference between the two?
I've yet to see a satisfactory implementation of 'do what I mean, not what I say'.. but then again, I think that's why we're driving the computers and not the other way around…
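One way puppet can "know" the difference is to be told via a fact, e.g. (the fact name, values, and file are invented for illustration):

```puppet
# Assumes a custom fact 'host_generation' that reports which build
# generation a host was provisioned from.
case $::host_generation {
  'new':   { $conf_mode = '0640' }
  default: { $conf_mode = '0644' }
}

file { '/etc/myapp/app.conf':
  ensure => file,
  mode   => $conf_mode,
}
```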
No - if it's that small and simple, the data about which host is in which generation should be in your source of truth, CMDB, etc - and Puppet should read that data and determine which attribute or set of attributes (or resources) is applied based on that. You can do this today with hiera and conditionals.
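A minimal sketch of the hiera-plus-conditionals approach (the key name, default, and file are assumptions):

```puppet
# Look the mode up per node in your hiera data; fall back to a
# default for hosts with no override.
$conf_mode = hiera('myapp_conf_mode', '0644')

file { '/etc/myapp/app.conf':
  ensure => file,
  mode   => $conf_mode,
}
```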
Okay, back to the original problem. It's become a bit hard to follow.

Without code change to puppet you're stumped. But without knowing your systems, maybe a combination of package-based deployments and excluding mode or owner will get you by?

But with a puppet code change, would a way of doing what you are proposing be something like this:

file { title:
  owner     => name,
  mode      => 0755,
  content   => content,
  notify    => Service[name],
  notify_on => ['owner', 'content'],
}

Now I don't know how easy that is to code into puppet, but I think that would sound like a useful feature to me.

Regards,
Den
You don't need to use Hiera. You can use any data lookup tool you like to do the same thing.
Instead of making the case that someone else should do this, I recommend that you code up a solution and issue a pull request. If the code looks valuable it can then be managed like any other feature or code request in Puppet.
Code wins arguments.