Ok, so yeah, I agree that pluginsync doesn't make sense. You'd lose half the stuff that's making the cached catalog work in the first place. That's fine. But, as the large customer in question, I don't agree that this wouldn't be a valuable feature.

Yes, thanks to PUP-7779 we can now get fresh facts on all of our nodes running cached catalogs. However, if we wanted to act on those facts in some capacity, we'd have to feed them into some sort of event management system, then write rules there to trigger a callback to the agent to replace the catalog. Now don't get me wrong, there are easier ways too: I could make an exec that runs a facter command every Puppet run, uses some sketchy bash logic, and then flips the value of use_cached_catalog to false for the next run. But that's not at all performant and is, of course, a total hack.

The problem for us is that with the amount of change control nightmares and regulatory scrutiny we have, we can't avoid running our fleet in cached catalog mode. And when you run a fleet of tens of thousands of nodes on cached catalogs, you quickly find yourself quite limited. Being able to exercise some control over those nodes would make our lives so much easier, and give us back at least some of the flexibility we lost when we moved most of our fleet to cached catalogs.

Another thing this could potentially enable is phased deployments. If we could set rules in facts that determine when a system is supposed to get a new catalog, which some of our teams are currently doing for their own servers via other hacks, we could do planned catalog refreshes instead of our nodes sitting on an old catalog until the end of time because they got missed in the last rollout. This is probably the biggest use case for us at the moment, but with the ability for Puppet to respond to facts returned to us, I'm sure we'd find others very quickly.
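For the record, the exec hack I'm describing would look something like this. This is just a rough sketch, not something we actually run: `refresh_catalog` is a hypothetical custom fact, and the paths assume an AIO puppet-agent install:

```puppet
# Rough sketch of the hack described above. 'refresh_catalog' is a
# hypothetical custom fact; paths assume an AIO puppet-agent install.
exec { 'drop_cached_catalog':
  # Flip the agent setting so the NEXT run requests a fresh catalog.
  command => '/opt/puppetlabs/bin/puppet config set use_cached_catalog false --section agent',
  # Only do it when the fact says a refresh is due.
  onlyif  => '/opt/puppetlabs/bin/facter refresh_catalog | /bin/grep -qx true',
}
```

Which, again, only proves the point: the node itself already has everything it needs to decide this, it just has no supported way to act on it.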
In short, can we keep this around?