Caching catalogs


Raphaël Pinson

Jun 29, 2015, 11:43:45 AM
to puppe...@googlegroups.com
Hello,


I've activated caching on our Puppetservers, using the admin API to invalidate the cache upon deploying new environments. However, this only caches manifests, and catalogs still need to be compiled for every request.

I'm thinking (at least in our case) it wouldn't be totally crazy to cache catalogs on the master so long as:

* manifests are not changed (this is taken care of by the r10k hook + admin API)
* data do not change (same, since we deploy hiera data with r10k)
* facts do not change.


Obviously, *some* facts always change (uptime, memoryfree, swapfree, etc.), but most of them don't. So the idea would be to add a parameter in puppet.conf naming these volatile facts so they are excluded from cache invalidation, and use the remaining facts to decide when a catalog needs to be recompiled.
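
As a rough sketch of what I mean (the volatile facts list, the helper, and everything below are hypothetical; nothing like this exists in Puppet today):

    require 'digest'
    require 'json'

    # Hypothetical: the facts whose changes should NOT invalidate the cache.
    VOLATILE_FACTS = %w[uptime uptime_seconds memoryfree swapfree]

    # Reuse a cached catalog as long as a key built from the non-volatile
    # facts (plus node name and environment) is unchanged.
    def catalog_cache_key(node_name, environment, facts)
      stable = facts.reject { |name, _| VOLATILE_FACTS.include?(name) }
      Digest::SHA256.hexdigest(
        [node_name, environment, JSON.dump(stable.sort.to_h)].join('|')
      )
    end

    catalog_cache_key('web01.example.com', 'production',
                      'osfamily' => 'Debian', 'uptime' => '12 days')

A change in code or hiera data would still be handled separately (the r10k hook + admin API); a change in the key above would force a recompile for that node.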

Is there already some kind of code doing that, or any opinion/feedback on this idea?


Cheers,

Raphaël



Erik Dalén

Jun 29, 2015, 11:47:54 AM
to puppe...@googlegroups.com

If you have any exported resource collections or query any external system for data, it won't work (puppetdbquery, DNS, LDAP for example).

But for the subset where you can make those guarantees, I suppose it will give a speed boost.



Raphaël Pinson

Jun 29, 2015, 11:55:05 AM
to puppe...@googlegroups.com

On Monday, June 29, 2015 at 5:47:54 PM UTC+2, Erik Dalén wrote:

If you have any exported resource collections or query any external system for data, it won't work (puppetdbquery, DNS, LDAP for example).

But for the subset where you can make those guarantees, I suppose it will give a speed boost.



That is quite true. Most of the time, new exported resources mean either a change of code or a newly classified node (which in our case means new hiera data), so that would actually work.

Luke Kanies

Jun 29, 2015, 1:02:17 PM
to puppe...@googlegroups.com
This is something that our team at Puppet Labs has been working on a ton. It’s beneficial in the short term, for the kind of performance and other benefits you describe, but it’s also key in a bunch of other cool stuff we’re doing. The short answer is that in some ways it’s quite easy, but it also requires some changes to the core that aren’t necessarily as easy.

Eric Sorenson is lead on the work (code-named Direct Puppet), so hopefully he’ll chime in with more details. The basic idea, though, is that we do a few things, all together (note that this is from memory, and I’m sure I’m missing pieces or getting some of them wrong):

* Make cached catalogs more valuable by changing reference-by-url of files to reference-by-content (so updated files on the server don’t change catalog behavior)

* Have the client always check to see if its catalog is still valid, or if it should download a new one; this will result in the client defaulting to reusing its catalog in most situations

* Provide simple mechanisms on the server for indicating when catalogs are out of date. I believe in the first release or so we’re providing a big huge boolean that just resets all catalog staleness (e.g., if you push your code to the server, there will be a command you run on that server that resets the ‘latest catalog’ date, so all existing catalogs will be considered stale)

All of this together should mean that clients only download catalogs when you’ve pushed code (or made some other change, and then reset catalog freshness). That means you can often dramatically simplify your canary testing mechanisms for some use cases (push code to the server; update catalogs on a couple of hosts; update on all hosts if it works), you’ll get a huge performance boost because you’ll only compile when needed, and everything will just make more sense.
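
Purely as an illustration of the second and third bullets (this is not the actual Direct Puppet implementation; all names here are invented):

    # The server records a 'code last deployed' timestamp that a deploy
    # command resets; agents reuse their cached catalog unless that timestamp
    # is newer than the catalog they hold.
    class DeployMarker
      def initialize
        @last_deploy_at = Time.at(0)
      end

      # What a hypothetical "I just pushed code" command would do on the server.
      def reset!
        @last_deploy_at = Time.now
      end

      # An agent's cached catalog is stale only if code was deployed after it
      # was compiled.
      def catalog_stale?(catalog_compiled_at)
        catalog_compiled_at < @last_deploy_at
      end
    end

    marker = DeployMarker.new
    compiled_at = Time.now
    marker.catalog_stale?(compiled_at)  # => false, agent keeps its catalog
    marker.reset!                       # operator pushes new code
    marker.catalog_stale?(compiled_at)  # => true, agent fetches a new one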

AFAIK all of the above is in the core product and thus is OSS, so its lack of openness is just laziness on our part (it’s hard enough to explain it to 5 people, much less the whole list), but maybe this will prompt us to publish it a bit more. We’re also relatively early on getting it all lined up, so it’s still a bit in flux.

There are some other pieces necessary to put it all together, but we’re not quite ready to talk about those yet. Hopefully the above is exciting enough. :)

Chris Price

Jul 3, 2015, 3:25:15 AM
to puppe...@googlegroups.com


On Monday, June 29, 2015 at 6:02:17 PM UTC+1, Luke Kanies wrote:
On Jun 29, 2015, at 8:43 AM, Raphaël Pinson <raphael...@camptocamp.com> wrote:
> [...]

This is something that our team at Puppet Labs has been working on a ton.  It’s beneficial in the short term, for the kind of performance and other benefits you describe, but it’s also key in a bunch of other cool stuff we’re doing.  The short answer is that in some ways it’s quite easy, but it also requires some changes to the core that aren’t necessarily as easy.

Eric Sorenson is lead on the work (code-named Direct Puppet), so hopefully he’ll chime in with more details.  The basic idea, though, is that we do a few things, all together (note that this is from memory, and I’m sure I’m missing pieces or getting some of them wrong):

This is indeed something we've been putting a lot of thought and effort into lately.

I have a question / thought experiment related to this, and would really love to hear some feedback from the community:

What would you think about a setup where your master never saw any of your code changes at all, until you ran a specific command (e.g. 'puppet deploy')?  In other words, you hack away on the modules / manifests / hiera data in your code tree as much as you like but your master keeps compiling catalogs from the 'last known good' setup, until you run this 'deploy' command?  At that point, all of your current code becomes the new 'last known good' and that is what your master compiles off of until you do another deploy.

We could also provide an HTTP endpoint to accomplish the same behavior.  And we could theoretically make this new behavior entirely opt-in, but, by opting-in to it, you'd get access to new features similar to what Raphaël and Luke were hinting at.

Again, this is just a thought experiment at the moment.  Curious how this would impact people's workflows.

Erik Dalén

Jul 3, 2015, 4:50:36 AM
to puppe...@googlegroups.com
On Fri, 3 Jul 2015 at 09:25 Chris Price <ch...@puppetlabs.com> wrote:


[...]

I have a question / thought experiment related to this, and would really love to hear some feedback from the community:

What would you think about a setup where your master never saw any of your code changes at all, until you ran a specific command (e.g. 'puppet deploy')?  In other words, you hack away on the modules / manifests / hiera data in your code tree as much as you like but your master keeps compiling catalogs from the 'last known good' setup, until you run this 'deploy' command?  At that point, all of your current code becomes the new 'last known good' and that is what your master compiles off of until you do another deploy.

Keeps compiling or keeps serving a cached copy?
 

We could also provide an HTTP endpoint to accomplish the same behavior.  And we could theoretically make this new behavior entirely opt-in, but, by opting-in to it, you'd get access to new features similar to what Raphaël and Luke were hinting at.

Again, this is just a thought experiment at the moment.  Curious how this would impact people's workflows.


Well, it would be useful to be able to atomically switch to a new version of manifests. At the moment the best you can do is to check out the new version somewhere else and move/relink it into place, so you get all of the new environment at the same time, but there might still be ongoing compiles that get half of the old environment and half of the new.
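
Roughly the kind of relinking I mean (paths and details are just an example):

    require 'fileutils'

    # Build the new code tree out of the way (e.g. an r10k or git checkout),
    # then swap a symlink with a single rename, which is atomic on POSIX
    # filesystems. Assumes the live environment path is already a symlink.
    new_release = '/etc/puppet/releases/production-20150703'
    live_link   = '/etc/puppet/environments/production'
    tmp_link    = "#{live_link}.tmp"

    FileUtils.ln_s(new_release, tmp_link, force: true)  # create/replace temp link
    File.rename(tmp_link, live_link)                    # atomic switch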

But it would really have to be per environment (and optionally all of them).

For consistency this would be good. When it comes to speed improvements I think there's other areas that need more focus. In my experience catalog application (even with no changes applied) takes about five times longer than catalog compilation (Puppet 4.2 improved this somewhat though).

/Erik

Chris Price

Jul 3, 2015, 5:05:08 AM
to puppe...@googlegroups.com
On Fri, Jul 3, 2015 at 9:50 AM, Erik Dalén <erik.gus...@gmail.com> wrote:


[...]

Keeps compiling or keeps serving a cached copy?

Well, both.  :)  In cases where the catalog doesn't need to be re-compiled, we wouldn't recompile it, but in cases where we do need to compile (say, a brand new node checks in or something), we'd compile based on the 'last known good' code rather than the current contents of the code tree.
 

We could also provide an HTTP endpoint to accomplish the same behavior.  And we could theoretically make this new behavior entirely opt-in, but, by opting-in to it, you'd get access to new features similar to what Raphaël and Luke were hinting at.

Again, this is just a thought experiment at the moment.  Curious how this would impact people's workflows.


Well, it would be useful to be able to atomically switch to a new version of manifests. At the moment the best you can do is to check out the new version somewhere else and move/relink it into place, so you get all of the new environment at the same time, but there might still be ongoing compiles that get half of the old environment and half of the new.

Yep; atomicity would be one of the major goals.
 
But it would really have to be per environment (and optionally all of them).

That makes sense and seems doable.

For consistency this would be good. When it comes to speed improvements I think there's other areas that need more focus. In my experience catalog application (even with no changes applied) takes about five times longer than catalog compilation (Puppet 4.2 improved this somewhat though).

Fair!  Thanks for the feedback.

Hopefully we've now got enough developers that we can work on client and server optimizations in parallel, though, so for this thread I'm most interested in teasing out the feasibility of introducing a 'deploy' step on the server side; it'd give us some atomicity and open the door for a lot of future features and optimizations, but I don't have a great understanding of whether or not it might break some workflows that people rely on today.  Erik, it sounds like in your case it wouldn't cause you any issues with respect to workflow?

Erik Dalén

Jul 3, 2015, 5:13:36 AM
to puppe...@googlegroups.com
Well, do you have any plans on how to solve queries to external systems and updates in them? For example, a new node checks in an exported resource that some other node should collect, but that other node already has a cached catalog. Would you require a cache invalidation to be triggered each time you update external systems?
It might be tricky to do that with some external systems; possibly better to be able to flag functions that have side effects and always recompile catalogs that call such functions.
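
Conceptually something like this (all names below are invented for the example):

    # If a compile calls any function known to query an external system, mark
    # the resulting catalog as non-cacheable so it is always recompiled.
    IMPURE_FUNCTIONS = %w[query_nodes puppetdb_query generate]

    class CompileTracker
      attr_reader :cacheable

      def initialize
        @cacheable = true
      end

      def record_function_call(name)
        @cacheable = false if IMPURE_FUNCTIONS.include?(name)
      end
    end

    tracker = CompileTracker.new
    tracker.record_function_call('puppetdb_query')
    tracker.cacheable  # => false, never serve this catalog from cache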

Also will PuppetDB be used as the catalog cache so it would work with multiple puppet masters behind a load balancer or SRV records?

/Erik

Chris Price

Jul 3, 2015, 5:25:51 AM
to puppe...@googlegroups.com
Yeah, dealing with side-effect inputs to catalogs is a tricky issue.  We're still batting around ideas on that.
 

Also will PuppetDB be used as the catalog cache so it would work with multiple puppet masters behind a load balancer or SRV records?

That conversation is still ongoing as well.  Storing catalogs in PuppetDB is definitely an option that has been discussed.  In any case, a solid multi-master story will be considered a prerequisite to any final implementation choices.

Putting aside catalog caching for the moment, though... if we added a mechanism for atomically deploying new code (even if we still did a full catalog compile on every agent checkin), there are still a lot of other kinds of optimizations we could build on top of this for Puppet Server (e.g., it could render the 'environment_timeout' setting irrelevant).  But before we get too far down the path of mapping out those kinds of optimizations, we've got to sort out whether or not the introduction of the extra step to 'deploy' code would cause problems for people.

As I'm typing this I'm realizing that I've kind of hijacked this thread, since, at least at first, I'm more interested in talking about workflows / atomic code deployment than about the actual details around caching.  Maybe I should break this off into a new thread?  Sorry about that!

Peter Meier

Jul 3, 2015, 12:34:25 PM
to puppe...@googlegroups.com

Hi All,

> Is there already some kind of code doing that, or any
> opinion/feedback on this idea?

I once wrote a change management guard that does some of the stuff you mentioned and, as a side effect, also includes caching - or rather, I abused caching for change management...

https://github.com/duritong/puppet-cm_guard

Its main target was a 2.7 installation and it's still being used there (afaik). Nevertheless, it should also work on a 3.x installation; no idea about 4.x. I never tested that extensively and haven't touched the code in two years, so I can't give any guarantees. Still, I think it might be interesting for the current discussion.

I see 2 main points:

1. Invalidation: as you mentioned, there are tons of effects that can play into invalidating the cache, and imho it depends highly on the specific Puppet setup and all the different features it uses. So, for the above implementation, I took the approach that there is a simple method that needs to be implemented; the contract is that Puppet passes all the information it has about such a node (mainly all the facts the client sent) and the method answers true or false to indicate whether a recompile is needed. As a user of that plugin you can do whatever you want within that invalidator, e.g. measure the outside temperature or roll a die. (A rough sketch of such a hook appears after these points.)
For a potential general solution I see such a way of contributing to the invalidation process as crucial. There should be a sane and usable default that works in 80% of the cases, but it should be freely tunable so it can also address the other 20% (or at least 18%). Providing a way of hooking into the decision and contributing to the decision-making process with whatever you like is, imho, the solution we should aim for.

2. Not everything is within the catalog: plugins, file sources, results or side effects of function calls.

2.1 Plugins are synced at a time when the catalog is not yet in the game; nevertheless they potentially affect catalog compilation (new facts, new types, etc.). For my use case I was able to ignore that, as it was enough to make people aware of the problem. But the fact that, with my solution, the cached catalog you get and the plugins you might have gotten earlier can be out of sync might still be a problem in some cases.

2.2 File sources: they are not part of the catalog, and I tried to use the static compiler to get the content of the sources into the catalog so we just have one huge static blob. However, this might not work in all cases, or at least is tricky (e.g. recursive file resources). Not being able to tie a certain file source version to a certain catalog might change a file without the other required changes that would be in the new catalog, and hence cause other problems. Imho this is one of the biggest problems and we should definitely try to address it. Unfortunately, it is also quite a hard problem.

2.3 Results/side effects of function calls: they contribute to side effects in a similar way as new plugins, but a change of result might also be predictable through an external invalidation hook, e.g. hiera files that changed.
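
To make the invalidation contract from point 1 a bit more concrete, it looks roughly like this (a simplified illustration, not the actual cm_guard code):

    # Puppet hands over everything it knows about the node (mainly the
    # submitted facts) and the hook answers true/false for "does this node
    # need a recompile?".
    module CatalogInvalidator
      def self.recompile?(node_name, environment, facts)
        # Example policy: recompile when the node's role or OS release changed
        # since the last cached catalog; ignore volatile facts entirely.
        current = [facts['role'], facts['operatingsystemrelease'], environment]
        cached  = load_cached_fingerprint(node_name)  # hypothetical lookup
        cached.nil? || cached != current
      end

      def self.load_cached_fingerprint(node_name)
        # ... read from disk, a database, or wherever fingerprints are stored
        nil
      end
    end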

The more you dive into the topic, the more very tricky problems show up, and imho catalog caching is one of the harder problems to solve. However, I think these hard parts exist because we would like to address all or most of the possible cases we can think of at the moment. So maybe we should not try to come up with the complete, ideal solution from the beginning, but rather start with a solution that works in the most ideal cases, with clearly outlined restrictions and limitations, and work forward from there to tackle the much harder problems, while still getting feedback from early usage of the already released but minimized feature set. Also, making it possible for people to hook into that process will let them play around with and extend the feature by themselves, and empowers them to contribute to the final solution.

best

peter

Trevor Vaughan

Jul 3, 2015, 1:15:28 PM
to puppe...@googlegroups.com
I was curious while reading this; it seems like we're touching on a few different possibilities:

1) Sending catalog diffs.
2) Asynchronous catalog compilation.

I think that #2 might be critical to scaling and sort of mirrors what Peter was doing with the CM Guard. Basically, instead of the Puppet client sending facts to the server when requested, you have a fact collector that the clients send their facts to on a regular basis. The compiler then uses the last good set of facts to compile the catalog (compiler farm?) and then ships it off to the delivery cache. The client would be asked to update its facts if they were older than some configured threshold.
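
For example, the fact freshness check could be as simple as something like this (the threshold and names are made up):

    # The compiler farm uses the last submitted facts unless they are older
    # than a configured threshold, in which case the client is asked to
    # resubmit before the next compile.
    FACT_MAX_AGE_SECONDS = 4 * 60 * 60  # e.g. four hours

    def facts_fresh_enough?(submitted_at, now = Time.now)
      (now - submitted_at) <= FACT_MAX_AGE_SECONDS
    end

    facts_fresh_enough?(Time.now - 7_200)   # => true, compile from cached facts
    facts_fresh_enough?(Time.now - 86_400)  # => false, request new facts first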

I think that this would be amazing if implemented simply and without adding any new data transport layers.

Thanks,

Trevor




--
Trevor Vaughan
Vice President, Onyx Point, Inc
(410) 541-6699

-- This account not approved for unencrypted proprietary information --

John Bollinger

Jul 6, 2015, 10:55:15 AM
to puppe...@googlegroups.com

On Friday, July 3, 2015 at 2:25:15 AM UTC-5, Chris Price wrote:
 
I have a question / thought experiment related to this, and would really love to hear some feedback from the community:

What would you think about a setup where your master never saw any of your code changes at all, until you ran a specific command (e.g. 'puppet deploy')?  In other words, you hack away on the modules / manifests / hiera data in your code tree as much as you like but your master keeps compiling catalogs from the 'last known good' setup, until you run this 'deploy' command?  At that point, all of your current code becomes the new 'last known good' and that is what your master compiles off of until you do another deploy.


I like that pretty well.  If Puppet moved in this direction, though, then it would be nice to protect against "last known good" turning out to not be so good after all by making it a blessed configuration that has actually proven good. That way, if a fresh code deployment turns out to be bad then there is a genuine known good configuration that can quickly be restored.  In other words, I'm suggesting three configurations instead of two: undeployed, deployed, and known good.


John

Chris Price

Jul 7, 2015, 11:28:42 AM
to puppe...@googlegroups.com
Any thoughts on what the commands might look like there?  Particularly the command to flag something as 'last known good'?

Also, Erik mentioned that he'd expect this to work on a per-environment level... I'm trying to think about what 'last known good' would look like in that context.
 

Eric Sorenson

Jul 7, 2015, 12:34:37 PM
to puppe...@googlegroups.com
On Mon, 6 Jul 2015, John Bollinger wrote:

> I like that pretty well. If Puppet moved in this direction, though, then
> it would be nice to protect against "last known good" turning out to not be
> so good after all by making it a blessed configuration that has actually
> proven good. That way, if a fresh code deployment turns out to be bad then
> there is a genuine known good configuration that can quickly be restored.
> In other words, I'm suggesting three configurations instead of two:
> undeployed, deployed, and known good.

Is that really the domain of the compiler/catalog cache, though? Seems more
like the purview of the code testing and promotion workflow, because it
requires a test/feedback/fix interaction loop, not an on/off flag.

Eric Sorenson - eric.s...@puppetlabs.com - freenode #puppet: eric0
puppet platform // coffee // techno // bicycles

Trevor Vaughan

Jul 7, 2015, 12:56:24 PM
to puppe...@googlegroups.com
I was thinking exactly the same as Eric.

It seems like we're repeating what Git does.

Trevor

John Bollinger

Jul 7, 2015, 2:14:35 PM
to puppe...@googlegroups.com


On Tuesday, July 7, 2015 at 10:28:42 AM UTC-5, Chris Price wrote:
[...]

Any thoughts on what the commands might look like there?  Particularly the command to flag something as 'last known good'?


'puppet bless', 'puppet approve', 'puppet accept', 'puppet keep', 'puppet mark_good', 'puppet is_good', 'puppet ftw', ...

 

Also, Erik mentioned that he'd expect this to work on a per-environment level... I'm trying to think about what 'last known good' would look like in that context.
 


I totally understand such a request, but I'm not sure how it would look.  The potential for environments to share resources (especially, but not limited to, modules in the global module path) presents considerable complication for per-environment caching.  Perhaps it would work only in conjunction with directory environments, and then only for code and data physically under the environment directory (physically == not following symlinks).


John

John Bollinger

Jul 7, 2015, 2:19:00 PM
to puppe...@googlegroups.com


On Tuesday, July 7, 2015 at 11:56:24 AM UTC-5, Trevor Vaughan wrote:
I was thinking exactly the same as Eric.

It seems like we're repeating what Git does.


Perhaps I am too influenced by the "last known good" designation for something that in fact is *not* necessarily known to be good.  Nevertheless, if Puppet is going to perform caching along the lines that Chris described at all, then maintaining a cache with genuine known goodness (or at least affirmatively asserted goodness) seems a natural extension that wouldn't require much more work.


John

Trevor Vaughan

Jul 7, 2015, 2:41:01 PM
to puppe...@googlegroups.com
Why not just have a general caching system?

How many catalogs do you want to preserve? 5

Fallback Catalog? 3

Fallback would be preserved outside of the 5 limit (so there would be 6 total if a fallback is assigned).

In my mind, it's acting like a recovery installation for a router. Hit the magic button for 15 seconds (but not 30) and you go to the last known good configuration.

However, I think that this sort of thing *must* be exportable for those of us with dreams of clustered puppetmasters. Perhaps it's just a directory structure with symlinks?

Arbitrary labels would not be a bad thing with anything that is labeled being preserved indefinitely.

I would suggest calling the arbitrary labels "tags".

Then, you could do fancy things like:

puppet catalog list

- Catalog1 *current
- Catalog2
- Catalog3

puppet catalog tag Catalog1 fallback

- Catalog1 saved as 'fallback'

puppet catalog tag Catalog3 bob

- Catalog3 saved as 'bob'

puppet catalog diff bob current

- Use the magic catalog diff thing to output a diff in some format

puppet catalog rm Catalog3

- Deleted saved catalog 'Catalog3'

That's how it works in my head anyway....

Trevor
