If you have any exported resource collections, or query any external system for data (puppetdbquery, DNS, LDAP, for example), it won't work.
But for the subset of nodes where you can make those guarantees, I suppose it would give a speed boost.
On Jun 29, 2015, at 8:43 AM, Raphaël Pinson <raphael...@camptocamp.com> wrote:
>
> Hello,
>
>
> I've activated caching on our Puppetservers, using the admin API to invalidate the cache upon deploying new environments. However, this only caches manifests, and catalogs still need to be compiled for every request.
>
> I'm thinking (at least in our case) it wouldn't be totally crazy to cache catalogs on the master so long as:
>
> * manifests are not changed (this is taken care of by the r10k hook + admin API)
> * data do not change (same, since we deploy hiera data with r10k)
> * facts do not change.
>
>
> Obviously, *some* facts always change (uptime, memoryfree, swapfree, etc.), but most of them don't. So the idea would be to add a parameter in puppet.conf listing those volatile facts, so that they are not used as a basis for invalidating the catalog, and to use the other facts to decide when a catalog should be recompiled.
>
> Is there already some kind of code doing that, or any opinion/feedback on this idea?
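(To make the fact-filtering idea above concrete, here is a rough sketch, not existing Puppet functionality: a catalog cache key derived from all of the node's facts except a configurable list of volatile ones. The fact names and the jq-based filtering are purely illustrative.)

    # Sketch only: hash the node's facts, minus the volatile ones, to get a
    # cache key. A cached catalog would be reused while this key (plus the
    # deployed code version) stays the same, and recompiled when it changes.
    VOLATILE_FACTS='uptime|uptime_seconds|uptime_hours|uptime_days|memoryfree|memoryfree_mb|swapfree|swapfree_mb'

    facter --json \
      | jq -S --arg re "^(${VOLATILE_FACTS})$" 'with_entries(select(.key | test($re) | not))' \
      | sha256sum | awk '{print $1}'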
This is something that our team at Puppet Labs has been working on a ton. It’s beneficial in the short term, for the kind of performance and other benefits you describe, but it’s also key in a bunch of other cool stuff we’re doing. The short answer is that in some ways it’s quite easy, but it also requires some changes to the core that aren’t necessarily as easy.
Eric Sorenson is lead on the work (code-named Direct Puppet), so hopefully he’ll chime in with more details. The basic idea, though, is that we do a few things, all together (note that this is from memory, and I’m sure I’m missing pieces or getting some of them wrong):
On Monday, June 29, 2015 at 6:02:17 PM UTC+1, Luke Kanies wrote:
> This is something that our team at Puppet Labs has been working on a ton. [...]
This is indeed something we've been putting a lot of thought and effort into lately.
I have a question / thought experiment related to this, and would really love to hear some feedback from the community:
What would you think about a setup where your master never saw any of your code changes at all, until you ran a specific command (e.g. 'puppet deploy')? In other words, you hack away on the modules / manifests / hiera data in your code tree as much as you like but your master keeps compiling catalogs from the 'last known good' setup, until you run this 'deploy' command? At that point, all of your current code becomes the new 'last known good' and that is what your master compiles off of until you do another deploy.
We could also provide an HTTP endpoint to accomplish the same behavior. And we could theoretically make this new behavior entirely opt-in, but by opting in to it you'd get access to new features similar to what Raphaël and Luke were hinting at.
Again, this is just a thought experiment at the moment. Curious how this would impact people's workflows.
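For what it's worth, a 'deploy' command like that could conceivably wrap what people already script by hand today: an r10k run plus a flush of the environment cache via Puppet Server's admin API. Roughly (host name, certificate paths, and environment name below are illustrative):

    # Deploy the new code, then tell Puppet Server to drop its cached view of it.
    r10k deploy environment production --puppetfile

    curl -i -X DELETE \
      --cert   /etc/puppetlabs/puppet/ssl/certs/deploy.pem \
      --key    /etc/puppetlabs/puppet/ssl/private_keys/deploy.pem \
      --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
      "https://puppetserver.example.com:8140/puppet-admin-api/v1/environment-cache"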
On Fri, 3 Jul 2015 at 09:25 Chris Price <ch...@puppetlabs.com> wrote:
> [...] your master keeps compiling catalogs from the 'last known good' setup, until you run this 'deploy' command?

Keeps compiling, or keeps serving a cached copy?

> Again, this is just a thought experiment at the moment. Curious how this would impact people's workflows.

Well, it would be useful to be able to atomically switch to a new version of the manifests. At the moment the best you can do is to check out the new version somewhere else and move/relink it into place, so that you get all of the new environment at the same time, but there might still be ongoing compiles that pick up half of the old environment and half of the new.
But it would really have to be per environment (and optionally all of them).
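A sketch of that check-out-elsewhere-then-relink approach, assuming the environment directory is already a symlink (paths, repository URL, and environment name are illustrative):

    # Clone the new code next to the live environment, then atomically repoint
    # the 'production' symlink at it. rename(2) is atomic, so compiles started
    # after the swap see only the new tree; compiles already in flight can
    # still end up reading a mix of old and new, which is the residual problem
    # described above.
    base=/etc/puppetlabs/code/environments
    new="$base/.production-$(date +%Y%m%d%H%M%S)"

    git clone --depth 1 --branch production https://git.example.com/control-repo.git "$new"

    ln -sfn "$new" "$base/.production-next"            # staging symlink
    mv -T "$base/.production-next" "$base/production"  # atomic; target must already be a symlink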
For consistency this would be good. When it comes to speed improvements, though, I think there are other areas that need more focus: in my experience, catalog application (even when no changes are applied) takes about five times longer than catalog compilation (although Puppet 4.2 improved this somewhat).
Also, would PuppetDB be used as the catalog cache, so that this works with multiple Puppet masters behind a load balancer or SRV records?
On Mon, Jul 6, 2015 at 3:55 PM, John Bollinger <john.bo...@stjude.org> wrote:
> On Friday, July 3, 2015 at 2:25:15 AM UTC-5, Chris Price wrote:
> > What would you think about a setup where your master never saw any of your code changes at all, until you ran a specific command (e.g. 'puppet deploy')? [...]
>
> I like that pretty well. If Puppet moved in this direction, though, then it would be nice to protect against "last known good" turning out to not be so good after all by making it a blessed configuration that has actually proven good. That way, if a fresh code deployment turns out to be bad then there is a genuine known good configuration that can quickly be restored. In other words, I'm suggesting three configurations instead of two: undeployed, deployed, and known good.

Any thoughts on what the commands might look like there? Particularly the command to flag something as 'last known good'?

Also, Erik mentioned that he'd expect this to work on a per-environment level... I'm trying to think about what 'last known good' would look like in that context.
I was thinking exactly the same as Eric. It seems like we're repeating what Git does.
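Riffing on the Git comparison: the three configurations map fairly naturally onto plain Git refs in a control repo. This is purely an illustrative workflow, not an existing Puppet feature (the branch names are made up):

    # undeployed  = whatever sits on topic branches
    # deployed    = the 'production' branch the master compiles from
    # known good  = a 'production-known-good' branch that is only moved after a
    #               deploy has been validated

    # Bless the currently deployed code as "known good"
    # (assumes the local 'production' branch matches what is deployed):
    git branch -f production-known-good production
    git push origin production-known-good

    # Roll back: point 'production' at the last blessed commit again and redeploy.
    git push --force-with-lease origin production-known-good:production
    r10k deploy environment production --puppetfile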