--
You received this message because you are subscribed to the Google Groups "Puppet Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to puppet-dev+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/puppet-dev/CANs%2BFoWGzUtfJW-oz62ByjPGFMye%2BCU8XbT2j3YRtoqgHRY-1A%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
[16:43:15] <MichaelSmith> +zaphod42: There's a mailing list thread on PUP-3116 that tries to cache the result of reading /proc/mounts
[16:44:06] <MichaelSmith> I'm trying to explore whether there are any existing patterns for caching data we re-use during a catalog run.
[16:45:05] <MichaelSmith> Puppet::Util::Storage kind of covers that, with the added benefit of logging the cached data, but also the cost of writing to PuppetDB.
[16:46:02] <MichaelSmith> And also doesn't work with puppet apply, so that's problematic.
[16:46:51] <+zaphod42> Puppet::Util::Storage writes to puppetdb? I thought it just wrote to a local file
[16:47:40] <+zaphod42> I think henrik's concern about memory leaks is really just about the problems we encounter when the cache is never flushed
[16:47:58] <+zaphod42> the data really just needs to have a clear lifetime
[16:48:09] <MichaelSmith> Oh, I may be confused about Puppet::Util::Storage then.
[16:48:31] <+zaphod42> and based on what I'm seeing, is this really a cache? or is it really just about having some "stash" where providers can store data during a run?
[16:49:28] <MichaelSmith> It would potentially be refreshed if /proc/mounts gets updated, but that's up to the provider. So just a stash makes sense.
[16:49:37] <+zaphod42> MichaelSmith: yeah, Storage just writes to a local file https://github.com/puppetlabs/puppet/blob/master/lib/puppet/util/storage.rb#L86
[16:50:36] <MichaelSmith> Is using Storage to stash data used during a run something that's been discouraged in the past?
[16:50:44] <+zaphod42> MichaelSmith: in which case, I would think about it as providing a "stash" method for providers. A very simple thing would be it just returns a hash that can be manipulated by the provider
[16:50:55] <+zaphod42> the hash needs to be stored somewhere
[16:51:15] <+zaphod42> that can be handled by the Transaction and it can just throw all of the contents away at the end of a run
[16:51:54] <MichaelSmith> Yeah, sounds like a reasonable API to write. Puppet::Util::Stash, that's cleared after a run and only stored in-memory.
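A minimal sketch of what such an API could look like. The `Puppet::Util::Stash` name and its methods are assumptions from this conversation, not existing Puppet code: an in-memory hash per key, with a `clear!` the transaction would call at the end of a run.

```ruby
# Hypothetical sketch (module name and API are assumptions from the chat,
# not existing Puppet code): an in-memory stash providers can use during a
# run, which the transaction clears when the run finishes.
module Puppet
  module Util
    module Stash
      @data = {}

      # Return the hash for a given key, creating it on first access.
      def self.for(key)
        @data[key] ||= {}
      end

      # Called at the end of a run; nothing is persisted to disk.
      def self.clear!
        @data = {}
      end
    end
  end
end

# Example provider usage: cache parsed /proc/mounts data once per run
# instead of re-reading the file for every resource.
mounts = Puppet::Util::Stash.for(:mount_provider)
mounts[:proc_mounts] ||= "parsed /proc/mounts data"
```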
[16:51:57] <+zaphod42> there is also the question about what is the scope of the data. Does just one resource get to see its own data, is it shared across all resources of the same provider, all of the same type, or all of the same run?
[16:52:45] <MichaelSmith> Do you have ideas how to enforce those types of restrictions?
[16:53:43] <+zaphod42> Have different stashes for each set? So for every resource it has its own stash, the type has a stash, and the transaction has a stash and they are all accessed independently
[16:54:14] <+zaphod42> the biggest problem is threading it through the APIs. Ideally they would be something that fits in nicely, but I have a feeling it will just be another global somewhere
[16:54:52] <MichaelSmith> I think the tricky part becomes how to clear them when we have many isolated stashes.
[16:54:59] <MichaelSmith> So they have to register themselves globally somewhere.
[16:56:05] <+zaphod42> or they live as instance variables on some objects that get thrown away
[16:56:18] <+zaphod42> so the resource stash is just an instance variable on a resource
[16:56:26] <+zaphod42> provider stash is on a provider
[16:56:41] <+zaphod42> (there is a problem there that every resource is an instance of a provider)
[16:56:52] <+zaphod42> there isn't a shared provider instance across the resources
[16:58:13] <+zaphod42> so one way to do it is have a Stashes object that is pushed into the context by the transaction and popped when the transaction is done
[16:58:32] <MichaelSmith> This particular example is being used in a type, and I don't yet see where it creates a persistent instance object. The lifetime might be too short to be useful.
[16:58:39] <+zaphod42> the stashes object holds all of the stashes for all of the resources, types, etc (whatever scopes are deemed correct)
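The "Stashes holds all of the stashes per scope" idea might look like the following. The class and its methods are assumptions sketched from the chat, not an existing Puppet API:

```ruby
# Illustrative sketch (the Stashes class is an assumption): one object,
# owned by the transaction, holding independent stashes per scope so a
# resource's data, a type's data, and run-wide data stay separate.
class Stashes
  def initialize
    # scope name (:resource, :type, :transaction) => { scope key => hash }
    @scopes = Hash.new { |h, scope| h[scope] = {} }
  end

  # Fetch the stash for a given scope and key, e.g.
  # stash_for(:resource, "Mount[/]") or stash_for(:type, :mount).
  def stash_for(scope, key)
    @scopes[scope][key] ||= {}
  end

  # The transaction throws everything away at the end of the run.
  def clear!
    @scopes.clear
  end
end
```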
[16:59:18] <+zaphod42> in a type....Types are tricky because they are shared between the master and the agent
[17:01:44] <MichaelSmith> I'm not quite sure of the implications of that. I guess that means lifetime on the master is different.
[17:05:37] <+zaphod42> yeah, how types are used on the master versus the agent is different. I can't ever remember all of the details though
[17:06:40] <+zaphod42> but if you put all of the stashes in a Stashes instance and put that instance in the Context and then use context_push (or better context_override), then it should be fine and not have a memory leak
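Puppet's real context API lives in `Puppet.push_context`, `Puppet.override`, and `Puppet.lookup`; the stand-in below is a simplified, self-contained mimic showing why a block-scoped override bounds the stashes' lifetime and avoids the leak: once the block exits, the frame is popped and the Stashes instance becomes garbage.

```ruby
# Simplified stand-in for Puppet's context stack (Puppet.push_context /
# Puppet.override / Puppet.lookup), not the real implementation, to show
# how a block-scoped override bounds the lifetime of the stashes.
class Context
  STACK = [{}]

  # Search frames newest-first, like Puppet.lookup.
  def self.lookup(key)
    STACK.reverse_each { |frame| return frame[key] if frame.key?(key) }
    raise KeyError, "no binding for #{key}"
  end

  # Like Puppet.override: bindings exist only for the duration of the
  # block, and are popped even if the block raises.
  def self.override(bindings)
    STACK.push(bindings)
    yield
  ensure
    STACK.pop
  end
end

# A transaction would wrap its evaluation like this:
Context.override(stashes: {}) do
  Context.lookup(:stashes)[:mounts] = "cached data"
end
# After the block, the frame is popped and the stashes are unreachable.
```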
[17:07:15] <+zaphod42> however, it will end up holding onto data during a transaction longer than it may need to, thus increasing memory usage
[17:07:23] <+zaphod42> but I'm not sure how much of a problem that would be
[17:07:37] <+zaphod42> so long as there is some point at which the objects will be cleaned up
[17:08:01] <MichaelSmith> Is there any advantage of having a Stashes instance that's added via push_context, vs just pushing your hash directly to it?
[17:08:22] <MichaelSmith> I guess the ability to add arbitrary keys after starting.
[17:08:44] <+zaphod42> push_context would just be where some collection of stashes would be held and other things can get to (a global, but with more control)
[17:09:12] <+zaphod42> you should still provide an API on the resources to get to the stashes, instead of having authors go directly to Puppet.lookup
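That wrapper might look like this. All names here are hypothetical; the point is only that provider authors call a method on the resource, and just that one method knows where the stashes actually live:

```ruby
# Sketch of the suggested accessor (all names are assumptions): resources
# expose a #stash method so provider authors never reach for the context
# lookup directly.
class Resource
  def initialize(ref)
    @ref = ref
  end

  # Only this method needs to know how the per-run stashes are stored;
  # in real Puppet it would delegate to Puppet.lookup(:stashes).
  def stash
    CurrentStashes.instance.stash_for(:resource, @ref)
  end
end

# Stand-in holder for the per-run stashes, keyed by scope and ref.
class CurrentStashes
  def self.instance
    @instance ||= new
  end

  def initialize
    @data = Hash.new { |h, scope| h[scope] = {} }
  end

  def stash_for(scope, key)
    @data[scope][key] ||= {}
  end
end
```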
[17:09:29] <MichaelSmith> Yeah, makes sense.
[17:09:55] <+zaphod42> and the other part of the context is that it controls the lifetime of the stashes
[17:10:16] <+zaphod42> once the context is popped, the stashes disappear
[17:10:51] <+zaphod42> I'd much rather have instances of resources and such hold onto their own stashes, but it might be difficult
[17:11:28] <+zaphod42> however, I think you should look into that. Only use the context system if there isn't a more local way of controlling it
[17:11:33] <MichaelSmith> Yeah... not everything seems to have an instance.
[17:12:13] <+zaphod42> which is the sad making part :(