Implementing the purging of ssh_authorized_keys


Felix Frank

Jan 3, 2014, 7:35:57 PM
to puppe...@googlegroups.com
Hi,

TL;DR: implementing the design in [2] is apparently quite hard.
Ideas welcome.

I'm working on a proof of concept for PUP-1174 [1].
Possible designs were discussed on the user list [2]. John came up with
a scheme that met with silent consensus as desirable. It's based around a
new parameter for the user type, working title 'purge_ssh_keys'. Its
value can be a list of paths to files from which keys should be purged.

Implementing this is trickier than I thought. I pushed a shy attempt
(including some obnoxious printf debugging) to github [3]. The idea is
to generate fake key resources to make puppet touch the files in
question. So far so good.
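For illustration, the proposed parameter might be used like this (the
resource title and paths are made up, and 'purge_ssh_keys' is still only
a working title):

```puppet
# Hypothetical manifest for the proposed design: any key found in one
# of the listed files that is not managed by an ssh_authorized_key
# resource would be removed.
user { 'felix':
  ensure         => present,
  purge_ssh_keys => [
    '/home/felix/.ssh/authorized_keys',
    '/etc/ssh/external_keys/felix',
  ],
}
```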

The trouble is this: even though puppet will take the additional
authorized_keys files into consideration during a transaction, their
contents are not included in Ssh_authorized_key::ProviderParsed.instances
(which would be the prerequisite for purging from them).

The instances method for ParsedFile providers is based on the provider's
targets method. From what I gathered, this method returns
1. the default target for the provider,
2. any files opened during the current transaction, and
3. if present, the targets of all resources passed as arguments.
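To illustrate, here is a simplified, standalone sketch of what I
gathered (class and variable names like touched_files are mine, not
Puppet's actual internals):

```ruby
# Standalone sketch of the three sources the targets method draws from.
# Names (FakeParsedFileProvider, touched_files) are illustrative only.
class FakeParsedFileProvider
  class << self
    attr_accessor :default_target, :touched_files
  end
  self.touched_files = []

  # Returns 1. the provider's default target, 2. files opened during
  # the current transaction, 3. targets of any resources passed in.
  def self.targets(resources = nil)
    list = []
    list << default_target if default_target
    list.concat(touched_files)
    Array(resources).each { |r| list << r[:target] if r[:target] }
    list.uniq
  end
end
```

Since ssh_authorized_key has no default target, source (1) is always
empty for it, which is what makes (2) the interesting one here.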

Since the instances method never passes any existing resources, and
ssh_authorized_key has no default target, the only viable source is (2),
which I hoped would be amended by my fake resources. However, the
resources type (resources { "ssh_authorized_key": purge => true })
calls the instances method very early in the transaction. The fake
key resources have not come into play at that point yet. No dice.

A different idea I'd had was to extend Ssh_authorized_key::ProviderParsed
itself. (The example code has already been obliterated, sorry.) If the
additional targets could be kept in a member variable, the targets
method could be overridden to include them in its return value. The
User type would add to this member variable instead of generating
actual resources.
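In standalone Ruby, the pattern would have looked roughly like this
(all names are hypothetical, since the example code is gone):

```ruby
# Hypothetical sketch of the "extra targets in a member variable" idea:
# the User type would call register_target instead of generating
# fake resources.
class PurgingProvider
  @extra_targets = []

  class << self
    attr_reader :extra_targets

    def register_target(path)
      @extra_targets << path unless @extra_targets.include?(path)
    end

    # Overridden targets: the inherited list plus the registered extras.
    def targets(resources = nil)
      inherited = [] # stand-in for super in this standalone sketch
      (inherited + @extra_targets).uniq
    end
  end
end
```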

The issue with this approach (I believe) is that User#generate is called
during compilation, and hence cannot send signals to the
Ssh_authorized_key::ProviderParsed class (which does its work during
the transaction). In master/agent mode, these things actually happen in
separate ruby processes.

I thought of moving the signal passing to User#eval_generate instead,
but that appears to happen later than the generation of the additional
resources for purging anyway.

Yet another idea is to generate additional ssh_authorized_key resources
(with ensure => absent) during User#eval_generate by grepping the
key names from the very files specified. But that seems unspeakably ugly
to me, because it duplicates (a subset of) the parsing logic implemented
in Ssh_authorized_key::ProviderParsed.
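The grepping would boil down to something like this minimal sketch
(deliberately shallow, and that's the problem: it doesn't handle
option-prefixed lines, which the real provider does):

```ruby
# Minimal sketch: extract the comment field, which ssh_authorized_key
# uses as the resource name, from each line of an authorized_keys file.
# Option-prefixed lines are not handled.
def key_names(lines)
  lines.map { |line|
    line = line.strip
    next if line.empty? || line.start_with?('#')
    line.split(/\s+/)[2] # keytype, base64 blob, then the name/comment
  }.compact
end
```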

As always, any input is greatly appreciated. But you already have my
thanks for even reading this far ;-)

Cheers,
Felix

[1] https://tickets.puppetlabs.com/browse/PUP-1174
[2]
https://groups.google.com/forum/#!msg/puppet-users/AgvUwA9RMLM/pNkDFpqaTdkJ
[3]
https://github.com/ffrank/puppet/tree/ticket/master/1581-purge-all-ssh-keys

Jeff Bachtel

Jan 4, 2014, 4:15:25 PM
to puppe...@googlegroups.com
Felix,

Given the constraint not to delete keys from the file until all managed
keys are instantiated, how about using concat_fragment to generate the
authorized_keys file when purging is requested? My understanding
is that it changes the file atomically (eliminating the empty-file
problem described in the PUP ticket).

Apologies if that's far behind where you're at now,

Jeff

Trevor Vaughan

Jan 4, 2014, 8:50:17 PM
to puppe...@googlegroups.com
You could also have two custom types, one that creates keys and one that purges. The ones that create/manage can be autorequired by the one that purges.

Alternatively, you can use 'flush' to only purge when the last key management type is executed on the system.

It's a bit cumbersome, but the general idea can be found at https://www.onyxpoint.com/storing-puppet-provider-metadata-for-single-instance-application/.

Trevor


--
You received this message because you are subscribed to the Google Groups "Puppet Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to puppet-dev+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/puppet-dev/52C879ED.1050409%40bericotechnologies.com.

For more options, visit https://groups.google.com/groups/opt_out.



--
Trevor Vaughan
Vice President, Onyx Point, Inc
(410) 541-6699
tvau...@onyxpoint.com

-- This account not approved for unencrypted proprietary information --

Felix Frank

Jan 5, 2014, 12:48:17 PM
to puppe...@googlegroups.com
Hi,

On 01/04/2014 10:15 PM, Jeff Bachtel wrote:
> Felix,
>
> Given the constraint to not delete keys from the file until all
> managed keys are instantiated, how about using concat_fragment to
> generate the authorized_keys file in the case of purge management? My
> understanding is that this will change the file atomically
> (eliminating the empty file problem in the PUP).
>
> Apologies if that's far behind where you're at now,
>
> Jeff

So, I guess that would imply creating an alternate provider?

I have mixed feelings about the general idea. The ParsedFile base class
poses some limitations, but it allows the management of all properties
of each key rather elegantly...

On 01/05/2014 02:50 AM, Trevor Vaughan wrote:
> You could also have two custom types, one that creates keys and one
> that purges. The ones that create/manage, can be autorequired by the
> one that purges.

Interesting. So the ssh_authorized_key resources would do nothing apart
from each generating two resources of these custom types?

I can't really visualize this yet, but I'm afraid this may suffer from
the same limitation that the current implementation is facing. The hard
part about purging appears to be the need to know the targets at
prefetch time.

The custom types would likely rely on the existing provider to do their
work?

> Alternatively, you can use 'flush' to only purge when the last key
> management type is executed on the system.
>
> It's a bit cumbersome, but the general idea can be found
> at https://www.onyxpoint.com/storing-puppet-provider-metadata-for-single-instance-application/.

Ooh, fascinating read.

I believe that ParsedFile already behaves in a somewhat similar fashion,
although its design is not as refined as what you propose in the post.
It implements a .flush class method as opposed to a clever #flush
instance method.

What I'm taking away is that it may be possible to implement purging
logic that does not rely on prefetching existing instances and marking
them for purging (i.e. Type::Resources#generate), but works some
agent-side magic instead. I hadn't been aware that you could resort to
scanning the catalog as in ProviderDemo#content. Very inspiring.

Or am I missing a whole different point? I fear that the structure of
the demo provider can't solve the prefetching issue (i.e. the lack of
clean prefetching) by itself.

Thanks and regards,
Felix

Trevor Vaughan

Jan 5, 2014, 4:42:47 PM
to puppe...@googlegroups.com
Hi Felix,

Yes, you definitely got the point.

Unfortunately, clean prefetching is just hard when you're modifying what you want to prefetch. This is why I chose to wait for the catalog to figure itself out and then just delve into the catalog for whatever I needed.

I've used this pattern a few times with very good results. The only issue is that, since things are happening in the last resource applied, it's not entirely clean in terms of user output.

For instance, if you have 15 resources, you'll only see something happen on the last resource for the purge. Having a separate resource that does the same thing is technically cleaner, but it gets irritating to maintain: you either have to inject it into the catalog in the 'initialize' routine of your other types (just wrap it in a catalog check), or have the user add it explicitly to their manifests if you want them to pass options.
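As a standalone sketch of the pattern (all names are illustrative, no
Puppet required): class-level state counts the flushes, and the purge
only fires in the last one.

```ruby
# Sketch of "purge in the last flush": each managed resource's flush
# increments a shared counter; only the final flush triggers the purge
# and resets the counters so a long-lived daemon starts clean.
class KeyProvider
  @@state = { flushed: 0, expected: 0, purged: false }

  def self.expect(n)
    @@state = { flushed: 0, expected: n, purged: false }
  end

  def self.state
    @@state
  end

  def flush
    @@state[:flushed] += 1
    if @@state[:flushed] == @@state[:expected]
      @@state[:purged]  = true # the real provider would purge here
      @@state[:flushed] = 0    # reset for the next run
    end
  end
end
```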

Thanks,

Trevor



William Van Hevelingen

Jan 7, 2014, 12:35:11 PM
to [Puppet-dev]
There is a module on the forge that implements what Jeff suggested.

http://forge.puppetlabs.com/nightfly/ssh_keys

Cheers,
William


John Bollinger

Jan 8, 2014, 12:41:03 PM
to puppe...@googlegroups.com


On Saturday, January 4, 2014 7:50:17 PM UTC-6, Trevor Vaughan wrote:
> You could also have two custom types, one that creates keys and one that purges. The ones that create/manage can be autorequired by the one that purges.
>
> Alternatively, you can use 'flush' to only purge when the last key management type is executed on the system.
>
> It's a bit cumbersome, but the general idea can be found at https://www.onyxpoint.com/storing-puppet-provider-metadata-for-single-instance-application/.



Using the group flush approach as per the example, with the agent running in daemon mode, do the added provider class variables (@@demo_classvars in the example) persist from one catalog run to the next?

Also, am I wrong in thinking that if one resource in a group related via such a mechanism is not applied (e.g. because it 'require's a failed resource) then the whole group's final flush will never be triggered?

I'm just trying to understand the implications.


John

Trevor Vaughan

Jan 8, 2014, 1:31:53 PM
to puppe...@googlegroups.com
Hi John,

You just noticed that I never run in daemon mode :-). I probably need to add a reset of the initialized value in @@demo_classvars to handle daemon mode because, yes, they will probably persist between runs.

I believe that, during testing, I found that if any resource in the chain died, the flush would not be called, but I'm not quite remembering. I tend to only use this when I'm not interweaving a lot of resources. I do know that if a dependency fails, then the entire chain fails, but I'm not sure about a random error in the chain.

Thanks,

Trevor



John Bollinger

Jan 9, 2014, 1:10:13 PM
to puppe...@googlegroups.com


On Wednesday, January 8, 2014 12:31:53 PM UTC-6, Trevor Vaughan wrote:
> Hi John,
>
> You just noticed that I never run in daemon mode :-). I probably need to add a reset of the initialized value in @@demo_classvars to handle daemon mode because, yes, they will probably persist between runs.
>
> I believe that, during testing, I found that if any resource in the chain died, the flush would not be called, but I'm not quite remembering. I tend to only use this when I'm not interweaving a lot of resources. I do know that if a dependency fails, then the entire chain fails, but I'm not sure about a random error in the chain.



Ok. The follow-up, then, is where the reset code would reside. My first thought was that resetting the long-lived state should be part of the final flush, but if that might not always be triggered, then it's not suitable for the purpose.


John

Trevor Vaughan

Jan 9, 2014, 2:14:56 PM
to puppe...@googlegroups.com
The 'content' comparator should always run, so when this section gets hit:

if @@demo_classvars[:num_runs] == @@demo_classvars[:num_demo_resources] then

Then @@demo_classvars can be reset so that the next run will re-initialize.

I'm not certain whether there is a portion of the provider code that is guaranteed to run at the end every time. There might be, but I can't check the code right now.

Thanks,

Trevor




Felix Frank

Jan 11, 2014, 8:09:55 PM
to puppe...@googlegroups.com
Hi,

OK, thanks, that looks pretty useful. I still think puppet core should
be capable of doing this as well :-)

Update: inspired by Trevor's demo provider, I cobbled together an actual
implementation of the proposed design. It does commit the sin of
duplicating some parsing logic, but in a very shallow fashion. I hope
that's just about acceptable.

Pull request is here: https://github.com/puppetlabs/puppet/pull/2247

Side question: I can't seem to update the issue status in Jira anymore.
Is this by design?

Cheers,
Felix

badgerious

Jan 12, 2014, 2:39:38 PM
to puppe...@googlegroups.com

On Thursday, January 9, 2014 1:14:56 PM UTC-6, Trevor Vaughan wrote:

> I'm not certain whether there is a portion of the provider code that is guaranteed to run at the end every time. There might be, but I can't check the code right now.

In Puppet 3.4+, providers can define a ::post_resource_eval method, which will run unconditionally after all resources have been applied (not sure how relevant this is to Felix's stuff; I've only skimmed this thread).
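A standalone simulation of using it as the cleanup hook (no Puppet
required; the class and variable names are made up):

```ruby
# Simulation of the post_resource_eval pattern: the agent calls the
# provider class's post_resource_eval after applying all resources,
# making it a reliable place to reset class-level state between runs.
class DemoProvider
  @@classvars = { initialized: false, num_runs: 0 }

  def self.classvars
    @@classvars
  end

  def run
    @@classvars[:initialized] = true
    @@classvars[:num_runs] += 1
  end

  # In Puppet 3.4+ this hook runs unconditionally after the transaction.
  def self.post_resource_eval
    @@classvars = { initialized: false, num_runs: 0 }
  end
end
```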

Eric

Trevor Vaughan

Jan 12, 2014, 2:54:25 PM
to puppe...@googlegroups.com
Nice! That makes it a LOT easier to clean up the class variables.

