Managing private key files; content=>file vs. binary content.


Chris

Nov 10, 2010, 7:29:18 AM
to Puppet Users
Hi all,

We use puppet for, amongst other things, managing the private-key
files needed for things like SSL certificates for HTTPS web servers.
We have a few constraints on how these are handled, and changes in
recent versions of puppet are making this harder than it perhaps ought
to be to implement, so I'm curious to know how others are handling it.

A site's private key file should obviously be kept private, and only
those nodes which are running the site should have access to it. This
would seem to rule out using something like

file{"/path/to/foo.key": source=>"puppet:///keys/foo.key"} , because
any valid puppet client could access foo.key.

It's possible to secure the file server, but not (as far as I can see)
in a way which is aware of the node's manifest. So either we'd have to
keep updating auth.conf with a list of nodes that were allowed to
access each key file (error-prone: we have hundreds of both, and the
node <=> required-keys relationship is many-to-many), or accept that
other nodes could access keys that they shouldn't be able to.

So, we currently do this:

file{"/path/to/foo.key": content=>file("/keys/foo.key")}

Since (AIUI) nodes can only access the catalog for the FQDN which
matches their certificate, the puppetmaster will ensure that the key
is available only to the hosts that need it.

All good, except that in 0.25 and up (which we're slowly migrating
to), this often doesn't work. The REST APIs require UTF-8 content, and
keys are binary, so catalog requests fail if the key happens to
contain bytes which aren't valid UTF-8.
(http://projects.puppetlabs.com/issues/4832 talks about this a bit,
and includes the observation that "So there’s a design decision after
all: If PSON is to be JSON compatible – no binary data.")

How are other people getting around this? Do you just allow all
clients to access all keys? Is there a native type, or an auth.conf
trick, that I'm missing? Or a more binary-friendly encoding than
JSON/PSON?

thanks!

Chris

Thomas Bendler

Nov 10, 2010, 7:44:41 AM
to puppet...@googlegroups.com
Hi Chris,

2010/11/10 Chris <chris...@gmail.com>
[...]

file{"/path/to/foo.key": source=>"puppet:///keys/foo.key"} , because
any valid puppet client could access foo.key.
[...]


you are not tied to the puppet file server; you can also use something like this:

file {
   "/path/to/file":
     source => "/nfs/$host/file";
}

Make an export for each connected server and restrict access to this one. Put all private files on the NFS server and you're done.

Kind regards, Thomas

Chris

Nov 10, 2010, 9:52:27 AM
to Puppet Users
Hi Thomas

On Nov 10, 12:44 pm, Thomas Bendler <thomas.bend...@cimt.de> wrote:
> Hi Chris,
>
> 2010/11/10 Chris <chrisma...@gmail.com>
>
> > [...]
> > file{"/path/to/foo.key": source=>"puppet:///keys/foo.key"} , because
> > any valid puppet client could access foo.key.
> > [...]
>
> you are not tied to the puppet file server; you can also use something
> like this:
>
> file {
>    "/path/to/file":
>      source => "/nfs/$host/file";
>
> }
>
> Make an export for each connected server and restrict access to this one.
> Put all private files on the NFS server and you're done.
>

Yes, except that approach suffers from the same administrative
problems as using puppet:/// and auth.conf. HTTPS certs aren't
specific to hosts. If I have 20 servers all requiring foo.key (because
they all have the foo-application class in their manifest), then
either I have to copy foo.key into 20 different directories, or else
have one export with 20 allowed hosts. And every time I add the
foo-application class to another host, I need to remember to also
expose the key to that host. With large numbers of keys and hosts, and
moderate levels of churn, this becomes difficult to manage and prone
to errors.

The puppetmaster "knows" which hosts are allowed foo.key - i.e. all
the hosts which include the foo-application class. It seems wrong that
I should have to manually duplicate that information somewhere else,
be it in an NFS exports list or an auth.conf file.

I suppose I could do something hacky with storeconfigs to update the
exports on the NFS server when a new host is brought online - but it
doesn't seem like a very nice solution. It would lead to the first
puppet run failing because the exports weren't yet updated, for one
thing.

Thanks!

Chris

> Kind regards, Thomas

Richard Crowley

Nov 10, 2010, 10:23:39 AM
to puppet...@googlegroups.com
> All good, except that in 0.25 and up (which we're slowly migrating
> to), this often doesn't work. The rest APIs require UTF-8 content, and
> keys are binary, so catalog requests fail if the key happens to
> contain bytes which aren't valid UTF-8.
> (http://projects.puppetlabs.com/issues/4832 talks about this a bit, and
> includes the observation that "So there’s a design decision after all:
> If PSON is to be JSON compatible – no binary data.")
>
> How are other people getting around this? Do you just allow all
> clients to access all keys? Is there a native type, or an auth.conf
> trick, that I'm missing? Or a more binary-friendly encoding than
> JSON/PSON?

I also suffer from this problem distributing binary GPG private keys.
I would propose Puppet automatically base-64 encode/decode when a
parameter's value (in this case a file's content but it could be
anything) does not contain valid UTF-8 bytes. Would that fix the
problem completely? Would it break anything?
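A minimal sketch of what that proposal could look like, in plain Ruby (hypothetical code; `encode_param`/`decode_param` are illustrative names, not real Puppet internals): wrap each parameter value, base64-encoding only those that are not valid UTF-8, so the serialized form is always JSON-safe.

```ruby
require 'base64'

# Wrap a value so it can always be serialized as UTF-8 JSON/PSON.
def encode_param(value)
  if value.dup.force_encoding('UTF-8').valid_encoding?
    { 'encoding' => 'plain',  'value' => value }
  else
    { 'encoding' => 'base64', 'value' => Base64.strict_encode64(value) }
  end
end

# Reverse the wrapping on the agent side.
def decode_param(wrapped)
  return wrapped['value'] unless wrapped['encoding'] == 'base64'
  Base64.strict_decode64(wrapped['value'])
end
```

Round-tripping arbitrary binary content through these returns the original bytes, while plain UTF-8 values pass through untouched, so existing catalogs would be unaffected.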

Patrick

Nov 10, 2010, 11:58:42 AM
to puppet...@googlegroups.com
The best solution I can come up with is a hack that uses a define and a custom ruby function to Base64-encode the content, and then has the client Base64-decode it (using an exec or custom provider) on the other end.  This comes from something I'm building, but probably won't be done for a long while.

This is pseudocode which is missing the encode function, the decode function, and some of the glue code.


class binary_embedded_file::setup {
  $temp = '/var/lib/puppet/binary_embedded_file'

  file { '/var/lib/puppet/binary_embedded_file':
    ensure => directory,
    mode   => '0750',
    owner  => root,
    group  => root,
  }

  # source/content for the decode script is omitted here
  file { '/usr/local/bin/base64_decode':
    ensure => present,
    owner  => root,
    group  => root,
    mode   => '0755',
  }
}


define binary_embedded_file($ensure = present,
  $server_location = nil, $client_location = nil) {

  include binary_embedded_file::setup

  # Syntax might be wrong
  require( Class['binary_embedded_file::setup'] )

  # Look at puppet_concat example for how to finish these
  $client_temp_path =
  $client_temp_path_converted =

  file { "${name}":
    ensure => $ensure,
    # Add a mode, owner, and group variable
    # This syntax might be wrong
    source => $client_temp_path_converted,
  }

  # Base64Encode is the missing custom ruby function mentioned above
  file { "${client_temp_path}":
    ensure  => $ensure,
    content => Base64Encode(file($server_location)),
  }

  # refreshonly (not subscribe_only) makes the exec run only when notified;
  # the command string must be double-quoted for the variables to interpolate
  exec { "/usr/local/bin/base64_decode '${client_temp_path}' '${client_temp_path_converted}'":
    refreshonly => true,
    subscribe   => File["${client_temp_path}"],
    before      => File["${name}"],
  }
}
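For what it's worth, the missing client-side decoder could be as small as this (a sketch; the `/usr/local/bin/base64_decode` path and the two-argument interface come from the define above, the function name is mine):

```ruby
#!/usr/bin/env ruby
# Usage: base64_decode ENCODED_PATH DECODED_PATH
# Reads a base64 text file and writes the decoded raw bytes.
require 'base64'

def base64_decode_file(encoded_path, decoded_path)
  File.open(decoded_path, 'wb') do |f|
    f.write(Base64.decode64(File.read(encoded_path)))
  end
end

base64_decode_file(*ARGV) unless ARGV.empty?
```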


Patrick

Nov 10, 2010, 12:00:37 PM
to puppet...@googlegroups.com

On Nov 10, 2010, at 4:29 AM, Chris wrote:

> How are other people getting around this? Do you just allow all
> clients to access all keys? Is there a native type, or an auth.conf
> trick, that I'm missing? Or a more binary-friendly encoding than
> JSON/PSON?


I sent a different message with a rather long hack as a workaround, but I would also file a bug for this.

Thomas Bendler

Nov 10, 2010, 1:01:41 PM
to puppet...@googlegroups.com
Hi Chris,

2010/11/10 Chris <chris...@gmail.com>
[...]

Yes, except that approach suffers from the same administrative
problems as using puppet:/// and auth.conf. HTTPS certs aren't
specific to hosts. If I have 20 servers all requiring foo.key (because
they all have the foo-application class in their manifest), then
either I have to copy foo.key into 20 different directories, or else
have one export with 20 allowed hosts. And every time I add the
foo-application class to another host, I need to remember to also
expose the key to that host. With large numbers of keys and hosts, and
moderate levels of churn, this becomes difficult to manage and prone
to errors.

Got the point; I thought that you needed one specific key on each server. So it should be even simpler: use file with content and put the key in the content field:

$myKey = "-----BEGIN RSA PRIVATE KEY-----\nMIICXgIBAAKBgQDTqkVS4/iwKx8LngXQrEShlfSRtcSyOB1IjC5AIGUAJvapq9lz\n..."

file {
  "/path/to/keyFile":
    content => $myKey;
}

Put this into your Webserver class and assign the class only to the Webservers.

Kind regards, Thomas

Richard Crowley

Nov 10, 2010, 1:26:42 PM
to puppet...@googlegroups.com
> got the point, thought that you need one specific key on each server. So
> that should be even simpler, use file with content and put the key in the
> content field:
>
> $myKey = "-----BEGIN RSA PRIVATE
> KEY-----\nMIICXgIBAAKBgQDTqkVS4/iwKx8LngXQrEShlfSRtcSyOB1IjC5AIGUAJvapq9lz\n..."
>
> file {
>   "/path/to/keyFile":
>     content => $myKey;
> }
>
> Put this into your Webserver class and assign the class only to the
> Webservers.

This works perfectly for PEM-formatted keys because they're ASCII,
which is a subset of UTF-8. Binary keys are not (usually) valid UTF-8
and thus can't be crammed into a catalog without some encoding.

Thomas Bendler

Nov 10, 2010, 1:39:49 PM
to puppet...@googlegroups.com
2010/11/10 Richard Crowley <r...@rcrowley.org>
[...]
This works perfectly for PEM-formatted keys because they're ASCII,
which is a subset of UTF-8.  Binary keys are not (usually) valid UTF-8
and thus can't be crammed into a catalog without some encoding.

And why don't you convert the key to a PEM key before putting it into puppet? You can use OpenSSL to convert the binary key to a PEM key:

openssl enc -in some-bin.key -out some-pem.key -a
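That invocation is plain base64 armouring (the reverse is `openssl enc -d -a`); the equivalent round trip in Ruby, as a quick sanity check (the key here is random stand-in data):

```ruby
require 'base64'

key = Random.new.bytes(64)            # stand-in for a binary key file
armoured = Base64.encode64(key)       # roughly what `openssl enc -a` emits, modulo line width
restored = Base64.decode64(armoured)  # what `openssl enc -d -a` recovers
raise 'round trip failed' unless restored == key
```

The armoured form is pure ASCII, so it survives any UTF-8-only transport.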

Kind regards, Thomas

Richard Crowley

Nov 10, 2010, 2:48:40 PM
to puppet...@googlegroups.com
On Wed, Nov 10, 2010 at 10:39 AM, Thomas Bendler <thomas....@cimt.de> wrote:
> 2010/11/10 Richard Crowley <r...@rcrowley.org>
>>
>> [...]
>> This works perfectly for PEM-formatted keys because they're ASCII,
>> which is a subset of UTF-8.  Binary keys are not (usually) valid UTF-8
>> and thus can't be crammed into a catalog without some encoding.
>
> And why don't you convert the key to a PEM key before putting it into
> puppet? You can use OpenSSL to convert the binary key to a PEM key:

In my particular case, because it's unclear whether ASCII encodings
of trusted.gpg and trustdb.gpg are indeed possible.

In the general case, even completely legitimate (and common) Latin-1
text files can cause Puppet problems because some Latin-1 bytes are
not valid UTF-8. In my opinion, the content parameter of a file
resource should be able to handle these cases.
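The Latin-1 point is easy to demonstrate in Ruby (the example word is arbitrary): ISO-8859-1 encodes "résumé" with bare 0xE9 bytes, which no valid UTF-8 sequence contains, while the UTF-8 encoding of the same word is fine.

```ruby
latin1 = "r\xE9sum\xE9".b         # "résumé" as ISO-8859-1 (Latin-1) bytes
utf8   = "r\xC3\xA9sum\xC3\xA9".b # the same word as UTF-8 bytes

raise unless utf8.force_encoding('UTF-8').valid_encoding?   # well-formed
raise if latin1.force_encoding('UTF-8').valid_encoding?     # not valid UTF-8
```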

Richard

Patrick

Nov 10, 2010, 3:16:26 PM
to puppet...@googlegroups.com

I think you should file a bug then.

Chris May

Nov 10, 2010, 5:54:16 PM
to puppet...@googlegroups.com
Indeed. I made a mistake in my original post; it's not the key files for Apache (which are PEM-formatted ASCII), but rather those in Java's JKS keystore format, that cause problems for me. I could probably create a workaround by transferring the keys in PEM format and then converting to JKS on the client, but it would be a pretty fiddly solution compared to the option of a binary-safe file() function.
 


Chris

Nov 11, 2010, 5:06:00 AM
to Puppet Users

> > In the general case, even completely legitimate (and common) Latin-1
> > text files can cause Puppet problems because some Latin-1 bytes are
> > not valid UTF-8.  In my opinion, the content parameter of a file
> > resource should be able to handle these cases.
>
> I think you should file a bug then.


I've raised http://projects.puppetlabs.com/issues/5261

Felix Frank

Nov 18, 2010, 6:13:50 PM
to puppet...@googlegroups.com

Thinking back to the original PSON bug, the workaround back then was to
use YAML serialization. I did notice that it could make clients crash,
though; I think 0.25.5 clients were the afflicted ones.

You may want to give it a shot anyway. The YAML encoder seems to be
less picky where encodings are concerned.

Regards,
Felix

Chris May

Nov 19, 2010, 4:56:47 AM
to puppet...@googlegroups.com

Thinking back to the original PSON bug, the workaround back then was to use YAML serialization. I did notice that it could make clients crash, though; I think 0.25.5 clients were the afflicted ones.

You may want to give it a shot anyway. The YAML encoder seems to be less picky where encodings are concerned.

Alas, that doesn't work for me either; I get 

err: Could not retrieve catalog from remote server: Could not intern from yaml: can't convert Array into String

Thanks!

Chris
 
Regards,
Felix

Felix Frank

Nov 19, 2010, 6:01:29 AM
to puppet...@googlegroups.com
On 11/19/2010 10:56 AM, Chris May wrote:
>
> Thinking back to the original PSON bug, the workaround back then was
> to use YAML serialization. I did notice that that could make clients
> crash though, I think the 0.25.5 were the afflicted ones.
>
> You may want to give it a shot anyway. The YAML encoder seems
> to be less picky where encodings are concerned.
>
> Alas, that doesn't work for me either; I get
>
> err: Could not retrieve catalog from remote server: Could not intern
> from yaml: can't convert Array into String

I see, but this is plain stupid. Such things are not supposed to happen.

You may want to find out which part of your catalogue causes this (i.e.,
which subset of your manifest is sufficient to reproduce this behaviour)
and raise a bug.

That is, unless this goes away with newer clients.

Regards,
Felix

Chris May

Nov 19, 2010, 7:19:19 AM
to puppet...@googlegroups.com
This is with 2.6.3 on both client and server; the relevant portion of the catalog (sufficient to cause the error on its own) is 

file{"/tmp/test": content=>file("/var/puppet/private/truststore")}

/var/puppet/private/truststore is a JKS-encoded keystore. 

I imagine (though I haven't actually debugged it) that the error is because one element of the array being serialized is the contents of this file, which includes the non-UTF-8 bytes which cause the serialization error in PSON.  If that's true, I would expect that a fix for http://projects.puppetlabs.com/issues/5261 would probably fix the YAML issue as well.

thanks

Chris

