Is trusting the agents a bad thing?


UK_beginner

Feb 12, 2015, 5:10:12 PM2/12/15
to puppet...@googlegroups.com
I'm new to Puppet and have been exploring different ways of structuring manifests, from huge single manifests through per-node manifests to the roles/profiles pattern I'm currently looking at.

One thing I've been looking at is using a mix of Puppet and Hiera to set up a hierarchy based around server roles (e.g. web, database, logging) using the environment and a custom fact set on the agents via /etc/facter/facts.d. To me this seems an efficient way to describe a configuration, since it avoids a lot of duplication.
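As a concrete sketch of the layout I mean (file paths, the `role` fact name, and the hierarchy levels are illustrative, not prescriptive), each agent would carry an external fact in /etc/facter/facts.d -- e.g. a role.txt file containing the line `role=web` -- and the master's Hiera config (Hiera 1.x syntax, as shipped with Puppet 3) would key off it:

```yaml
# /etc/puppet/hiera.yaml (Hiera 1.x syntax, Puppet 3 era) -- a sketch.
# Assumes each agent sets an external fact, e.g.
# /etc/facter/facts.d/role.txt containing:  role=web
---
:backends:
  - yaml
:hierarchy:
  - "node/%{::fqdn}"     # per-node overrides
  - "role/%{::role}"     # shared settings per server role
  - common               # defaults for everything
:yaml:
  :datadir: /etc/puppet/hieradata
```

Note that `%{::role}` here is exactly the agent-supplied value the rest of this thread is debating.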

However, some of the team I'm working with have concerns about the fact that we're letting the agents dictate which manifests get passed to them, since they define the role and environment facts. I've been told that if someone found a root-level exploit they could change the facts to retrieve whatever manifest they want. My response is that if we get to that stage, all bets are off anyway, since they can just stop Puppet, download RPMs, and so on.

However, I wanted to gauge the feeling of the experts: is this a risky solution, or do others feel that, as long as we implement other security measures (firewalls, correct file permissions, certificates), this is an acceptable route to investigate?

Thanks in advance for your thoughts.

Alex Elman

Feb 12, 2015, 5:47:08 PM2/12/15
to puppet...@googlegroups.com
I don't think you should limit your agent's ability to dictate what resources should be configured and served. The Puppet client-server trust model is fairly flat and this provides a decent trade-off between flexibility and security. If your agent is owned, then as you mention, you have bigger concerns.

You do have fairly granular control over which agents get access to which resources by using auth.conf; see https://docs.puppetlabs.com/guides/rest_auth_conf.html. By configuring auth.conf on the master side, you can lock down certain resources with allow/deny directives or ACLs based on things other than Facter facts, such as hostnames and IP addresses. I would make sure you lock things down against unauthenticated agents at the very least. Also be judicious about how you design your autosigning configuration if you're worried about rogue agents; see https://docs.puppetlabs.com/puppet/3.7/reference/ssl_autosign.html.
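To illustrate the kind of auth.conf stanzas meant here (Puppet 3.x rest-auth syntax; rules are matched top to bottom, and `$1` is the certname captured from the request path -- a sketch, not a complete file):

```
# /etc/puppet/auth.conf (Puppet 3.x) -- illustrative fragment.
# Let each authenticated agent fetch only its own catalog ...
path ~ ^/catalog/([^/]+)$
method find
allow $1

# ... and only its own node object.
path ~ ^/node/([^/]+)$
method find
allow $1

# Anything not explicitly allowed above is denied, including all
# requests from unauthenticated clients.
path /
auth any
```

This stops one compromised node from requesting another node's catalog, though it does not by itself stop a node from lying about its own facts.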

-Alex


Denmat

Feb 13, 2015, 12:07:20 AM2/13/15
to puppet...@googlegroups.com
One thing to consider is using hiera-eyaml with the GPG backend (hiera-eyaml-gpg), keyed on certnames. You can put secrets (DB passwords etc.) there, matched to the node's SSL certname. In this configuration an attacker can change their role/profile facts but still can't access secrets for a node whose certname doesn't match their own.
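A sketch of such a hierarchy (Hiera 1.x syntax; assumes the hiera-eyaml and hiera-eyaml-gpg plugins are installed, and `trusted_node_data = true` on a Puppet 3.4+ master so `%{trusted.certname}` comes from the verified SSL certificate rather than from an agent-supplied fact -- the paths are illustrative):

```yaml
# /etc/puppet/hiera.yaml -- per-node secrets keyed on the certname.
---
:backends:
  - eyaml
  - yaml
:hierarchy:
  # Only secrets/<verified certname>.eyaml is consulted for this node,
  # so changing a role fact cannot expose another node's passwords.
  - "secrets/%{trusted.certname}"
  - "role/%{::role}"
  - common
:yaml:
  :datadir: /etc/puppet/hieradata
:eyaml:
  :datadir: /etc/puppet/hieradata
  :extension: eyaml
```

The role level can stay fact-driven for convenience while the secrets level is pinned to the certificate.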

That is not the only way, but it's a fairly reasonable compromise.

Den

jcbollinger

Feb 13, 2015, 10:02:09 AM2/13/15
to puppet...@googlegroups.com


On Thursday, February 12, 2015 at 4:10:12 PM UTC-6, UK_beginner wrote:
I'm new to Puppet and have been exploring different ways of structuring manifests, from huge single manifests through per-node manifests to the roles/profiles pattern I'm currently looking at.

One thing I've been looking at is using a mix of Puppet and Hiera to set up a hierarchy based around server roles (e.g. web, database, logging) using the environment and a custom fact set on the agents via /etc/facter/facts.d. To me this seems an efficient way to describe a configuration, since it avoids a lot of duplication.

However, some of the team I'm working with have concerns about the fact that we're letting the agents dictate which manifests get passed to them, since they define the role and environment facts. I've been told that if someone found a root-level exploit they could change the facts to retrieve whatever manifest they want. My response is that if we get to that stage, all bets are off anyway, since they can just stop Puppet, download RPMs, and so on.


It is not necessarily true that an attacker with sufficient access to alter the fact values presented by Puppet would have free rein over the system.  Mandatory access controls (SELinux) or other security measures could still circumscribe their capabilities, if your machines are so configured.  Nevertheless, you're right that an assailant with such access is a very serious problem, and it may well be that they would have complete control.  In that case, the issue is not so much what they could do to the compromised machine, but rather what sensitive information they could persuade Puppet to provide that would not otherwise be available to them.

In the end, you're looking at a basic security principle on a different scale than you may be used to considering: data access should be limited to those who need it.  So, for example, engineers' workstations should not be able to obtain configuration details specific to your company's public web server or HR database server.

 

However I wanted to gauge the feeling of the experts - is this a risky solution, or do others feel that as we implement other security factors (firewalls, correct file permissions, certificates) that this is an acceptable route to investigate?



That's not an either/or. Yes, there is some risk, but you cannot avoid risk altogether.  Whether your proposed route involves acceptable risk is a call only you can make.  With that said, I think the benefits of allowing machines to self-declare their configuration requirements in a master/agent setup are not very compelling.  There is a convenience factor, but even that's not very strong in most cases.

In particular, if you are relying on Puppet to manage which environment agents will request and which specific fact values they will provide, then you are needlessly exposing that information to tampering -- the master must already HAVE those data, so why does it need the agent to echo them back?  If, on the other hand, the master does not have those data (meaning you rely on them being manually configured on each machine), then you have no centralized control over your configuration.  That may be sensible if you use Puppet only for provisioning, but if you're trying to do bona fide configuration management then I don't see why you would want to go that way.

If you want to assign machines to different environments, then I recommend setting up an ENC (external node classifier) to make that association internally on the master.  That can be the only thing it does, if you wish -- you can continue to perform classification as you already do, right alongside such an ENC.
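For concreteness, an ENC is just an executable the master calls with the node's certname, expecting YAML on stdout. A minimal sketch (the node-to-role mapping is a hypothetical example; a real ENC would read it from a file or database):

```python
#!/usr/bin/env python
"""Minimal external node classifier (ENC) sketch.

The Puppet master invokes the configured ENC with the node's certname
as the sole argument and reads YAML from stdout, which may contain
'classes', 'environment', and 'parameters' keys.
"""
import sys

# Hypothetical master-side mapping: certname -> environment and classes.
NODES = {
    "web01.example.com": {"environment": "production",
                          "classes": ["role::web"]},
    "db01.example.com":  {"environment": "production",
                          "classes": ["role::database"]},
}


def classify(certname):
    """Return the node definition; unknown nodes get a safe default
    rather than whatever role the agent might ask for."""
    return NODES.get(certname, {"environment": "production", "classes": []})


def to_yaml(node):
    """Hand-roll the tiny YAML subset an ENC needs (avoids a PyYAML
    dependency; note an empty class list yields a bare 'classes:' key)."""
    lines = ["---", "environment: %s" % node["environment"], "classes:"]
    for cls in node["classes"]:
        lines.append("  - %s" % cls)
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    sys.stdout.write(to_yaml(classify(sys.argv[1])))
```

The key point is that the certname the master passes in is backed by the node's SSL certificate, so the role assignment lives on the master and cannot be changed by editing a fact on the agent.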

Any other per-node data managed by the master can and probably should be drawn from Hiera, whether or not you implement that ENC.  Facts should be used only to communicate node details that have to be computed on the node.


Hmm -- I see I strengthened my tone as I went along there.  Suffice it to say that even though it's a call you need to make for yourself, I see the scales weighted against your proposed direction.


John
