Security best practice for Cloudbees Docker Workflow?


Thomas Goeppel

unread,
Jan 9, 2016, 7:44:12 AM1/9/16
to Jenkins Users
Hello community,

I've been experimenting with the Cloudbees Docker Workflow plugin, and I really like the ease of use of this DSL. Now I'd like to get security right.

Problem: the plugin assumes that the docker binary is in the path. In my understanding this implies that the user issuing docker commands (e.g. jenkins) has to be in the docker group!

The docker variable provided by the Docker Workflow plugin limits the security impact by explicitly setting a non-privileged user with the "-u" option. However, at least in Docker 1.9.1, I can pass a second "-u" option which overrides the original setting:

docker.image('ubuntu').inside ('-v /etc:/etc-host -u 0') {
    sh '''
        whoami
        # -> root
        awk '{gsub(/[A-Z]/,"!"); print}' /etc-host/shadow
        # -> slightly masked password hashes
    '''
}

I could also have passed in the "--privileged" flag.

For a moment I asked myself if it would be worthwhile filing an ER to have the plugin sanitize the "docker run" parameters. But the fundamental problem is that a Jenkins job has the right to run docker commands at all. Running the following script would be even worse:

node {
    sh 'docker run -v /etc:/etc-host ubuntu cat /etc-host/shadow'
}

One option would be to write a shim for the docker command that only allows a subset of commands and sanitizes the options and parameters.

My questions are these:
  • am I missing something?
  • how should I configure the Jenkins CI environment (server and nodes) to use the docker command safely?

Thanks in advance,

Thomas

Christopher Orr

unread,
Jan 9, 2016, 7:05:07 PM1/9/16
to jenkins...@googlegroups.com
On 09/01/16 13:44, Thomas Goeppel wrote:
> [...]
>
> One option would be to write a shim for the docker command, that only
> allows a subset of commands, and sanitizes the options and parameters.

Even if you do that, the jenkins user, as part of the docker group, will
still have direct access to the unix socket that the Docker daemon uses.

As is quite often the case with a CI server, you most likely need to
either trust the users who can configure jobs (or edit Workflow scripts
(if in source control)), or lock down the Jenkins configuration to allow
only specific users.

The Docker security guide also says "only trusted users should be
allowed to control your Docker daemon":
https://docs.docker.com/engine/articles/security/

Regards,
Chris

Thomas Goeppel

unread,
Jan 10, 2016, 6:17:56 AM1/10/16
to Jenkins Users


On Sunday, January 10, 2016 at 1:05:07 AM UTC+1, Christopher Orr wrote:
> Even if you do that, the jenkins user, as part of the docker group, will
> still have direct access to the unix socket that the Docker daemon uses.
>
> As is quite often the case with a CI server, you most likely need to
> either trust the users who can configure jobs (or edit Workflow scripts
> (if in source control)), or lock down the Jenkins configuration to allow
> only specific users.
>
> The Docker security guide also says "only trusted users should be
> allowed to control your Docker daemon":
> https://docs.docker.com/engine/articles/security/

I feel that you're mixing up two areas of trust here: Jenkins (and CI users) on one side, and Docker (and system administrators) on the other. In an organization of any size or complexity, the roles allowed to control Jenkins won't be held by the same people who have roles with root access. Even if they were, just how much would one have to lock down a Jenkins production environment to mitigate the risks of effectively running Jenkins as root, i.e. as user "jenkins" in the group "docker"?

We could give up on provisioning containerized toolchains through Jenkins in a Non-Root-Jenkins production environment, but there is a lot of value in running and controlling Docker containers through Jenkins. Fortunately, full control of Docker isn't required for this use case: Jesse Glick demonstrated that nicely with the implementation of docker.image().inside {} that passes reasonably safe parameters through the Docker CLI (e.g. -u non-root).
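For reference, the effect of `docker.image().inside {}` can be approximated in plain shell. The flags below are an illustration, not the plugin's verbatim invocation: the key point is that the container runs as the calling uid:gid rather than root, with the workspace mounted in:

```shell
# Rough shell approximation of what docker.image().inside {} sets up
# (illustrative only; the plugin's actual invocation differs in detail).
# Running as the calling uid:gid instead of root is what limits the
# security impact -- unless a job appends its own "-u 0".
uid_gid="$(id -u):$(id -g)"
workspace="$PWD"
echo docker run -u "$uid_gid" -v "$workspace:$workspace" -w "$workspace" ubuntu sh -c 'whoami'
```

Drop the leading `echo` to actually run it on a docker host.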

Obviously, delegating oversight of safe Docker CLI use to Jenkins isn't possible in any environment where we can't run Jenkins as "root" (i.e. in the "docker" group), and hence we need a trusted proxy. Technically that proxy might run with "setuid docker" (and have access to the Unix socket), or be a web service, but I think the Docker counterpart to a "restricted shell" would be best.

Stephen Connolly

unread,
Jan 11, 2016, 9:08:41 AM1/11/16
to jenkins...@googlegroups.com
FYI you do not run *Jenkins* as root... rather you run a build slave as the trusted account and then you lock down access to that build slave... e.g. if that build slave deploys into production you can use it as a kind of bastion host whereby it is off-line until you want to deploy into production.


Thomas Goeppel

unread,
Jan 11, 2016, 3:01:10 PM1/11/16
to Jenkins Users
Stephen,

thanks for your reply. You're right, nobody would run Jenkins as root in a production environment, but through the CloudBees Docker Workflow plugin Jenkins is in control of the docker client, and hence it is effectively root on some machine (e.g. a build slave).

I don't really understand what you mean by "running a build slave as the trusted account". Maybe I'm missing something here, but from a security point of view, as soon as untrusted users control a docker host on a build slave (e.g. through Jenkins job definitions), that build slave is anything but trusted, and an untrusted machine has no place in a corporate network. If one builds a firewall around an untrusted build slave as mitigation, I would expect practical problems accessing SCM or artifact repositories inside the corporate network from that slave.

Calling the docker client through Jenkins can be considered safe if Jenkins runs in a locked-down environment (where only trusted users can define jobs), or if the docker host (build slave) is short-lived (e.g. an ephemeral VM like Boot2Docker). Unfortunately, none of these options would work for me.

/Thomas

Stephen Connolly

unread,
Jan 11, 2016, 5:34:34 PM1/11/16
to jenkins...@googlegroups.com
You can use a feature such as http://documentation.cloudbees.com/docs/cje-user-guide/foldersplus-sect-controlledslaves.html so that only jobs in a specific folder can use the build slave... then you can use an authorization strategy (http://documentation.cloudbees.com/docs/cje-user-guide/rbac.html pairs well but there are others) so that only trusted users can configure jobs in that folder... yes there are still ways to hack around if you don't use script security or let untrusted users run jobs on the Jenkins master... but a locked down Jenkins is still possible...

If you want something even more secure you can have multiple Jenkins instances connected together and trigger the jobs on the locked down ones from slightly lesser locked down ones: http://documentation.cloudbees.com/docs/cjoc-user-guide/cluster-triggers.html

All the above is possible (i.e. I wrote a good chunk of that functionality) so even if you didn't want to purchase from CloudBees it is possible to implement the same in plugins (though the CJOC stuff would be a bigger chunk of work)

Thomas Goeppel

unread,
Jan 12, 2016, 1:14:29 AM1/12/16
to Jenkins Users
Stephen,

thanks for the suggestions. I'm sure that with the two methods you described it's possible to divide roles into those who can control a docker host, and those who can't, and the second method is even safe against privilege escalation. The organization I work for is a Cloudbees customer, but unfortunately both methods you suggested don't support use cases where both groups in your "lock down" scenario must be allowed to configure Workflow scripts with `docker.image().inside{}` (and that's what I have in mind).

FYI: I did some tests with the "setuid docker proxy" method I proposed. Writing a docker shim that filters commands and parameters to prevent privilege escalation seems to be feasible (i.e. one that only passes on commands, flags, and arguments in compliance with the intended use case in Jenkins Workflow).
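To give an idea, a minimal sketch of the approach (simplified, not the exact shim I tested; the whitelist and blacklist below are examples rather than a complete policy) could look like this:

```shell
# docker_shim: illustrative sketch of a restricted docker wrapper.
# The subcommand whitelist and option blacklist are examples only.
docker_shim() {
    DOCKER_BIN="${DOCKER_BIN:-/usr/bin/docker}"
    # 1. Only pass through a small whitelist of subcommands.
    case "$1" in
        run|images|ps|inspect) ;;
        *) echo "docker-shim: subcommand '$1' not allowed" >&2; return 1 ;;
    esac
    # 2. Reject options that grant root or host access. A real shim
    # would instead inject its own safe "-u" and workspace "-v" values.
    for arg in "$@"; do
        case "$arg" in
            --privileged*|-u|--user*|-v|--volume*|--cap-add*|--pid*|--net*|--device*|--ipc*)
                echo "docker-shim: option '$arg' not allowed" >&2; return 1 ;;
        esac
    done
    "$DOCKER_BIN" "$@"
}
```

Installed "setuid docker", the shim itself would hold the group membership, so jobs never touch the socket or the real binary directly.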

/Thomas

Stephen Connolly

unread,
Jan 12, 2016, 4:51:11 AM1/12/16
to jenkins...@googlegroups.com
On 12 January 2016 at 06:14, Thomas Goeppel <thomas....@gmail.com> wrote:
> Stephen,
>
> thanks for the suggestions. I'm sure that with the two methods you described it's possible to divide roles into those who can control a docker host, and those who can't, and the second method is even safe against privilege escalation. The organization I work for is a CloudBees customer, but unfortunately both methods you suggested don't support use cases where both groups in your "lock down" scenario must be allowed to configure Workflow scripts with `docker.image().inside{}` (and that's what I have in mind).

If you are using the Script Security plugin, that should help vet what scripts can be executed... and by having the two teams on different Jenkins masters you can have different profiles of what they can run... but yeah, I agree it would be better to be able to provide a "permission restricted" docker proxy to the build (not just Workflow, by the way) so that you can restrict what the build can do, as this is a problem for any build (just think that somebody can add a unit test that uses System.exec to run docker).

nicolas de loof

unread,
Jan 12, 2016, 5:18:54 AM1/12/16
to jenkins...@googlegroups.com
You should give the Docker 1.9 user namespaces a try. With them, the containers you run, even when overriding the command with `-u root`, won't have root privileges on the host. You should also consider using SELinux if you want to secure your docker host against the attack surface exposed to docker containers.
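For reference, the host-side setup looks roughly like this (user namespace support was still experimental around this time, so the exact commands and the `dockremap` user below may differ by version):

```shell
# Sketch of enabling user-namespace remapping on the docker host
# (experimental at the time of writing; details vary by version).
# /etc/subuid and /etc/subgid need a subordinate-id range, e.g.:
#   dockremap:100000:65536
# Then start the daemon with remapping enabled:
docker daemon --userns-remap=default
# Now uid 0 inside a container maps to an unprivileged host uid
# (100000 above), so "-u 0" no longer reads host-root-only files
# through a bind mount.
```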


Thomas Goeppel

unread,
Jan 12, 2016, 4:59:10 PM1/12/16
to Jenkins Users


On Tuesday, January 12, 2016 at 10:51:11 AM UTC+1, Stephen Connolly wrote:
> [...] I agree it would be better to be able to provide a "permission
> restricted" docker proxy to the build (not just workflow by the way) so
> that you can restrict what the build can do as this is a problem for any
> build (just think that somebody can add a unit test that uses System.exec
> to run docker)

Exactly, that's my concern - I don't want to rely on Jenkins security for this kind of system-level stuff. On the OS level, Jenkins runs all jobs under a single user account. It would be easier if Jenkins could run jobs with setuid(), e.g. depending on the user roles that "own" a job. Currently that's only possible at the build slave level.

Thomas Goeppel

unread,
Jan 12, 2016, 5:08:14 PM1/12/16
to Jenkins Users
Hi Nicolas,

thanks for pointing me to docker user namespaces! It looks great, and I guess it will be part of my toolbox as soon as my machine runs a sufficiently recent kernel!

I checked the features and use cases, and unfortunately it doesn't cover the case where access to the host's root through the docker client shouldn't be possible.