Docker plugin. How to manage concurrent builds/volumes


Kris Massey

Dec 8, 2015, 11:39:25 AM
to Jenkins Users
Hi All,

I'm new to Docker, so I thought I'd attempt to set up Jenkins with slaves that are Docker containers. The software we build uses Gradle as the build tool, and I've hit a few issues. Below is an overview of where I am; however, I'm struggling to progress to the next stage.

Current Situation:

Jenkins - Running on a standalone VM (may put into Docker at a later date) 
  • Docker plugin to start/stop containers for builds
  • At the moment build steps are just to echo something into a file

Slaves - Docker images
  • Jenkins starts the image, does what's needed, and then stops the container
  • *Issue* - the container is left behind... is there any way to automate removal of the containers from within Jenkins?

The next issues I have are:
  • How to mount the workspace into the container.
    • The issue I'm having here is that we start X slaves, so if I mount the Jenkins master workspace into each slave, every slave will end up pointing at the same folder on the host, trampling all over each other (see the sketch after this list)
    • Another issue here is that the project uses Gradle, so it would be nice to retain the Gradle wrapper and dependency caches between builds.
  • Extract build results (JUnit etc)
    • Solving the issue above would also resolve this.
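
To make that concrete, the only workaround I can think of so far is to give each build its own host directory, keyed on something unique like BUILD_TAG. A rough sketch (the image name and paths below are made up):

    # Each build mounts its own host directory, so concurrent slaves never share
    # a workspace; BUILD_TAG is jenkins-${JOB_NAME}-${BUILD_NUMBER}, unique per build
    docker run --rm \
        -v /var/jenkins/workspaces/"$BUILD_TAG":/home/jenkins/workspace \
        my-slave-image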

So I guess, summed up, my question is: how should the file system be managed when using Docker containers as Jenkins slaves? Any thoughts on how you've implemented things would be great.

Thanks,
Kris

Nigel Magnay

Dec 8, 2015, 11:52:19 AM
to jenkins...@googlegroups.com
  • *Issue* - the container is left behind... is there any way to automate removal of the containers from within Jenkins?


It should be removed when the Jenkins build itself is removed.
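
If you do see stopped containers piling up anyway, you can sweep them up by hand or from a cron job. A blunt example (note it removes every exited container on the host, so tighten the filter if other stopped containers matter):

    # Remove all containers that have exited
    docker rm $(docker ps -aq --filter "status=exited")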

The next issues I have are:
  • How to mount the workspace into the container.
    • The issue I'm having here is that we start X slaves, so if I mount the Jenkins master workspace into each slave, every slave will end up pointing at the same folder on the host, trampling all over each other
    • Another issue here is that the project uses Gradle, so it would be nice to retain the Gradle wrapper and dependency caches between builds.
  • Extract build results (JUnit etc)
    • Solving the issue above would also resolve this.


Don't mount the workspace into the container; have the build check out the SCM in its own isolated sandbox.
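
The effect is roughly this (repository URL and image name are placeholders): the checkout lives and dies inside the container's own filesystem, so parallel builds can't interfere with each other.

    # Nothing is bind-mounted from the host; the clone disappears with the container
    docker run --rm my-slave-image bash -c \
        "git clone https://example.com/myproject.git /tmp/build && cd /tmp/build && ./gradlew --no-daemon test"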

For JUnit reporting, just use the standard Jenkins plugins.

If you want to optimise build times, create a Docker image to use as a Jenkins slave with the caches pre-populated.
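
One way to do that (the image and repository names below are made up) is to run a throwaway dependency resolve once and commit the result, so the Gradle wrapper and dependencies are already present the next time a slave starts:

    # Warm the Gradle caches inside a container, then commit it as a new slave image
    docker run --name warm-cache my-slave-image bash -c \
        "git clone https://example.com/myproject.git /tmp/p && cd /tmp/p && ./gradlew --no-daemon dependencies"
    docker commit warm-cache my-slave-image:warm-cache
    docker rm warm-cache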

Ryan Hochstetler

Nov 2, 2017, 6:21:56 PM
to Jenkins Users
Kris,

I tried to travel down the same road, mounting a volume with the workspace directory into the slave containers as well as the master in order to expose the workspace contents post-build. The Jenkins way to provide developer visibility into workspace contents seems to be committing the container, so that the team can pull the resulting image and view the contents there. That's pretty slick, but it essentially requires me to teach hundreds of colleagues how to use Docker (and suffer their questions about why it can't work the way it used to in the meantime).
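
In other words, the workflow I'd have to teach looks roughly like this (the registry path is made up, and <container-id> is whatever container ran the build):

    # After the build, the slave container is committed and pushed...
    docker commit <container-id> registry.example.com/builds/myjob:123
    docker push registry.example.com/builds/myjob:123
    # ...and anyone who wants to inspect the workspace pulls it and pokes around
    docker run --rm -it registry.example.com/builds/myjob:123 bash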

Given that this post is pretty dusty, I'm hoping you got past your problem.  Mind posting back here to summarize your solution?

nicolas de loof

Nov 4, 2017, 4:51:53 AM
to jenkins...@googlegroups.com
2017-11-02 22:55 GMT+01:00 Ryan Hochstetler <ryan.hoc...@gmail.com>:
Kris,

I tried to travel down the same road, mounting a volume with the workspace directory into the slave containers as well as the master in order to expose the workspace contents post-build. The Jenkins way to provide developer visibility into workspace contents seems to be committing the container, so that the team can pull the resulting image and view the contents there. That's pretty slick, but it essentially requires me to teach hundreds of colleagues how to use Docker (and suffer their questions about why it can't work the way it used to in the meantime).

That's indeed a feature the docker-plugin provides, and I totally dislike this approach. A local volume for the workspace would be a far better approach, and one could browse its contents from the UI by running an ephemeral Jenkins agent. Offering this option is on my todo list for 1.1.

Same for the dependency cache: a local volume could be used to store this cache and retrieve it for subsequent builds (just like CircleCI does).
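
Roughly the idea, expressed as plain docker commands (the volume, image, and path names are made up):

    # The named volume outlives the container, so ~/.gradle is already warm on the
    # next build (assumes the image checks out and builds the project itself)
    docker volume create gradle-cache
    docker run --rm -v gradle-cache:/home/jenkins/.gradle my-build-image ./gradlew build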

The main issue is that we don't know which job will run on a Docker agent as it gets provisioned. This is by design of the Cloud API, which was designed for long-running VMs, not containers one can create and drop within a second. My plan is to adopt an approach comparable to one-shot-executor (or maybe just use this plugin, but there are a few things I have to fix) so that a Docker agent is tied to a specific item in the build queue; then we know about it and can decide which volume to use as the workspace based, for example, on a checksum of the job's name, or something comparable.
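
Just to sketch what I mean by that last part (the names are made up, and this is not what the plugin does today): derive a stable volume name from the job name and mount it as the workspace, so the same job always lands on the same volume.

    # Stable per-job workspace volume, keyed on a checksum of the job name
    WS_VOL="ws-$(echo -n "$JOB_NAME" | sha256sum | cut -c1-12)"
    docker volume create "$WS_VOL"
    docker run --rm -v "$WS_VOL":/home/jenkins/workspace my-build-image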



