Running entire buildfarm within Docker


Lucas Walter

Mar 11, 2016, 10:13:20 AM
to ros-sig-...@googlegroups.com

I'm trying to get the entire buildfarm running with public GitHub configs (the security shouldn't matter, since by default nothing can be accessed from outside the host running the Docker containers; later I'd like to document in detail the process of changing every key). The ultimate goal is for the default config to use an add-on rosdistro and build a couple of example projects off of GitHub, but I'm only partway there.


So far I'm at the point of having a live Jenkins I can browse to from my host, and building_repository appears as an executor, but the slave does not. My first attempt at generate_all_jobs also failed; I'll document that further. I think running Docker within Docker is fine, but maybe the configuration needs to be adjusted to make that work.

It would be great if anyone else who is interested could build the Dockerfiles in that repository and see if they can help me (probably should put the images on Docker Hub later?).
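To be concrete, building and running them is just plain docker build / docker run; the image names, directory layout, port mapping, and network setup below are only a sketch of what I mean, not the actual contents of the repository:

    # hypothetical layout: one directory per role, each with its own Dockerfile
    docker build -t buildfarm_master ./master
    docker build -t buildfarm_repo   ./repo
    docker build -t buildfarm_slave  ./slave

    # run them on a shared user-defined network so they can reach each other
    # by name; container names and the 8080 mapping for Jenkins are placeholders
    docker network create buildfarm
    docker run -d --net buildfarm --name master -p 8080:8080 buildfarm_master
    docker run -d --net buildfarm --name repo buildfarm_repo
    docker run -d --net buildfarm --name slave buildfarm_slave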

One strange thing: I can RUN reconfigure.bash in the Dockerfile, where puppet apply takes up to ten minutes, but then I need to log into a live container and run it again to make it fully work (i.e. for Jenkins to actually start on the master), and that second puppet apply finishes in a more normal 90 seconds.

I suspect the script does a mix of installation steps and configuration changes that the Docker image records, but any processes/daemons it launches are not automatically relaunched when the container starts. Maybe those can be identified and set up properly in the images.

In the case of the master, Jenkins fails to install during the reconfigure.bash step, so I've added a RUN that installs it after that failed attempt.
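Roughly, the manual workaround looks like this once the master container is up (the container name, package name, and script path are placeholders for whatever the image actually uses):

    # get a shell in the live master container
    docker exec -it master bash

    # inside the container: install jenkins by hand (assuming the earlier
    # puppet run already configured the jenkins apt repository), then re-run
    # the reconfigure step, which now finishes in ~90 seconds
    apt-get update && apt-get install -y jenkins
    ./reconfigure.bash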

Earlier I set up VirtualBox VMs and posted some questions on answers.ros.org about them; through that process I'm now ready to bloom-release my packages to my add-on rosdistro (though I haven't tried it yet), but my config git repos and packages are not publicly available. Hopefully with Docker anyone can set up an entire example farm with no configuration changes until they are ready to make their own, and others can help me get to that point.

One difference from my private VM buildfarm effort is that there I want to build packages privately on GitLab, and I don't know if that will create problems later, but this all-Docker process will be pure GitHub. (Is there a hosted GitLab instance online I can put a few projects into?)

-Lucas Walter

Dirk Thomas

Mar 11, 2016, 11:45:20 AM
to ros-sig-...@googlegroups.com
If I understand you correctly, you are trying to run the buildfarm itself in Docker (which then runs its jobs in Docker, meaning Docker-in-Docker). I can tell you from our experience that in the past this did not work at all. There were numerous problems and subtle bugs when we went down this road a year and a half ago. Others have blogged about the problems too, e.g. https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/

Therefore I would highly recommend that you reconsider and not use Docker-in-Docker.

Cheers,
- Dirk


Lucas Walter

Mar 11, 2016, 12:25:57 PM
to ros-sig-...@googlegroups.com
I'm not partial to any underlying process; I'd just like there to be a way to bring up a buildfarm on a single system that is lighter weight than full virtual machines, where configuration changes can be experimented with easily, and later the configuration can migrate to real servers with a minimum of changes.

At the end of https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ it says 'expose the Docker socket to your CI container, by bind-mounting it with the -v flag... Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.'  (Does that require reconfiguration of the jobs, or would it happen transparently?)
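In other words, something like this for the slave container (the image and container names are placeholders):

    # mount the host's Docker socket so that `docker run` inside the slave
    # starts sibling containers on the host daemon rather than nested ones
    docker run -d --name slave \
        -v /var/run/docker.sock:/var/run/docker.sock \
        buildfarm_slave

One caveat the same post mentions is that any -v paths a job passes then refer to the host filesystem rather than the slave container's, so that may be where job reconfiguration comes in.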

Even if that isn't a solution here, I'd like to make this system work as far as it can go, because it may still be a useful example and an easy-to-launch sandbox (maybe I can generate the jobs but then not be able to run any, or run them and make sure they fail in a particular way?). There ought to be some useful manual debug steps (or, later, non-Docker tests that Jenkins could run) that could be done within the sandbox to prove the configuration is reasonable, short of running the actual Docker jobs (e.g. sshing as jenkins-slave or root from one machine to another without being asked for a password, or git-pulling from a private repo that requires ssh keys).
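For example, the kind of checks I have in mind (hostnames, users, and the repo URL are placeholders):

    # from the master container: passwordless ssh to another machine as
    # jenkins-slave and as root (BatchMode makes ssh fail instead of prompting)
    ssh -o BatchMode=yes jenkins-slave@repo 'echo ssh as jenkins-slave works'
    ssh -o BatchMode=yes root@repo 'echo ssh as root works'

    # confirm the deploy key gives access to a private repo without a prompt
    git ls-remote git@github.com:example/private_repo.git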

Thanks,

-Lucas

Tully Foote

Mar 11, 2016, 1:19:25 PM
to ros-sig-...@googlegroups.com
Hi Lucas, 

We have an open ticket to support running all three instances on one host: https://github.com/ros-infrastructure/buildfarm_deployment/issues/42

The challenge is that to do this the Puppet configs must be made non-colliding, or else a lot of logic needs to be added to interleave the functionality conditionally. I do test the bringup occasionally in a Docker instance, but I don't run jobs; usually I'm testing a specific configuration and just manually verify that it worked before tearing the instance down.
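Roughly what that kind of smoke test looks like (the base image, mount path, and reconfigure invocation here are illustrative, not our exact scripts):

    # throwaway bringup test: apply the puppet config for one role inside a
    # disposable container, poke at the result, then let --rm clean it up
    docker run -it --rm \
        -v "$(pwd)/buildfarm_deployment:/root/buildfarm_deployment" \
        ubuntu:trusty bash

    # inside the container: run the reconfigure/puppet apply step for the role
    # under test (exact invocation depends on the role), then manually check
    # that the expected services came up before exiting
    cd /root/buildfarm_deployment && ./reconfigure.bash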

We know deconflicting the configs is possible, but we don't currently have the time available to focus on it. The other thing I'd like to do is refactor the deployment scripts into proper Puppet module(s) instead of distributing the manifests in a repo. This would let us take advantage of the Puppet deployment toolchain and would mean the deployment process could be simplified to just setting up the config files.

Tully
