Docker, Docker Hub and Snap-CI: Issues and updates

Subhas Dandapani

Jul 31, 2014, 10:46:44 AM7/31/14
to rapi...@googlegroups.com
Hi All,

As most of you know, we've spiked and switched to Docker for deployments.

Docker Hub automatically builds the Docker image whenever we make a commit. So if you have Docker installed on your machine, you can simply run "docker pull rapidftr/rapidftr:latest" to get the latest RapidFTR.

Problem:
When we push a commit, both Docker Hub (building the image) and Snap-CI (deploying the image to dev.rapidftr.com) get triggered at the same time. This is not good. We want Snap-CI to deploy only after Docker Hub has finished. Docker Hub can call a webhook once it has finished building, but unfortunately Snap-CI doesn't support any webhooks. We had conversations with them, and they have no foreseeable plans to add support.

Proposal:
Replace Snap-CI with our own Jenkins. Then we can configure webhooks and make Docker Hub call Jenkins. The way CI works in RapidFTR right now is:

Travis-CI is used for running tests (since it is very good at that, offers full sudo/root access, nicely integrates with GitHub pull requests, etc).
Snap-CI does just the deployment, once pull requests are merged.
So the proposal is to replace the Snap-CI portion with Jenkins.
A demo/spike of this Jenkins is running at https://178.79.163.66
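For reference, the "make Docker Hub call Jenkins" part can be as small as hitting a job's remote-trigger URL from the webhook. A rough Ruby sketch (the job name and token below are made-up placeholders, not real configuration):

```ruby
require "uri"

# Build the remote-trigger URL for a Jenkins job. Jenkins exposes
# /job/<name>/build?token=<token> when "Trigger builds remotely" is
# enabled on the job. Job name and token here are illustrative only.
def jenkins_trigger_url(base, job, token)
  URI("#{base}/job/#{job}/build?token=#{token}")
end
```

The webhook receiver (or Docker Hub itself, pointed straight at this URL) would then just POST to it, e.g. with `Net::HTTP.post_form(jenkins_trigger_url(...), {})`.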

Larger Plan:
Philippines, South Sudan, Uganda, etc. - all these deployments have their own servers and are running different versions of RapidFTR, making it very difficult to manage the infrastructure. So there is a larger plan to make all of these easily upgradable, always running the latest and greatest. If we use Jenkins as the CI, we can also do a one-click mass upgrade of all these servers whenever we make a release.

Does this sound good?

- Subhas

Vijay Aravamudhan

Jul 31, 2014, 10:52:04 AM7/31/14
to rapi...@googlegroups.com
Hi Subhas,
What about using Go instead of Jenkins? As you know, Go is now FOSS.

Go has native support for pipelines, and what you describe is (IMO) better modelled as a build pipeline than as post-commit hooks.

The first step in the pipeline could be the Docker build, and the second could be deploying the image.

What do you think?




--
You received this message because you are subscribed to the Google Groups "rapidftr" group.
To unsubscribe from this group and stop receiving emails from it, send an email to rapidftr+u...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Subhas Dandapani

Jul 31, 2014, 11:01:19 AM7/31/14
to rapi...@googlegroups.com
Hi Vijay! Hope you're doing well!

The Docker build happens directly in Docker's infrastructure (they call it the public Docker Hub) and is not controlled by us. They poll our git repository and rebuild the image whenever we commit, which is good for us: building and hosting a huge Docker image is expensive, and Docker itself takes care of both :) So our custom pipelines actually begin only after Docker has finished this build and notifies us via a webhook...

Our problem with Snap-CI is that it doesn't have any webhooks, and AFAIK Go doesn't either, so there is no way for us to "wait" until Docker has finished building... Hope that makes sense?
--
- Subhas

Vijay Aravamudhan

Jul 31, 2014, 11:13:46 AM7/31/14
to rapi...@googlegroups.com
Hi Subhas,
OK - now I understand the problem.

CI tools are fundamentally good at one thing: polling a specific URL and, if a change is identified, running a series of steps.

Taking this same example: could Docker write a particular file, or invoke a URL which in turn writes a file? The CI tool of choice (or, if that's not possible, the build task in RapidFTR) could then compare the latest change in SCM against the file's creation timestamp and continue the process.

What I am describing is a very rudimentary gate: both conditions need to be true for the next step to proceed.
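A rough Ruby sketch of that gate, assuming Docker's side touches a marker file and the latest commit timestamp is available (the marker path and inputs are illustrative, not anything that exists today):

```ruby
# Rudimentary gate: proceed only when the Docker-built marker file is
# newer than the latest SCM commit, i.e. the image was rebuilt after the
# commit we are about to deploy. Paths and inputs are placeholders.
def image_build_current?(marker_path, last_commit_time)
  return false unless File.exist?(marker_path)
  File.mtime(marker_path) >= last_commit_time
end
```

The CI step would poll this check and only continue to deployment once it returns true.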

Is this possible? (Sorry, I have not contributed to RapidFTR in a long time, but I still keep in touch via the commit emails, etc.)


Subhas Dandapani

Jul 31, 2014, 11:41:31 AM7/31/14
to rapi...@googlegroups.com
Hi Vijay, Discussion is also a contribution ;)

That's an interesting workaround... So Docker >> informs something else (say, makes a commit in some SCM) >> and then the CI picks up from there.

Ah, but I just checked, and Docker Hub has no mechanism other than calling a webhook. So it would end up being a little extra hacky: Docker calls a webhook (say, some simple Sinatra app on Heroku), which in turn writes a file in an SCM, which in turn triggers the CI :o So yeah, it seems either way we need some workarounds, and the smallest one would be to make Docker call the CI directly...
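For what it's worth, the core of such a relay is just parsing Docker Hub's webhook payload before writing to the SCM. A rough Ruby sketch (the field names follow Docker Hub's webhook format as far as I can tell, so verify against a real payload before relying on them):

```ruby
require "json"

# Extract the fields a relay app (e.g. a tiny Sinatra handler) would need
# from a Docker Hub webhook payload. The "repository"/"repo_name" and
# "push_data"/"tag" keys are my reading of the payload format - check
# against an actual webhook delivery.
def parse_hub_webhook(body)
  payload = JSON.parse(body)
  {
    repo: payload.dig("repository", "repo_name"),
    tag:  payload.dig("push_data", "tag")
  }
end
```

The handler would then write a marker file or make a commit tagged with `repo` and `tag`, which the CI's SCM polling picks up.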


John D. Hume

Jul 31, 2014, 12:40:18 PM7/31/14
to rapi...@googlegroups.com
On Thu, Jul 31, 2014 at 10:41 AM, Subhas Dandapani <r...@thoughtworks.com> wrote:
So yeah, it seems either way we need some workarounds, and the smallest one would be to make Docker call the CI directly...

Are you confident that having the RapidFTR dev team (often made up only of free-time volunteers) own their own Jenkins instance is more sustainable than owning a tiny web-hook-to-SCM-interaction web service? I've experienced some and listened to a lot of frustration with Jenkins configuration, particularly around emulating build pipelines.

Given that Go is now OSS, we should also consider running our own Go instance instead of Jenkins and, if practical, building a plugin that triggers builds on webhook requests.

Subhas Dandapani

Aug 1, 2014, 4:37:00 AM8/1/14
to rapi...@googlegroups.com
Hi John, I'm not particularly attached to Jenkins; it's just that we'll have to run our own CI (instead of a hosted one like Snap or Travis) for the deployments. It could also be GoCD!

So far, we've spiked a few things as part of this story:

- We were able to run Jenkins as a Docker container, with a shared volume that stores all the configurations/jobs. So we were able to destroy and re-create Jenkins (whenever we needed changes to the installed packages) without losing any data. It also makes it easy to port the CI to any other server (Stuart mentioned that we may need to shift our dev environments to Azure sometime, since they're offering discounts).

- We were also able to run Jenkins slaves as Docker containers which auto-attach to the master. So we can keep spawning any number of slaves, and Jenkins will keep accepting them. We were also able to give different slaves different labels/tags, to run real production deployment tasks on isolated instances.

- We used the Credentials and SSH Agent plugins to manage the SSH keys. Jenkins automatically starts a Java-based ssh-agent when the build starts and shuts it down when the build stops, so our deployment tasks need no changes or tweaks to manage the SSH keys.

- We used the RVM plugin in Jenkins, so it automatically downloads, installs and sets up RVM before the deployment. The deployment uses the Chef and Knife gems (instead of a globally installed ChefDK).

- Finally, we used the built-in triggers for the webhooks.

I'll also spike Go. There is a public Docker container for the Go server and Go agent, so I have to find out how to manage the SSH keys, install RVM, create a webhook, and finally save the Go configuration somewhere (maybe a shared volume) so that we can retain it.






--
- Subhas