Using available pods in a Kubernetes deployment as slaves via Kubernetes plugin

Karthik Duddu

Apr 4, 2018, 7:36:41 AM
to Jenkins Developers
Hi all,

At my company, we use Jenkins to run builds with lots of parallel tasks, where the slaves for each task are provisioned from a private Kubernetes cluster. We have a very specific problem with these provisioned slaves: we'd like to reduce the time overhead of a Kubernetes slave to match that of a physical slave (or get as close as possible). Since our slave container itself has a non-trivial start-up time (after provisioning, but before registering with the Jenkins master), we're thinking of maintaining a Kubernetes Deployment of 'ready' slaves that register themselves with the master and are then removed from the Deployment when they're assigned a job; the rest of the lifecycle remains the same (that is, each slave is still used only once). This ensures we have a continuous supply of ready slaves, and we can also auto-scale the pool size to keep up with load.
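
For concreteness, here's a minimal sketch of the 'detach' step, assuming the pool Deployment selects pods on a pool=ready label (that label and the jenkins-agents namespace below are just placeholders, not what the plugin actually uses). Relabeling the pod takes it out of the ReplicaSet's selector, so the Deployment immediately spins up a fresh ready slave while the detached pod goes on to run its one build:

# Sketch: detach a 'ready' slave pod from the warm-pool Deployment by
# relabeling it out of the ReplicaSet's selector. The controller then
# backfills the pool, while this pod keeps its connection to the master
# and runs its single build. Label and namespace names are placeholders.
from kubernetes import client, config

def detach_agent_pod(pod_name, namespace="jenkins-agents"):
    config.load_incluster_config()  # load_kube_config() when run off-cluster
    v1 = client.CoreV1Api()
    # Strategic-merge patch that only overwrites the selector label; the
    # pod is otherwise untouched, so the running slave isn't disturbed.
    patch = {"metadata": {"labels": {"pool": "assigned"}}}
    v1.patch_namespaced_pod(name=pod_name, namespace=namespace, body=patch)

We patch rather than delete so the Deployment's replica count stays intact and the controller backfills the pool right away.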

We've tried this out internally by modifying the Kubernetes plugin a little to support this system, and are reasonably satisfied with the results. I have a couple of questions regarding this:

1. Is there a better way to reduce overhead? In our case, overhead essentially comprises provisioning request time + pod scheduling time + container setup + slave connect-back.

2. Does this use case fall within the realm of the Kubernetes plugin, or is it better off developed as a plugin dependent on this one?

Looking forward to feedback from y'all!

Thanks and regards,

Karthik Duddu