[JIRA] (JENKINS-50735) Using pods in a Kubernetes deployment as slaves via Kubernetes plugin


kar.meher95@gmail.com (JIRA)

Apr 11, 2018, 7:43:02 AM
to jenkinsc...@googlegroups.com
Karthik Duddu created an issue
 
Jenkins / New Feature JENKINS-50735
Using pods in a Kubernetes deployment as slaves via Kubernetes plugin
Issue Type: New Feature
Assignee: Carlos Sanchez
Components: kubernetes-plugin
Created: 2018-04-11 11:42
Environment: Jenkins v2.89.2
Kubernetes plugin v1.3.3
Priority: Major
Reporter: Karthik Duddu

At my company, we use Jenkins to run builds with lots of parallel tasks, where the slaves for each task are provisioned from a private Kubernetes cluster. We have a very specific problem with these provisioned slaves: we'd like to reduce the time overhead of a Kubernetes slave to match that of a physical slave (or get as close as possible). Since our slave container itself has a non-trivial start-up time (after provisioning, but before registering with the Jenkins master), we're thinking of maintaining a Kubernetes deployment of 'ready' slaves that register themselves with the master, and then are removed from the deployment when they're assigned a job; the rest of the lifecycle remains the same (that is, the slaves are still used only once). This ensures that we have a continuous supply of ready slaves, and we can also use pool size auto-scaling to keep up with load.

We've tried this out internally by modifying the Kubernetes plugin a little to support this system, and are reasonably satisfied with the results. I have a couple of questions with regard to this:

1. Is there a better way to reduce overhead? In our case, overhead essentially comprises provisioning request time + pod scheduling time + container setup + slave connect-back.

2. Does this use-case fall within the realm of the Kubernetes plugin, or is it better off developed as a plugin dependent on this one?

This message was sent by Atlassian JIRA (v7.3.0#73011-sha1:3c73d0e)

jenkins-ci@carlossanchez.eu (JIRA)

Apr 11, 2018, 10:59:02 AM
to jenkinsc...@googlegroups.com
Carlos Sanchez commented on New Feature JENKINS-50735
 
Re: Using pods in a Kubernetes deployment as slaves via Kubernetes plugin

Provisioning request time tends to 0 using the config in https://github.com/jenkinsci/kubernetes-plugin/#over-provisioning-flags

Other than your specific container start-up time, I don't see any of the other steps adding much time.

And then you have "Time in minutes to retain slave when idle", which would keep the agents around, but I'm guessing you don't want to reuse them.
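
For reference, the over-provisioning flags behind that link are Jenkins system properties set when starting the master. A minimal sketch (the values are the examples from the plugin README, and `JENKINS_JAVA_OPTS` is just an illustrative variable name; tune the margins for your own load):

```shell
# Jenkins master start-up flags from the kubernetes-plugin README
# ("over-provisioning flags"): react immediately when jobs enter the
# queue and provision agents more aggressively.
JENKINS_JAVA_OPTS="-Dhudson.slaves.NodeProvisioner.initialDelay=0 \
-Dhudson.slaves.NodeProvisioner.MARGIN=50 \
-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85"
# These would then be passed to the master, e.g.:
#   java $JENKINS_JAVA_OPTS -jar jenkins.war
echo "$JENKINS_JAVA_OPTS"
```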

kar.meher95@gmail.com (JIRA)

Apr 11, 2018, 12:47:02 PM
to jenkinsc...@googlegroups.com

Perhaps I should've explained a little more: we're using the Pipeline plugin suite of Jenkins, and there's an initial start-up cost associated with the shell step of the pipeline. The time consumed broke down roughly as follows:

Breakdown of execution time taken by the different steps of a simple `echo hello` shell step, when run in a pipeline with an unmodified version of the Kubernetes plugin:

|Order of execution|Operation|Time (in secs)|
|---|---|---|
|0|Job start request|1|
|1|Job start|3|
|2|Request for pod|10|
|3|k8s scheduling + startup|5|
|4|Slave registration and info exchange|5|
|5|Agent connection|20|
|6|Script setup|51|
|7|Script execution|8|
|8|Job end|0|

kar.meher95@gmail.com (JIRA)

Apr 11, 2018, 12:50:02 PM
to jenkinsc...@googlegroups.com
Karthik Duddu edited a comment on New Feature JENKINS-50735
Perhaps I should've explained a little more: we're using the Pipeline plugin suite of Jenkins, and there's an initial start-up cost associated with the shell step of the pipeline. The time consumed broke down roughly as follows:

Breakdown of execution time taken by the different steps of a simple `echo hello` shell step, when run in a pipeline with an unmodified version of the Kubernetes plugin:

|Order of execution|Operation|Time (in secs)|
|---|---|---|
|0|Job start request|1|
|1|Job start|3|
|2|Request for pod|10|
|3|k8s scheduling + startup|5|
|4|Slave registration and info exchange|5|
|5|Agent connection|20|
|6|Script setup|51|
|7|Script execution|8|
|8|Job end|0|


We've already set the over-provisioning flags when starting up the instance. Using a deployment lets us eliminate steps 2-6 (just after connection, we run a small script on the slave to finish its start-up), and provides an overhead roughly similar to that of physical slaves.
 

kar.meher95@gmail.com (JIRA)

Apr 11, 2018, 12:53:02 PM
to jenkinsc...@googlegroups.com

Also, as you mentioned, we don't want to reuse slaves, which kind of eliminates "Time in minutes to retain slave when idle" as an option.

kar.meher95@gmail.com (JIRA)

Apr 11, 2018, 12:54:01 PM
to jenkinsc...@googlegroups.com
Karthik Duddu updated an issue
 
Change By: Karthik Duddu
At my company, we use Jenkins to run builds with lots of parallel tasks using the Pipeline suite of plugins, and the slaves for each task are provisioned from a private Kubernetes cluster. We have a very specific problem with these provisioned slaves: we'd like to reduce the time overhead of a Kubernetes slave to match that of a physical slave (or get as close as possible). Since our slave container itself has a non-trivial start-up time (after provisioning, but before registering with the Jenkins master), we're thinking of maintaining a Kubernetes deployment of 'ready' slaves that register themselves with the master, and then are removed from the deployment when they're assigned a job; the rest of the lifecycle remains the same (that is, the slaves are still used only once). This ensures that we have a continuous supply of ready slaves, and we can also use pool size auto-scaling to keep up with load.


We've tried this out internally by modifying the Kubernetes plugin a little to support this system, and are reasonably satisfied with the results. I have a couple of questions with regard to this:

1. Is there a better way to reduce overhead? In our case, overhead essentially comprises provisioning request time + pod scheduling time + container setup + slave connect-back + pipeline setup time.


2. Does this use-case fall within the realm of the Kubernetes plugin, or is it better off developed as a plugin dependent on this one?

jenkins-ci@carlossanchez.eu (JIRA)

Apr 12, 2018, 4:07:03 AM
to jenkinsc...@googlegroups.com
Carlos Sanchez commented on New Feature JENKINS-50735
 
Re: Using pods in a Kubernetes deployment as slaves via Kubernetes plugin

So if you don't want agents to be provisioned per job, then you just have "external" agents connected and you manage them yourself (with a deployment, for instance). You just need to kill them after each job.

You can use the swarm plugin for authentication: https://plugins.jenkins.io/swarm (see also https://www.infoq.com/articles/scaling-docker-kubernetes-v1).
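
A minimal sketch of what that could look like as a Kubernetes Deployment of pre-started swarm agents (the image, replica count, credentials, and master URL below are placeholders for illustration, not a tested configuration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-swarm-agents
spec:
  replicas: 4                     # size of the 'ready' agent pool
  selector:
    matchLabels:
      app: jenkins-swarm-agent
  template:
    metadata:
      labels:
        app: jenkins-swarm-agent
    spec:
      containers:
      - name: agent
        image: csanchez/jenkins-swarm-slave      # placeholder swarm-client image
        args: ["-master", "http://jenkins:8080", # placeholder master URL
               "-username", "jenkins",           # placeholder credentials
               "-password", "changeme",
               "-executors", "1"]
```

Each pod runs the swarm client, which registers itself with the master; killing a pod after its job lets the Deployment replace it with a fresh 'ready' agent.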

jinyuping@gmail.com (JIRA)

Apr 15, 2019, 10:45:01 PM
to jenkinsc...@googlegroups.com

Managing slaves without this plugin is more complex. However, being able to pre-launch a certain number of slave pods reduces build times significantly. I'm in the same situation and had to set "Time in minutes to retain slave when idle" to keep some pods running and reused. Still, the problem is how to launch the slave pods initially; triggering it with some builds is awkward.

I tried to use a separate deployment for slaves, but I ran into a problem with JENKINS_AGENT_NAME and have no idea how to handle it: a random name gets an "Unknown client name" error. Karthik Duddu, is it possible to share your customization of the Kubernetes plugin?

Appreciate your help!

 


jinyuping@gmail.com (JIRA)

Apr 16, 2019, 2:42:02 AM
to jenkinsc...@googlegroups.com

Carlos Sanchez, I tried to add a permanent JNLP agent according to https://support.cloudbees.com/hc/en-us/articles/360004695871-Create-dedicated-agents-running-Kubernetes. While running a pipeline defined for the Kubernetes plugin, I got "ERROR: Node is not a Kubernetes node". There is not much discussion about it on Google. Is this correct usage?

 Thanks and best regards,

jenkins-ci@carlossanchez.eu (JIRA)

Apr 16, 2019, 6:44:02 AM
to jenkinsc...@googlegroups.com

As it is now, you can't run the container and other steps from this plugin on a node that was not created by it.


jinyuping@gmail.com (JIRA)

Apr 16, 2019, 11:16:03 PM
to jenkinsc...@googlegroups.com

Carlos Sanchez, is there a plan to add support for pre-launching a certain number of pods? Since it already has support for retaining pods for reuse, this sounds like a natural extension.

This plugin is so nice; it integrates with Kubernetes perfectly. For us, the only concern is the time spent waiting for pod creation/scheduling/connection.

Thanks and best regards,

jenkins-ci@carlossanchez.eu (JIRA)

Apr 17, 2019, 6:16:01 AM
to jenkinsc...@googlegroups.com

There are no plans.
It's also not trivial to do, because the Jenkins "cloud API" is only called when there are new jobs in the queue; I'm not sure if there's another way I'm not aware of.
What you could do is submit a PR to make sure the container and other steps work on pre-launched agents; you would need to fix some things here: https://github.com/jenkinsci/kubernetes-plugin/blob/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/pipeline/KubernetesNodeContext.java#L50


jglick@cloudbees.com (JIRA)

Jul 16, 2019, 3:43:24 PM
to jenkinsc...@googlegroups.com
Jesse Glick assigned an issue to Unassigned
 
Change By: Jesse Glick
Assignee: Carlos Sanchez