[JIRA] (JENKINS-37087) Label expressions and multiple labels per pod aren't handled properly


nehaljw.kkd1@gmail.com (JIRA)

Aug 1, 2016, 5:19:02 AM
to jenkinsc...@googlegroups.com
Nehal Wani created an issue
 
Jenkins / Bug JENKINS-37087
Label expressions and multiple labels per pod aren't handled properly
Issue Type: Bug
Assignee: Carlos Sanchez
Components: kubernetes-plugin
Created: 2016/Aug/01 9:18 AM
Environment: Operating System: RHEL 6
Java version: "1.8.0_45"
Jenkins version: Reproducible on both 1.651.2 and 2.7.1
Labels: plugin slave exception
Priority: Blocker
Reporter: Nehal Wani

Jenkins allows jobs to have label expressions of the sort:
(label1 || label2) && !(label3)

If the label expression is satisfied by any of the pod templates inside any of the Kubernetes clouds, the function provision() in KubernetesCloud.java thinks that it has received a single label instead of a label expression. When addProvisionedSlave() tries to get a count of all running containers with the given label, the Kubernetes API rejects the request with the following error and the job gets stuck in the queue:

WARNING: Failed to count the # of live instances on Kubernetes
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubemaster1.skae.tower-research.com/api/v1/namespaces/infra-build/pods?labelSelector=name%3Djenkins-(label1||label2)%26%26!(label3). Message: unable to parse requirement: invalid label value: must have at most 63 characters, matching regex (([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?: e.g. "MyValue" or "". Received status: Status(apiVersion=v1, code=500, details=null, kind=Status, message=unable to parse requirement: invalid label value: must have at most 63 characters, matching regex (([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?: e.g. "MyValue" or "", metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=null, status=Failure, additionalProperties={}).
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:310)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:263)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:232)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:416)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:58)
        at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud.addProvisionedSlave(KubernetesCloud.java:477)
        at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud.provision(KubernetesCloud.java:357)
        at hudson.slaves.NodeProvisioner$StandardStrategyImpl.apply(NodeProvisioner.java:700)
        at hudson.slaves.NodeProvisioner.update(NodeProvisioner.java:305)
        at hudson.slaves.NodeProvisioner.access$000(NodeProvisioner.java:58)
        at hudson.slaves.NodeProvisioner$NodeProvisionerInvoker.doRun(NodeProvisioner.java:797)
        at hudson.triggers.SafeTimerTask.run(SafeTimerTask.java:50)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
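
The validation rule is right there in the error message and is easy to check in isolation. A minimal standalone sketch (regex and 63-character limit copied from the error above):

{code:java}
import java.util.regex.Pattern;

public class LabelValueCheck {
    // Validation rule quoted in the Kubernetes error above
    private static final Pattern LABEL_VALUE =
            Pattern.compile("(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?");

    static boolean isValidLabelValue(String value) {
        return value.length() <= 63 && LABEL_VALUE.matcher(value).matches();
    }

    public static void main(String[] args) {
        // A plain label is fine; the raw expression is rejected
        System.out.println(isValidLabelValue("jenkins-label1"));                      // true
        System.out.println(isValidLabelValue("jenkins-(label1||label2)&&!(label3)")); // false
    }
}
{code}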

A quick way to fix this might be to launch containers with the combined labels defined in the pod template. For example, if the pod template has the labels label1 and label2, then we could spawn the container with the label name=jenkins-label1-label2, or something similar that satisfies the regex required by the Kubernetes API.
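
As a sketch of that idea (hypothetical helper, not the plugin's actual code):

{code:java}
public class CombinedLabel {
    // Hypothetical helper: turn a pod template's Jenkins label string
    // ("label1 label2") into a single Kubernetes-safe label value
    // ("jenkins-label1-label2").
    static String combinedLabel(String templateLabels) {
        String joined = String.join("-", templateLabels.trim().split("\\s+"));
        // Drop any character the label-value regex does not allow
        String value = "jenkins-" + joined.replaceAll("[^-A-Za-z0-9_.]", "");
        // Enforce the 63-character cap and end on an alphanumeric character
        value = value.substring(0, Math.min(63, value.length()));
        return value.replaceAll("[^A-Za-z0-9]+$", "");
    }

    public static void main(String[] args) {
        System.out.println(combinedLabel("label1 label2")); // jenkins-label1-label2
    }
}
{code}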

In the current code, if one pod template has more than one label, the container cap check for that template inside addProvisionedSlave() is wrong, since it counts only pods carrying the given label and not all possible labels for that particular pod template.

Also, if more than one pod template satisfies the given label expression, then all satisfying pod templates should be tried instead of only the first, since one of them might have reached the container cap while another might not have.
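
Both points could lean on Jenkins core's own label machinery instead of raw string comparison. A minimal sketch, assuming each pod template's labels are stored as one whitespace-separated string (helper names are hypothetical):

{code:java}
import hudson.model.Label;
import hudson.model.labels.LabelAtom;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class TemplateSelection {
    static Set<LabelAtom> atoms(String templateLabels) {
        return Arrays.stream(templateLabels.split("\\s+"))
                .map(LabelAtom::new)
                .collect(Collectors.toSet());
    }

    // Evaluate the job's label expression against each template's full label
    // set and keep every match, so a caller can fall through to the next
    // candidate when one template is already at its container cap.
    static List<String> matchingTemplates(Label expression, List<String> templates) {
        List<String> matches = new ArrayList<>();
        for (String t : templates) {
            if (expression.matches(atoms(t))) { // expression evaluation, not string equality
                matches.add(t);
            }
        }
        return matches;
    }
}
{code}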


jenkins-ci@carlossanchez.eu (JIRA)

Aug 21, 2016, 10:15:01 AM
to jenkinsc...@googlegroups.com
Carlos Sanchez commented on Bug JENKINS-37087
 
Re: Label expressions and multiple labels per pod aren't handled properly

Seems that

nehaljw.kkd1@gmail.com (JIRA)

Aug 21, 2016, 11:27:01 AM
to jenkinsc...@googlegroups.com
Nehal Wani commented on Bug JENKINS-37087
 
Re: Label expressions and multiple labels per pod aren't handled properly

Let's consider this hypothetical example:

Pod Template 1 supports: customtag1 customtag2 customtag3
Pod Template 2 supports: customtag3 customtag4 customtag5

And I have a job with the label expression:
customtag1 && customtag2


nehaljw.kkd1@gmail.com (JIRA)

Aug 21, 2016, 11:35:01 AM
to jenkinsc...@googlegroups.com
Nehal Wani edited a comment on Bug JENKINS-37087
Let's consider this hypothetical example:

Pod Template 1 supports: {code}customtag1 customtag2 customtag3{code}
Pod Template 2 supports: {code}customtag3 customtag4 customtag5{code}

And I have a job with the label expression: {code}(customtag1 && customtag3) || customtag6{code}

In such a scenario, what label should be passed to the Kubernetes cluster while creating the pod?

jenkins-ci@carlossanchez.eu (JIRA)

Aug 21, 2016, 11:43:01 AM
to jenkinsc...@googlegroups.com

First, you need to pick a template; I believe the docker plugin just picks the first one that matches, so in this case it will pick Pod Template 1.
Then the pod is started based on that template, so the pod labels will be customtag1 customtag2 customtag3.
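
For the example above, that selection can be reproduced with Jenkins core's expression parser (illustrative sketch only; the template label sets come from the previous comment):

{code:java}
import hudson.model.Label;
import hudson.model.labels.LabelAtom;

import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;

public class PickTemplate {
    static Set<LabelAtom> atoms(String labels) {
        return Arrays.stream(labels.split("\\s+"))
                .map(LabelAtom::new)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) throws Exception {
        Label expr = Label.parseExpression("(customtag1 && customtag3) || customtag6");
        System.out.println(expr.matches(atoms("customtag1 customtag2 customtag3"))); // true  -> picked first
        System.out.println(expr.matches(atoms("customtag3 customtag4 customtag5"))); // false
    }
}
{code}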

jenkins-ci@carlossanchez.eu (JIRA)

Sep 28, 2016, 7:07:02 AM
to jenkinsc...@googlegroups.com
Carlos Sanchez started work on Bug JENKINS-37087
 
Change By: Carlos Sanchez
Status: Open → In Progress

biethb@gmail.com (JIRA)

Dec 12, 2016, 10:41:01 AM
to jenkinsc...@googlegroups.com

Is this still open? I took a glance at the PR and couldn't find any tests; am I missing something?

nehaljw.kkd1@gmail.com (JIRA)

Dec 21, 2016, 5:09:02 AM
to jenkinsc...@googlegroups.com
Nehal J Wani stopped work on Bug JENKINS-37087
 
Change By: Nehal J Wani
Status: In Progress → Open

nehaljw.kkd1@gmail.com (JIRA)

Dec 21, 2016, 5:10:01 AM
to jenkinsc...@googlegroups.com
Nehal J Wani resolved as Fixed
Change By: Nehal J Wani
Status: Open → Resolved
Resolution: Fixed
