[JIRA] (JENKINS-61245) Nested node blocks referring to the same node take two (or more) executors

rjfenton@mtu.edu (JIRA)

Feb 26, 2020, 3:08:02 PM
to jenkinsc...@googlegroups.com
Ryan Fenton-Garcia created an issue
 
Jenkins / Bug JENKINS-61245
Nested node blocks referring to the same node take two (or more) executors
Issue Type: Bug
Assignee: Unassigned
Components: pipeline
Created: 2020-02-26 20:07
Environment: Jenkins 2.204.2
Pipeline 2.6
Windows Server 2016
Labels: scripted pipeline
Priority: Minor
Reporter: Ryan Fenton-Garcia

We have a pipeline wherein there is a function that needs to be called from multiple places, where the code within that function needs to run on the node "master". So we have:

def someFunction() {
  // maybe do some stuff

  node("master") {
    // definitely do some stuff that needs master
  }
}

What we recently found out is that when this function is called from code already running on master, it locks not just the executor it is already using on master but an additional executor as well.

Investigating further, we found that even something as simple as...

node("master") {
  node("master") {
    sleep(600)
  }
}

...in a test job actually locks two executors on master (if more than one executor is available), or hangs indefinitely with "Waiting for next available executor on 'master'", even though the only thing occupying master's single executor is this very block of code.

I suspect this isn't specific to master: it likely applies to any code that requests a node by a label it may already be running under. A possible workaround for code that needs to execute in this fashion is to write all such blocks like this:

if (env.NODE_NAME != "master") {
  node("master") {
    // do my code
  }
} else {
  // do my code
}

Or else wrap this into a helper of its own and call something like nodeIfNotAlreadyOn("master") instead.
But this isn't exactly very nice, and it's not the behaviour I was initially expecting.
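A minimal sketch of such a helper, for a scripted pipeline. Note that nodeIfNotAlreadyOn is a hypothetical name, not a built-in step, and that comparing env.NODE_NAME to the label only works when the label is literally a node name; a real implementation would need proper label matching:

{code:groovy}
// Hypothetical helper: only ask for a new executor when we are not
// already running on the requested node; otherwise run the body inline.
def nodeIfNotAlreadyOn(String label, Closure body) {
  if (env.NODE_NAME == label) {
    // Already on the requested node: run in place,
    // without claiming a second executor.
    body()
  } else {
    node(label) {
      body()
    }
  }
}

// Usage:
// nodeIfNotAlreadyOn("master") {
//   // do the stuff that needs master
// }
{code}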

This message was sent by Atlassian Jira (v7.13.12#713012-sha1:6e07c38)

rjfenton@mtu.edu (JIRA)

Feb 26, 2020, 3:09:02 PM
to jenkinsc...@googlegroups.com
Ryan Fenton-Garcia updated an issue

rjfenton@mtu.edu (JIRA)

Feb 26, 2020, 3:11:03 PM
to jenkinsc...@googlegroups.com
Ryan Fenton-Garcia updated an issue

rjfenton@mtu.edu (JIRA)

Feb 26, 2020, 3:12:02 PM
to jenkinsc...@googlegroups.com
Ryan Fenton-Garcia updated an issue

rjfenton@mtu.edu (JIRA)

Feb 26, 2020, 3:13:02 PM
to jenkinsc...@googlegroups.com
Ryan Fenton-Garcia updated an issue

rjfenton@mtu.edu (JIRA)

Feb 26, 2020, 3:15:03 PM
to jenkinsc...@googlegroups.com
Ryan Fenton-Garcia updated an issue
Furthermore, I honestly can't think of a use case where you would ever want the exact same label to claim a second, explicitly different executor at the same time (especially when the first executor holding that label just sits locked, waiting for the code on the second executor to finish), which is why this really feels like a bug to me.