Need help designing a fix for JENKINS-50504


m...@basilcrow.com

Mar 31, 2018, 2:28:49 PM
to Jenkins Developers
Hi all,

I just filed JENKINS-50504 to describe a bug that I hit a few times a month. In short, when the master's connection to an SSH slave times out and a new connection is opened, jobs still keep running under the old Remoting channel, but their workspaces get handed out to new jobs (because the logic that checks for a workspace being in use doesn't take this case into account), and then both jobs clobber each other and fail.

I have written a detailed evaluation of the issue in the bug. The cause of the problem is that WorkspaceList#inUse is a Map<FilePath, Entry>, and FilePath#equals requires the channels to match for two FilePaths to be considered equal. In my case, the channel reference of the proposed workspace is a new channel (because the node reconnected), while the entry in inUse references the old channel (because the job is still running under the old channel). As a result, the workspace is not considered to be in use and is handed out to a new job.
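
In sketch form, the equality check behaves roughly like this (simplified for illustration; not the verbatim core source):

```java
// Simplified sketch of the FilePath equality semantics described above;
// the real FilePath#equals in Jenkins core also handles nulls and
// subclassing, but the key point is the channel comparison.
@Override
public boolean equals(Object o) {
    if (!(o instanceof FilePath)) return false;
    FilePath that = (FilePath) o;
    // Both the remote path *and* the channel must match. Channel does not
    // override equals, so this is effectively an identity check: after a
    // reconnect, the running job's FilePath still references the old
    // channel, a FilePath for the same path on the new channel compares
    // as not equal, and the WorkspaceList#inUse lookup misses the entry.
    return this.remote.equals(that.remote)
            && this.channel == that.channel;
}
```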

Since this bug impacts me at least twice a month and takes down a large percentage of my Jenkins jobs, I would like to try and contribute a fix. However, I need help designing a solution. I can think of two ugly solutions:

1. When a node reconnects due to an I/O error, update the entries in WorkspaceList#inUse so that the map keys reference the new channel (a rough sketch follows this list). This would fix the bug. However, it seems ugly to use the new channel in the "in use" map, because the job is still technically running under the old channel.

2. Maintain a list of all channels that a given node has ever had open (including channels that got closed due to timeout). Then, when checking for a workspace being in use, construct a proposed FilePath for each one of those channels, and fail if any of them has an entry in the "in use" map. This design concerns me because of the potential for this list of old channels to grow in size without bound.
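
To make the first option concrete, here is a rough sketch. The method name rebindLeases is invented, and I'm assuming direct access to inUse and Entry, which are private in the real WorkspaceList, so an actual patch would have to live inside that class:

```java
// Hypothetical sketch of option 1: when a node reconnects, re-key the
// in-use workspace map onto the new channel. Imagined as a new method
// inside WorkspaceList itself.
synchronized void rebindLeases(VirtualChannel oldChannel, VirtualChannel newChannel) {
    Map<FilePath, Entry> remapped = new HashMap<>(inUse.size());
    for (Map.Entry<FilePath, Entry> e : inUse.entrySet()) {
        FilePath key = e.getKey();
        if (key.getChannel() == oldChannel) {
            // Same remote path, but bound to the node's new channel.
            key = new FilePath(newChannel, key.getRemote());
        }
        remapped.put(key, e.getValue());
    }
    inUse.clear();
    inUse.putAll(remapped);
}
```

Rebuilding the map, rather than mutating keys in place, matters because FilePath's hash code incorporates the channel.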

Could someone with more familiarity with Jenkins core weigh in with a better way to solve this problem? If so, I could try to submit a pull request.

Thanks in advance,
Basil

Ivan Fernandez Calvo

Apr 1, 2018, 9:47:02 AM
to Jenkins Developers
The proposed workaround could cause concurrency issues. I think the main question is why the agent is not disconnected and keeps the old connection; that is the most important thing to investigate. Did you check the open connections from the agent to the master with netstat? There should be two connections, the old one and a new one. Does the agent have more than one slave.jar process running? Are your agents VMs or bare metal? Did you tune your TCP stack with proper keepalive values?

m...@basilcrow.com

Apr 2, 2018, 4:08:28 PM
to Jenkins Developers
Hi Ivan,

Thanks for your reply. I'm not sure my proposed workaround would necessarily cause concurrency issues; doesn't that depend on how it's implemented? I agree that it's strange that the agent wasn't disconnected and still keeps the old connection to the master, even though new jobs use a new connection. Doesn't this violate the invariant implied by the implementation of WorkspaceList#inUse, which is that the entries in the map always represent the latest channel for a given node? This definitely seems like a core bug to me. I don't believe I should need to tune my TCP stack, because Pipeline claims to be resilient to network outages. If the master logs "SEVERE: I/O error in channel jenkins-node" and "INFO: Attempting to reconnect jenkins-node", then why do jobs continue running on the old connection, violating the invariant in WorkspaceList#inUse?

Thanks,
Basil

Oleg Nenashev

Apr 4, 2018, 4:26:29 AM
to Jenkins Developers
This issue seems to be Pipeline-specific (actually DurableTask-specific). Standard Freestyle jobs should abort immediately on agent disconnection, but Pipeline jobs may recover and continue using the workspace.


> However, it seems ugly to use the new channel in the "in use" map, because the job is still technically running under the old channel.

No, it should be running under the new channel. The old channel gets disposed, and Remoting 3.14+ adds some diagnostics for these cases (e.g. JENKINS-45294). Right now this causes issues in durable-task, which does not always recreate the FilePath and the underlying workspace (JENKINS-41854 and other similar issues with "Channel is closing or closed").
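
For illustration, a step that wants to survive reconnects could re-resolve the workspace against the node's current channel instead of holding on to the old FilePath. A minimal sketch, assuming the step stored the node name and remote path rather than the FilePath itself (none of this is existing durable-task code):

```java
// Hedged sketch: re-resolve a workspace on the node's *current* channel
// after a reconnect. nodeName and remotePath are assumed to have been
// persisted by the step instead of the FilePath.
static FilePath resolveWorkspace(String nodeName, String remotePath) throws AbortException {
    Node node = Jenkins.get().getNode(nodeName); // look up the node afresh
    if (node == null) {
        throw new AbortException("node " + nodeName + " no longer exists");
    }
    FilePath ws = node.createPath(remotePath);   // bound to the current channel
    if (ws == null) {
        throw new AbortException("node " + nodeName + " is offline");
    }
    return ws;
}
```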

WorkspaceList#inUse should certainly be reacquired by Pipeline when it reconnects to a new agent. I would guess that happens even now (or not?), but clearly there is potential for race conditions between recovered jobs and new submissions.

The proposed patch may help, although workspace management is not really the strongest part of Jenkins core. I would rather suggest redesigning it so that workspaces can be tracked independently of the node state (the proposed change does the same for a single cache). Better UI and workspace-release features could be added as extra value.
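
As a purely hypothetical illustration of that direction (WorkspaceKey is not an existing core type), the lease key could be made channel-independent:

```java
// Illustrative only: keying leases on node name + remote path rather than
// FilePath would let a lease survive channel churn across reconnects.
final class WorkspaceKey {
    final String nodeName;   // stable across reconnects
    final String remotePath; // absolute path on the agent

    WorkspaceKey(String nodeName, String remotePath) {
        this.nodeName = nodeName;
        this.remotePath = remotePath;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof WorkspaceKey)) return false;
        WorkspaceKey that = (WorkspaceKey) o;
        return nodeName.equals(that.nodeName) && remotePath.equals(that.remotePath);
    }

    @Override public int hashCode() {
        return 31 * nodeName.hashCode() + remotePath.hashCode();
    }
}
```

The in-use map would then be a Map<WorkspaceKey, Entry>, and the allocation check would no longer care which channel a still-running job happens to hold.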

BR, Oleg

Jesse Glick

Apr 4, 2018, 5:56:04 PM
to Jenkins Dev
On Wed, Apr 4, 2018 at 4:26 AM, Oleg Nenashev <o.v.ne...@gmail.com> wrote:
> WorkspaceList#inUse should be reacquired by Pipeline for sure when it
> reconnects to a new agent. I would guess it happens even now (or not?)

No, currently a lock is acquired only when a `node` (or `ws`) body is
started. I made a note in JENKINS-41854 about this.

m...@basilcrow.com

Apr 10, 2018, 8:42:20 PM
to Jenkins Developers
Thanks for pointing out JENKINS-45294. That is exactly what I am facing, at least twice a month. It causes severe disruption to my users, so I need to come up with a plan. I see that the bug is unassigned; if it isn't fixed soon, I may have to try to fix it myself out of necessity. I suppose the best way to start would be to write a test case that triggers the issue. Does the JenkinsRule test harness provide any functionality for setting up this kind of scenario? I see there are some existing tests that restart Jenkins, but I'm not sure how to write an automated test that makes a node disconnect and reconnect in the manner described in the bug. Any advice or pointers to existing code or tests would be appreciated.

m...@basilcrow.com

Apr 10, 2018, 8:45:29 PM
to Jenkins Developers
I meant "Thanks for pointing out JENKINS-41854" below.

Jesse Glick

Apr 11, 2018, 7:13:58 AM
to Jenkins Dev
There are some tests in `workflow-durable-task-step` which simulate broken connections as well as restarts, so if the issue is indeed reliably reproducible, you could probably do it that way.
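
For concreteness, the disconnect/reconnect mechanics under JenkinsRule might look something like the sketch below. This is not a working reproduction of the bug; the Pipeline definition and final assertion are placeholders, and it assumes a Unix agent for the `sh` step:

```java
import java.io.IOException;

import org.junit.Rule;
import org.junit.Test;
import org.jvnet.hudson.test.JenkinsRule;

import hudson.model.Label;
import hudson.slaves.DumbSlave;
import hudson.slaves.OfflineCause;

import org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition;
import org.jenkinsci.plugins.workflow.job.WorkflowJob;
import org.jenkinsci.plugins.workflow.job.WorkflowRun;

public class ReconnectWorkspaceTest {

    @Rule public JenkinsRule j = new JenkinsRule();

    @Test public void buildSurvivesChannelTermination() throws Exception {
        DumbSlave agent = j.createOnlineSlave(Label.get("remote"));

        WorkflowJob p = j.createProject(WorkflowJob.class, "p");
        // A long-running durable step so the channel can be cut mid-build.
        p.setDefinition(new CpsFlowDefinition(
                "node('remote') { sh 'sleep 30' }", true));
        WorkflowRun b = p.scheduleBuild2(0).waitForStart();

        // Simulate the I/O error described in the bug, then reconnect.
        agent.toComputer().disconnect(new OfflineCause.ChannelTermination(
                new IOException("simulated connection timeout"))).get();
        agent.toComputer().connect(true).get();
        j.waitOnline(agent);

        // The real test would assert here that a second build does not get
        // handed this build's workspace while it is still leased.
        j.assertBuildStatusSuccess(j.waitForCompletion(b));
    }
}
```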

A test case would certainly be a valuable contribution. I doubt there is a straightforward, localized fix; my proposed approach involves adding new APIs in core Pipeline code, somewhat subtle changes to multiple plugins, and an understanding of serialization semantics, including pickles.