[JIRA] (JENKINS-53668) hudson.remoting.ChannelClosedException


mgreco2k@gmail.com (JIRA)

Sep 19, 2018, 1:57:02 PM
to jenkinsc...@googlegroups.com
Michael Greco created an issue
 
Jenkins / Bug JENKINS-53668
hudson.remoting.ChannelClosedException
Issue Type: Bug
Assignee: Unassigned
Components: docker-plugin
Created: 2018-09-19 17:56
Environment: Jenkins 2.140
Docker Plugin 1.1.5
Docker version 18.06.1-ce, build e68fc7a
Ubuntu 18.04.1 LTS
Priority: Critical
Reporter: Michael Greco

We started using Docker Plugin 1.1.5 with Jenkins 2.140, and we are seeing this "channel closed down" message on almost every job:
[WORK] Cannot contact docker-000cu0ya5igsh: hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on docker-000cu0ya5igsh failed. The channel is closing down or has closed down
It always seems to happen while a large amount of work is being done in the container (for example, running "mvn test" where we have thousands of tests).

This message was sent by Atlassian Jira (v7.11.2#711002-sha1:fdc329d)

pjdarton@gmail.com (JIRA)

Feb 8, 2019, 8:12:02 AM
to jenkinsc...@googlegroups.com
pjdarton commented on Bug JENKINS-53668
 
Re: hudson.remoting.ChannelClosedException

I've experienced this kind of issue where I work.  In our case, the host ran out of memory and triggered the oom-killer, which then decided that the Jenkins "java -jar slave.jar" process (the one responsible for keeping the slave connected to the master) was the least important process and killed it.

The result was that, when things got busy, slaves died at random, despite doing nothing wrong themselves.

This was particularly caused by our use of certain software packages that decide how much memory they're going to allocate to themselves based on the amount of memory available ... and that look at the whole host's memory instead of the container's fair share of that memory.  It doesn't take many processes to each allocate themselves half of the host's entire RAM before things get tight and the oom-killer gets invoked.
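You can see why this happens from inside any container started without a memory limit: the kernel's memory figures are not namespaced, so software that sizes itself from "available memory" sees the whole host. A minimal check (assuming a Linux docker host; a JVM, for instance, defaults its max heap to a fraction of this figure):

```shell
# Inside a container with no --memory limit, /proc/meminfo reports the
# HOST's total RAM, not the container's fair share of it.
grep MemTotal /proc/meminfo
```

Run the same command inside several busy containers and it's clear how a few self-sizing processes can collectively promise themselves more RAM than the host has.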

 

Try turning off memory overcommit in your docker host, limiting the amount of memory available to each container, and limiting the number of containers you run concurrently.
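A sketch of those three mitigations together, with assumed numbers (32 GB host, cap of 6 containers) that you would replace with your own; `vm.overcommit_memory` and the `--memory`/`--memory-swap` flags are standard Linux/docker knobs, and the container cap corresponds to the docker-plugin's "Container Cap" setting:

```shell
# Divide host RAM among a fixed container cap so the containers' combined
# limits can never exceed physical memory and invoke the oom-killer.
HOST_RAM_MB=32768    # assumed host RAM
CONTAINER_CAP=6      # assumed max concurrent containers (docker-plugin "Container Cap")
PER_CONTAINER_MB=$(( HOST_RAM_MB / (CONTAINER_CAP + 1) ))  # +1 leaves headroom for the host itself
echo "$PER_CONTAINER_MB"

# Then, on the docker host (shown as comments; requires root and a running daemon):
#   sysctl vm.overcommit_memory=2   # strict accounting: no overcommit
#   docker run --memory="${PER_CONTAINER_MB}m" --memory-swap="${PER_CONTAINER_MB}m" ...
```

With a hard per-container limit in place, a runaway build is killed inside its own container rather than taking down the agent's slave.jar process.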

pjdarton@gmail.com (JIRA)

Sep 24, 2019, 8:56:02 AM
to jenkinsc...@googlegroups.com
pjdarton closed an issue as Incomplete
 

Without further information, it won't be possible to debug this.

If you're still experiencing this issue, and getting it with a recent version of the plugin, please add log information (see https://github.com/jenkinsci/docker-plugin/blob/master/CONTRIBUTING.md for hints about this) and re-open the issue.

Change By: pjdarton
Status: Open → Closed
Resolution: Incomplete