[JIRA] (JENKINS-48955) master-slave connection getting terminated once in every 12 hours and recovered after 1 minute


eugene@chepurniy.com (JIRA)

Aug 15, 2018, 3:23:02 AM8/15/18
to jenkinsc...@googlegroups.com
Eugene Chepurniy commented on Bug JENKINS-48955
 
Re: master-slave connection getting terminated once in every 12 hours and recovered after 1 minute

The same behavior was found here:

ERROR: Connection terminated
java.io.EOFException
        at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2680)
        at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3155)
        at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:861)
        at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
        at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:48)
        at hudson.remoting.Command.readFrom(Command.java:140)
        at hudson.remoting.Command.readFrom(Command.java:126)
        at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:36)
        at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
        at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
ERROR: Socket connection to SSH server was lost
java.net.SocketTimeoutException: The connect timeout expired
        at com.trilead.ssh2.Connection$1.run(Connection.java:762)
        at com.trilead.ssh2.util.TimeoutService$TimeoutThread.run(TimeoutService.java:91)
Slave JVM has not reported exit code before the socket was lost
[08/15/18 06:37:34] [SSH] Connection closed.

Jenkins app: 2.136
SSH Agent Plugin: 1.16
SSH Slaves Plugin: 1.26

The EC2 Fleet Plugin (v1.1.7) was used to provision the agents.
No network issues were found; the failure reproduces frequently, 20-30 minutes after a successful agent startup. The AWS spot instances were not stopped or terminated during the observed failures.

While this bug is being investigated, is there any way to make failed stages restart transparently in case of agent connectivity issues?
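As a stopgap for the question above, one workaround (a sketch, not an official fix for this issue) is to wrap the agent-bound work in Pipeline's `retry` step, so an exception from a dropped channel re-acquires a node and re-runs the body; the label `linux` and the build steps are placeholders:

```groovy
// Scripted Pipeline sketch: retry the whole node block if the agent
// channel dies mid-build. Label and steps are illustrative.
retry(3) {
    node('linux') {
        checkout scm
        sh './build.sh'
    }
}
```

Note that `retry` re-runs the entire body, so the steps inside must be safe to repeat, and on older cores some kinds of channel loss may still abort the build outside the step's control.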

This message was sent by Atlassian JIRA (v7.10.1#710002-sha1:6efc396)

g.prakash09@gmail.com (JIRA)

Aug 15, 2018, 4:06:03 AM8/15/18
to jenkinsc...@googlegroups.com

In my case the problem was a duplicate IP on the network: the slave's IP was taken by another device at that particular time, so the master's ARP resolution pointed at that unknown host rather than at the slave. Please check your network and monitor connectivity to the master during the failure window.

eugene@chepurniy.com (JIRA)

Aug 15, 2018, 4:12:02 AM8/15/18
to jenkinsc...@googlegroups.com

Prakash G, thanks for the comment, but in my case the IPs are managed by AWS, so overlapping addresses are impossible.

kuisathaverat@gmail.com (JIRA)

Aug 21, 2018, 11:36:02 AM8/21/18
to jenkinsc...@googlegroups.com
Ivan Fernandez Calvo commented on Bug JENKINS-48955
 
Re: master-slave connection getting terminated once in every 12 hours and recovered after 1 minute

Eugene Chepurniy, this issue is not related to the trace you attached; please open a new issue with those details if the problem persists. The problem looks like a timeout between the Jenkins instance and the agent. These could be the root causes:

  • A network issue between the Jenkins instance and the agent; if they are on different networks, check with your IT team that there are no problems.
  • The Jenkins instance is under heavy load, so the agent loses its connection to it.
  • The agent is under heavy load, so the Jenkins instance loses its connection to it.
  • The agent process (slave.jar) dies; check the latest build logs on that agent and the agent logs. If you do not have agent logs, see https://github.com/jenkinsci/remoting/blob/master/docs/workDir.md to enable them.
  • An OOM error on the agent; check that there are no hs_err_pid files in the agent's workdir. If your agents have little memory, try passing -Xmx256m and -Xms256m to the agent JVM options to fix the heap size; 128MB-256MB used to be enough for an agent with 10 executors.
  • Check the agent's syslog and kernel log for network issues or other kinds of performance problems.

In any case, without the logs on the agent side it is difficult to know what happened.
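The agent-side checks in that list can be partially automated; the sketch below (the work-dir path is an assumption, adjust it to where your agent actually runs, and the remoting log folder only exists if the agent was launched with -workDir as described in the link above) looks for JVM fatal-error files and remoting logs:

```shell
# Sketch of the agent-side checks from the list above.
# The default path is a placeholder; pass your real agent root instead.
check_agent_crashes() {
    dir="$1"
    # hs_err_pid* files are JVM fatal-error reports (OOM, native crash)
    find "$dir" -maxdepth 2 -name 'hs_err_pid*' 2>/dev/null || true
    # remoting work-dir logs, if enabled (see the workDir.md link above)
    if [ -d "$dir/remoting/logs" ]; then
        ls "$dir/remoting/logs"
    fi
    return 0
}

check_agent_crashes "${AGENT_DIR:-/home/jenkins/agent}"
```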

eugene@chepurniy.com (JIRA)

Aug 22, 2018, 3:07:01 AM8/22/18
to jenkinsc...@googlegroups.com
Eugene Chepurniy commented on Bug JENKINS-48955
 
Re: master-slave connection getting terminated once in every 12 hours and recovered after 1 minute

Ivan Fernandez Calvo, thanks for your responses.

  • Any kind of network issue is excluded (or has a very low probability): both the server and the agents are in the same AWS VPC with 10-gigabit networking enabled. Most of the time (99.99%) the agents perform well without any issues.
  • There are only 2 executors per agent, and each agent is an m4.xlarge instance (16 GB of RAM). The Jenkins agent starts with the default config.
  • No OOMs or agent crashes were spotted.
  • I'm going to follow your suggestion and turn agent logs on to see if additional information can be gathered.
  • And yes, we have a pretty high load on the agents, but I'm not sure it is heavy enough to cause SSH connection interruptions.

martin.stiborsky@gmail.com (JIRA)

Oct 4, 2018, 2:31:01 PM10/4/18
to jenkinsc...@googlegroups.com

Hi, I'd like to join this party. We have exactly the same problem as described here. Everything matches: our agent EC2 instances also run on m4.xlarge (and some other instance types), and all the other details match our environment.

From time to time some slaves just disconnect, leaving the build failed and the developers frustrated. We have been trying to find out what's wrong for a good couple of weeks.

Any hints appreciated.


kuisathaverat@gmail.com (JIRA)

Oct 5, 2018, 3:46:03 AM10/5/18
to jenkinsc...@googlegroups.com

Martin Stiborský, if you are using the latest version you can check the remoting logs: in the agent's working folder there should be a folder named remoting that contains the log files.

martin.stiborsky@gmail.com (JIRA)

Oct 5, 2018, 3:53:02 AM10/5/18
to jenkinsc...@googlegroups.com
Martin Stiborský edited a comment on Bug JENKINS-48955
Ivan Fernandez Calvo, yes, I found that. There is the same error as logged on the master:

Sep 03, 2018 12:56:07 PM hudson.remoting.SynchronousCommandTransport$ReaderThread run
SEVERE: I/O error in channel channel
java.io.IOException: Unexpected termination of the channel
        at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
Caused by: java.io.EOFException
        at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2681)
        at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3156)
        at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:862)
        at java.io.ObjectInputStream.<init>(ObjectInputStream.java:358)
        at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
        at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:36)
        at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)

eugene@chepurniy.com (JIRA)

Oct 5, 2018, 6:23:02 AM10/5/18
to jenkinsc...@googlegroups.com
Eugene Chepurniy edited a comment on Bug JENKINS-48955
We are still experiencing the described problems.
What was done, among other actions:
1. The ping thread was disabled in Jenkins (https://wiki.jenkins.io/display/JENKINS/Ping+Thread).
2. SELinux was completely disabled on the slaves (getenforce outputs `Disabled`).
3. All possible timeouts were increased.
4. The Java version was set to be the same on the agents and the server.

The most helpful action was disabling SELinux: the number of SSH failures decreased by a factor of 10.
Martin Stiborský FYI.
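For reference, the ping-thread disable from step 1 is done with the system properties documented on the Ping Thread wiki page linked above; the lines below are an illustrative launch config, not commands specific to this environment:

```shell
# Controller side: disable the channel ping thread (-1 disables it)
java -Dhudson.slaves.ChannelPinger.pingIntervalSeconds=-1 -jar jenkins.war

# Agent side: disable the remoting ping thread
# (the usual agent connection arguments are omitted here)
java -Dhudson.remoting.Launcher.pingIntervalSec=-1 -jar slave.jar
```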


kuisathaverat@gmail.com (JIRA)

Oct 5, 2018, 7:14:02 AM10/5/18
to jenkinsc...@googlegroups.com

On the latest version you can disable TCP_NODELAY in the UI; try that too. I am working on a version that uses the native SSH client, which will fix this kind of issue, but it will take a couple of months.

eugene@chepurniy.com (JIRA)

Oct 8, 2018, 7:39:02 AM10/8/18
to jenkinsc...@googlegroups.com

Ivan Fernandez Calvo, I'm going to give this solution a chance and will provide feedback here. Thanks for staying in touch.

kuisathaverat@gmail.com (JIRA)

Feb 1, 2020, 12:12:04 PM2/1/20
to jenkinsc...@googlegroups.com
Status: Fixed but Unreleased → Closed

ianfixes@gmail.com (JIRA)

Feb 3, 2020, 8:09:04 AM2/3/20
to jenkinsc...@googlegroups.com
Ian Katz commented on Bug JENKINS-48955
 
Re: master-slave connection getting terminated once in every 12 hours and recovered after 1 minute

I see that this has been closed as "can't reproduce". Assuming you could reproduce the issue, what data would you record? I can try to collect that and open a new issue with the requested data the next time I experience this problem (which happens regularly).

kuisathaverat@gmail.com (JIRA)

Feb 3, 2020, 11:04:03 AM2/3/20
to jenkinsc...@googlegroups.com

I usually request this info ("Common info needed to troubleshoot a bug") to try to replicate the issue, but in this case I guess the 4th point would give you the answer directly: when the issue happens, there should be an entry in the SSHD logs that explains why the agent disconnected.
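On that 4th point, a quick way to pull sshd's own view of the disconnect out of the agent's logs is a grep like the sketch below; the log path varies by distro (/var/log/auth.log on Debian/Ubuntu, /var/log/secure on RHEL) and the pattern is only a rough filter:

```shell
# Grep the agent's SSHD log for disconnect/timeout entries around a failure.
# Path and pattern are illustrative; adjust for your distro and log format.
sshd_disconnects() {
    grep -iE 'sshd.*(disconnect|timeout|connection closed)' "$1" 2>/dev/null || true
}

sshd_disconnects "${SSHD_LOG:-/var/log/auth.log}"
```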
