Need help debugging ChannelClosedExceptions


Daniel Weber

Nov 8, 2018, 7:13:07 AM
to Jenkins Users
Hi all,

(Note: This is a copy of my post that I originally sent to the jenkins-dev list by mistake.)

We are trying to find the root cause of the ChannelClosedExceptions we keep getting in about 10% of our builds (which is far too often).

In this scenario we use one-time agents deployed in AWS (instances are created and destroyed for each build). I set the default log level to FINEST on the agents (a sketch of how we do this is included after the bullet points below) and found this:

Nov 06, 2018 5:38:57 PM org.jenkinsci.remoting.protocol.impl.BIONetworkLayer$Reader run
FINER: [JNLP4-connect connection to frcv001/10.179.171.222:49394] Unexpected I/O exception
java.io.IOException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(Unknown Source)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.read(Unknown Source)
at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
at hudson.remoting.SocketChannelStream$1.read(SocketChannelStream.java:35)
at sun.nio.ch.ChannelInputStream.read(Unknown Source)
at sun.nio.ch.ChannelInputStream.read(Unknown Source)
at sun.nio.ch.ChannelInputStream.read(Unknown Source)
at java.nio.channels.Channels$ReadableByteChannelImpl.read(Unknown Source)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer$Reader.run(BIONetworkLayer.java:245)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:93)
at java.lang.Thread.run(Unknown Source)

Nov 06, 2018 5:38:57 PM org.jenkinsci.remoting.protocol.NetworkLayer onRecvClosed
FINEST: [JNLP4-connect connection to frcv001/10.179.171.222:49394] RECV Closed
Nov 06, 2018 5:38:57 PM org.jenkinsci.remoting.protocol.FilterLayer onRecvClosed
FINEST: [JNLP4-connect connection to frcv001/10.179.171.222:49394] RECV Closed
Nov 06, 2018 5:38:57 PM org.jenkinsci.remoting.protocol.ApplicationLayer onRecvClosed
FINE: [JNLP4-connect connection to frcv001/10.179.171.222:49394] RECV Closed
Nov 06, 2018 5:38:57 PM com.trilead.ssh2.log.Logger log
FINEST: Received SSH_MSG_CHANNEL_DATA 64511 bytes payload
Nov 06, 2018 5:38:57 PM com.trilead.ssh2.log.Logger log
FINEST: Got SSH_MSG_CHANNEL_DATA (channel 100, 64502)
Nov 06, 2018 5:38:57 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Terminated

and then later

FINER: HandleMessage: got shutdown
Nov 06, 2018 5:38:57 PM hudson.remoting.Request$2 run
WARNING: Failed to send back a reply to the request hudson.remoting.Request$2@503986c5
hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@798747f3:JNLP4-connect connection to frcv001/10.179.171.222:49394": channel is already closed
at hudson.remoting.Channel.send(Channel.java:717)
at hudson.remoting.Request$2.run(Request.java:382)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:93)
at java.lang.Thread.run(Unknown Source)
Caused by: java.nio.channels.ClosedChannelException
at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer.access$1800(BIONetworkLayer.java:48)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer$Reader.run(BIONetworkLayer.java:264)
... 4 more

The agents successfully reconnect after this, but that does not help us because by that time the build has already failed and the machine will be destroyed.

  • agent.jar is downloaded from the master before startup to make sure the agents use the correct version
  • I did not find any log messages indicating that the ping thread terminated the connection
  • We are currently using Jenkins LTS 2.121.3 with Remoting 3.21.1
  • Both master and agents are running under Windows Server 2012.
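
In case it helps, this is roughly how we fetch agent.jar and enable the FINEST logging mentioned above. It is only a sketch of our setup: the paths and the <...> values are placeholders, and it assumes the agent.jar that ships with Remoting 3.x still accepts the -loggingConfig option.

logging.properties (standard java.util.logging format, placed on the agent before startup):

handlers=java.util.logging.FileHandler
java.util.logging.FileHandler.pattern=C:/jenkins/remoting-agent-%g.log
java.util.logging.FileHandler.level=FINEST
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
.level=FINEST

Agent startup (batch, simplified):

rem download the agent jar that matches the master, then launch it as a JNLP agent
curl -o agent.jar https://<master>/jnlpJars/agent.jar
java -jar agent.jar -jnlpUrl https://<master>/computer/<agent>/slave-agent.jnlp -secret <secret> -loggingConfig C:\jenkins\logging.properties
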
Any help / hints would be greatly appreciated. I'd also be happy to open an issue if you think it makes sense :)

Best Regards,
Daniel

Viacheslav Dubrovskyi

Nov 8, 2018, 7:25:25 AM
to jenkins...@googlegroups.com

Hi Daniel,

Did you read https://wiki.jenkins.io/display/JENKINS/Remoting+issue ?

On 08.11.2018 14:13, Daniel Weber wrote:
-- 
WBD,
Viacheslav Dubrovskyi

Daniel Weber

Nov 8, 2018, 5:34:26 PM
to Jenkins Users
Hi Viacheslav,

thanks for your reply!

Yes, I know that wiki page. But I checked the logs and saw no indication that the ping thread fails, and we're not using SSH agents, so it did not help either. We nevertheless disabled the ping thread on one of our masters, but we still see this issue.
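
For reference, this is roughly how we disabled it. The property names below are from memory, so please double-check them; if I remember correctly, a non-positive value disables the respective ping thread:

rem on the master (e.g. added to the Jenkins service/launcher arguments)
-Dhudson.slaves.ChannelPinger.pingIntervalSeconds=-1

rem on the agent, when launching agent.jar
-Dhudson.remoting.Launcher.pingIntervalSec=0
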

BR,
Daniel

Martin d'Anjou

Nov 8, 2018, 10:19:15 PM
to Jenkins Users
Hi,

I cannot help directly, but last summer we had a GSoC project to implement remoting over Kafka as a fault-tolerant communication layer in Jenkins.
If you keep having communication breakdowns, you could head over to the Remoting Gitter chat channel and ask about this plugin.
I agree it would be better to find the root cause of the problem, though.

Martin

Daniel Weber

Nov 20, 2018, 5:12:12 AM
to Jenkins Users
Hi Martin,

thanks for your reply. In what state is that alternative remoting implementation?
We keep having these issues; one reason seems to be bandwidth limitations between our company network and AWS.
But that's not the only one: we also run into this on premises, without AWS being in the picture.

BR,
Daniel