Frequent "Disconnected computer for node" messages in jenkins logs


Vincent Massol

Feb 14, 2020, 4:49:19 AM
to Jenkins Users
Hi guys.

Is anyone else having problems with "Disconnected computer for node" happening all the time, resulting in the Jenkins master killing agent nodes?

We're getting that all the time it seems for https://ci.xwiki.org


2020-02-14 09:10:59.040+0000 [id=268440] INFO c.n.j.p.d.DockerContainerWatchdog#loadNodeMap: We currently have 7 nodes assigned to this Jenkins instance, which we will check
2020-02-14 09:10:59.040+0000 [id=268440] INFO c.n.j.p.d.DockerContainerWatchdog#execute: Checking Docker Cloud docker-a3 at tcp://xxx
2020-02-14 09:10:59.047+0000 [id=268440] INFO c.n.j.p.d.DockerContainerWatchdog#execute: Checking Docker Cloud docker-a4 at tcp://xxx
2020-02-14 09:10:59.057+0000 [id=268440] INFO c.n.j.p.d.DockerContainerWatchdog#execute: Checking Docker Cloud docker-a5 at tcp://xxx
2020-02-14 09:10:59.083+0000 [id=268440] INFO c.n.j.p.d.DockerContainerWatchdog#execute: Checking Docker Cloud docker-a6 at tcp://xxx
2020-02-14 09:10:59.109+0000 [id=268440] INFO c.n.j.p.d.DockerContainerWatchdog#execute: Checking Docker Cloud docker-a7 at tcp://xxx
2020-02-14 09:10:59.132+0000 [id=268440] INFO c.n.j.p.d.DockerContainerWatchdog#execute: Docker Container Watchdog check has been completed
2020-02-14 09:10:59.133+0000 [id=268440] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$0: Finished DockerContainerWatchdog Asynchronous Periodic Work. 93 ms
2020-02-14 09:13:37.434+0000 [id=268432] INFO i.j.docker.DockerTransientNode$1#println: Disconnected computer for node 'Jenkins SSH Slave a3-0094ebcvu7jkf'.
2020-02-14 09:13:37.434+0000 [id=268243] INFO h.r.SynchronousCommandTransport$ReaderThread#run: I/O error in channel Jenkins SSH Slave a3-0094ebcvu7jkf
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at io.jenkins.docker.client.DockerMultiplexedInputStream.readInternal(DockerMultiplexedInputStream.java:48)
    at io.jenkins.docker.client.DockerMultiplexedInputStream.read(DockerMultiplexedInputStream.java:30)
    at hudson.remoting.FlightRecorderInputStream.read(FlightRecorderInputStream.java:91)
    at hudson.remoting.ChunkedInputStream.readHeader(ChunkedInputStream.java:72)
    at hudson.remoting.ChunkedInputStream.readUntilBreak(ChunkedInputStream.java:103)
    at hudson.remoting.ChunkedCommandTransport.readBlock(ChunkedCommandTransport.java:39)
    at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
2020-02-14 09:13:37.440+0000 [id=268432] INFO i.j.docker.DockerTransientNode$1#println: Removed Node for node 'Jenkins SSH Slave a3-0094ebcvu7jkf'.


Since it happens so frequently (like every 5 minutes), I'm wondering if it's a problem or if it's just the normal way Jenkins works: the agent finishes its job, the watchdog comes along, tries to connect to the agent, fails, and removes it.

Any idea?

Thx
-Vincent

Victor Martinez

Feb 14, 2020, 7:59:20 AM
to Jenkins Users
I've seen those stack traces with some other Cloud Node providers in Jenkins.

Not sure whether that comes from Jenkins core, from the docker-plugin itself, or from some specific design choice.

Ivan Fernandez Calvo

Feb 14, 2020, 1:50:27 PM
to Jenkins Users
The ping thread and some other monitoring tasks run every 4 minutes. I think the disconnections happen before that, but because there is no activity on those agents, it is not detected until the ping thread passes. So I guess you have half-closed connections: the agent closes the connection, but the master never receives the reset packet. If you are using SSH agents, you can enable verbose mode on the sshd server to monitor what the heck happens; see https://github.com/jenkinsci/ssh-slaves-plugin/blob/master/doc/TROUBLESHOOTING.md#common-info-needed-to-troubleshooting-a-bug

Vincent Massol

Feb 14, 2020, 3:47:39 PM
to Jenkins Users
Thanks Ivan. We're not using SSH agents but Docker Cloud (the agents are provisioned on the fly as Docker containers).

I was indeed looking for a way to turn on some debugging on the agent side, but I couldn't find anything. Also, the agent's Docker container is removed once the job is finished, so it seems even harder to get any info about what's going on.

What I wanted to know is whether what we're experiencing is normal Jenkins behavior or not. I'm asking because a lot of our jobs go fine every day, but we still have several that are killed in mid-air every day. For example, if I take agent 6 (a6) from https://up1.xwikisas.com/#vI0VAypIpe_tD9LrQRTdMA, I can see it's been terminated on 2020-02-10 at:
* 4:44
* 5:06
* 5:24
* 7:45
* 10:06
* 10:24
* etc

Now I don't think we have that many job failures every day. It's more like 1 or 2 per day. So I'm not sure what to think of it. 

I was trying to investigate why we see the following regularly (every day) in our CI job logs:

Cannot contact Jenkins SSH Slave a6-009448n7sqon4: java.lang.InterruptedException
Agent Jenkins SSH Slave a6-009448n7sqon4 was deleted; cancelling node body
Could not connect to Jenkins SSH Slave a6-009448n7sqon4 to send interrupt signal to process

And then I discovered what I've pasted at https://up1.xwikisas.com/#vI0VAypIpe_tD9LrQRTdMA by looking at the Jenkins master log file, and I went "wow, how come there are so many disconnections".

Any idea is most welcome!

Thanks a lot
-Vincent

Ivan Fernandez Calvo

Feb 15, 2020, 6:55:26 AM
to jenkins...@googlegroups.com
After seeing the log, I understand you are asking about the INFO messages that report that a Docker agent has been disconnected. IIRC those messages are normal; they only report the Docker agent's status. You can change the verbosity level of the Java package in the logs configuration to omit those types of messages if they bother you.
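(For anyone finding this later: in Jenkins this is usually done from Manage Jenkins → System Log, but the mechanism underneath is plain java.util.logging. A minimal sketch of that mechanism, with the logger name `io.jenkins.docker` inferred from the `io.jenkins.docker.*` classes in the stack trace above:)

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietDockerPluginLogs {
    public static void main(String[] args) {
        // The docker-plugin classes in the trace above live under the
        // io.jenkins.docker package, so raising that logger's threshold
        // to WARNING drops the routine INFO "Disconnected computer" and
        // "Removed Node" messages while keeping real warnings and errors.
        Logger dockerLogger = Logger.getLogger("io.jenkins.docker");
        dockerLogger.setLevel(Level.WARNING);

        System.out.println(dockerLogger.isLoggable(Level.INFO));    // false
        System.out.println(dockerLogger.isLoggable(Level.WARNING)); // true
    }
}
```

(The trade-off, as noted below in the thread, is that genuine communication errors logged at INFO by the same package would be hidden too.)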
About the other message, the InterruptedException: this does look like an issue, but there is not much info to troubleshoot it. You have to monitor those errors and try to find something in common: always the same job, the same Docker image, the same resources, ... The most common cause is a resource problem; in those cases the container is killed because of an OOM error. You can check whether this is the case by doing a docker inspect of the container.
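(For reference: `docker inspect <container>` prints JSON whose `.State.OOMKilled` and `.State.ExitCode` fields say whether the kernel OOM-killed the container; exit code 137 = 128 + SIGKILL is the typical signature. A rough sketch of checking that output, using naive string matching on a made-up sample payload rather than a real JSON parser:)

```java
public class OomCheck {
    /** Returns true if the `docker inspect` JSON says the container was OOM-killed. */
    static boolean wasOomKilled(String inspectJson) {
        // Flatten whitespace so the match works regardless of pretty-printing.
        // Illustrative only; production code should use a JSON parser.
        return inspectJson.replaceAll("\\s+", "").contains("\"OOMKilled\":true");
    }

    public static void main(String[] args) {
        // Hypothetical sample of what `docker inspect` might print for a
        // killed agent container (only the relevant State fields shown).
        String sample = "[{\"State\": {\"Status\": \"exited\", \"OOMKilled\": true, \"ExitCode\": 137}}]";
        System.out.println(wasOomKilled(sample)); // true
    }
}
```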

On Feb 14, 2020, at 21:47, Vincent Massol <vma...@gmail.com> wrote:



Vincent Massol

Feb 16, 2020, 8:15:54 AM
to Jenkins Users
> After seeing the log, I understand you are asking about the INFO messages that report that a Docker agent has been disconnected. IIRC those messages are normal; they only report the Docker agent's status.

Thanks for your reply. Let me make sure I understand. So the Docker Cloud plugin spawns new Jenkins Docker agents. It stops the agents using the DockerContainerWatchdog thread, which regularly tries to connect to each agent and, when it fails, removes the agent. This is what happened in the following example:

2020-02-14 09:13:37.434+0000 [id=268432] INFO i.j.docker.DockerTransientNode$1#println: Disconnected computer for node 'Jenkins SSH Slave a3-0094ebcvu7jkf'.
2020-02-14 09:13:37.434+0000 [id=268243] INFO h.r.SynchronousCommandTransport$ReaderThread#run: I/O error in channel Jenkins SSH Slave a3-0094ebcvu7jkf
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at io.jenkins.docker.client.DockerMultiplexedInputStream.readInternal(DockerMultiplexedInputStream.java:48)
    at io.jenkins.docker.client.DockerMultiplexedInputStream.read(DockerMultiplexedInputStream.java:30)
    at hudson.remoting.FlightRecorderInputStream.read(FlightRecorderInputStream.java:91)
    at hudson.remoting.ChunkedInputStream.readHeader(ChunkedInputStream.java:72)
    at hudson.remoting.ChunkedInputStream.readUntilBreak(ChunkedInputStream.java:103)
    at hudson.remoting.ChunkedCommandTransport.readBlock(ChunkedCommandTransport.java:39)
    at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
2020-02-14 09:13:37.440+0000 [id=268432] INFO i.j.docker.DockerTransientNode$1#println: Removed Node for node 'Jenkins SSH Slave a3-0094ebcvu7jkf'.

So it means that whether the agent finishes its work or there's a connection issue between the Jenkins master and the agent, it'll be reported the same way in the jenkins.log file. Basically, it's the same mechanism for stopping an agent that has finished its work as for handling a connection error, and in both cases it'll be reported as INFO in the logs too. Right?

> You can change the verbosity level of the Java package in the logs configuration to omit those types of messages if they bother you.

Indeed, that could be interesting. But then I guess we would also no longer see the real communication errors between the master and the agents.

Thanks a lot for your help. If you could confirm this, that would be great; I'd be able to move forward to the next problems (we have plenty of intermittent errors to figure out ;)).

Have a great weekend
-Vincent

Iván Fernández Calvo

Feb 16, 2020, 8:33:41 AM
to jenkins...@googlegroups.com


> On Feb 16, 2020, at 14:15, Vincent Massol <vma...@gmail.com> wrote:
>
> In both cases it"ll be reported as INFO in the logs too. Right?

It seems like it. I hadn't noticed that the exception is also an INFO message, so the only difference is the stack trace of the exception; the log level is the same.

Vincent Massol

Feb 16, 2020, 8:48:01 AM
to Jenkins Users
Hi Ivan,
 
> so the only difference is the stack trace of the exception; the log level is the same.

Is it possible that you misunderstood the data at https://up1.xwikisas.com/#vI0VAypIpe_tD9LrQRTdMA? :)

As mentioned there, ALL of the lines are the same as the ones at the top (i.e. they all have a stack trace). I just didn't paste the full lines for the sake of space ;)

This is why I've been asking from the beginning whether it's normal :) Usually when there's a stack trace, it's not really normal. But it happens so frequently that the only explanation I can think of is that it's Jenkins' normal behavior.

WDYT?

Thanks again!
-Vincent

kuisathaverat

Feb 16, 2020, 10:09:30 AM
to jenkins...@googlegroups.com
On Sun, Feb 16, 2020, at 14:48, Vincent Massol <vma...@gmail.com> wrote:
> This is why I've been asking from the beginning whether it's normal :) Usually when there's a stack trace, it's not really normal. But it happens so frequently that the only explanation I can think of is that it's Jenkins' normal behavior.

So you see exceptions on every disconnection. I've never seen this behavior with the Docker plugin; I've seen the disconnection messages, but without an exception.