[JIRA] (JENKINS-37575) Jenkins node keeps sending the same logs indefinitely


quentin@dufour.io (JIRA)

Aug 20, 2016, 17:16:02
to jenkinsc...@googlegroups.com
Quentin Dufour created an issue
 
Jenkins / Bug JENKINS-37575
Jenkins node keeps sending the same logs indefinitely
Issue Type: Bug
Assignee: Jesse Glick
Components: durable-task-plugin
Created: 2016/Aug/20 9:15 PM
Environment: Windows Server 2012 R2
Jenkins 2.7.2
Priority: Major
Reporter: Quentin Dufour

Problem

The build never ends on the node.
It keeps sending the same 20-30 lines forever.
Some log files can grow bigger than 10 GB before I kill the process.

Investigation

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.
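For context, the polling logic of that method boils down to roughly the following (a simplified paraphrase of the plugin code quoted further down in this thread; the class and helper names here are only illustrative, not the actual source):
{noformat}
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;

class LogTailSketch {
    /** Sends the bytes appended since lastLocation and returns the new cursor, or null if there is nothing new. */
    static Long writeNewOutput(File f, long lastLocation, OutputStream sink) throws IOException {
        long len = f.length();
        if (len <= lastLocation) {
            return null;                      // nothing new: the caller keeps its old cursor
        }
        RandomAccessFile raf = new RandomAccessFile(f, "r");
        try {
            raf.seek(lastLocation);           // resume where the previous poll stopped
            byte[] buf = new byte[(int) (len - lastLocation)];
            raf.readFully(buf);
            sink.write(buf);                  // push the new bytes towards the master
        } finally {
            raf.close();
        }
        return len;                           // the caller stores this as the new lastLocation
    }
}
{noformat}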

How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.


quentin@dufour.io (JIRA)

Aug 20, 2016, 17:18:02
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
Change By: Quentin Dufour
Attachment: Capture d'écran de 2016-08-20 17-04-34.png
Attachment: Capture d'écran de 2016-08-20 16-35-18.png
h1. Problem


The build never ends on the node.
It keeps sending the same 20-30 lines forever.
Some log files can grow bigger than 10 GB before I kill the process.

h1. Investigation


Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.

(check the attached screenshots above)

h1. How to reproduce


I've been trying to reproduce this bug for 3 days but haven't managed to.

quentin@dufour.io (JIRA)

Aug 20, 2016, 17:28:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
Change By: Quentin Dufour
Attachment: jekins_10GB_log.png

quentin@dufour.io (JIRA)

Aug 20, 2016, 17:58:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

The build never ends on the node.
It keeps sending the same 20-30 lines forever.
Some log files can grow bigger than 10 GB before I kill the process.

h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.

h2. Screenshots

h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.

quentin@dufour.io (JIRA)

Aug 20, 2016, 18:04:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

The build never ends on the node.
It keeps sending the same 20-30 lines forever.
Some log files can grow bigger than 10 GB before I kill the process.

h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.

h2. Screenshots
On the first one we can see the logs. In particular, the date from Timestamper is different from the date of the logs, which is a good indication that we have a problem.

On the second screenshot, I've connected NetBeans to the node which handles the job. I stopped the thread and also put a breakpoint on sink.write(buf). When I hit continue, I saw that this function is always called. We can see some variables, especially the file descriptor and its associated size. The folder of this file is open in the top left of this screenshot. We can see that the size of this file is around 3 MB.

The last screenshot is just an example of how big the file becomes before crashing Jenkins with an OutOfMemory exception...
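(A side note on the debugging setup behind the second screenshot: attaching NetBeans to the agent normally just means starting the agent JVM with the standard JDWP option, for example something like the line below; the port and the slave.jar arguments are only an example, not necessarily what was used here.)
{noformat}
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -jar slave.jar -jnlpUrl ... -secret ...
{noformat}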

h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.

quentin@dufour.io (JIRA)

Aug 20, 2016, 18:29:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

The build never ends on the node.
It keeps sending the same 20-30 lines forever.
Some log files can grow bigger than 10 GB before I kill the process.

h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.

It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.
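If that supposition is right, the failure mode would look roughly like the toy example below: the bytes reach the sink, but because the callable never returns the new length, the caller keeps its old offset and the same chunk is sent again on the next poll (purely illustrative; the class and variable names are made up and this is not the plugin's actual code):
{noformat}
import java.io.ByteArrayOutputStream;
import java.io.IOException;

class StuckCursorDemo {
    public static void main(String[] args) {
        byte[] logFile = "the same 20-30 lines of output".getBytes();
        ByteArrayOutputStream sink = new ByteArrayOutputStream(); // stands in for the remote stream
        int lastLocation = 0;                                     // cursor kept by the caller

        for (int attempt = 0; attempt < 3; attempt++) {
            try {
                sink.write(logFile, lastLocation, logFile.length - lastLocation);
                // Suppose the remoting layer fails *after* the bytes were already pushed:
                throw new IOException("simulated InterruptedIOException");
                // lastLocation = logFile.length;  // never reached, so the cursor never advances
            } catch (IOException e) {
                // The caller sees a failure, keeps the old cursor, and polls again...
            }
        }
        // ...so the same bytes end up being sent three times:
        System.out.println("bytes sent: " + sink.size() + ", file size: " + logFile.length);
    }
}
{noformat}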

h2. Screenshots
On the first one we can see the logs. In particular, the date from Timestamper is different from the date of the logs, which is a good indication that we have a problem.

On the second screenshot, I've connected NetBeans to the node which handles the job. I stopped the thread and also put a breakpoint on sink.write(buf). When I hit continue, I saw that this function is always called. We can see some variables, especially the file descriptor and its associated size. The folder of this file is open in the top left of this screenshot. We can see that the size of this file is around 3 MB.

The last screenshot is just an example of how big the file becomes before crashing Jenkins with an OutOfMemory exception...

h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.

quentin@dufour.io (JIRA)

Aug 20, 2016, 19:08:02
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

The build never ends on the node.
It keeps sending the same 20-30 lines forever.
Some log files can grow bigger than 10 GB before I kill the process.

h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.

Edit 1: It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.

Edit 2: It seems that it comes from the network, as I've captured a java.io.InterruptedIOException in this loop.

h2. Screenshots
On the first one we can see the logs. In particular, the date from Timestamper is different from the date of the logs, which is a good indication that we have a problem.

On the second screenshot, I've connected NetBeans to the node which handles the job. I stopped the thread and also put a breakpoint on sink.write(buf). When I hit continue, I saw that this function is always called. We can see some variables, especially the file descriptor and its associated size. The folder of this file is open in the top left of this screenshot. We can see that the size of this file is around 3 MB.

The last screenshot is just an example of how big the file becomes before crashing Jenkins with an OutOfMemory exception...

h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.

quentin@dufour.io (JIRA)

Aug 20, 2016, 19:09:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
Change By: Quentin Dufour
Attachment: Capture d'écran de 2016-08-20 19-02-41.png

quentin@dufour.io (JIRA)

Aug 20, 2016, 19:10:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

The build never ends on the node.
It keeps sending the same 20-30 lines forever.
Some log files can grow bigger than 10 GB before I kill the process.

h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.

Edit 1: It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.

Edit 2: It seems that it comes from the network, as I've captured a java.io.InterruptedIOException in this loop.
!Capture d'écran de 2016-08-20 19-02-41.png!

h2. Screenshots
On the first one we can see the logs. In particular, the date from Timestamper is different from the date of the logs, which is a good indication that we have a problem.

On the second screenshot, I've connected NetBeans to the node which handles the job. I stopped the thread and also put a breakpoint on sink.write(buf). When I hit continue, I saw that this function is always called. We can see some variables, especially the file descriptor and its associated size. The folder of this file is open in the top left of this screenshot. We can see that the size of this file is around 3 MB.

The last screenshot is just an example of how big the file becomes before crashing Jenkins with an OutOfMemory exception...

h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.

quentin@dufour.io (JIRA)

Aug 20, 2016, 19:11:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

The build never ends on the node.
It keeps sending the same 20-30 lines forever.
!Capture d'écran de 2016-08-20 19-02-41.png|thumbnail!

Some log files can grow bigger than 10 GB before I kill the process.
!Capture d'écran de 2016-08-20 19-02-41.png|thumbnail!

h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.

Edit 1: It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.

Edit 2: It seems that it comes from the network, as I've captured a java.io.InterruptedIOException in this loop.
!Capture d'écran de 2016-08-20 19-02-41.png|thumbnail!

h2. Screenshots
On the first one we can see the logs. In particular, the date from Timestamper is different from the date of the logs, which is a good indication that we have a problem.

On the second screenshot, I've connected NetBeans to the node which handles the job. I stopped the thread and also put a breakpoint on sink.write(buf). When I hit continue, I saw that this function is always called. We can see some variables, especially the file descriptor and its associated size. The folder of this file is open in the top left of this screenshot. We can see that the size of this file is around 3 MB.

The last screenshot is just an example of how big the file becomes before crashing Jenkins with an OutOfMemory exception...

h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.

quentin@dufour.io (JIRA)

Aug 20, 2016, 19:13:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

The build never ends on the node.
It keeps sending the same 20-30 lines forever.
!Capture d'écran de 2016-08-20 16-35-18.png|thumbnail!

Some log files can grow bigger than 10 GB before I kill the process.
!jenkins_10GB_log.png|thumbnail!


h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.
!Capture d'écran de 2016-08-20 17-04-34.png|thumbnail!

Edit 1: It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.

Edit 2: It seems that it comes from the network, as I've captured a java.io.InterruptedIOException in this loop.
!Capture d'écran de 2016-08-20 19-02-41.png|thumbnail!

h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.

quentin@dufour.io (JIRA)

Aug 20, 2016, 19:14:13
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

The build never ends on the node.
It keeps sending the same 20-30 lines forever.
!Capture d'écran de 2016-08-20 16-35-18.png|thumbnail!

Some log files can grow bigger than 10 GB before I kill the process.
!jekins_10GB_log.png|thumbnail!


h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.
!Capture d'écran de 2016-08-20 17-04-34.png|thumbnail!

Edit 1: It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.

Edit 2: It seems that it comes from the network, as I've captured a java.io.InterruptedIOException in this loop.
!Capture d'écran de 2016-08-20 19-02-41.png|thumbnail!

h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.

quentin@dufour.io (JIRA)

Aug 20, 2016, 19:16:02
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

The build never ends on the node.
It keeps sending the same 20-30 lines forever.
We can see the difference between the Timestamper date (added when received by the master) and the log date (written during the PowerShell execution):
!Capture d'écran de 2016-08-20 16-35-18.png|thumbnail!

Some log files can grow bigger than 10 GB before I kill the process.
(Yes, it's really stored in the JENKINS_HOME.)
!jekins_10GB_log.png|thumbnail!


h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.
The process is terminated and the log file is no bigger than 3 MB (it can be seen in the upper left corner of the screenshot):
!Capture d'écran de 2016-08-20 17-04-34.png|thumbnail!

Edit 1: It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.

Edit 2: It seems that it comes from the network, as I've captured a java.io.InterruptedIOException in this loop, in hudson.remoting.ProxyOutputStream.
!Capture d'écran de 2016-08-20 19-02-41.png|thumbnail!

h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.

quentin@dufour.io (JIRA)

Aug 20, 2016, 19:40:03
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
Change By: Quentin Dufour
Attachment: Capture d'écran de 2016-08-20 19-35-36.png

quentin@dufour.io (JIRA)

Aug 20, 2016, 19:40:03
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

The build never ends on the node.
It keeps sending the same 20-30 lines forever.
We can see the difference between the Timestamper date (added when received by the master) and the log date (written during the PowerShell execution):
!Capture d'écran de 2016-08-20 16-35-18.png|thumbnail!

Some log files can grow bigger than 10 GB before I kill the process.
(Yes, it's really stored in the JENKINS_HOME.)
!jekins_10GB_log.png|thumbnail!

h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.
The process is terminated and the log file is no bigger than 3 MB (it can be seen in the upper left corner of the screenshot):
!Capture d'écran de 2016-08-20 17-04-34.png|thumbnail!

Update 1: It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.

Update 2: It seems that it comes from the network, as I've captured a java.io.InterruptedIOException in this loop, in hudson.remoting.ProxyOutputStream.

!Capture d'écran de 2016-08-20 19-02-41.png|thumbnail!

Update 3: It seems that the Jenkins master is actually the culprit. I've connected my debugger to it. The error occurs when it tries to write the log in its JENKINS_HOME.
!Capture d'écran de 2016-08-20 19-35-36.png|thumbnail!   

h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.

quentin@dufour.io (JIRA)

Aug 20, 2016, 19:44:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
Change By: Quentin Dufour
Attachment: Capture d'écran de 2016-08-20 19-35-13.png

quentin@dufour.io (JIRA)

Aug 20, 2016, 19:45:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

The build never ends on the node.
It keeps sending the same 20-30 lines forever.
We can see the difference between the Timestamper date (added when received by the master) and the log date (written during the PowerShell execution):
!Capture d'écran de 2016-08-20 16-35-18.png|thumbnail!

Some log files can grow bigger than 10 GB before I kill the process.
(Yes, it's really stored in the JENKINS_HOME.)
!jekins_10GB_log.png|thumbnail!

h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.
The process is terminated and the log file is no bigger than 3 MB (it can be seen in the upper left corner of the screenshot):
!Capture d'écran de 2016-08-20 17-04-34.png|thumbnail!

*Update 1:* It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.

*Update 2:* It seems that it comes from the network, as I've captured a java.io.InterruptedIOException in this loop, in hudson.remoting.ProxyOutputStream.

!Capture d'écran de 2016-08-20 19-02-41.png|thumbnail!

*Update 3:* It seems that the Jenkins master is actually the culprit. I've connected my debugger to it. The error occurs when it tries to write the log in its JENKINS_HOME, when executing the green line in the following screenshot.
!Capture d'écran de 2016-08-20 19-35-36.png|thumbnail!

The error is caught in DurableTaskStep$Execution.check, as it seems to be a workspace error. It seems that Jenkins doesn't find the workspace folder, because it is looking for the Jenkins node's workspace on its own local file system: C:\\ci\\int12\\ocoint...
!Capture d'écran de 2016-08-20 19-35-13.png|thumbnail!


h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.

quentin@dufour.io (JIRA)

Aug 20, 2016, 19:46:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

The build never ends on the node.
It keeps sending the same 20-30 lines forever.
We can see the difference between the Timestamper date (added when received by the master) and the log date (written during the PowerShell execution):
!Capture d'écran de 2016-08-20 16-35-18.png|thumbnail!

Some log files can grow bigger than 10 GB before I kill the process.
(Yes, it's really stored in the JENKINS_HOME.)
!jekins_10GB_log.png|thumbnail!

h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.
The process is terminated and the log file is no bigger than 3 MB (it can be seen in the upper left corner of the screenshot):
!Capture d'écran de 2016-08-20 17-04-34.png|thumbnail!

*Update 1:* It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.

*Update 2:* It seems that it comes from the network, as I've captured a java.io.InterruptedIOException in this loop, in hudson.remoting.ProxyOutputStream.

!Capture d'écran de 2016-08-20 19-02-41.png|thumbnail!

*Update 3:* It seems that the Jenkins master is actually the culprit. I've connected my debugger to it. The error occurs when it tries to write the log in its JENKINS_HOME, when executing the green line in the following screenshot.

!Capture d'écran de 2016-08-20 19-35-36.png|thumbnail!

The error is caught in DurableTaskStep$Execution.check, as it seems to be a workspace error. It seems that Jenkins doesn't find the workspace folder, because it is looking for the Jenkins node's workspace on its own local file system: C:\\ci\\int12\\ocoint... So it saves the log but interrupts the task and tells the slave that it has interrupted the task. The slave thinks the logs have not been saved and re-sends them to the master, which again doesn't find the node workspace on its local filesystem, and so on.
!Capture d'écran de 2016-08-20 19-35-13.png|thumbnail!
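If this reading is correct, the cycle is essentially the one sketched below: the master resolves the node's workspace path against its own filesystem, the check fails, the new cursor is never recorded, and the node re-sends the same chunk (an illustrative model only; the names, the placeholder path, and the loop are made up, this is not the actual DurableTaskStep code):
{noformat}
import java.io.File;

class WorkspaceMixupSketch {
    public static void main(String[] args) {
        // Workspace path that only exists on the node, not on the master (placeholder value):
        String nodeWorkspace = "C:\\ci\\int12\\some-job";
        long lastLocation = 0;
        long fileLength = 1930670;            // size observed in the debugger

        for (int poll = 0; poll < 3; poll++) {
            // The master (wrongly) checks the node's workspace on its *own* filesystem:
            boolean masterSeesWorkspace = new File(nodeWorkspace).isDirectory();
            if (!masterSeesWorkspace) {
                // Treated as a workspace error: the pending write is interrupted,
                // the new cursor (fileLength) is never stored, and on the next poll
                // the node re-sends the same ~1.9 MB again.
                continue;
            }
            lastLocation = fileLength;        // only reached when the check succeeds
        }
        System.out.println("cursor after polling: " + lastLocation); // stays 0 in this scenario
    }
}
{noformat}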


h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.

quentin@dufour.io (JIRA)

Aug 20, 2016, 21:28:02
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

Sometimes, after a certain number of builds, the build never ends on the node.
It keeps sending the same 20-30 lines forever.
The problem seems to occur more often when I restart Jenkins while some tasks are running.
We can see the difference between the Timestamper date (added when received by the master) and the log date (written during the PowerShell execution):
!Capture d'écran de 2016-08-20 16-35-18.png|thumbnail!

Some log files can grow bigger than 10 GB before I kill the process.
(Yes, it's really stored in the JENKINS_HOME.)
!jekins_10GB_log.png|thumbnail!

h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.
The process is terminated and the log file is no bigger than 3 MB (it can be seen in the upper left corner of the screenshot):
!Capture d'écran de 2016-08-20 17-04-34.png|thumbnail!

*Update 1:* It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.

*Update 2:* It seems that it comes from the network, as I've captured a java.io.InterruptedIOException in this loop, in hudson.remoting.ProxyOutputStream.
!Capture d'écran de 2016-08-20 19-02-41.png|thumbnail!

*Update 3:* It seems that the Jenkins master is actually the culprit. I've connected my debugger to it. The error occurs when it tries to write the log in its JENKINS_HOME, when executing the green line in the following screenshot.

!Capture d'écran de 2016-08-20 19-35-36.png|thumbnail!

The error is caught in DurableTaskStep$Execution.check, as it seems to be a workspace error. It seems that Jenkins doesn't find the workspace folder, because it is looking for the Jenkins node's workspace on its own local file system: C:\\ci\\int12\\ocoint... So it saves the log but interrupts the task and tells the slave that it has interrupted the task. The slave thinks the logs have not been saved and re-sends them to the master, which again doesn't find the node workspace on its local filesystem, and so on.
!Capture d'écran de 2016-08-20 19-35-13.png|thumbnail!


h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.

I had to put my production instance in debug mode to inspect the error.
As I don't understand why it fails, I can't write a reproduction protocol yet.
I hope that someone will google this error with the same keywords and post a comment here mentioning that they have the same error and more information :)

quentin@dufour.io (JIRA)

Aug 20, 2016, 23:43:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

Sometimes, after a certain number of builds, the build never ends on the node.
It keeps sending the same 20-30 lines forever.
The problem seems to occur more often when I restart Jenkins while some tasks are running.
We can see the difference between the Timestamper date (added when received by the master) and the log date (written during the PowerShell execution):
!Capture d'écran de 2016-08-20 16-35-18.png|thumbnail!

Some log files can grow bigger than 10 GB before I kill the process.
(Yes, it's really stored in the JENKINS_HOME.)
!jekins_10GB_log.png|thumbnail!

h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.
The process is terminated and the log file is no bigger than 3 MB (it can be seen in the upper left corner of the screenshot):
!Capture d'écran de 2016-08-20 17-04-34.png|thumbnail!

*Update 1:* It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.

*Update 2:* It seems that it comes from the network, as I've captured a java.io.InterruptedIOException in this loop, in hudson.remoting.ProxyOutputStream.
!Capture d'écran de 2016-08-20 19-02-41.png|thumbnail!

*Update 3:* It seems that the Jenkins master is actually the culprit. I've connected my debugger to it. The error occurs when it tries to write the log in its JENKINS_HOME, when executing the green line in the following screenshot.
!Capture d'écran de 2016-08-20 19-35-36.png|thumbnail!

The error is caught in DurableTaskStep$Execution.check, as it seems to be a workspace error. It seems that Jenkins doesn't find the workspace folder, because it is looking for the Jenkins node's workspace on its own local file system: C:\\ci\\int12\\ocoint... So it saves the log but interrupts the task and tells the slave that it has interrupted the task. The slave thinks the logs have not been saved and re-sends them to the master, which again doesn't find the node workspace on its local filesystem, and so on.
!Capture d'écran de 2016-08-20 19-35-13.png|thumbnail!

*Update 4:* It seems that when the master deserializes the node's response, it gets a null object instead of the logs when it comes to WriteLog...
!Capture d'écran de 2016-08-20 23-41-53.png|thumbnail!


h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.
I had to put my production instance in debug mode to inspect the error.
As I don't understand why it fails, I can't write a reproduction protocol yet.
I hope that someone will google this error with the same keywords and post a comment here mentioning that they have the same error and more information :)

quentin@dufour.io (JIRA)

Aug 20, 2016, 23:43:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
Change By: Quentin Dufour
Attachment:
Capture d'écran de 2016-08-20 23-41-53.png

quentin@dufour.io (JIRA)

Aug 21, 2016, 08:59:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

Sometimes, after a certain number of builds, the build never ends on the node.
It keeps sending the same 20-30 lines forever.
The problem seems to occur more often when I restart Jenkins while some tasks are running.
We can see the difference between the Timestamper date (added when received by the master) and the log date (written during the PowerShell execution):
!Capture d'écran de 2016-08-20 16-35-18.png|thumbnail!

Some log files can grow bigger than 10 GB before I kill the process.
(Yes, it's really stored in the JENKINS_HOME.)
!jekins_10GB_log.png|thumbnail!

_I think it's worth mentioning that I have 3 Jenkins nodes on the same machine._

h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.
The process is terminated and the log file is no bigger than 3 MB (it can be seen in the upper left corner of the screenshot):
!Capture d'écran de 2016-08-20 17-04-34.png|thumbnail!

*Update 1:* It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.

*Update 2:* It seems that it comes from the network, as I've captured a java.io.InterruptedIOException in this loop, in hudson.remoting.ProxyOutputStream.
!Capture d'écran de 2016-08-20 19-02-41.png|thumbnail!

*Update 3:* It seems that the Jenkins master is actually the culprit. I've connected my debugger to it. The error occurs when it tries to write the log in its JENKINS_HOME, when executing the green line in the following screenshot.
!Capture d'écran de 2016-08-20 19-35-36.png|thumbnail!

The error is caught in DurableTaskStep$Execution.check, as it seems to be a workspace error. It seems that Jenkins doesn't find the workspace folder, because it is looking for the Jenkins node's workspace on its own local file system: C:\\ci\\int12\\ocoint... So it saves the log but interrupts the task and tells the slave that it has interrupted the task. The slave thinks the logs have not been saved and re-sends them to the master, which again doesn't find the node workspace on its local filesystem, and so on.
!Capture d'écran de 2016-08-20 19-35-13.png|thumbnail!

*Update 4:* It seems that when the master deserializes the node's response, it gets a null object instead of the logs when it comes to WriteLog...
!Capture d'écran de 2016-08-20 23-41-53.png|thumbnail!


h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.
I had to put my production instance in debug mode to inspect the error.
As I don't understand why it fails, I can't write a reproduction protocol yet.
I hope that someone will google this error with the same keywords and post a comment here mentioning that they have the same error and more information :)

quentin@dufour.io (JIRA)

Aug 22, 2016, 09:10:01
to jenkinsc...@googlegroups.com
*Update 5:* I've modified the code of durable-task-plugin to have more logs in the console

{noformat}
SEVERE: QDU WILL RETURN NULL AND WILL HAVE TO BE REUPLOADED
Aug 22, 2016 9:04:54 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke
SEVERE: QDU IOEXCEPTION {0}
java.io.InterruptedIOException
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:147)
at java.io.OutputStream.write(OutputStream.java:75)
at hudson.remoting.RemoteOutputStream.write(RemoteOutputStream.java:106)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:137)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:116)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:85)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at hudson.remoting.PipeWindow$Real.get(PipeWindow.java:209)
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:122)
... 14 more
{noformat}


h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.
I had to put my production instance in debug mode to inspect the error.
As I don't understand why it fails, I can't write a reproduction protocol yet.
I hope that someone will google this error with the same keywords and post a comment here mentioning that they have the same error and more information :)

quentin@dufour.io (JIRA)

Aug 22, 2016, 09:22:02
to jenkinsc...@googlegroups.com

quentin@dufour.io (JIRA)

Aug 22, 2016, 10:03:02
to jenkinsc...@googlegroups.com
{noformat}
SEVERE: QDU WILL RETURN NULL AND WILL HAVE TO BE REUPLOADED
Aug 22, 2016 9:04:54 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke
SEVERE: QDU IOEXCEPTION {0}
java.io.InterruptedIOException
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:147)
at java.io.OutputStream.write(OutputStream.java:75)
at hudson.remoting.RemoteOutputStream.write(RemoteOutputStream.java:106)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:137)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:116)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:85)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at hudson.remoting.PipeWindow$Real.get(PipeWindow.java:209)
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:122)
... 14 more
{noformat}


h1. How to reproduce

I've been trying to reproduce this bug for 3 days but haven't managed to.
I had to put my production instance in debug mode to inspect the error.
As I don't understand why it fails, I can't write a reproduction protocol yet.
I hope that someone will google this error with the same keywords and post a comment here mentioning that they have the same error and more information :)

quentin@dufour.io (JIRA)

Aug 22, 2016, 10:04:02
to jenkinsc...@googlegroups.com
{noformat}
@Override public Long invoke(File f, VirtualChannel channel) throws IOException, InterruptedException {
    try {
        long len = f.length();
        if (len > lastLocation) {
            RandomAccessFile raf = new RandomAccessFile(f, "r");
            try {
                raf.seek(lastLocation);
                long toRead = len - lastLocation;
                if (toRead > Integer.MAX_VALUE) { // >2Gb of output at once is unlikely
                    throw new IOException("large reads not yet implemented");
                }
                // TODO is this efficient for large amounts of output? Would it be better to stream data, or return a byte[] from the callable?
                byte[] buf = new byte[(int) toRead];
                raf.readFully(buf);
                sink.write(buf);
            } finally {
                raf.close();
            }
            LOGGER.log(Level.SEVERE, "QDU WILL RETURN AS NEW CURSOR POSITION {0}", len);
            return len;
        } else {
            LOGGER.log(Level.SEVERE, "QDU WILL RETURN NULL AND WILL HAVE TO BE REUPLOADED");
            return null;
        }
    } catch(IOException e) {
        LOGGER.log(Level.SEVERE, "QDU IOEXCEPTION {0}", e);
        throw e;
    } catch(Exception e) {
        LOGGER.log(Level.SEVERE, "QDU UNKNOWN EXCEPTION {0}", e);
    }
    return null;
}
{noformat}

quentin@dufour.io (JIRA)

Aug 22, 2016, 11:51:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

Sometimes, after a certain number of builds, the build never ends on the node.
It keeps sending the same 20-30 lines forever.
The problem seems to occur more often when I restart Jenkins while some tasks are running.
We can see the difference between the Timestamper date (added when received by the master) and the log date (written during the PowerShell execution):
!Capture d'écran de 2016-08-20 16-35-18.png|thumbnail!

Some log files can grow bigger than 10 GB before I kill the process.
(Yes, it's really stored in the JENKINS_HOME.)
!jekins_10GB_log.png|thumbnail!

_I think it's worth mentioning that I have 3 Jenkins nodes on the same machine._

quentin@dufour.io (JIRA)

Aug 22, 2016, 11:52:01
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

Sometimes, after a certain number of builds, the build never ends on the node.
It keeps sending the same 20-30 lines forever.
The problem seems to occur more often when I restart Jenkins while some tasks are running.
We can see the difference between the Timestamper date (added when received by the master) and the log date (written during the PowerShell execution):
!Capture d'écran de 2016-08-20 16-35-18.png|thumbnail!

Some log files can grow bigger than 10 GB before I kill the process.
(Yes, it's really stored in the JENKINS_HOME.)
!jekins_10GB_log.png|thumbnail!

_I think it's worth mentioning that I have 3 Jenkins nodes on the same machine._


h1. Investigation

h2. Steps

Using Wireshark, I found that the node keeps sending the same logs forever.
So the Jenkins master is not (directly) the culprit.
After enabling the debugger on the slave, I found that the method FileMonitoringTask$FileMonitoringController$WriteLog.invoke is called in an infinite loop somewhere in this file:
durable-task-plugin\src\main\java\org\jenkinsci\plugins\durabletask\FileMonitoringTask.java
The same file is read again and again with a lastLocation of 1930670. lastLocation represents the bytes already read, but I don't understand why it doesn't increase.
The process is terminated and the log file is no bigger than 3 MB (it can be seen in the upper left corner of the screenshot):
!Capture d'écran de 2016-08-20 17-04-34.png|thumbnail!

*Update 1:* It seems that Jenkins reads the whole file. If it fails, it returns 0. I suspect that Jenkins is failing to close the file descriptor, so the lastLocation is not updated, but the data is sent. Jenkins retries reading the file, fails again, etc. That's only a supposition for now.

*Update 2:* It seems that it comes from the network, as I've captured a java.io.InterruptedIOException in this loop, in hudson.remoting.ProxyOutputStream.
!Capture d'écran de 2016-08-20 19-02-41.png|thumbnail!

*Update 3:* It seems that the Jenkins master is actually the culprit. I've connected my debugger to it. The error occurs when it tries to write the log in its JENKINS_HOME, when executing the green line in the following screenshot.
!Capture d'écran de 2016-08-20 19-35-36.png|thumbnail!

The error is caught in DurableTaskStep$Execution.check, as it seems to be a workspace error. It seems that Jenkins doesn't find the workspace folder, because it is looking for the Jenkins node's workspace on its own local file system: C:\\ci\\int12\\ocoint... So it saves the log but interrupts the task and tells the slave that it has interrupted the task. The slave thinks the logs have not been saved and re-sends them to the master, which again doesn't find the node workspace on its local filesystem, and so on.
!Capture d'écran de 2016-08-20 19-35-13.png|thumbnail!

*Update 4:* It seems that when the master deserializes the node's response, it gets a null object instead of the logs when it comes to WriteLog...
!Capture d'écran de 2016-08-20 23-41-53.png|thumbnail!

*Update 5:* I've modified the code of durable-task-plugin to have more logs in the console

{noformat}
// org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke

quentin@dufour.io (JIRA)

Aug 22, 2016, 12:03:02
to jenkinsc...@googlegroups.com

On the Jenkins node, I have the following logs:

{noformat}
Aug 22, 2016 2:56:39 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke

SEVERE: QDU WILL RETURN NULL AND WILL HAVE TO BE REUPLOADED
Aug 22, 2016 2:56:41 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke

SEVERE: QDU IOEXCEPTION {0}
java.io.InterruptedIOException
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:147)
at java.io.OutputStream.write(OutputStream.java:75)
at hudson.remoting.RemoteOutputStream.write(RemoteOutputStream.java:106)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:137)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:116)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:85)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at hudson.remoting.PipeWindow$Real.get(PipeWindow.java:209)
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:122)
... 14 more

Aug 22, 2016 2:56:52 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke

SEVERE: QDU IOEXCEPTION {0}
java.io.InterruptedIOException
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:147)
at java.io.OutputStream.write(OutputStream.java:75)
at hudson.remoting.RemoteOutputStream.write(RemoteOutputStream.java:106)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:137)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:116)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:85)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at hudson.remoting.PipeWindow$Real.get(PipeWindow.java:209)
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:122)
... 14 more

Aug 22, 2016 2:56:54 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke

SEVERE: QDU WILL RETURN NULL AND WILL HAVE TO BE REUPLOADED
Aug 22, 2016 2:57:02 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke

SEVERE: QDU IOEXCEPTION {0}
java.io.InterruptedIOException
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:147)
at java.io.OutputStream.write(OutputStream.java:75)
at hudson.remoting.RemoteOutputStream.write(RemoteOutputStream.java:106)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:137)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:116)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:85)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at hudson.remoting.PipeWindow$Real.get(PipeWindow.java:209)
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:122)
... 14 more

Aug 22, 2016 2:57:09 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke

SEVERE: QDU WILL RETURN NULL AND WILL HAVE TO BE REUPLOADED
Aug 22, 2016 2:57:12 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke

SEVERE: QDU IOEXCEPTION {0}
java.io.InterruptedIOException
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:147)
at java.io.OutputStream.write(OutputStream.java:75)
at hudson.remoting.RemoteOutputStream.write(RemoteOutputStream.java:106)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:137)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:116)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:85)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at hudson.remoting.PipeWindow$Real.get(PipeWindow.java:209)
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:122)
... 14 more

Aug 22, 2016 2:57:22 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke

SEVERE: QDU IOEXCEPTION {0}
java.io.InterruptedIOException
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:147)
at java.io.OutputStream.write(OutputStream.java:75)
at hudson.remoting.RemoteOutputStream.write(RemoteOutputStream.java:106)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:137)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:116)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:85)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at hudson.remoting.PipeWindow$Real.get(PipeWindow.java:209)
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:122)
... 14 more

Aug 22, 2016 2:57:24 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke

SEVERE: QDU WILL RETURN NULL AND WILL HAVE TO BE REUPLOADED
Aug 22, 2016 2:57:33 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke

SEVERE: QDU IOEXCEPTION {0}
java.io.InterruptedIOException
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:147)
at java.io.OutputStream.write(OutputStream.java:75)
at hudson.remoting.RemoteOutputStream.write(RemoteOutputStream.java:106)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:137)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:116)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:85)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at hudson.remoting.PipeWindow$Real.get(PipeWindow.java:209)
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:122)
... 14 more

Aug 22, 2016 2:57:39 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke
SEVERE: QDU WILL RETURN NULL AND WILL HAVE TO BE REUPLOADED
Aug 22, 2016 2:57:43 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke
SEVERE: QDU IOEXCEPTION {0}
java.io.InterruptedIOException
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:147)
at java.io.OutputStream.write(OutputStream.java:75)
at hudson.remoting.RemoteOutputStream.write(RemoteOutputStream.java:106)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:137)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:116)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:85)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at hudson.remoting.PipeWindow$Real.get(PipeWindow.java:209)
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:122)
... 14 more

Aug 22, 2016 2:57:53 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke
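
Reading these traces: the agent-side copy is blocked in hudson.remoting.PipeWindow$Real.get, waiting for the master to acknowledge room in the remoting pipe, and when that wait is interrupted, ProxyOutputStream.write surfaces it as an InterruptedIOException inside WriteLog.invoke. The "QDU" messages do not exist in the released plugin, so they appear to be debug logging added during this investigation. The block below is therefore only a minimal sketch of the suspected failure mode, under that assumption, and not the actual durable-task-plugin source: if the failed write is swallowed and null is returned, the caller never advances lastLocation, so the same chunk of the log file is read and pushed again on the next poll.

{code:java}
// Hypothetical, simplified reconstruction of the instrumented WriteLog.invoke
// (class, field and logger names and the exact control flow are assumptions,
// not the real durable-task-plugin source). It illustrates the suspected
// failure mode: the remote write dies with InterruptedIOException, the
// exception is logged and swallowed, null is returned, and the caller keeps
// the old lastLocation.
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;
import java.util.logging.Level;
import java.util.logging.Logger;

class WriteLogSketch {
    private static final Logger LOGGER = Logger.getLogger(WriteLogSketch.class.getName());

    private final long lastLocation;   // bytes of the log file already pushed to the master
    private final OutputStream sink;   // in the plugin this is a RemoteOutputStream over the channel

    WriteLogSketch(long lastLocation, OutputStream sink) {
        this.lastLocation = lastLocation;
        this.sink = sink;
    }

    /** Returns the new offset on success, or null if the chunk has to be re-uploaded. */
    Long invoke(File log) throws IOException {
        long len = log.length();
        if (len <= lastLocation) {
            return null; // nothing new yet
        }
        byte[] chunk = new byte[(int) (len - lastLocation)];
        try (RandomAccessFile raf = new RandomAccessFile(log, "r")) {
            raf.seek(lastLocation);
            raf.readFully(chunk);
        }
        try {
            sink.write(chunk); // can block in PipeWindow.get and throw InterruptedIOException
        } catch (IOException x) {
            // The Throwable overload of Logger.log leaves the "{0}" placeholder
            // unformatted, which matches the literal "QDU IOEXCEPTION {0}" seen above.
            LOGGER.log(Level.SEVERE, "QDU IOEXCEPTION {0}", x);
            LOGGER.severe("QDU WILL RETURN NULL AND WILL HAVE TO BE REUPLOADED");
            return null; // lastLocation is not advanced -> the same chunk is sent again next poll
        }
        return len;
    }
}
{code}

If that is what happens, it matches the "WILL RETURN NULL AND WILL HAVE TO BE REUPLOADED" / "IOEXCEPTION" entries repeating every few seconds above, and it would explain why lastLocation never increases.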

quentin@dufour.io (JIRA)

unread,
Aug 22, 2016, 12:03:04
to jenkinsc...@googlegroups.com
{noformat}
Aug 22, 2016 2:56:39 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke
SEVERE: QDU WILL RETURN NULL AND WILL HAVE TO BE REUPLOADED
Aug 22, 2016 2:56:54 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke
SEVERE: QDU IOEXCEPTION {0}
java.io.InterruptedIOException
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:147)
at java.io.OutputStream.write(OutputStream.java:75)
at hudson.remoting.RemoteOutputStream.write(RemoteOutputStream.java:106)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:137)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:116)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:85)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at hudson.remoting.PipeWindow$Real.get(PipeWindow.java:209)
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:122)
... 14 more
{noformat}

quentin@dufour.io (JIRA)

unread,
Aug 22, 2016, 12:26:04
to jenkinsc...@googlegroups.com
Quentin Dufour updated an issue
h1. Problem

Sometimes, after a certain number of builds, the build never ends on the node.
It keeps sending the same 20-30 lines forever.
It seems that the problem occurs more often when I restart Jenkins while some tasks are running, but I'm not sure.
It also seems that the problem is linked to high load/network usage on the master/node.
We can see the difference between the Timestamper date (added when the line is received by the master) and the log date (written during the PowerShell execution):
!Capture d'écran de 2016-08-20 16-35-18.png|thumbnail!

Some log files can be bigger than 10 GB before I kill the process.
(Yes, it's really stored in the JENKINS_HOME)
!jekins_10GB_log.png|thumbnail!

_I think it's worth mentioning that I have 3 Jenkins nodes on the same machine, and that my JENKINS_HOME is located on a network drive (CIFS/SMB)._
On the Jenkins node, I have the following logs:

{noformat}
Aug 22, 2016 2:56:39 AM org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog invoke
SEVERE: QDU WILL RETURN NULL AND WILL HAVE TO BE REUPLOADED
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:147)
at java.io.OutputStream.write(OutputStream.java:75)
at hudson.remoting.RemoteOutputStream.write(RemoteOutputStream.java:106)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:137)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController$WriteLog.invoke(FileMonitoringTask.java:116)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:85)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at hudson.remoting.PipeWindow$Real.get(PipeWindow.java:209)
at hudson.remoting.ProxyOutputStream.write(ProxyOutputStream.java:122)
... 14 more
{noformat}
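
If the agent really does re-send the same chunk each time a write fails, the master side accounts for the rest of the symptoms: a polling loop that appends whatever the agent returns, and only advances lastLocation when the agent reports success, will write the same 20-30 lines to the build log forever and grow it past 10 GB. Below is a minimal, self-contained model of that feedback loop; the Agent interface, the names and the fixed poll count are made up for illustration and are not the plugin's code.

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Toy model of the suspected master-side loop: lastLocation only moves forward
// when the agent reports a new offset, so a write that fails *after* the bytes
// already reached the master makes the same chunk reappear on every poll.
public class LogPollingSketch {

    interface Agent {
        /** Copies log bytes after 'from' into 'sink'; returns the new offset, or null on failure. */
        Long writeLog(long from, OutputStream sink) throws IOException;
    }

    public static void main(String[] args) throws IOException {
        byte[] chunk = "the same 20-30 lines of build output\n".getBytes(StandardCharsets.UTF_8);

        // Agent that pushes the chunk but then "fails" (think InterruptedIOException
        // on the remoting channel) and therefore reports null.
        Agent flakyAgent = (from, sink) -> {
            sink.write(chunk);
            return null;
        };

        ByteArrayOutputStream buildLog = new ByteArrayOutputStream();
        long lastLocation = 0;
        for (int poll = 0; poll < 5; poll++) {       // in the real bug this never stops
            Long newLocation = flakyAgent.writeLog(lastLocation, buildLog);
            if (newLocation != null) {
                lastLocation = newLocation;          // never reached with the flaky agent
            }
        }
        // Five polls -> five copies of the same chunk appended to the build log.
        System.out.println(buildLog.size() + " bytes written, lastLocation still " + lastLocation);
    }
}
{code}

That would also be consistent with the Timestamper dates drifting further and further away from the dates written inside the log lines themselves.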

quentin@dufour.io (JIRA)

unread,
Aug 22, 2016, 13:16:02
to jenkinsc...@googlegroups.com