[JIRA] [ws-cleanup-plugin] (JENKINS-24824) Asynchronous cleanup not removing renamed workspace directories on slaves


victor.volle@beta-thoughts.org (JIRA)

unread,
Aug 18, 2015, 12:06:06 PM8/18/15
to jenkinsc...@googlegroups.com
Victor Volle commented on Bug JENKINS-24824
 
Re: Asynchronous cleanup not removing renamed workspace directories on slaves

We added some logging around the deletion (see below) and found that there was a permissions problem.

I would suggest applying the patch to the plugin so that others can see the root cause as well.

From 4dd5c02d5c65a30860ed4ccc2baa860f331f156f Mon Sep 17 00:00:00 2001
From: Victor Volle <vi...@Victors-MacBook-Pro.local>
Date: Tue, 18 Aug 2015 17:53:41 +0200
Subject: [PATCH] JENKINS-24824: log error

---
 src/main/java/hudson/plugins/ws_cleanup/Wipeout.java | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/src/main/java/hudson/plugins/ws_cleanup/Wipeout.java b/src/main/java/hudson/plugins/ws_cleanup/Wipeout.java
index e1e759b..a9e50a6 100644
--- a/src/main/java/hudson/plugins/ws_cleanup/Wipeout.java
+++ b/src/main/java/hudson/plugins/ws_cleanup/Wipeout.java
@@ -69,7 +69,12 @@ import hudson.remoting.VirtualChannel;
     private final static Command COMMAND = new Command();
     private final static class Command implements FileCallable<Object> {
         public Object invoke(File f, VirtualChannel channel) throws IOException, InterruptedException {
-            Util.deleteRecursive(f);
+            try {
+                Util.deleteRecursive(f);
+            } catch (IOException e) {
+                LOGGER.log(Level.WARNING, "error cleaning up", e);
+                throw e;
+            }
             return null;
         }
     }
-- 
2.3.2 (Apple Git-55)
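
For context, the hunk only shows the change inside Command.invoke; the LOGGER field and java.util.logging imports it relies on sit outside the diff context. A minimal sketch of the pieces it assumes (only the LOGGER name is taken from the hunk, the rest is illustrative), keeping in mind that the FileCallable runs on the node doing the deletion:

{code:java}
// Sketch only: the pieces the hunk above assumes but does not show.
// Only the LOGGER field name is taken from the patch; the rest is illustrative.
import java.io.File;
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

class WipeoutSketch {
    private static final Logger LOGGER = Logger.getLogger(WipeoutSketch.class.getName());

    // Stand-in for hudson.Util.deleteRecursive(File) used by the real plugin.
    static void deleteRecursive(File f) throws IOException { /* ... */ }

    static void deleteAndLog(File f) throws IOException {
        try {
            deleteRecursive(f);
        } catch (IOException e) {
            // Runs inside the FileCallable, i.e. on the node performing the deletion,
            // so the warning lands in that node's log rather than in the build log.
            LOGGER.log(Level.WARNING, "error cleaning up", e);
            throw e;
        }
    }
}
{code}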
 

scm_issue_link@java.net (JIRA)

unread,
Aug 18, 2015, 2:32:03 PM8/18/15
to jenkinsc...@googlegroups.com

Code changed in jenkins
User: Oliver Gondža
Path:
src/main/java/hudson/plugins/ws_cleanup/Wipeout.java
http://jenkins-ci.org/commit/ws-cleanup-plugin/e151b89222f93b8b125f3f5e1dffb51e6ad8be9d
Log:
JENKINS-24824 Log deletion failure

tim-christian.bloss@elaxy.de (JIRA)

unread,
Sep 1, 2015, 3:57:18 AM9/1/15
to jenkinsc...@googlegroups.com

Same issue here.

We have one Linux master running multiple slaves, some Windows, some Linux.

Linux slaves (VMware VMs) get a revert to snapshot using the vmware plugin, which is not possible for Windows slaves in our current environment.

To handle hanging processes and locked files, we disconnect each Windows slave after a single build, shut down and power on the VM.

Our problem is ever-growing workspace folders, allocating far more than 80 GB of disk space within about two weeks, as the async cleanup cannot run and remove old workspaces due to the premature disconnect and termination of the Java VM.

In our case the workspace cleanup plugin can rename old workspaces successfully (* -> *ws-cleanup[1234567890]+), but does not delete those old folders after successfully reconnecting the Windows slaves.

We get no errors logged with ws-cleanup plugin version 0.28 in our build logs.

skarlso777@gmail.com (JIRA)

unread,
Oct 9, 2015, 9:21:01 AM10/9/15
to jenkinsc...@googlegroups.com

Hi.

Same issue here. Looks like the async delete isn't happening fast enough or isn't happening at all. Most of the time, when our git checkout ran but the job failed early, the async cleanup did not have time to finish deleting the folder.

The other problem is usually an access denied error. Note: not a permission problem, but access, meaning the directory was still being used by some loose thread of a process.

For now, I'm inclined to create a script on the slaves and put it into a cron job which cleans up every folder with ws-cleanup in it. But that is not a really nice solution.

EDIT: And yes, same here, usually there is no error involved. Just a [WS-CLEANUP] Deleting project workspace... and no 'Done' in the logs anywhere.

ogondza@gmail.com (JIRA)

unread,
Oct 13, 2015, 3:04:14 AM10/13/15
to jenkinsc...@googlegroups.com

Tim-Christian Bloss Your use-case is quite special and was not taken into account originally.

Gergely Brautigam

Looks like the async delete isn't happening fast enough or isn't happening at all. Most of the time, when our git checkout ran but the job failed early, the async cleanup did not have time to finish deleting the folder.

I do not understand this. How does the problem manifest?

Note the async deletion failure causes should not go into a build log (as it runs in parallel with the build and can even run for longer than the build itself) but into a java logger. Check the Jenkins master log or configure a custom logger in the UI for hudson.plugins.ws_cleanup to see why it fails.

In the meantime, JENKINS-27648 was rejected in jenkins core, so I guess we have to implement a periodic task to clean dangling ws-cleanup dirs. That should resolve the problems with slaves disconnected unexpectedly and directories not deleted in time because of opened file descriptors.

skarlso777@gmail.com (JIRA)

unread,
Oct 13, 2015, 3:13:02 AM10/13/15
to jenkinsc...@googlegroups.com

The symptom is a lingering ws-cleanup folder with the timestamp which didn't get deleted. Btw, most of the time it manifests on Windows only; interestingly enough, it rarely happens on a Linux slave.

Yes, sorry, the access denied error message was in the Jenkins log.

However, the 'Done' part is in the job's log, and that one was obviously missing because of the exception in the main log.

skarlso777@gmail.com (JIRA)

unread,
Oct 13, 2015, 5:11:02 AM10/13/15
to jenkinsc...@googlegroups.com

Windows file locking?

Lacking enough evidence to say yea or nay. It is 'a' possibility.

Or even some other process locking a certain file. I even saw git lingering, and the .git folder could not be deleted in a RENAMED folder. So much crazy, such wow.

dbeck@cloudbees.com (JIRA)

unread,
Oct 13, 2015, 6:50:01 AM10/13/15
to jenkinsc...@googlegroups.com

some other process locking a certain file. I even saw git lingering

Sorry, that's what I meant. Too much locking going on and Windows doesn't allow deleting folders with open handles.

skarlso777@gmail.com (JIRA)

unread,
Oct 13, 2015, 7:09:06 AM10/13/15
to jenkinsc...@googlegroups.com

Affirmative. When it happens, it's usually because of Windows file locking.

fvissing@schneider-electric.com (JIRA)

unread,
Oct 28, 2015, 5:33:03 AM10/28/15
to jenkinsc...@googlegroups.com

I have a repro that can trigger this; not the same as the classic slave setup, but still:
Provision a Docker slave with the 'One retention strategy'.
Have a job that executes in the Docker container (my job has less than 100 MB of data).
Perform ws clean-up on success, fail etc.
Now, once the job completes, the Docker container is killed and the ws-cleanup_timestamp folder remains.
This job runs on Linux, so no relation to Windows here.
One option could be a checkbox to disable async delete.
The /home/jenkins/workspace folder is mounted in the Docker container using data volumes, therefore the data remains on the filesystem until we destroy the data volume.

The workaround for this is not to delete the workspace after building but prior to building.

jarkko.rantavuori@iki.fi (JIRA)

unread,
Nov 6, 2015, 3:25:02 AM11/6/15
to jenkinsc...@googlegroups.com

We get this all the time - for our test jobs, there are hundreds of cleanup folders appearing on our disk. This is an Ubuntu 14.04.1 64-bit slave, so at least for us it is not related to Windows file locking. Also, we have had "Delete workspace before build starts" selected, but it has not fixed the issue.

Our Jenkins version is 1.632, cleanup plugin 0.26.

Update: reverting to cleanup plugin 0.23, which didn't have async deletion, allowed us to see the error: we had a shell script section in the jobs which created a folder owned by root instead of jenkins, so the following runs of the job would fail since they weren't able to delete the previous workspace.

I think what needs to be done for the async plugin is to fail the build somehow if the deletion fails and report the error. Also, it would be nice if the user could select between async and sync delete instead of having to downgrade all the way to 0.23.

jens.doose@onwerk.de (JIRA)

unread,
Nov 30, 2015, 7:57:02 AM11/30/15
to jenkinsc...@googlegroups.com

I have the same behaviour in a non-slave environment; I don't know if this is the same or a new issue.
A lot of directories like workspace_ws-cleanup_1447277714463 get created and never deleted.

In the log of the Jenkins job there is output saying that everything is OK:
{{
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Done
}}

The job is building node projects and loading npm components as well as bower components, which results in quite huge directory structures; might that be a problem?

tomiphone3G@gmail.com (JIRA)

unread,
Jan 24, 2016, 7:56:04 AM1/24/16
to jenkinsc...@googlegroups.com

In fact, if you want to switch between async and sync mode, you can add Patterns, because this affects the cleanup method:

  • Add Patterns: sync mode
  • No Patterns: async mode

This is just a trick to see the cleanup problem without downgrading to 0.23.

tomiphone3G@gmail.com (JIRA)

unread,
Jan 24, 2016, 9:14:05 AM1/24/16
to jenkinsc...@googlegroups.com

And I have submitted a pull request to add a new option to switch between sync and async mode without downgrading: https://github.com/jenkinsci/ws-cleanup-plugin/pull/26

ogondza@gmail.com (JIRA)

unread,
Jan 25, 2016, 8:04:02 AM1/25/16
to jenkinsc...@googlegroups.com

Thomas Collignon, thanks for the workaround!

@All, please check the logged cause (hudson.plugins.ws_cleanup.Wipeout) in the slave/master log and attach it to this issue.

ogondza@gmail.com (JIRA)

unread,
Jan 26, 2016, 7:43:01 AM1/26/16
to jenkinsc...@googlegroups.com

It seems that people generally prefer to fail the build in case the cleanup fails rather than clutter the slave's workspace in the long run. This does not go well together with the asynchronous deletion requirement.

One way I can think of is to always use the sync approach from the post-build step and async only in the pre-build step. If the original (renamed) workspace is not gone by the end of the build, sync deletion will be reattempted and will have a chance to fail the build in case of problems. The advantage is that the build can start right away and most of the post-build actions will do their job (publish junit, determine result, etc.) before the final cleanup kicks in.

WDYT?
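
To make the idea above concrete, a rough sketch of the decision (all names hypothetical; this is not the plugin's actual code):

{code:java}
// Rough sketch of the proposal: async (rename + background delete) only in the pre-build
// step, synchronous deletion in the post-build step so a failure can still fail the build.
// All names here are hypothetical, not the ws-cleanup plugin's real implementation.
import java.io.File;
import java.io.IOException;

class CleanupStrategySketch {
    void cleanup(File workspace, boolean preBuild) throws IOException {
        if (preBuild) {
            // Rename the old workspace out of the way so the new build starts immediately,
            // then delete the renamed copy in the background.
            File renamed = new File(workspace.getParentFile(),
                    workspace.getName() + "_ws-cleanup_" + System.currentTimeMillis());
            if (workspace.renameTo(renamed)) {
                scheduleAsyncDelete(renamed);
                return;
            }
        }
        // Post-build (or a failed rename): delete synchronously so an IOException
        // can be reported against the build that owns the workspace.
        deleteRecursive(workspace);
    }

    void scheduleAsyncDelete(File dir) { /* hand off to a background executor */ }
    void deleteRecursive(File dir) throws IOException { /* e.g. hudson.Util.deleteRecursive */ }
}
{code}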

tomiphone3G@gmail.com (JIRA)

unread,
Jan 26, 2016, 8:26:04 AM1/26/16
to jenkinsc...@googlegroups.com

This may be a good thing, why not.
I ask myself whether it is possible, when async mode is used in post-build, to get the error and bring it back into the Jenkins job trace. In that case I think the boolean "asynchronously" option is necessary to let people choose the strategy.

ogondza@gmail.com (JIRA)

unread,
Jan 26, 2016, 8:55:03 AM1/26/16
to jenkinsc...@googlegroups.com

At a certain point Jenkins considers the build log to be complete and never amends it again. The problem is that in theory the cleanup can take longer than the build itself. We can either not care about the result at all (current implementation), wait for the result at the end of the build (potentially postponing the build completion), or try a compromise - check the status at the end of the build and report failure if there is any, but do not wait for completion (this cannot guarantee there will be no dangling workspace directories).

When I think about this further, once we rename the workspace (as we do for the async cleanup) it will always require manual cleanup in case of failure, as it will not be any build's workspace any longer. When the cleanup fails with the sync approach, it leaves the workspace half deleted to be reused by future builds - which is far from optimal too.

I am working on implementing a plugin-specific periodic task to clean up all the temporary directories (since the more general solution was rejected from core).
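
As an illustration of what such a periodic task could do - a sweep of a workspace root for leftover directories; the naming pattern is taken from the reports in this ticket, everything else is an assumption, not the plugin's implementation:

{code:java}
// Illustrative sketch of a periodic sweep for dangling "_ws-cleanup_" directories.
// The "_ws-cleanup_" naming comes from the reports above (e.g. workspace_ws-cleanup_1447277714463);
// the rest is an assumption, not the plugin's actual implementation.
import java.io.File;
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

import hudson.Util;

class DanglingWorkspaceSweeper {
    private static final Logger LOGGER = Logger.getLogger(DanglingWorkspaceSweeper.class.getName());

    void sweep(File workspaceRoot) {
        File[] leftovers = workspaceRoot.listFiles((dir, name) -> name.contains("_ws-cleanup_"));
        if (leftovers == null) {
            return; // not a directory, or not readable
        }
        for (File dir : leftovers) {
            try {
                Util.deleteRecursive(dir); // same helper the plugin itself uses
            } catch (IOException e) {
                LOGGER.log(Level.WARNING, "Failed to remove " + dir, e);
            }
        }
    }
}
{code}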

tomiphone3G@gmail.com (JIRA)

unread,
Jan 26, 2016, 12:20:03 PM1/26/16
to jenkinsc...@googlegroups.com

OK, I see.
Are you working on another plugin to clean up periodically, or do you think of adding this feature to this one? Do you need some help?

Right now, do you think adding the "asynchronously" option is not necessary?

ogondza@gmail.com (JIRA)

unread,
Jan 26, 2016, 2:10:06 PM1/26/16
to jenkinsc...@googlegroups.com

My original idea was that the async cleanup would be a performance optimization that is transparent to the user (whoever runs the builds). Which it is only partially.

I am extending this plugin with the periodic task. I understand the plugin becomes a lot more sophisticated than we hoped it to be, so I will implement a Jenkins-wide, property-based kill switch to get back to sync cleanup should it cause further problems.

I still do not think that a slave workspace getting full should be a concern of a user (as opposed to the instance administrator), so I prefer this over a per-job configuration option.
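
A Jenkins-wide, property-based kill switch of that kind is typically just a system-property check on the controller; a sketch (the property name is made up, not the one the plugin eventually shipped):

{code:java}
// Sketch of a property-based kill switch; the property name is hypothetical.
class AsyncCleanupSwitchSketch {
    // e.g. started with -Dhudson.plugins.ws_cleanup.disableAsyncCleanup=true
    private static final boolean DISABLE_ASYNC =
            Boolean.getBoolean("hudson.plugins.ws_cleanup.disableAsyncCleanup");

    boolean useAsyncCleanup() {
        return !DISABLE_ASYNC;
    }
}
{code}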

tomiphone3G@gmail.com (JIRA)

unread,
Jan 26, 2016, 5:17:07 PM1/26/16
to jenkinsc...@googlegroups.com

OK, I agree with you. So I'll wait for your new implementation.

Thanks for your answer.

bimp@bimparas.com (JIRA)

unread,
Apr 8, 2016, 2:49:02 PM4/8/16
to jenkinsc...@googlegroups.com

Has this issue been resolved? I am still seeing it in version 0.25 of the plugin. Thanks.

sven.schott@gmail.com (JIRA)

unread,
May 2, 2016, 7:47:04 PM5/2/16
to jenkinsc...@googlegroups.com

Hi, just wanting to know if there is a timeframe for resolution of this issue. I would like to know, just in case it's not in the near future, so that I can set up a semi-temporary workaround for the problem (most likely a scheduled cleanup on our Windows machines).

jwhitcraft@sugarcrm.com (JIRA)

unread,
May 19, 2016, 9:30:03 AM5/19/16
to jenkinsc...@googlegroups.com

+1 to a timeframe for fixing this.

I'm thinking of rolling back to 0.23 because of this problem.

mike.dimmick@mnetics.co.uk (JIRA)

unread,
May 19, 2016, 12:13:03 PM5/19/16
to jenkinsc...@googlegroups.com

I have been experiencing this problem on a Windows 7 Enterprise installation. During manual clean-up I noticed that even using the command-line rmdir /s I was getting a lot of 'The directory is not empty' errors, meaning I had to run the command twice to complete removal.

I also noticed that a large amount of disk space was being consumed by Windows Search content indexing data. I disabled the Windows Search service and deleted the content indexes. Having done this, I'm no longer getting the errors from rmdir. I can also delete workspaces from the Jenkins 'Wipe Out Current Workspace' feature without problems, which previously was typically reporting an error.

On this system, JENKINS_HOME is at C:\Users\jenkins\.jenkins. We also have an installation of Jenkins on Windows Server 2012 R2 which doesn't suffer from the problem - there, JENKINS_HOME is C:\Jenkins, and the Windows Search feature is not installed.

dbeck@cloudbees.com (JIRA)

unread,
May 19, 2016, 3:43:01 PM5/19/16
to jenkinsc...@googlegroups.com

Windows Search content indexing data. I disabled the Windows Search service

Don't run the Windows Search on JENKINS_HOME. Don't run antivirus on JENKINS_HOME. No exceptions.

ogondza@gmail.com (JIRA)

unread,
Jul 14, 2016, 10:24:02 AM7/14/16
to jenkinsc...@googlegroups.com

I am having a second look at this: https://github.com/jenkinsci/ws-cleanup-plugin/pull/28

It implements an administrative monitor that retries the deletion and reports failures in the UI.
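
For readers not familiar with administrative monitors: roughly, such an extension looks like the sketch below (a generic example, not the code from the pull request):

{code:java}
// Generic sketch of a Jenkins administrative monitor; not the code from PR #28.
import hudson.Extension;
import hudson.model.AdministrativeMonitor;

@Extension
public class LeftoverWorkspaceMonitorSketch extends AdministrativeMonitor {
    @Override
    public boolean isActivated() {
        // Show a warning on the "Manage Jenkins" page whenever there are workspace
        // directories whose asynchronous deletion keeps failing.
        return countOfFailedDeletions() > 0;
    }

    private int countOfFailedDeletions() {
        return 0; // placeholder; the real implementation tracks failed disposals
    }
}
{code}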


scm_issue_link@java.net (JIRA)

unread,
Oct 20, 2016, 9:57:08 AM10/20/16
to jenkinsc...@googlegroups.com

Code changed in jenkins
User: Oliver Gondža
Path:
pom.xml
src/main/java/hudson/plugins/ws_cleanup/Wipeout.java
src/test/java/hudson/plugins/ws_cleanup/CleanupPowermockTest.java
src/test/java/hudson/plugins/ws_cleanup/CleanupTest.java
http://jenkins-ci.org/commit/ws-cleanup-plugin/ccd907188e0489e76ca21a7513843864bab48c90
Log:
[FIXED JENKINS-24824] Collect all asynchronously deleted directories

ogondza@gmail.com (JIRA)

unread,
Nov 1, 2016, 11:38:02 AM11/1/16
to jenkinsc...@googlegroups.com
Oliver Gondža commented on Bug JENKINS-24824
 
Re: Asynchronous cleanup not removing renamed workspace directories on slaves

This took a lot longer than I expected. The fix will be in 0.32.

gstock.public@gmail.com (JIRA)

unread,
Nov 9, 2016, 1:14:06 PM11/9/16
to jenkinsc...@googlegroups.com
aflat commented on Bug JENKINS-24824

I'm still hitting this on a Solaris x86 machine. It seems to work on other OSes.

java.io.IOException: Unable to delete '/opt/jenkins/workspace/myjob-Solaris_ws-cleanup_1478635926754/local/.git/refs/remotes/origin'. Tried 3 times (of a maximum of 3) waiting 0.1 sec between attempts.
at hudson.Util.deleteFile(Util.java:248)
at hudson.FilePath.deleteRecursive(FilePath.java:1209)
at hudson.FilePath.deleteContentsRecursive(FilePath.java:1218)
at hudson.FilePath.deleteRecursive(FilePath.java:1200)
at hudson.FilePath.deleteContentsRecursive(FilePath.java:1218)
at hudson.FilePath.deleteRecursive(FilePath.java:1200)
at hudson.FilePath.deleteContentsRecursive(FilePath.java:1218)
at hudson.FilePath.deleteRecursive(FilePath.java:1200)
at hudson.FilePath.deleteContentsRecursive(FilePath.java:1218)
at hudson.FilePath.deleteRecursive(FilePath.java:1200)
at hudson.FilePath.deleteContentsRecursive(FilePath.java:1218)
at hudson.FilePath.deleteRecursive(FilePath.java:1200)
at hudson.FilePath.deleteContentsRecursive(FilePath.java:1218)
at hudson.FilePath.deleteRecursive(FilePath.java:1200)
at hudson.FilePath.access$1000(FilePath.java:195)
at hudson.FilePath$14.invoke(FilePath.java:1179)
at hudson.FilePath$14.invoke(FilePath.java:1176)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2731)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ......remote call to sun04(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1435)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
at hudson.remoting.Channel.call(Channel.java:795)
at hudson.FilePath.act(FilePath.java:985)
at hudson.FilePath.act(FilePath.java:974)
at hudson.FilePath.deleteRecursive(FilePath.java:1176)
at hudson.plugins.ws_cleanup.Wipeout$DisposableImpl.dispose(Wipeout.java:110)
at org.jenkinsci.plugins.resourcedisposer.AsyncResourceDisposer$WorkItem.run(AsyncResourceDisposer.java:254)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.file.DirectoryNotEmptyException: /opt/jenkins/workspace/myjob-Solaris_ws-cleanup_1478635926754/local/.git/refs/remotes/origin
at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:242)
at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
at java.nio.file.Files.deleteIfExists(Files.java:1118)
at hudson.Util.tryOnceDeleteFile(Util.java:287)
at hudson.Util.deleteFile(Util.java:243)
at hudson.FilePath.deleteRecursive(FilePath.java:1209)
at hudson.FilePath.deleteContentsRecursive(FilePath.java:1218)
at hudson.FilePath.deleteRecursive(FilePath.java:1200)
at hudson.FilePath.deleteContentsRecursive(FilePath.java:1218)
at hudson.FilePath.deleteRecursive(FilePath.java:1200)
at hudson.FilePath.deleteContentsRecursive(FilePath.java:1218)
at hudson.FilePath.deleteRecursive(FilePath.java:1200)
at hudson.FilePath.deleteContentsRecursive(FilePath.java:1218)
at hudson.FilePath.deleteRecursive(FilePath.java:1200)
at hudson.FilePath.deleteContentsRecursive(FilePath.java:1218)
at hudson.FilePath.deleteRecursive(FilePath.java:1200)
at hudson.FilePath.deleteContentsRecursive(FilePath.java:1218)
at hudson.FilePath.deleteRecursive(FilePath.java:1200)
at hudson.FilePath.access$1000(FilePath.java:195)
at hudson.FilePath$14.invoke(FilePath.java:1179)
at hudson.FilePath$14.invoke(FilePath.java:1176)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2731)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
java.io.IOException: Unable to delete '/opt/jenkins/workspace/myjob-Solaris_ws-cleanup_1478635926754/local/.git/refs/remotes/origin'. Tried 3 times (of a maximum of 3) waiting 0.1 sec between attempts.

juan.facorro@gmail.com (JIRA)

unread,
Dec 19, 2016, 7:36:05 AM12/19/16
to jenkinsc...@googlegroups.com

We are having the same issue on CentOS Linux release 7.2.1511 (Core). The plugin version is 0.32.

Directories seem to be marked for cleanup, but their owner, instead of being jenkins, is another user.

gjphilp@gmail.com (JIRA)

unread,
Jan 9, 2017, 6:50:02 PM1/9/17
to jenkinsc...@googlegroups.com
Gregor Philp updated an issue
 
Change By: Gregor Philp
Attachment: Screen Shot 2017-01-09 at 3.46.57 PM.png

gjphilp@gmail.com (JIRA)

unread,
Jan 9, 2017, 6:51:01 PM1/9/17
to jenkinsc...@googlegroups.com
Gregor Philp commented on Bug JENKINS-24824
 
Re: Asynchronous cleanup not removing renamed workspace directories on slaves

Hi, we have the same issue still on CentOS Linux 6 and 7 platforms. We'd like these to just be removed, not merely reported so that we have to manually remove them. We end up with 100s, 1000s of these.
!Screen Shot 2017-01-09 at 3.46.57 PM.png|thumbnail!

We have the latest plugin versions:
Workspace Cleanup Plugin - 0.32
Resource Disposer Plugin - 0.3
and are running jenkins master - 2.19.4

It seems this might be related to the problem we're having of the Java heap being used up, after which our master fails. I then have to restart the master.

thanks
Gregor

ogondza@gmail.com (JIRA)

unread,
Jan 10, 2017, 3:22:02 AM1/10/17
to jenkinsc...@googlegroups.com

Gregor Philp, the screenshot demonstrates how it is supposed to work. Jenkins tries to delete the directory, but as it fails repeatedly, the item is tracked in the resource disposer until the directory is finally gone. It seems that never happens in your case. It is reported for your attention because something prevents the directory from being deleted for a long time. The cause needs to be found and eliminated in your case. Hover over the exception message to see the full exception, which might contain the clue.

stephan_fenton@symantec.com (JIRA)

unread,
Mar 30, 2017, 3:42:08 PM3/30/17
to jenkinsc...@googlegroups.com

I am still experiencing the problem, and in my case I noticed that the build process is creating soft links that are owned by root. The cleanup process is not able to delete these, so it tracks them in the resource disposer. I am working with the developer to find how those links are being created, but this is something that should be addressed in the plugin as well.


ogondza@gmail.com (JIRA)

unread,
Mar 30, 2017, 4:09:03 PM3/30/17
to jenkinsc...@googlegroups.com

Stephan Fenton, note that this has nothing to do with the plugin or async deletion. The build just creates stuff the Jenkins agent has no permission to clean.

The only thing I can think of to improve this is to turn the async deletion off in case the project has a bad record on workspace deletion, so such persistent problems will eventually be presented to the job owners and not the admins.

ing.comp.ibarra@gmail.com (JIRA)

unread,
Apr 13, 2018, 3:26:02 PM4/13/18
to jenkinsc...@googlegroups.com
Cesar Ibarra commented on Bug JENKINS-24824
 
Re: Asynchronous cleanup not removing renamed workspace directories on slaves

Can anyone explain to me how they resolved this?
I am using Jenkins 2.107.2 with ws-cleanup 0.34 and I am also experiencing the issue of a lingering ws-cleanup folder with a timestamp which didn't get deleted.
I am running Jenkins on Ubuntu 16.04.

Any help on how to solve this?

ogondza@gmail.com (JIRA)

unread,
Apr 13, 2018, 3:35:02 PM4/13/18
to jenkinsc...@googlegroups.com

Guys, this ticket was resolved a year and a half ago. Please file a separate issue.

dirk.heinrichs@recommind.com (JIRA)

unread,
Aug 14, 2018, 5:31:02 AM8/14/18
to jenkinsc...@googlegroups.com

But the problem still persists (here too, BTW). So why file a new issue for the VERY SAME problem? Reopening a ticket is common practice in such cases.


adam.brousseau88@gmail.com (JIRA)

unread,
Aug 21, 2018, 3:57:03 PM8/21/18
to jenkinsc...@googlegroups.com

We also have this issue but only on Windows slaves (connected through Cygwin SSH).

zakharovdi@gmail.com (JIRA)

unread,
Aug 24, 2018, 5:00:02 AM8/24/18
to jenkinsc...@googlegroups.com

We have the same issue: Jenkins 2.121.1 in a Docker container (workspace on a volume), ws-cleanup 0.34.

ogondza@gmail.com (JIRA)

unread,
Aug 24, 2018, 9:52:04 AM8/24/18
to jenkinsc...@googlegroups.com
Oliver Gondža closed an issue as Fixed
 

Guys, this ticket was resolved a year and a half ago. Please file a separate issue.

Change By: Oliver Gondža
Status: Resolved Closed

michaelaervin@lavorotechnologies.com (JIRA)

unread,
Aug 31, 2019, 12:21:02 PM8/31/19
to jenkinsc...@googlegroups.com

My workspace directory is still getting filled with ws-cleanup directories, causing Jenkins to spike all CPU cores and crash.

The plugin version is 0.37.
