A job/pipeline uses a custom workspace. The job is set up so that every build creates its own workspace inside the main/default workspace. So, if the actual workspace is "D:\jenkins\workspace\<job_name>", each run of the job/pipeline creates its own workspace like
"D:\jenkins\workspace\<job_name>\1"
"D:\jenkins\workspace\<job_name>\2"
"D:\jenkins\workspace\<job_name>\3"
.
.
.
This workspace isolation is implemented per a requirement.
If the job is successful, these workspaces are deleted in a stage using deleteDir(). But if the job fails, I leave them for post-mortem analysis.
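For reference, the success-time cleanup looks roughly like this (a sketch; the exact per-build path, here assumed to be `WORKSPACE\BUILD_NUMBER`, depends on how the custom workspace is configured):

```groovy
post {
    success {
        // Remove this build's dedicated workspace on success.
        // Assumes the per-build directory is named after the build number.
        dir("${env.WORKSPACE}\\${env.BUILD_NUMBER}") {
            deleteDir()
        }
    }
    // On failure, the directory is intentionally left for post-mortem.
}
```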
Is there any standard/recommended way to clean up the entire workspace on a schedule (e.g. once a month)? I have a pool of slaves, and the scheduled job should clean up on all of them.
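One approach I am considering is a separate cron-triggered pipeline that visits each agent and wipes the leftover per-build directories. A sketch, assuming all agents carry a shared label `build-agents` (hypothetical) and that the `nodesByLabel` step from the Pipeline Utility Steps plugin is available:

```groovy
pipeline {
    agent none
    // 'H H 1 * *' runs roughly once a month, on the first day
    triggers { cron('H H 1 * *') }
    stages {
        stage('Clean all agents') {
            steps {
                script {
                    // nodesByLabel is provided by the Pipeline Utility Steps plugin
                    def agents = nodesByLabel label: 'build-agents', offline: false
                    for (agentName in agents) {
                        node(agentName) {
                            // Delete the leftover per-build workspaces on this agent.
                            // The path placeholder matches the layout described above.
                            dir("D:\\jenkins\\workspace\\<job_name>") {
                                deleteDir()
                            }
                        }
                    }
                }
            }
        }
    }
}
```

Is something like this the recommended pattern, or is there a built-in/plugin-based alternative (e.g. the Workspace Cleanup plugin) better suited to cleaning a whole pool of slaves?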