Hi, I'm pretty sure we have a dedicated workspace per slave. I am running a pipeline job with the following setup:

- the master polls on the Jenkinsfile in its own workspace on node A
- the slaves build inside the pipeline and do their own polling on nodes B and C

Node C has been disabled, but I still get unwanted polling. Looking at the output of the Perforce plugin in detail, I do see strange things:

------------------------
[Pipeline] stage (Build)
Using the 'stage' step without a block argument is deprecated
Entering stage Build
Proceeding
[Pipeline] echo
Bulding <snip>
[Pipeline] node
Still waiting to schedule task
Waiting for next available executor on docker
Running on nodeB in <snip>
[Pipeline] {
[Pipeline] echo
perforce depot path: "<snip>"
perforce view: "jenkins-<job>-nodeB"
perforce view spec: "<snip>"
perforce populate type: "AutoCleanImpl"
[Pipeline] checkout
... p4 client -o jenkins-<job>-nodeB +
... p4 info +
P4 Task: establishing connection.
... server: <snip>
... node: nodeA
... p4 client -o jenkins-<job>-nodeB +
... p4 client -i +
... client: jenkins-<job>-nodeB
... p4 client -o jenkins-<job>-nodeB +
... p4 info +
... p4 counter change +
... p4 changes -m1 -ssubmitted //jenkins-<job>-nodeB/... +
Building on Node: master
... p4 client -o jenkins-<job>-nodeB +
... p4 info +
P4 Task: establishing connection.
... server: <snip>
... node: nodeB
P4 Task: reverting all pending and shelved revisions.
... p4 revert <snip> +
... rm [abandoned files]
duration: (3ms)
P4 Task: skipping clean, no options set.
P4 Task: syncing files at change: <snip>
... p4 sync <snip> +
... p4 client -i +
duration: 0m 2s
P4 Task: saving built changes.
... p4 client -o jenkins-<job>-nodeB +
... p4 info +
... p4 changes -m100 //jenkins-<job>-nodeB/...<snip> +
------------------------

Notice that for the same workspace belonging to nodeB, the master node (nodeA) establishes a connection during the pipeline. Could this be causing problems somehow? Is this intended behavior?
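
For reference, the layout described above looks roughly like the sketch below. This is not my actual Jenkinsfile: the 'docker' label, credential id, depot path, and the exact p4sync parameters are placeholders/assumptions, but the shape (old-style stage step, per-node checkout with a per-node client name and AutoCleanImpl populate) matches the log:

    // Rough sketch only; names and parameters are placeholders.
    stage 'Build'            // non-block 'stage' step, hence the deprecation warning
    echo 'Building'

    node('docker') {         // gets scheduled on nodeB or nodeC
        // Each slave checks out into its own client, e.g. "jenkins-<job>-nodeB",
        // and does its own polling on that view.
        p4sync credential: 'p4-creds',                     // placeholder credential id
               depotPath: '//depot/path',                  // placeholder depot path
               format: 'jenkins-${JOB_NAME}-${NODE_NAME}', // per-node client name
               populate: [$class: 'AutoCleanImpl']
        // ... build steps ...
    }
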