Hi, we have noticed that locks are not handled / cleaned up properly during parallel phase executions. Sample pipeline script:
node {
    milestone()
    lock(resource: "my_bld_lock", inversePrecedence: true) {
        milestone()
        stage("Bld") {
            sleep 5
        }
    }
    milestone()
    parallel([
        "Testing": {
            lock(resource: "my_test_lock", inversePrecedence: true) {
                stage("Test") {
                    // error "error"
                    sleep 10
                }
            }
        },
        "Second Level Testing": {
            lock(resource: "my_second_test_lock", inversePrecedence: true) {
                stage("Second Level Test") {
                    sleep 10
                }
            }
        },
        "Deployment": {
            lock(resource: "my_deploy_lock", inversePrecedence: true) {
                stage("Deploy") {
                    sleep 90
                }
            }
        },
        failFast: true
    ])
}
Scenario: First trigger a pipeline build with the above script, then trigger a second build with the error "error" line un-commented. You will see that the first build is killed (ABORTED or NOT_BUILT) as soon as the second build errors out. The first build's console shows that it was superseded by the second build, which obviously should not be the case: the second build was still waiting for the "my_deploy_lock" lock, so it should not have killed the build that held that lock.
Note: failFast is required in this case; the problem is that the errored build kills another build.
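For clarity, the only change needed in the second build's script is activating the commented-out error line in the "Testing" branch, so that branch fails while the first build's "Deploy" stage still holds its lock:

```groovy
"Testing": {
    lock(resource: "my_test_lock", inversePrecedence: true) {
        stage("Test") {
            // un-commented for the second build: this branch now fails
            // immediately, and failFast aborts the sibling branches
            error "error"
            sleep 10
        }
    }
},
```

With failFast: true, the expectation is that only the sibling branches of the *same* build are aborted; instead, the first build (the one actually holding "my_deploy_lock") gets reported as superseded and killed.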