Background

I think I've reached the limits of what's possible in native scripted pipeline, without updating any plugins, using the Lockable Resources plugin as-is. Recently I answered a question around using lockable resources and lockable resource limits similar to this issue. I came up with a solution, but it's still not great. I guess I need to look more into what it takes to develop this into a plugin. This is a significant gap in Jenkins' ability to do large-depth parallelism while maintaining limits across a matrix of builds. You can see my reply which prompted me to develop this custom withLocks step: http://sam.gleske.net/blog/engineering/2020/03/29/jenkins-parallel-conditional-locks.html

Custom step source

withLocks custom pipeline step for shared pipeline libraries.

Usage of custom step

Obtain two locks.
withLocks(['foo', 'bar']) {
    // some code runs after both foo and bar locks are obtained
}
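Assuming the step wraps the lock step from the Lockable Resources plugin, obtaining multiple locks this way is roughly equivalent to nesting one lock step per name. This is a sketch of the assumed behavior, not the step's actual source:

```groovy
// Roughly what withLocks(['foo', 'bar']) { ... } is assumed to do:
// nest lock steps so the body runs only once every lock is held.
lock('foo') {
    lock('bar') {
        // some code runs after both foo and bar locks are obtained
    }
}
```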
Obtain one lock with parallel limits. The index is evaluated against the limit using a modulo operation in order to cap parallelism, similar to the workaround in my color-lock example. Note: if you specify multiple locks with limit and index, then the same limits apply to all locks. The next example shows how to limit specific locks without setting limits for all locks.
Map tasks = [failFast: true]
for(int i = 0; i < 5; i++) {
    int taskInt = i
    tasks["Task ${taskInt}"] = {
        stage("Task ${taskInt}") {
            withLocks(obtain_lock: 'foo', limit: 3, index: taskInt) {
                echo 'This is an example task being executed'
                sleep(30)
            }
            echo 'End of task execution.'
        }
    }
}
stage("Parallel tasks") {
    parallel(tasks)
}
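To make the modulo behavior concrete: the limit/index pair presumably maps each task onto one of limit distinct lock names, so at most limit tasks can hold a lock at once. The naming scheme below (suffixing the lock name with index modulo limit) is an assumption for illustration; the real step may derive names differently:

```groovy
// Assumed mapping for withLocks(obtain_lock: 'foo', limit: 3, index: index):
// the step locks "foo${index % limit}", e.g.
//   index 0 -> foo0, 1 -> foo1, 2 -> foo2, 3 -> foo0, 4 -> foo1
// Five tasks contend for only three lock names, so at most three run at once.
int limit = 3
(0..4).each { int index ->
    lock("foo${index % limit}") {
        // at most 'limit' of these closures execute concurrently
    }
}
```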
Obtain both the foo and bar locks; only proceed once both locks have been obtained simultaneously. However, limit foo to 3 simultaneous holders. When specifying multiple locks, you can pass in a setting named after the lock plus an _limit or _index suffix to define behavior for just that lock. In the following scenario, the first three tasks race for the foo lock with limits and also wait on bar before executing. The remaining two tasks wait on just foo with limits. As an ordering recommendation, put foo first in the locks list so that any limited tasks not blocked by bar can execute right away. Please note: when using multiple locks this way, there's actually a performance difference between listing foo before bar versus reversing the order. I have no control over this; it just appears to be a severe limitation in how pipeline handles the CPS sequence.
Map tasks = [failFast: true]
for(int i = 0; i < 5; i++) {
    int taskInt = i
    tasks["Task ${taskInt}"] = {
        List locks = ['foo', 'bar']
        if(taskInt > 2) {
            locks = ['foo']
        }
        stage("Task ${taskInt}") {
            withLocks(obtain_lock: locks, foo_limit: 3, foo_index: taskInt) {
                echo 'This is an example task being executed'
                sleep(30)
            }
            echo 'End of task execution.'
        }
    }
}
stage("Parallel tasks") {
    parallel(tasks)
}
You may need to quote the setting name depending on the characters used. For example, if a lock name contains a special character other than an underscore, then the derived setting key must be quoted.
withLocks(obtain_lock: ['hello-world'], 'hello-world_limit': 3, ...) ...
If you want locks printed out for debugging purposes, you can use the printLocks option. It simply echoes the locks it will attempt to obtain in the parallel stage.
withLocks(..., printLocks: true, ...) ...
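To tie the option handling together, here is a minimal sketch of how such a step could look as a shared library global variable (vars/withLocks.groovy). This is an illustration assembled from the documented options (obtain_lock, limit, index, per-lock <name>_limit/<name>_index, printLocks), not the author's actual source, and the modulo naming scheme is an assumption:

```groovy
// vars/withLocks.groovy -- hypothetical sketch, not the real implementation.
def call(Map settings = [:], Closure body) {
    // Normalize obtain_lock into a list of lock names.
    List locks = [settings.obtain_lock].flatten().findAll { it }

    // Resolve each lock name: prefer the per-lock "<name>_limit"/"<name>_index"
    // settings, fall back to the global limit/index, and apply a modulo suffix.
    List resolved = locks.collect { String name ->
        def limit = settings.containsKey(name + '_limit') ? settings[name + '_limit'] : settings.limit
        def index = settings.containsKey(name + '_index') ? settings[name + '_index'] : settings.index
        (limit != null && index != null) ? "${name}${(index as int) % (limit as int)}" : name
    }

    if(settings.printLocks) {
        echo "Obtaining locks: ${resolved.join(', ')}"
    }

    // Nest one lock step per resolved name, first name outermost,
    // so the body runs only once every lock is held.
    Closure nested = body
    resolved.reverse().each { String name ->
        Closure inner = nested
        nested = { -> lock(name) { inner() } }
    }
    nested()
}
```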