Declarative pipelines per branch and reusable stages (keeping it DRY)

Kenneth Brooks

Apr 11, 2017, 11:43:49 AM
to Jenkins Users
TL;DR up front:
As a user, I want a pipeline that performs specific stages based on the branch. Recommendation: allow a when{} condition at the pipeline level rather than only per stage.
As a user, I want to declare my stages but keep the implementation separate, so that I can reuse them in multiple pipelines.

Currently, Declarative syntax can perform a stage conditionally using 'when', but not a whole pipeline.
This makes the pipeline fairly inflexible and much harder to read through.

Take for example:

pipeline {
  stages {
    stage('Build') {
      when { branch "develop || master || feature" } // not the real syntax, I know
      steps { /* do some build stuff */ }
    }
    stage('Scan') {
      when { branch "master" }
      steps { /* run static code analysis or other code scanning */ }
    }
    stage('Pull Request Build') {
      when { branch "PR-*" }
      steps { /* do a merge build stuff */ }
    }
    stage('Dev Deploy') {
      when { branch "develop || master" }
      steps { /* deploy to dev */ }
    }
    stage('Pull Request Deploy') {
      when { branch "PR-*" }
      steps { /* deploy to special PR sandbox */ }
    }
  }
}


In this simple example, the following will happen, but it is extremely hard to follow.

Feature -> Build
Master -> Build, Scan, Dev Deploy
Develop -> Build, Dev Deploy
Pull Request -> Pull Request Build, Pull Request Deploy

I would suggest we allow the when to be placed at the pipeline level somehow.


pipeline('master') { // Just for naming
  when { branch "master" }
  stages {
    stage('Build') {
      steps { /* do some build stuff */ }
    }
    stage('Scan') {
      steps { /* run static code analysis or other code scanning */ }
    }
    stage('Dev Deploy') {
      steps { /* deploy to dev */ }
    }
  }
}

pipeline('develop') { // Just for naming
  when { branch "develop" }
  stages {
    stage('Build') {
      steps { /* do some build stuff */ }
    }
    stage('Dev Deploy') {
      steps { /* deploy to dev */ }
    }
  }
}

pipeline('pull request') { // Just for naming
  when { branch "PR-*" }
  stages {
    stage('Pull Request Build') {
      steps { /* do a merge build stuff */ }
    }
    stage('Pull Request Deploy') {
      steps { /* deploy to special PR sandbox */ }
    }
  }
}

pipeline('feature') { // Just for naming
  when { branch != "master || PR-* || develop" } // just do a build for any 'other' branches, which would then include developer feature branches
  stages {
    stage('Build') {
      steps { /* do some build stuff */ }
    }
  }
}


That, to me, is much cleaner. It is very easy to see exactly what each pipeline is doing.
This brings one downside: the stages are repeated.
stage('Build') and stage('Dev Deploy') have the same implementation, but I have to write them twice.
I could create a global library, but that has two other downsides: the code in a global library is no longer Declarative syntax, and the library is loaded externally, so I now have to go to a whole other file to see the implementation.

To keep things DRY, I would also like to see stages treated as a definition and an implementation:
define the stages external to the pipeline, but pull them into each pipeline.

This could be optional (as you'll see with the Pull Request stages, which stay inline).

Here is what I believe the combination of the two would look like:


pipeline('master') { // Just for naming
  when { branch "master" }
  stages {
    stage('Build')
    stage('Scan')
    stage('Dev Deploy')
  }
}

pipeline('develop') { // Just for naming
  when { branch "develop" }
  stages {
    stage('Build')
    stage('Dev Deploy')
  }
}

pipeline('pull request') { // Just for naming
  when { branch "PR-*" }
  stages {
    stage('Pull Request Build') {
      steps { /* do a merge build stuff */ }
    }
    stage('Pull Request Deploy') {
      steps { /* deploy to special PR sandbox */ }
    }
  }
}

pipeline('feature') { // Just for naming
  when { branch != "master || PR-* || develop" } // just do a build for any 'other' branches, which would then include developer feature branches
  stages {
    stage('Build')
  }
}

/* Stage definitions below */
stage('Build') {
  steps { /* do some build stuff */ }
}

stage('Scan') {
  steps { /* run static code analysis or other code scanning */ }
}

stage('Dev Deploy') {
  steps { /* deploy to dev */ }
}


Is there a way to do this with the current declarative syntax?
If not, what is the best way to get this into the declarative syntax? Open JIRA enhancement requests?

What we've resorted to in the meantime (which still doesn't solve the DRY part) is a Jenkinsfile that does the if logic and then loads a branch-specific pipeline (which has its own demons, because load evaluates the file immediately and holds onto a heavyweight executor the whole time):

if (env.BRANCH_NAME.startsWith("develop")) {
    load 'develop-pipeline.groovy'
} else if (env.BRANCH_NAME.startsWith("master")) {
    load 'master-pipeline.groovy'
} else if (env.BRANCH_NAME.startsWith("PR-")) {
    load 'pull-request-pipeline.groovy'
} else {
    load 'feature-pipeline.groovy'
}

Thanks,
Ken


jer...@bodycad.com

Apr 11, 2017, 3:57:56 PM
to Jenkins Users
Upvote for the forward declaration of stages/steps, with the assembly given later (I would just invert the third example: declare the implementations first, then use them).

Patrick Wolf

Apr 12, 2017, 4:53:23 PM
to Jenkins Users
Feel free to open a JIRA ticket, but I'm not a huge fan of this: it is counter to the KISS principle we wanted with Declarative, and it breaks the Blue Ocean editor. We have discussed having multiple "stages" blocks but rejected that because it quickly becomes needlessly complex without adding any use-case coverage. IMO, having multiple "stages" makes much more sense than having multiple "pipelines"; otherwise you have to recreate all of the agent, environment, libraries, options, parameters, etc. for each pipeline, which leads to wanting those sections to be DRY as well, and Declarative pretty much falls apart completely.

BTW, it is already possible to have multiple 'pipeline' closures in a single Jenkinsfile, but they will be treated as parts of a whole Pipeline, and this cannot be used in the editor. Because the Jenkinsfile is treated as one continuous Pipeline, anything outside the pipeline closures is interpreted as Scripted Pipeline. This means you can use 'if' blocks around the separate 'pipeline' blocks instead of using 'load', if you choose, though keeping them in separate files makes maintenance easier, I think.

if (BRANCH_NAME.startsWith("develop")) {
    pipeline { .... }
}

Also, it's worth noting that 'readTrusted' probably works better than 'load', because it takes the committer into account and it doesn't require a workspace.
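[Editor's note: a minimal sketch of the difference, reusing the file names from the earlier snippet.]

```groovy
// 'load' evaluates the file but needs a workspace, and therefore a node:
node {
    load 'develop-pipeline.groovy'
}

// 'readTrusted' only returns the file's text (no workspace needed),
// so evaluating it is a separate, explicit step:
evaluate(readTrusted('develop-pipeline.groovy'))
```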


As for DRY stages there are several ways to accomplish this with Pipeline.

1. Shared Library and Resources - This is the preferred method of creating DRY routines

You create a global variable that has all of the steps you want (with appropriate variable replacement for environment variables). For example, you could have a build.groovy global variable in the /vars directory that does all of your build steps. Then the steps in your stage can be a single line.
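[Editor's note: a minimal sketch of that layout; the name mavenBuild is hypothetical, chosen to avoid clashing with the built-in 'build' step.]

```groovy
// vars/mavenBuild.groovy in the shared library
def call() {
    sh 'mvn clean compile'
}
```

In the Jenkinsfile, the stage then shrinks to a single line: steps { mavenBuild() }.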

Alternatively, you can store shell scripts in the /resources directory of your shared library and run those in your steps without having to duplicate anything:
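[Editor's note: a sketch of the resources approach, assuming a scripts/build.sh checked into the library's resources/ directory.]

```groovy
// inside a stage's steps {} block
script {
    // libraryResource returns the text of resources/scripts/build.sh
    def buildScript = libraryResource 'scripts/build.sh'
    writeFile file: 'build.sh', text: buildScript
    sh 'bash build.sh'
}
```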


2. You can define your steps directly in the Jenkinsfile at the top level, either as strings or methods, and simply call that method from within each pipeline.
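[Editor's note: a sketch of the method variant; the method name is hypothetical.]

```groovy
// top of the Jenkinsfile, outside the pipeline {} block (Scripted context)
def mavenBuild() {
    sh 'mvn clean compile'
}

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script { mavenBuild() }  // reuse the same method in every pipeline
            }
        }
    }
}
```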

3. You can define your steps in a configuration file, as properties or YAML, and load those files using the Pipeline Utility Steps plugin. https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Utility+Steps+Plugin
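[Editor's note: a sketch using readYaml from that plugin; the file name and key are hypothetical.]

```groovy
// build-config.yaml in the repo might contain:
//   buildCommand: mvn clean compile
script {
    def config = readYaml file: 'build-config.yaml'
    sh config.buildCommand
}
```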

To sum up, I think having different stages is worth discussing (it is not going to be implemented in the short term), but there are already many existing ways to make Pipelines DRY.

Kenneth Brooks

Apr 12, 2017, 6:48:06 PM
to Jenkins Users
Quickly tried out your suggestion of putting the pipeline directly inside the if. It sees the stages, but something is not right:
it doesn't see/execute the environment section, which means the credentials('cred') binding isn't loaded.

Here is what I see when the pipeline {} is on its own:
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withCredentials
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Feature Build)

Here is what I see when I wrap a simple if around the pipeline:

[Pipeline] stage
[Pipeline] { (Feature Build)
[Pipeline] node

Am I missing something or should I file a bug?

Kenneth Brooks

Apr 12, 2017, 6:52:34 PM
to Jenkins Users
Oh, and thanks for the tip on readTrusted! Totally missed that one. And yes, load is problematic because it requires the workspace (and thus a node).



Kenneth Brooks

Apr 13, 2017, 11:02:06 AM
to Jenkins Users
Created a bug for it with evidence and jenkins plugin info: https://issues.jenkins-ci.org/browse/JENKINS-43576

Kenneth Brooks

Apr 13, 2017, 4:28:02 PM
to Jenkins Users
Got a chance to play around with some of the options for DRYing out my pipelines.

We are already using shared libraries (#1 below) very extensively, but that is for truly enterprise-wide shared functionality.
I think having teams write those just for steps isn't ideal; they don't want the extra overhead of another place to maintain.

I took a stab at #2 below.

I was able to define the steps as a closure and then pull them into the pipeline.
Jenkinsfile:
/* Step Implementations */

buildSteps = {
    sh 'mvn clean compile'
}

/* Stage Implementations */

featureBuildStage = {
    agent { label "java-1.8.0_45 && apache-maven-3.2.5 && node4" }
    steps {
        sh 'mvn clean compile'
    }
}

/* Load branch specific Declarative Pipeline */

if (env.BRANCH_NAME.startsWith("develop")) {
  evaluate(readTrusted('develop-pipeline.groovy'))
} else if (env.BRANCH_NAME.startsWith("master")) {
  evaluate(readTrusted('master-pipeline.groovy'))
} else if (env.BRANCH_NAME.startsWith("PR-")) {
  evaluate(readTrusted('pull-request-pipeline.groovy'))
} else {
  evaluate(readTrusted('feature-pipeline.groovy'))
}

feature-pipeline.groovy:
pipeline {
    stages {

        stage('Feature Build') {
            agent { label "java-1.8.0_45 && apache-maven-3.2.5 && node4" }
            steps buildSteps  // This works, pulling in the buildSteps closure defined in Jenkinsfile
        }

        stage('Feature Build') featureBuildStage //This doesn't work
    }
}

Using the step closure works fine.

Trying to use the stage closure doesn't.
This is the one I really think would be useful: then teams could reuse the same stage across multiple pipelines.

I get the following stack trace:
java.lang.ArrayIndexOutOfBoundsException: 1
	at org.codehaus.groovy.runtime.dgmimpl.arrays.ObjectArrayGetAtMetaMethod.invoke(ObjectArrayGetAtMetaMethod.java:41)
	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
	at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.invoke(PojoMetaMethodSite.java:51)
	at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
	at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.getArray(DefaultInvoker.java:51)
	at com.cloudbees.groovy.cps.impl.ArrayAccessBlock.rawGet(ArrayAccessBlock.java:21)
	at org.jenkinsci.plugins.pipeline.modeldefinition.ClosureModelTranslator.methodMissing(jar:file:/var/jenkins_home/plugins/pipeline-model-definition/WEB-INF/lib/pipeline-model-definition.jar!/org/jenkinsci/plugins/pipeline/modeldefinition/ClosureModelTranslator.groovy:130)
	at Script1.run(Script1.groovy:16)
	at org.jenkinsci.plugins.pipeline.modeldefinition.ClosureModelTranslator.resolveClosure(jar:file:/var/jenkins_home/plugins/pipeline-model-definition/WEB-INF/lib/pipeline-model-definition.jar!/org/jenkinsci/plugins/pipeline/modeldefinition/ClosureModelTranslator.groovy:216)
	at org.jenkinsci.plugins.pipeline.modeldefinition.ClosureModelTranslator.methodMissing(jar:file:/var/jenkins_home/plugins/pipeline-model-definition/WEB-INF/lib/pipeline-model-definition.jar!/org/jenkinsci/plugins/pipeline/modeldefinition/ClosureModelTranslator.groovy:168)
	at Script1.run(Script1.groovy:15)
	at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.call(jar:file:/var/jenkins_home/plugins/pipeline-model-definition/WEB-INF/lib/pipeline-model-definition.jar!/org/jenkinsci/plugins/pipeline/modeldefinition/ModelInterpreter.groovy:59)

I think this is because you are attempting to do some pre-loading/validation inside ModelInterpreter on the direct source of whatever is inside pipeline (inside feature-pipeline.groovy in this case).

Thoughts?

-K




Andrew Bayer

Apr 13, 2017, 6:10:29 PM
to jenkins...@googlegroups.com
That is the case, yes. And that validation expects to find pipeline { ... } at the top level of the Jenkinsfile; pretty much everything else depends on that.

A.

