Plugin idea - declarative pipeline script builder


Sharon Grubner

Jan 3, 2018, 1:51:02 PM
to Jenkins Developers
Hello!

Pipeline jobs allow setting up very powerful jobs for the different projects in our group. However, a major disadvantage is that there is no API for building the script portion of the job; instead, an entire script must be injected as a string. This requires all users to know the declarative pipeline syntax and does not allow sharing common functions and practices.

To solve this issue, our team has developed a tool that lets you build the script portion of the pipeline job using plain Groovy syntax.
For example:
String script = PipelineScript.create()
        .setAgent(...)
        .addStage(checkoutFromGit(...))
        .addStage(Stage.create('Static Code Analysis')
                .addStep(sh('./gradlew codenarcMain'))
                .addPost(Post.Condition.ALWAYS, publishHtml(...)))
        .addPost(Post.Condition.FAILURE, deleteDir())


- Helper methods, as shown in the example, simplify usage for most users.
- Support for using pre-configured stages as well as creating your own.
- Support for parallel stages and conditions, along with most of the other features of the declarative syntax.
- Extends the Job DSL API.

The current way we use this script variable is as follows:
new PipelineJobBuilder(this as DslFactory, projectName)
        .build(script)

I'd like to wrap this library as a plugin to provide this API for other people to use.
Let me know what you think!

Thanks,
Sharon

Jesse Glick

Jan 3, 2018, 5:09:14 PM
to Jenkins Dev
On Wed, Jan 3, 2018 at 1:50 PM, Sharon Grubner <sharon....@gmail.com> wrote:
> requires all users to know the
> declarative pipeline syntax

Why not just use the Declarative visual editor in Blue Ocean, for
users unfamiliar with the syntax?

> does not allow sharing common functions and
> practices.

You can use Groovy libraries from Declarative, with certain restrictions.
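For illustration, a Declarative Jenkinsfile that pulls shared steps from a library looks roughly like this (`my-shared-lib` and `buildAndTest` are invented names here, not part of any real library):

```groovy
// Load a shared library configured in Jenkins (hypothetical library name)
@Library('my-shared-lib') _

pipeline {
    agent { label 'linux' }
    stages {
        stage('Build') {
            steps {
                // buildAndTest would be a custom step the library defines
                // in vars/buildAndTest.groovy
                buildAndTest repo: 'XX.git', artifacts: '*.e*,*.o*'
            }
        }
    }
}
```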

I am not sure I see the point here. If you want to expose a
programmatic interface to users, then why are you using Declarative at
all? You can publish libraries for use from Scripted which offer
reusable functional chunks of all kinds already (including
“higher-order” functions such as control operators), and this would be
much more direct as your script would be the Groovy that actually
runs, rather than a builder that creates a script that is interpreted
by Declarative to create something to run. Maybe I am just not
following what your tool actually does.
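As a sketch of what I mean by a higher-order function (the name here is invented for illustration), a library could define a control operator in `vars/withRetryAndCleanup.groovy`:

```groovy
// vars/withRetryAndCleanup.groovy -- a hypothetical library step acting as
// a control operator: it takes a closure and wraps it with retry and cleanup.
def call(int attempts, Closure body) {
    try {
        retry(attempts) {
            body()
        }
    } finally {
        deleteDir()
    }
}
```

A Scripted Jenkinsfile could then call `withRetryAndCleanup(3) { checkout scm; sh './gradlew build' }` directly, so the script you write is the Groovy that actually runs.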

Liam Newman

Jan 4, 2018, 6:13:15 PM
to jenkin...@googlegroups.com, Sharon Grubner
Sharon, 

From context, it sounds like you're using the Job DSL to create Pipeline jobs. Is that correct?
Could you share an example of what your code would have to look like without your tool? 

Thanks,
-Liam Newman









Sharon Grubner

Jan 4, 2018, 9:32:49 PM
to Jenkins Developers
That's correct - we're using Job DSL to create pipeline jobs.
It allows you to set your pipeline job definition as well as the script portion as part of the same Groovy script using a builder pattern.
The visual editor in Blue Ocean, as far as I know, does not meet our needs, mainly because we set up all of our jobs using the Job DSL. In addition, as part of our DSL libraries we like to set up different utilities that lead to more consistent pipeline jobs across our different projects/users. I believe the visual editor would not help us achieve this goal.

Here's another simplified example of how you would use this script builder, compared to what it would look like without it.
Without it:
pipelineJob('example') {
    definition {
        cps {
            script('''pipeline {
    agent none
    stages {
        stage('Stage 1') {
            agent {
                node {
                    label 'windowsBuilderServer'
                }
            }
            steps {
                checkout([$class: 'GitSCM', branches: [[name: env.ghprbActualCommit]], browser: [$class: 'GithubWeb', repoUrl: "XX"], doGenerateSubmoduleConfigurations: false, poll: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "YY", url: "XX.git"]]])
                stash includes: "ProjectName/**/*", name: "stash1"
            }
            post {
                always {
                    deleteDir()
                }
            }
        }
        stage('Stage 2') {
            parallel {
                stage('Parallel 1') {
                    agent {
                        node {
                            label 'linuxBuildServer'
                        }
                    }
                    steps {
                        deleteDir()
                        unstash name: "stash1"
                        sh 'some script'
                    }
                    post {
                        success {
                            deleteDir()
                        }
                        always {
                            archiveArtifacts artifacts: "*.e*,*.o*", excludes: "", allowEmptyArchive: true, fingerprint: true, onlyIfSuccessful: false
                            junit allowEmptyResults: true, testDataPublishers: [[$class: 'AttachmentPublisher'], [$class: 'StabilityTestDataPublisher']], testResults: 'test_results/*.xml'
                        }
                    }
                }
            }
        }
    }
}
''')
            sandbox()
        }
    }
}



And using the script builder (this uses a couple of our internal helper methods, but they don't do anything fancy):
Stage stage1 = Stage.create('Stage 1')
            .setAgent(Agent.fromExecutor(Executor.WINDOWS))
            .addStep(checkoutFromGit('XX', 'env.ghprbActualCommit'))
            .addStep(stash('stash1', 'ProjectName/**/*'))
            .addPost(Post.Condition.ALWAYS, deleteDir())
ParallelStage stage2 = ParallelStage.create('Stage 2')

// datasetList is a List<String> where each string is a data set name
// The following creates a parallel stage for each list element
    datasetList.collect { 
        Stage.create(it)
                .setAgent(Agent.fromExecutor(Executor.LINUX))
                .addStep(deleteDir())
                .addStep(unstash('stash1'))
                .addStep(sh("""some script"""))
                .addPost(Post.Condition.SUCCESS, deleteDir())
                .addPost(Post.Condition.ALWAYS, archiveArtifacts('*.e*,*.o*','',true, true,false))
                .addPost(Post.Condition.ALWAYS, publishJUnitXml('test_results/*.xml', true))
    }.each { stage2 = stage2.addStage(it) }

def script = PipelineScript.create()
        .addStage(stage1)
        .addParallelStage(stage2)


pipelineJob('example') {
    definition {
        cps {
            script(script)
            sandbox()
        }
    }
}


As you can see, you can set up the entire job in a single block (as shown in my first post) or break the definitions into multiple variables for more complex logic.
I simplified most of the above example, since it heavily depends on our utility libraries, but hopefully it is sufficiently clear.


This library has been widely used in our team and has received positive feedback, so I thought it would be useful to share it with the community. It is definitely not the only way of doing things, but we have found it pretty useful.

Sharon

Jesse Glick

Jan 5, 2018, 11:29:35 AM
to Jenkins Dev
On Thu, Jan 4, 2018 at 8:17 PM, Sharon Grubner <sharon....@gmail.com> wrote:
> as part of our DSL libraries we like to set up different utilities that lead
> to more consistent pipeline jobs across our different projects/users.

The normal way to accomplish that is to use Pipeline libraries, so the
first example is a strawman. A more realistic comparison would be to
something like

pipelineJob('example') {
    definition {
        cps {
            script('''
library 'the-usual-stuff'
theUsualBuild repo: '…',
    label: 'windows',
    artifacts: '*.e*,*.o*'
''')
            sandbox()
        }
    }
}

(In such a case, if you want to try changes to the logic, you can just
use the Replay feature in the UI until you get it right, after which
you have a diff which could be applied directly to the Job DSL script
and/or the library definition.)

Anyway, if you have some use case for a builder like this, the usual
approach is to publish it on your GitHub account and try to find others
who wish to use it. It could probably be packaged as a plugin
depending on the SPIs in `job-dsl`.