Hello Kohsuke -

I am a developer using Jenkins pipeline quite a lot where I work. We are using Jenkins pipelines in two scenarios:
- For CI building and testing some of our internal components (what Jenkins is traditionally used for)
- For running / orchestrating complex automation processes (so Jenkins is talking to some external systems using SOAP / REST etc) via tooling and even directly via REST using plugins.
I have mostly used JenkinsPipelineUnit for testing / validation of the pipelines, and I have been looking into the direct / live approach that Oleg demonstrated (running Jenkins locally in Docker and getting the pipeline being developed directly from a host file system volume mount).

I think Jenkinsfile Runner would be really useful for developers who don't need or want the overhead of developing tests with JenkinsPipelineUnit. I have worked with some developers wanting to develop Jenkinsfiles for their CI process, and the main problem is knowing if the Jenkinsfile will work when they commit it to the repo. They go round this loop of commit / fix, running in the production Jenkins or using the Jenkins pipeline "replay" feature. It can be a painful process if you are not familiar with Jenkins pipeline and Groovy syntax!
I think some things to consider are:
- How does the Jenkins Runner replicate the agents / slaves identifiers on the target Jenkins?
- How to deal with tooling on the target Jenkins (custom tools, JDKs, Gradle, etc.)?
I think the perfect Jenkinsfile Runner for me would provide:
- Somehow capture the plugins, tooling and agents on our production Jenkins
- Validate the Jenkinsfile pipeline syntax
- Validate the Jenkinsfile against the plugins and agents / tooling (fail if it refers to some tool or agent that is not configured, for example)
- Run the Jenkinsfile in some sort of "no-op" mode: what would it do if I ran it, without actually doing anything
- Actually run the Jenkinsfile locally so I can know it works completely before committing to source control.
- Run the Jenkinsfile on the target Jenkins master server using the resources of that server (so know it works on the server).
Hope that helps!
Bill Dennis
On Thursday, 1 March 2018 19:23:15 UTC, Kohsuke Kawaguchi wrote:

Jenkinsfile Runner is an experiment to package Jenkins pipeline execution as a command line tool. The intended use cases include:
- Use Jenkins in Function-as-a-Service context
- Assist editing Jenkinsfile locally
- Integration test shared libraries
Over the past year, I've done some deep-dive 1:1 conversations with some Jenkins users and I felt something like this might move the needle for them in an important way. I'd love to hear any reactions on your side. Could something like this be important for you? Does it miss any key points for you? If you mentally picture a perfect version of this, what would it do, and how would you use it?

Let me know!
Kohsuke Kawaguchi
On Fri, Mar 2, 2018 at 9:26 AM Bill Dennis <bill....@gmail.com> wrote:
> [...] They go round this loop of commit / fix, running in the production Jenkins or using the Jenkins pipeline "replay" feature. It can be a painful process if you are not familiar with Jenkins pipeline and Groovy syntax!

This kind of context is really helpful. Thank you!
> I think some things to consider are:
> - How does the Jenkins Runner replicate the agents / slaves identifiers on the target Jenkins?
> - How to deal with tooling on the target Jenkins (custom tools, JDKs, Gradle, etc.)?

Right, I guess your point is that Jenkinsfile Runner should aim to run the Jenkinsfile in a much more realistic setup, and that doesn't stop at using the real Jenkins and the real Pipeline plugins, but also needs to include the other configuration of Jenkins. I think Jesse made a similar observation. I have a few thoughts:
- Configuration-as-code could play a role here in terms of letting people define the configuration of Jenkins once and use it both in production and in setups like Jenkinsfile Runner.
- I'm a fan of making the Jenkinsfile itself more portable. For example, if people are already in the mode of using Docker images to run builds in, then more of the tooling would be packaged in there, and that should allow Jenkinsfile Runner to run your project in much the same way as your production Jenkins. I'm curious how much of this is already reality vs an ideal that people are working toward.
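As a rough illustration of such a portable Jenkinsfile (just a sketch; the image and stage below are arbitrary examples), a declarative pipeline that takes all of its tooling from a Docker image has no dependency on globally configured tools, so a local runner could execute it much like the production master would:

    // Sketch: all build tooling comes from the image, not from Jenkins tool installers
    pipeline {
        agent {
            docker { image 'maven:3.5-jdk-8' }   // any self-contained build image
        }
        stages {
            stage('Build') {
                steps {
                    sh 'mvn -B verify'   // no reliance on a Maven installation configured on the master
                }
            }
        }
    }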
> I think the perfect Jenkinsfile Runner for me would provide:
> - Somehow capture the plugins, tooling and agents on our production Jenkins
> - Validate the Jenkinsfile pipeline syntax

I think this is already happening as a result of actually running the pipeline -- one of the virtues of actually using the real pipeline plugins to run!

> - Validate the Jenkinsfile against the plugins and agents / tooling (fail if it refers to some tool or agent that is not configured, for example)
> - Run the Jenkinsfile in some sort of "no-op" mode: what would it do if I ran it, without actually doing anything

This one is interesting. I assumed JenkinsPipelineUnit does this pretty well, though. Can you tell me more about this? I'm imagining you'd want to be able to selectively mock out some steps (e.g., when the Jenkinsfile gets to sh "./deploy.sh", don't actually do it and pretend that it succeeded), but more details would be helpful.
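For reference, this kind of selective mocking is something JenkinsPipelineUnit supports via registerAllowedMethod; a rough sketch, with made-up class and script names, could look like the following (the test script follows the library's documented pattern of defining an execute() method and ending with "return this"):

    import com.lesfurets.jenkins.unit.BasePipelineTest
    import org.junit.Before
    import org.junit.Test

    class DeployPipelineTest extends BasePipelineTest {

        @Before
        void setUp() {
            super.setUp()
            // Pretend every sh step succeeds instead of actually running it
            helper.registerAllowedMethod('sh', [String]) { String cmd ->
                println "would have run: ${cmd}"
            }
        }

        @Test
        void deployIsCalledButNotExecuted() {
            def script = loadScript('test/jobs/deployJob.groovy')   // made-up path
            script.execute()
            printCallStack()   // the recorded call stack shows sh('./deploy.sh') without it having run
        }
    }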
> - Actually run the Jenkinsfile locally so I can know it works completely before committing to source control.

Yeah, this was the first goal for me.

> - Run the Jenkinsfile on the target Jenkins master server using the resources of that server (so know it works on the server).

This got me thinking that maybe all I needed was a Jenkins CLI command that behind the scenes creates a temporary/hidden job on the target Jenkins master and runs the Pipeline. IOW, keep the same development flow as Jenkinsfile Runner today, but don't run Jenkins locally, just do it on your actual Jenkins.
I think Jenkinsfile Runner brings a lot of opportunities for pipeline developers. The most obvious ones to me are
- Pipeline development (Jenkinsfile)
- Shared library development
Pipeline development
Right now (as described by others in this thread) pipeline development is either a loop of committing / fixing pipelines on production Jenkins, using pipeline replay on production Jenkins or setting up a local instance manually.
With Jenkinsfile Runner we can get faster feedback without polluting our commit or Jenkins build history, and we don't have to set up a local instance manually.
Shared library development
Shared library development right now works in much the same way as pipeline development, except that you have the library code and another (often production) Jenkinsfile to maintain in order to try out (as opposed to automatically test) your Jenkinsfile.
For shared libraries, we thankfully already have JenkinsPipelineUnit, which makes it easier to implement some tests. However (as also mentioned by others in this thread), this is unit testing only. It mocks most of our environment. Often, green unit tests mean nothing for productive use of our shared library. I even gave up test-driven development for shared libraries in favor of 90s trial-and-error-style programming, because most of the time when trying a library with green unit tests in production, it turns out that the real Jenkins environment has some restriction that is beyond the scope of JenkinsPipelineUnit (e.g. sandboxing).
The worst thing about the current state is that we don't have reliable regression tests. A change in a shared library with green unit tests is far from convincing me that the library will continue to work in production.
With Jenkinsfile Runner we could write small Jenkinsfiles within the shared library repo's test folder and run them from the Jenkinsfile of the shared lib, pretty much in the same way as we use the Maven Invoker Plugin (as mentioned by Jesse) when developing Maven plugins.
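For example (a made-up sketch: the step name and path are invented, and how the locally checked-out library version gets loaded is exactly one of the open questions discussed below), such an IT Jenkinsfile could be as small as:

    // Hypothetical src/it/myStep/Jenkinsfile exercising a single shared-library step
    node {
        def result = myStep(someParameter: 'value')   // step provided by the shared library under test
        if (result != 'expected value') {
            error "myStep returned '${result}' instead of 'expected value'"
        }
    }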
A first approach to shared library integration testing with Jenkinsfile Runner
My naive first approach was to build a Docker image that contains Jenkinsfile Runner and all default plugins. The following runs the Jenkinsfile in the mounted workspace using Jenkinsfile Runner with all default plugins of Jenkins 2.108:
docker run -v ~/src/it/myfunction:/workspace schnatterer/jenkinsfile-runner:1.0-SNAPSHOT-2.108 /workspace
My idea was to eventually do the same in the Jenkinsfile of the shared lib, like so (not tested):

docker.image('schnatterer/jenkinsfile-runner:1.0-SNAPSHOT-2.108').inside {
    sh 'jenkinsfile-runner /app/jenkins /app/plugins src/it/myfunction'
}

or

sh 'docker run --rm -v $(pwd):$(pwd) schnatterer/jenkinsfile-runner:1.0-SNAPSHOT-2.108 $(pwd)/src/it/myfunction'
It turned out that there are two major problems:
- There's no way to add non-default Jenkins plugins. My local test for cloudogu/ces-build-lib failed because there was no GitHub Groovy Libraries plugin. Here, my hope is that the Configuration-as-code plugin might help automate loading plugins.
- Runs fail with errors such as:

java.lang.UnsupportedOperationException: Refusing to marshal io.jenkins.jenkinsfile.runner.FileSystemSCM for security reasons; see https://jenkins.io/redirect/class-filter/
java.lang.UnsupportedOperationException: Refusing to marshal io.jenkins.jenkinsfile.runner.SetJenkinsfileLocation for security reasons; see https://jenkins.io/redirect/class-filter/
Once those issues are solved, we'll have a very basic way of automating integration tests for shared libraries by executing IT Jenkinsfiles from the shared library's pipeline and failing the build if the IT fails.
Of course, this would be very basic testing. For more sophisticated testing we would want to
- trigger the ITs from Maven or Gradle,
- use asserts,
- get the results as JUnit XML.
So, yes, we're not there yet. But we now have a foundation to build all this upon.
Thanks for that & best regards,
Johannes
Thanks Kohsuke, I tried to give some answers to your questions inline below, if I didn't mess up the reply...

Bill
On Friday, 2 March 2018 17:57:24 UTC, Kohsuke Kawaguchi wrote:
> This kind of context is really helpful. Thank you!

Happy to give feedback. Thanks for Jenkins and pipeline as code; it helped me deliver some projects with Jenkins in a way that I thought would not be possible a few years ago.
> Right, I guess your point is that Jenkinsfile Runner should aim to run the Jenkinsfile in a much more realistic setup, and that doesn't stop at using the real Jenkins and the real Pipeline plugins, but also needs to include the other configuration of Jenkins. [...]
> - Configuration-as-code could play a role here in terms of letting people define the configuration of Jenkins once and use it both in production and in setups like Jenkinsfile Runner.
> - I'm a fan of making the Jenkinsfile itself more portable. [...] I'm curious how much of this is already reality vs an ideal that people are working toward.
Yes, all of this. I have often thought that we need something like declarative pipeline for the configuration of Jenkins as code, instead of going into all those web config pages. A Jenkins master as a Docker container seems good. In our environment we are not currently using Docker, but I have seen that that is the way to go. Getting a larger organisation to adopt the right technology, and the associated costs of that, is the challenge, so we remain on traditional Jenkins slaves and tooling methods. Hopefully Dockerised soon. We do use Jenkins Enterprise from CloudBees.
> This one is interesting. I assumed JenkinsPipelineUnit does this pretty well, though. Can you tell me more about this? I'm imagining you'd want to be able to selectively mock out some steps (e.g., when the Jenkinsfile gets to sh "./deploy.sh", don't actually do it and pretend that it succeeded), but more details would be helpful.

Yes, in JenkinsPipelineUnit I pretty much mock out everything, like calls to tools. Because pipeline is based on Groovy, I thought I could do some unit tests using Spock and Groovy for pipelines, but then I discovered it was already done and shared as JenkinsPipelineUnit, so hat tip to Ozan there. So the Spock unit tests I write confirm that the execution of the tools is correct, but without actually running them. I also write JenkinsPipelineUnit test code for my shared library code as well, confirming its behaviour, so actually all my use of special tooling is captured in libraries as DSL-like constructs.

A great feature of JenkinsPipelineUnit is that it can generate the execution callstack of the "mocked" pipeline run. These callstacks can be stored as files, and JenkinsPipelineUnit tests can assert the execution callstack of a test run against a file committed as part of the tests. I do this for all my pipelines and shared library units. So the callstacks actually capture the execution of tooling with command-line params etc., and any time I change the pipeline code I can see what has changed by diffing the callstack file changes. Another thing you can do with this is run the pipeline jobs with different parameters in the tests (Spock parameterised tests) and, by comparing the callstack files, see how the pipeline behaviour differs between these runs and assert the correct behaviour. All this without having to hit a real Jenkins server and wait for long-running processes to complete!

So this is all fine and you can write tests for the pipelines and check the use of tooling etc., but when you come to run the pipeline on the actual server, the tools or agent / slave labels are not configured as expected for the fully tested pipeline. So this is where something like Jenkinsfile Runner will be good to use. My tests are based on code methods in the project I have here: https://github.com/macg33zr/pipelineUnit
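A rough sketch of this callstack-regression idea, written against JenkinsPipelineUnit directly rather than Bill's Spock-based pipelineUnit project (class names and file paths here are made up):

    import com.lesfurets.jenkins.unit.BasePipelineTest
    import org.junit.Before
    import org.junit.Test
    import static org.junit.Assert.assertEquals

    class MyPipelineCallStackTest extends BasePipelineTest {

        @Before
        void setUp() {
            super.setUp()
            // Mock the tool invocations so nothing really runs
            helper.registerAllowedMethod('sh', [String]) { String cmd -> println "mocked: ${cmd}" }
        }

        @Test
        void callStackMatchesCommittedReference() {
            loadScript('pipelines/myPipeline.groovy')   // loading executes the pipeline script against the mocks
            // Render the recorded call stack and diff it against a file committed with the tests
            def actual = helper.callStack*.toString().join('\n').trim()
            def expected = new File('test/callstacks/myPipeline.txt').text.trim()
            assertEquals(expected, actual)
        }
    }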
> This got me thinking that maybe all I needed was a Jenkins CLI command that behind the scenes creates a temporary/hidden job on the target Jenkins master and runs the Pipeline. IOW, keep the same development flow as Jenkinsfile Runner today, but don't run Jenkins locally, just do it on your actual Jenkins.

Yes, this would be great: an anonymous job, and just pull back the output and console log. A plugin for IntelliJ or whatever IDE using this would be even better :-)
Hope that helps!
Bill Dennis
> Yes, this would be great: an anonymous job, and just pull back the output and console log. A plugin for IntelliJ or whatever IDE using this would be even better :-)

Great, between your feedback and what I think Johannes is saying, I'm starting to picture the next POTD.
Kohsuke - some more answers to your questions inline below. Also, I thought of these other things on my walk to the office this morning that might be of interest (there is a small sketch after these points):

- Parameters to jobs: most of my pipeline jobs have parameters defined inside. Would the Jenkinsfile Runner have a way to pass in parameters so it doesn't just use default values?
- Environment values for jobs: for our pipeline automation deployments, we set "env" values (key-value pairs as strings) at the global or folder level (using the Jenkins Enterprise Folders plugin), and these define some environment for jobs (things like REST endpoints for some RESTful services we use in the jobs, for example). Would the Jenkinsfile Runner have a way to supply this?
- Script approvals: I have mentioned this below, but some way to define script approvals or get the required approvals would possibly help. I know that if you are hitting script approvals you are probably trying to do too much in the pipeline code, but it is easy to hit this with simple stuff like Java date functions or JSON parsing.
- Jenkinsfile as a Service idea (JFaaS?): you mentioned this and I really like this idea - the possibility to externally run a supplied Jenkinsfile and get back some results like build artifacts and console output, without having any jobs or builds appear on the Jenkins master or needing a job configuration. The reason is we tend to think about loading all our automation into Jenkins. But our world is not all inside Jenkins, and we have other automation solutions that can have extensions and customisation. An example is some of the custom services we build, and some off-the-shelf stuff like the Serena Business Process Management server we have. But Jenkins is really good at doing some automation, tooling and spinning up / scaling agent workers in Docker etc., so it would be good to use that from these other services by having them send over a Jenkinsfile they own and getting some results back somehow. Maybe this is something different from Jenkinsfile Runner, though.
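To make the first two points concrete, here is a minimal made-up example of the kind of job in question; TARGET_ENV normally comes from the build parameters and SERVICE_ENDPOINT from folder-level configuration, so a local runner would need some way to inject both:

    // Minimal made-up example of the parameter / environment concern above
    pipeline {
        agent any
        parameters {
            string(name: 'TARGET_ENV', defaultValue: 'dev', description: 'Where to deploy')
        }
        stages {
            stage('Deploy') {
                steps {
                    // SERVICE_ENDPOINT would normally be injected as a folder-level environment variable
                    echo "Deploying to ${params.TARGET_ENV} via ${env.SERVICE_ENDPOINT}"
                }
            }
        }
    }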
On Tue, Mar 6, 2018 at 4:23 PM, Kohsuke Kawaguchi <k...@kohsuke.org> wrote:
> I have another idea. Instead of
> running Jenkinsfile in this CLI and try to emulate your Jenkins instance as
> closely as possible (in terms of configuration), we could just run
> Jenkinsfile in the current Jenkins, in a place that nobody else can see.
In principle this is possible, by defining a new implementation of
`FlowExecutionOwner` that is not based on a `Run`. We have in the past
discussed the possibility of a Pipeline equivalent to the `/script`
console.
I doubt this would be a very satisfactory solution to the use case at
hand, though. Any step which required a `Run` context (either
mandatory, as in `StepDescriptor.getRequiredContext`, or optionally,
by checking for null from `StepContext.get(Run.class)`) would either
fail or behave in some reduced capacity. If you took care to only
write `Jenkinsfile`s that just used `node` and `sh` and the like and
none of these features, then fine. But a lot of common functionality
does assume it has a `Run` context: anything that looks up
folder-scoped environment variables or credentials; `junit`,
`milestone`, `lock`, `stash`; everything based on `SimpleBuildStep` or
`SimpleBuildWrapper`…a lot. You could create a temporary `WorkflowJob`
in the same folder and hack around with access control so that it is
only visible to the invoking user, which would let most of these
things work (probably with some specialized support for branch
projects), but this seems like it is asking for trouble.
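Concretely, the "optional" case is the common pattern where a step execution looks the `Run` up from its context and degrades when it is absent - a paraphrased sketch, not taken from any particular plugin:

    import hudson.model.Run
    import org.jenkinsci.plugins.workflow.steps.StepContext
    import org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution

    // Sketch of a step execution that treats the Run as optional context
    class ExampleExecution extends SynchronousNonBlockingStepExecution<Void> {

        protected ExampleExecution(StepContext context) {
            super(context)
        }

        @Override
        protected Void run() throws Exception {
            def build = getContext().get(Run.class)
            if (build == null) {
                // No backing build: skip anything that needs build-scoped
                // environment variables, credentials, results, etc.
            } else {
                // Normal behaviour using the build
            }
            return null
        }
    }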
I think it would be far more practical to work with the existing
Replay feature, which was designed for precisely this use case. If the
main objection to using Replay is that you do not want these
experimental builds to “pollute” the general build history, then we
can do some minor UI work (for example in Blue Ocean) to hide these
build records by default. There is already a plan to do something very
similar for restarted stages (JENKINS-48643). We could even stream the
build log to the CLI command (JENKINS-33438) and then add an option to
delete the build as soon as it completes—a very simple and safe
feature.
From a user perspective, I like your idea of a "pipeline.sharedlibrary.test" step and/or a "run-pipeline" CLI command. They would definitely solve configuration issues with plugins, docker, etc.
Now, Jesse mentioned some implementation challenges in his post from Wed, 07 Mar 2018 13:04:46, that I can't comment on, because I'm not that deep into Jenkins internals (yet?).
In the meantime, Jesse also provided a workaround for loading a "local" shared lib. With this I implemented my first (very simple) integration test and ran it successfully from the Jenkinsfile of the shared lib 🎉
You meant asserts should already work? How would you do those in that scenario?
This approach of course has the limitations we talked about regarding the Jenkins configuration.
In addition, these tests would only be run on Jenkins and not when running the build locally with Maven.
So right now, I think a better approach would be to provide a Java library that allows for running Jenkinsfiles.
It could be used to implement e.g. JUnit tests that are run by build tools such as Maven locally (e.g. with the Failsafe plugin) and in the same way in the Jenkinsfile.
This library could abstract from the concrete execution, which could be either
- Jenkinsfile runner with local plugin folder
- Jenkinsfile runner in Docker
- "pipeline.sharedlibrary.test" step
- a "run-pipeline" CLI command.
The downside would be that such an implementation wouldn't be trivial. And, unfortunately, I can't invest too much of my free time right now.