You can have as many unit tests as you want, which test individual
methods or small self-contained components. They should run more or
less instantaneously, and behave essentially the same on any platform.
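For illustration, such a test might look like this (`DurationFormatter` is a hypothetical stand-in for whatever small utility your plugin defines):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Plain unit test: no Jenkins instance, runs in milliseconds, platform-independent.
public class DurationFormatterTest {

    @Test
    public void formatsMillisecondsAsSeconds() {
        // DurationFormatter is hypothetical; substitute your own utility class.
        assertEquals("5 sec", DurationFormatter.format(5_000L));
    }
}
```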
You should have a bunch of `JenkinsRule` “functional” tests, which are
slower to run (a few seconds each) and a little flakier, but which
start up a real Jenkins server and test your plugin’s functionality in
a more realistic way, including extensions, background threads,
project builds, settings, and so on. These can even start external
services (open-source, Linux-based) using `docker-fixtures` where
necessary, check basic aspects of HTML page rendering, including form
submissions, with HtmlUnit, run CLI commands, etc.
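A minimal sketch of such a test, assuming a hypothetical `MyBuilder` build step in the plugin under test:

```java
import hudson.model.FreeStyleBuild;
import hudson.model.FreeStyleProject;
import org.junit.Rule;
import org.junit.Test;
import org.jvnet.hudson.test.JenkinsRule;

// Functional test: JenkinsRule boots an embedded Jenkins, so extensions,
// background threads, and real project builds are all in play.
public class MyBuilderTest {

    @Rule
    public JenkinsRule j = new JenkinsRule();

    @Test
    public void buildPrintsGreeting() throws Exception {
        FreeStyleProject p = j.createFreeStyleProject();
        // MyBuilder is hypothetical; it stands in for the plugin's build step.
        p.getBuildersList().add(new MyBuilder("hello"));
        FreeStyleBuild b = j.buildAndAssertSuccess(p);
        j.assertLogContains("hello", b);
    }
}
```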
Finally, you should have a handful of carefully chosen “acceptance”
tests, which are harder to develop, very slow to run, and often flaky,
but which provide a convincing demonstration that an entire user-level
feature truly works in a realistic environment, including
production-style plugin and class loading (perhaps even contacting
outside services), driven entirely by the same kinds of gestures a
user would make in a web browser.
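A sketch of what one of these looks like in the acceptance test harness’s page-object style (exact APIs vary across harness versions, so treat this as illustrative rather than definitive):

```java
import org.jenkinsci.test.acceptance.junit.AbstractJUnitTest;
import org.jenkinsci.test.acceptance.po.FreeStyleJob;
import org.junit.Test;

// Acceptance test: drives a production-style Jenkins through a real
// browser session, using the harness's page objects.
public class FreeStyleJobAcceptanceTest extends AbstractJUnitTest {

    @Test
    public void createConfigureAndBuildJob() {
        // "jenkins" is the page-object handle injected by the harness.
        FreeStyleJob job = jenkins.jobs.create(FreeStyleJob.class, "demo");
        job.save();
        job.startBuild().shouldSucceed();
    }
}
```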
On Thu, Sep 28, 2017 at 10:42 AM, Mark Waite <mark.ea...@gmail.com> wrote:
> Do we have any way of associating historical acceptance test harness
> failures to a cause of those failures?
Not that I am aware of.
> could such data be gathered from some sample of recent runs of the
> acceptance test harness, so that we know which of the tests have been most
> helpful in the recent past?
Maybe. I doubt things are in good enough condition to do that kind of
analysis. AFAIK none of the CI jobs running the ATH have ever had a
single stable run, so there is not really a baseline.
> Alternately, could we add a layering concept to the acceptance test harness?
> There could be "precious" tests which run every time and there could be
> other tests which are part of a collection from which a few tests are
> selected randomly for execution every time.
Yes, this is possible.
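For example, one conventional way to express such layering is JUnit 4 categories combined with Surefire’s `groups` parameter; the marker interface here is hypothetical (the harness could define its own):

```java
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Hypothetical marker interface for "precious" tests that run every time;
// uncategorized tests would form the pool sampled randomly per run.
interface PreciousTest {}

public class LoginAcceptanceTest {

    @Test
    @Category(PreciousTest.class)
    public void userCanLogIn() {
        // ... drive and verify the login flow ...
    }
}
```

A CI job would then run the precious set on every build (pointing Surefire’s `groups` parameter at the category’s fully qualified name) and schedule the randomly sampled remainder separately.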
> is there a way to make the acceptance test harness run
> in a massively parallel fashion
Yes, but it does not help with the flakiness, and it still wastes a
tremendous amount of cloud machine time.
> As a safety check of that concept, did any of the current acceptance tests
> detect the regression when run with Jenkins 2.80 (or Jenkins 2.80 RC)?
Yes.
> Is there a JenkinsRule test which could reasonably be written to test for
> the conditions that caused the bug in Jenkins 2.80?
Not really; that particular issue was unusual, since it touched on the
setup wizard UI, which is normally suppressed by test harnesses.
On Thu, Sep 28, 2017 at 11:43 AM Jesse Glick <jgl...@cloudbees.com> wrote:
> Maybe. I doubt things are in good enough condition to do that kind of
> analysis. AFAIK none of the CI jobs running the ATH have ever had a
> single stable run, so there is not really a baseline.
Ah, so that means the current acceptance test harness is unlikely to
be trusted in its entirety by anyone. A failure in the entire
acceptance test harness is probably only rarely used to stop a
release.
I support deleting tests of questionable value.
On Thu, Sep 28, 2017 at 2:51 PM, Stephen Connolly
<stephen.al...@gmail.com> wrote:
> writing good acceptance tests is a high skill and not something that
> can be easily crowdsourced.
Agreed. In particular we should not be asking GSoC contributors to
write new acceptance tests. Every PR proposing a new test should be
carefully reviewed with an eye to whether it is testing something that
· could not reasonably be covered by lower-level, faster tests
· is a reasonably important user scenario, for which an accidental
breakage would be considered a noteworthy regression worth holding up
or even blacklisting a release (of core or a plugin)
· is sufficiently distinct from other scenarios already being tested
that we think regressions might otherwise slip by
> there is a fundamental flaw in
> using the same harness to drive as to verify *because* any change to that
> harness has the potential to invalidate any tests using the modified
> harness
Probably true but does not seem to me like the highest priority facing us.
> it is all too easy to turn a good test into a test giving a false
> positive
I am not personally aware of any such historical cases (maybe I missed
some). The immediate problems are slowness, flakiness (tests sometimes
fail for no clear reason), and fragility (tests fail due to trivial
code changes especially in the UI).
> We need an ATH that can be realistically run by humans in an hour or two
Yes, that seems like a good goal.
On 28.09.2017 at 16:42, Mark Waite <mark.ea...@gmail.com> wrote:
[..]
> For example, I'd support deleting an acceptance test if it has never detected a bug since it was created.
[..]
On 28.09.2017 at 20:51, Stephen Connolly <stephen.al...@gmail.com> wrote:
[..]
I don’t want to knock the contributors to the ATH, but my long-standing
view is that writing good acceptance tests is a high skill and not
something that can be easily crowdsourced. Acceptance tests are, by
their nature, among the slowest tests.
My long-standing view of the ATH is that there is a fundamental flaw in
using the same harness to drive as to verify, *because* any change to
that harness has the potential to invalidate any tests using the
modified harness... it is all too easy to turn a good test into a test
giving a false positive. I think an ATH that had two sets of tests, one
driving by the filesystem / CLI and verifying via the UI while the
other drives by the UI and verifies by the filesystem / CLI, would be
much, much stronger (and faster too). In most cases you wouldn’t be
changing the two harnesses at the same time, so a failing test will
indicate a failing test.
We need an ATH that can be realistically run by humans in an hour or
two... trim the test cases to get there, moving the tested
functionality higher up to be tests that run faster, e.g. JenkinsRule
or, better yet, pure unit tests.
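To make the drive-one-way, verify-the-other idea concrete, here is a minimal sketch at the JenkinsRule level rather than in the ATH itself: the job is created through the Java API, and the verification goes through the rendered HTML UI.

```java
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import org.junit.Rule;
import org.junit.Test;
import org.jvnet.hudson.test.JenkinsRule;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.containsString;

// Drive via the model API, verify via the HTML UI; a sibling test could
// invert the two directions, so a change to one harness cannot silently
// invalidate both checks at once.
public class DriveApiVerifyUiTest {

    @Rule
    public JenkinsRule j = new JenkinsRule();

    @Test
    public void jobCreatedViaApiShowsUpInUi() throws Exception {
        j.createFreeStyleProject("demo");                      // drive: API
        HtmlPage page = j.createWebClient().goTo("job/demo/"); // verify: UI
        assertThat(page.getTitleText(), containsString("demo"));
    }
}
```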