Imagine that we decided to merge Jenkins core and some large number of plugins into a giant monorepo, so that they were all governed by a single version number and released simultaneously (preferably with a plugin manager change to keep all included plugins updated in lockstep), but retained their current dependency structure, a fairly complex directed acyclic graph, just converted to use in-reactor snapshot dependencies like

```xml
<version>${project.version}</version>
```
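For concreteness, a hypothetical in-tree plugin POM might declare its dependency on another in-tree plugin like this (the coordinates here are illustrative, not real Jenkins modules):

```xml
<!-- Hypothetical in-tree plugin POM; groupId/artifactId are illustrative. -->
<dependency>
  <groupId>io.jenkins.monorepo.plugins</groupId>
  <artifactId>some-plugin-my-plugin-depends-on</artifactId>
  <!-- In-reactor snapshot: always resolves to the current monorepo build -->
  <version>${project.version}</version>
</dependency>
```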
There are various practical and social difficulties with such a move (who decides which plugins are included and which are “out of tree”? how would you move plugins in or out later?), but focus just on the impact on the build and test process, both locally for developers and on CI. On the one hand, there is a huge advantage: complex changes spanning core and multiple (in-tree) plugins could be done as a single monorepo PR, perhaps even an atomic commit, with no need for all the machinery we have built to handle cross-repository dependencies and version skew (PCT, JEP-305, etc.). On the other hand, `mvn verify` in this tree could take hours even with `-T`, running tens of thousands of tests, making it impossible for CI builds to offer timely feedback; and developers trying to work on just one plugin with

```sh
mvn -am -pl my-plugin -Pquick-build install
mvn -pl my-plugin hpi:run
```
could be frustrated by the first command taking several minutes, and by the need to remember to run

```sh
mvn -pl some-plugin-my-plugin-depends-on -Pquick-build install
```
whenever making upstream changes (though some IDEs handle this for you at least for Java source code changes).
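For reference, a `quick-build`-style profile, if we had to define one ourselves, could be a simple profile that skips tests and static analysis so that `install` just produces usable artifacts. This is a sketch; the actual profile in the Jenkins plugin parent POM may differ:

```xml
<profile>
  <id>quick-build</id>
  <properties>
    <!-- Skip the slow parts of the lifecycle -->
    <skipTests>true</skipTests>
    <spotbugs.skip>true</spotbugs.skip>
    <enforcer.skip>true</enforcer.skip>
  </properties>
</profile>
```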
That scenario is one where the remote caching feature in Gradle Enterprise (or for that matter, more monorepo-focused build tools like Bazel) would be really invaluable. You could say with confidence that a patch to `plugins/lockable-resources/src/main/java/org/jenkins/plugins/lockableresources/LockStep.java` could only possibly affect `plugins/lockable-resources/src/test/java/**/*Test.java` and tests in plugins downstream of it in the dependency tree, so a CI build of such a PR would be reasonably quick (same for the merge commit to `master`). Locally,
```sh
mvn verify
```
would (re-)build and test just the components you actually have local modifications to, plus the components depending on them, skipping the other 95% of the work because it would duplicate results already cached by a CI build of some recent public commit.
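Without a remote cache, you can roughly approximate the “only rebuild what changed” behavior by deriving touched modules from the diff and handing them to Maven with `-amd` (also make dependents). A sketch, assuming modules live at paths like `plugins/<name>`; the `changed_modules` helper and the layout are assumptions, not existing tooling:

```shell
#!/bin/sh
# changed_modules: read changed file paths on stdin, emit the unique
# top-level module directories (e.g. plugins/lockable-resources).
changed_modules() {
  cut -d/ -f1-2 | sort -u
}

# Typical usage (builds touched modules plus everything downstream):
#   git diff --name-only origin/master... \
#     | changed_modules | paste -sd, - \
#     | xargs -I{} mvn -pl {} -amd verify

# Example: a patch touching one plugin source file
echo "plugins/lockable-resources/src/main/java/org/jenkins/plugins/lockableresources/LockStep.java" \
  | changed_modules
# -> plugins/lockable-resources
```

This is far cruder than a real content-addressed cache (it cannot skip downstream modules whose inputs turn out to be unchanged), but it captures the same module-graph reasoning.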
This is not the scenario we have, however. Our “build & test result cache” for portions of the dependency graph is, effectively, released binaries in Artifactory. JEP-229 and Dependabot reduce the friction of percolating changes through the system, but they do not change the fundamentally distributed workflow.
Disclaimer: I am just speculating about what Gradle Enterprise does, as I have not worked with it, based on https://docs.gradle.com/enterprise/maven-build-cache/#cache_key and the like. Also, here I am focusing on the cache feature, not on other features like uploaded scans, failure analysis, test distribution (which duplicates what we already do with Jenkins agents, AFAICT), flakiness reporting, etc.