FWIW, this is born of Bazel's inability to build tool dependencies for WORKSPACE evaluation. Let's fix that; everything else is a workaround.
In the short term, if you'd like to propose an alternative that makes sure the checked-in PAR file is properly updated, and that gives the reviewer a way of validating that the PAR (effectively an opaque binary) was updated correctly by an untrusted third party, then I'd love to hear it. This test builds on the premise that Bazel's promise of perfect reproducibility (perhaps only within a single Bazel version) is also a promise of perfect verifiability. The approach we've adopted elsewhere (rules_docker) effectively requires splitting the repo and cutting releases, which is an incredibly tedious, heavyweight process.
tl;dr: please don't just disable this.

-M

On Fri, Mar 23, 2018 at 6:22 AM Lukács T. Berki <lbe...@google.com> wrote:

> Hey there,
>
> Turns out, it's failing because Bazel at HEAD builds different PAR files than the last release. Therefore, there isn't really a way to fix this test -- it either works with the released Bazel or at HEAD.
>
> Thus, I propose to delete that test. I haven't looked at what the difference is, but keeping this test would mean that we'd have to promise never to change Bazel in a way that makes .par files build differently, which isn't such a great idea (this, or not running the test on the CI, but let's not do that...)
>
> --
> Lukács T. Berki | Software Engineer | lbe...@google.com
> Google Germany GmbH | Erika-Mann-Str. 33 | 80636 München | Germany

--
Matthew Moore
Container Development Uber-TL
Developer Infrastructure @ Google
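As a concrete illustration of the verifiability premise above: with a perfectly reproducible build, a reviewer (or a CI check) can rebuild the PAR from source and byte-compare it against the checked-in copy, rather than auditing an opaque binary. Here is a minimal sketch of the comparison step; the function names are hypothetical, and the preceding `bazel build` invocation is assumed to have produced the rebuilt artifact.

```python
import hashlib


def sha256_of(path):
    """Hex SHA-256 digest of a file, streamed in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_checked_in_artifact(checked_in_path, rebuilt_path):
    """True iff the rebuilt artifact is byte-identical to the checked-in one.

    This is only a valid verification if the build is reproducible for
    the Bazel version being used -- which is exactly the promise the
    failing test depends on.
    """
    return sha256_of(checked_in_path) == sha256_of(rebuilt_path)
```

In CI this would sit behind something like `bazel build <the PAR target>` followed by the comparison; as the thread notes, the weakness is that the digests are only stable within a single Bazel version.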
> In the short-term, if you'd like to propose an alternative that makes sure the checked in PAR file is properly updated and gives the reviewer a way of validating that the PAR (effectively an opaque binary) was updated correctly (by an untrusted third party) then I'd love to hear it. This test basically builds on the premise that Bazel's promise of perfect reproducibility (maybe just for a version of Bazel) is also a promise of perfect verifiability. The approach we've adopted elsewhere (rules_docker) effectively requires splitting the repo and cutting releases, which is an incredibly tedious / heavy process.

Do we have untrusted third parties pushing code to rules_python? If so, they can also push source code, can't they?

Mind you, I'm all for verifying that opaque binaries are what we think they are, but a Bazel test seems like the wrong place for it due to the aforementioned dependency on Bazel versions: it essentially guarantees that our CI is red for at least a short time around each release.