Hi folks,
A bit of context:
Today, rules_go's "nogo" static analysis framework is executed as part of each package's compilation action, so if static analysis fails, the compilation fails.
There is also no way to automatically apply suggested fixes, even if the analyzers were to
implement them, so the developer experience is subpar compared to a non-Bazel workflow.
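For reference, wiring nogo up today looks roughly like the sketch below. The ":my_nogo" target name and the commented-out analyzer dep are placeholders of mine, not anything prescribed by rules_go:

    # BUILD.bazel at the repository root
    load("@io_bazel_rules_go//go:def.bzl", "nogo")

    nogo(
        name = "my_nogo",
        # vet = True enables a default subset of `go vet` checks.
        vet = True,
        # Extra golang.org/x/tools/go/analysis analyzers would be listed here.
        # deps = ["//tools/analyzers:somecheck"],
        visibility = ["//visibility:public"],
    )

    # WORKSPACE (version is whatever Go SDK you already register)
    go_register_toolchains(
        version = "1.21.0",
        nogo = "@//:my_nogo",
    )

Because the analyzers run inside the compile action itself, a finding breaks the build and there is no separate artifact that a fixer tool could consume afterwards.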
Currently there seem to be several opinions on how to run linters in a Bazel repo:
- Bazel's validation actions (used by native rules, targeting AOSP developers)
- Aspect-based tests (used by the Python community)
- A macro over a test + fix binary pair for each package
In particular, being able to depend on the _validation output group of all targets in the repo would be nice.
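To make that concrete, this is roughly what exposing lint results through the _validation output group looks like in Starlark. This is only a sketch under my own assumptions; the rule name, the attribute names, and the //tools:linter target are all made up:

    # lint.bzl
    def _checked_library_impl(ctx):
        # ... the rule's normal outputs would be produced here ...

        validation_out = ctx.actions.declare_file(ctx.label.name + ".lint")
        ctx.actions.run(
            outputs = [validation_out],
            inputs = ctx.files.srcs,
            executable = ctx.executable._linter,
            arguments = [f.path for f in ctx.files.srcs] + ["--out", validation_out.path],
            mnemonic = "Lint",
        )

        return [
            DefaultInfo(files = depset(ctx.files.srcs)),
            # When validation actions are enabled, outputs in the _validation
            # output group are built whenever the target is built, without
            # feeding into downstream compilation.
            OutputGroupInfo(_validation = depset([validation_out])),
        ]

    checked_library = rule(
        implementation = _checked_library_impl,
        attrs = {
            "srcs": attr.label_list(allow_files = True),
            "_linter": attr.label(
                default = "//tools:linter",
                executable = True,
                cfg = "exec",
            ),
        },
    )

The nice property is that lint failures surface on `bazel build //...` without coupling them to the compile action the way nogo does today.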
Moreover, what is the philosophy behind "linter support" in Bazel?
Linters here are defined as:
1. Validation actions that ensure source files in the workspace follow a certain rule set, and that produce a set of analysis outputs.
2. A "fixer" executable that consumes the analysis results from (1) and applies suggested fixes back into the workspace.
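To sketch what (2) could look like, an aspect could publish each target's suggested fixes in a named output group for a fixer binary to pick up afterwards. Again, every name here (the lint_fixes group, //tools/lint:analyzer, the JSON format) is hypothetical:

    # fix.bzl
    def _lint_fix_aspect_impl(target, ctx):
        # Only visit targets that actually have sources.
        srcs = getattr(ctx.rule.files, "srcs", [])
        if not srcs:
            return []

        fixes = ctx.actions.declare_file(target.label.name + ".lint_fixes.json")
        ctx.actions.run(
            outputs = [fixes],
            inputs = srcs,
            executable = ctx.executable._analyzer,
            arguments = [f.path for f in srcs] + ["--suggested_fixes", fixes.path],
            mnemonic = "LintFixes",
        )
        return [OutputGroupInfo(lint_fixes = depset([fixes]))]

    lint_fix_aspect = aspect(
        implementation = _lint_fix_aspect_impl,
        attrs = {
            "_analyzer": attr.label(
                default = "//tools/lint:analyzer",
                executable = True,
                cfg = "exec",
            ),
        },
    )

Something like `bazel build --aspects=//tools/lint:fix.bzl%lint_fix_aspect --output_groups=lint_fixes //...`, followed by a `bazel run` of a fixer that reads those JSON files and rewrites the workspace, would approximate the apply-fixes experience developers get outside Bazel.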
I noticed that many lint/style fixes in bazel.git come from automated commits.
Does that mean that, inside Google, the responsibility for fixing lint errors is only partially owned by the change's author, with the rest handled by an automated system?
Cheers,
Son Luong.