Validation Action status


Son Luong Ngoc

Jun 30, 2022, 4:34:10 AM
to bazel-discuss
Hi folks,


A bit of context:
Today, rules_go's "nogo" static analysis framework is executed as part of the package compilation action, so if your static analysis fails, the compilation fails.
There is also no way to automatically apply suggested fixes if the analyzers were to implement them, so the developer experience is subpar compared to a non-bazel workflow.
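For reference, a minimal sketch of how nogo is usually wired in (following rules_go's documented nogo rule; the analyzer label and Go version are illustrative):

# BUILD.bazel at the repository root
load("@io_bazel_rules_go//go:def.bzl", "nogo")

nogo(
    name = "my_nogo",
    # Each dep exports a golang.org/x/tools analysis.Analyzer; this label is illustrative.
    deps = ["@org_golang_x_tools//go/analysis/passes/printf"],
    visibility = ["//visibility:public"],
)

# WORKSPACE: attach the nogo target to the Go toolchain.
go_register_toolchains(version = "1.18.3", nogo = "@//:my_nogo")

Because nogo is attached to the toolchain, its analyzers run inside the Go compile actions themselves, which is why an analysis failure fails the compilation.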

Currently there seem to be several opinions regarding how to run linters in a bazel repo:
- Bazel's validation actions (used for native rules, targeting AOSP devs)
- Aspect-based tests (used by the Python community)
- A macro over a test + fix binary pair for each package

I want to ask about the status of Validation Actions development today (https://docs.bazel.build/versions/main/skylark/rules.html#validation-actions) and whether there is any plan to further expand the functionality of validation actions.
For example, being able to depend on the _validation output group of all targets in the repo would be nice.

Moreover, what is the philosophy behind "linter support" for bazel?
Linters here are defined as:
1. Validation actions that ensure source files in the workspace follow a certain rule set and produce a set of analysis outputs.
2. Consuming the analysis results from (1) to build a "fixer" executable that can apply fix suggestions back into the workspace.

I noticed many lint/style fixes in bazel.git are coming from automated commits.
Does that mean the responsibility of fixing lint errors at Google is only partially owned by the change's author and partially owned and fixed by an automated system?

Cheers,
Son Luong.

Alex Humesky

Jun 30, 2022, 5:35:07 PM
to Son Luong Ngoc, bazel-discuss
On Thu, Jun 30, 2022 at 4:34 AM Son Luong Ngoc <sluo...@gmail.com> wrote:
Hi folks,


A bit of context:
Today, rules_go's "nogo" static analysis framework is executed as part of the package compilation action, so if your static analysis fails, the compilation fails.
There is also no way to automatically apply suggested fixes if the analyzers were to implement them, so the developer experience is subpar compared to a non-bazel workflow.

Correct, in general bazel doesn't modify the source tree, which is by design. Some of our internal tools print out a command line to run to make it easier for the user to apply the fixes. I believe in bazel, Error Prone does this for strict deps: it prints a buildozer command to run. A framework for linters to modify the source tree may or may not be categorically out of the question; I just don't think anyone has sat down to design it. It could be worth opening a feature request to at least document the problem if there isn't one already.
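For illustration (the labels here are hypothetical), such a strict-deps message typically ends with a ready-to-run command along the lines of:

buildozer 'add deps //java/com/example/util:util' //java/com/example/app:app_lib

which the author copies and pastes to add the missing direct dependency to the offending target's BUILD file.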


Currently there seem to be several opinions regarding how to run linters in a bazel repo:
- Bazel's validation actions (used for native rules, targeting AOSP devs)

Validation actions are available to native rules and Starlark rules, and the first user of validation actions was an internal ruleset. AOSP is probably using or will use validation actions, but it's worth mentioning that they're not the only ones we had in mind.
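As a rough sketch of the Starlark side (the rule, attribute, and tool labels below are made up for illustration), a rule wires a check in as a validation action by declaring an output for the check and adding it to the _validation output group:

def _lint_check_impl(ctx):
    # Marker file the linter writes on success; the validation machinery only
    # cares that this output gets built without the action failing.
    marker = ctx.actions.declare_file(ctx.label.name + ".lint_ok")
    ctx.actions.run(
        executable = ctx.executable._lint_tool,  # hypothetical linter binary
        arguments = [f.path for f in ctx.files.srcs] + ["--output", marker.path],
        inputs = ctx.files.srcs,
        outputs = [marker],
        mnemonic = "Lint",
    )
    return [
        DefaultInfo(files = depset(ctx.files.srcs)),
        # Anything in the _validation output group is built whenever the target
        # is built, but it stays off the critical path of actions that consume
        # the target's regular outputs.
        OutputGroupInfo(_validation = depset([marker])),
    ]

lint_check = rule(
    implementation = _lint_check_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "_lint_tool": attr.label(
            default = "//tools/lint:lint",  # hypothetical label
            executable = True,
            cfg = "exec",
        ),
    },
)

Building the target (or requesting --output_groups=_validation explicitly) then runs the check without blocking compilation of targets that depend on it.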

- Aspect-based tests (used by the Python community)
- A macro over a test + fix binary pair for each package

I want to ask about the status of Validation Actions development today (https://docs.bazel.build/versions/main/skylark/rules.html#validation-actions) and whether there is any plan to further expand the functionality of validation actions.
For example, being able to depend on the _validation output group of all targets in the repo would be nice.

There aren't any plans to expand validation actions, mostly because the original use case has been satisfied (i.e., to get certain actions off the critical path of the build).

Could you give some more details about what "being able to depend on the _validation output group of all targets in the repo" means, or what you're trying to solve with that?

If you want to run the validation actions for everything in the repo, but not actually build anything else, then I think something like this might work:

bazel build //... --output_groups=_validation

That is, build only the things in the validation output group, which should pull in only validation actions.

If you mean something more like a target or action that depends on every validation action output in the repo, that would require a target to be able to depend on every other target in the repo, and that's not well supported (e.g. something like deps = ["//..."] is not supported).


Moreover, what is the philosophy behind "linter support" for bazel?
Linters here are defined as:
1. Validation actions that ensure source files in the workspace follow a certain rule set and produce a set of analysis outputs.
2. Consuming the analysis results from (1) to build a "fixer" executable that can apply fix suggestions back into the workspace.


Some linters that run as part of the build work by printing a command line that the user can copy/paste and run. I don't think a model has been considered where a binary that applies all the fixes gathered during the build is itself built as part of the build.

I noticed many lint/style fixes in bazel.git are coming from automated commits.
Does that mean the responsibility of fixing lint errors at Google is only partially owned by the change's author and partially owned and fixed by an automated system?

In general, the responsibility of fixing lint errors is on the change author, but we haven't always had linters enabled for every language and for every part of the codebase. Internally we have a few different systems around linting:

- some IDEs we use have "format on save" and other lint checks built in
- some linters run as part of the build during development
- linters run as part of a presubmit check, so that you can't check linty code into the repo (though you can bypass the check if there's a reason to)
- there are systems that automatically go through the repo, run lint checks, generate changes, and send those changes to the code owners (chosen based on some heuristic) for review and approval before being checked in

We also often consider things beyond code formatting as "linters": for example, Error Prone checks for code correctness, and Android Lint verifies things such as whether certain API calls are available for the app's minimum SDK.


Cheers,
Son Luong.
