I think we might be thinking different things, so let me try to clarify what I understood the suggestion to be.
The Stack Overflow response suggested that we essentially write a rule that creates a "dummy" Bazel workspace with the files needed, and then run a shell script. The script would cd into the generated folder, run bazel to try to build a target, and verify that the build failed (optionally with the right error). The goal is a 'compile test': we'd like to verify that we get compile errors when our library is used in certain ways (a tiny bit of template metaprogramming, for example).
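To make sure we're on the same page, here's roughly the kind of test script I pictured. This is just a sketch of my understanding; the workspace path, target name, and expected diagnostic are placeholders, not anything from our actual setup:

#!/bin/bash
# Hypothetical sketch of the suggested 'compile test' script.
# Assumes some rule has already generated a dummy workspace (WORKSPACE,
# BUILD, and the source that is expected not to compile) under the runfiles.
set -u

cd "$TEST_SRCDIR/dummy_workspace" || exit 1

# The target should fail to build; succeeding is a test failure.
if bazel build //:should_not_compile 2> build_stderr.txt; then
  echo "ERROR: target built, but a compile error was expected" >&2
  exit 1
fi

# Optionally check for the specific diagnostic, so an unrelated breakage
# (missing dep, bad WORKSPACE, etc.) doesn't pass as a false positive.
grep -q 'static_assert' build_stderr.txt || {
  echo "ERROR: build failed, but not with the expected diagnostic" >&2
  cat build_stderr.txt >&2
  exit 1
}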
That technically works, but I'm concerned it would be very inefficient and, in practice, nearly impossible to maintain.
For example, suppose we have a library foo:
cc_library(
    name = 'foo',
    srcs = [
        'foo.cc',
    ],
    hdrs = [
        'foo.h',
    ],
    deps = [
        '//dep1',
        '//dep2',
        '//dep3',
        ...
    ],
)
For us, 'foo' has about 100 dependencies and depends on hundreds more generated files. It is also built with a custom CROSSTOOL file, using a compiler fetched via a new_http_repository rule. On top of that, the library relies on a significant number of dependencies declared in the WORKSPACE file.
The size and complexity of that dependency set makes it tricky, in my mind, to dynamically recreate it in a new repository and re-run Bazel just to verify that you get a compile error, especially since the set keeps evolving.
Am I missing a shortcut there?
Independently, as I think you might have suggested with your cc_test_expected_to_fail name, it's also valuable to be able to verify that a test fails at runtime. That's pretty easy to do with a cc_binary and an sh_test, which we do today with Peloton's code base to great effect.
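For the record, a rough sketch of that pattern, with made-up target names rather than our real ones; the wrapper script just runs the binary passed as its first argument and inverts the exit code (something like "$1" && exit 1 || exit 0):

cc_binary(
    name = 'crashes_on_purpose',
    testonly = 1,
    srcs = ['crashes_on_purpose.cc'],
)

sh_test(
    name = 'crashes_on_purpose_test',
    srcs = ['expect_failure.sh'],
    data = [':crashes_on_purpose'],
    args = ['$(location :crashes_on_purpose)'],
)

This works well for runtime failures, but it doesn't help with the compile-error case above, which is really what we're after.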