The first Haskell package would only be included on Windows, the latter only on Linux or macOS. The structure of `compatible_with` here is inspired by precedent in Bazel, such as the required providers of an attribute: the outer list is an "or" and the inner list an "and". The `compatible_with` attribute would be handled by `bzlmod` before invoking the module rule, i.e. `haskell.stack.package` wouldn't see `unix` when run on a Windows system. This means it is not something module rule authors have to remember to handle ad hoc, but a universal feature that users can always rely on.
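(For reference, the `haskell.stack.package` calls discussed above might have looked roughly like this; `Win32` is just a stand-in for the Windows-only package:)
```
haskell.stack.package(name = "Win32", compatible_with = [[condition.os.windows]])
haskell.stack.package(name = "unix", compatible_with = [[condition.os.linux], [condition.os.macos]])
```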
```
nixpkgs = bazel_dep(name = "rules_nixpkgs", compatible_with = [[condition.os.linux], [condition.os.macos]], ...)
scoop = bazel_dep(name = "rules_scoop", compatible_with = [[condition.os.windows]], ...)
```
A `compatible_with` on a `bazel_dep` could be used to disable all of its module rules on an incompatible platform. In this example Nix module rules would never run on Windows and Scoop rules would only run on Windows. As you described below, only module rule execution is problematic on unsupported platforms; fetching Bazel dependencies is benign, as they are just static files. We could say then that all `bazel_dep`s are always resolved and fetched, meaning the Bazel dependency resolution algorithm does not have to take `compatible_with` into account. However, all module rules and non-Bazel dependencies of a disabled `bazel_dep` are skipped, i.e. on Windows none of `rules_nixpkgs`'s module rules are executed and none of its non-Bazel dependencies are resolved or fetched. I'm uncertain how this should interact with targets defined in a `bazel_dep`'s `BUILD` files, say, `@rules_nixpkgs//...`. Maybe they should all be skipped, though then we'd also want to skip targets defined in `bazel_dep`s of `rules_nixpkgs` that are not depended on elsewhere, which sounds complicated. That question needs a bit more thought.
If we want to make it freely configurable, then user-defined flags would indeed not be enough. I don't know if that's realistic though. The Scala version in this case will impact the set of external dependencies, compiler flags, warnings, lints, etc. So, in practice a project will only support a finite set of versions, such that user-defined flags are feasible. With the `compatible_with` attribute suggested above this could maybe look as follows:
```
...
scala.toolchain(version = "2.12.12", compatible_with = [[self.features.scala2_12]])
scala.toolchain(version = "2.13.3", compatible_with = [[self.features.scala2_13]])
...
rje.maven.dep(coord = "com.typesafe.scala-logging:scala-logging_2.12.12:3.9.2", compatible_with = [[self.features.scala2_12]])
rje.maven.dep(coord = "com.typesafe.scala-logging:scala-logging_2.13.3:3.9.2", compatible_with = [[self.features.scala2_13]])
```
Of course writing all these different versions of dependencies by hand gets old quickly. Maybe MODULE files could support Starlark features like list comprehensions.
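For illustration, assuming MODULE.bazel allowed list comprehensions and keeping the hypothetical `rje.maven.dep`/`self.features` syntax from above, the two `scala-logging` lines could collapse into something like:
```
[
    rje.maven.dep(
        coord = "com.typesafe.scala-logging:scala-logging_%s:3.9.2" % version,
        compatible_with = [[feature]],
    )
    for version, feature in [
        ("2.12.12", self.features.scala2_12),
        ("2.13.3", self.features.scala2_13),
    ]
]
```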
My understanding was that the WORKSPACE file will be fully generated by bzlmod and therefore isn't something you would typically check into version control. Config-like rules with `machine_specific = True` would already pose a similar issue, I think. The generated `WORKSPACE` file would contain machine-specific items and wouldn't necessarily work on another machine.
Indeed, Cabal and Cargo can do this because they limit themselves to their own package ecosystem. Cabal also doesn't attempt hermeticity when it comes to system dependencies. Packages are allowed to run configure scripts or custom setups to find globally installed system libraries and tools. I'm less familiar with Cargo, but AFAIK it is similarly open about system dependencies.
I don't think all is lost for the design for Bazel. As you observe, cross-platform issues only appear with module rules; Bazel dependencies can be resolved and locked once for all platforms. With an approach like the `compatible_with` described above, conditional dependencies would be confined to module rules. Lock file multiplexing might quickly get unwieldy, but you suggested an approach before where the lock file records the conditions that affected resolution. Maybe this is more feasible when conditions are constrained to module rules: the lock file could have different sections for different sets of constraints that affected a module rule's resolution. Something along the lines of the sketch below.
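(Purely illustrative; all field names and the overall structure are made up.)
```
# Hypothetical lock file layout with per-condition sections.
{
    # Bazel module deps: resolved once, valid on every platform.
    "bazel_deps": {...},
    # Module rule resolutions, grouped by the conditions that affected them.
    "module_rules": {
        "rules_nixpkgs": {"conditions": {"os": ["linux", "macos"]}, "resolved": {...}},
        "rules_scoop": {"conditions": {"os": ["windows"]}, "resolved": {...}},
    },
}
```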
I hope you'll get to enjoy the holidays and recharge despite all this.
Forwarded Conversation
Subject: Conditional dependencies
------------------------
From: Xudong Yang <w...@bazel.build>
Date: Mon, Dec 7, 2020 at 1:23 PM
To: Herrmann, Andreas <andreas....@tweag.io>, Yun Peng <pcl...@bazel.build>
Hey Andreas,
After somewhat of a hiatus (I just moved to Germany from Australia), I gave the conditional dependency problem some more thought, and here's what I think could be a workable solution: essentially, don't check in the lockfile if the module rule's resolve function is platform-dependent.
Take the "scoop on windows + nix on linux" use case for example. In the user's MODULE.bazel file, they would write both scoop.dep("mylib") and nixpkg.dep("mylib") alongside each other. The resolve function of either of these module rules would simply do nothing if executed on the wrong platform (or output a warning message). This is okay since, if the BUILD files are set up properly, the repo @scoop.mylib would simply never be referenced if you're building on Linux. And then, if you don't check in the lock file, building on Windows would succeed too.
Note that the user can still use the lock file in a way that suits their setup. For example, if your project is only expected to build on Windows, you can totally check the lock file in (even if you use lots of Scoop rules). For another example, you could even set it up so that you have a Windows lock file and a Mac/Linux lock file, and you multiplex between them, as long as you know that you're only building on these systems and there's no further factors affecting resolution (e.g. architecture).
In other words, this is saying: bzlmod supports using a lock file, but how you manage the lock file is your problem. I actually quite like the cleanness of the boundary here.
Another thing to note is the failure mode. What if the user actually checked in the lock file when they shouldn't have? For example, they ran `bzlmod resolve` on Windows, and now when they build on Linux, they're suddenly trying to execute the fetch function of a Scoop rule. This is mitigable by asking module rule authors to double-check that the runtime environment of their fetch function matches their expectation (which is most likely the environment in which their resolve function was called), and if not, give a nice error message.
Would this be an acceptable solution in your opinion? Please let me know what you think!
Thanks,
Xudong
----------
From: Xudong Yang <w...@bazel.build>
Date: Thu, Dec 10, 2020 at 9:51 AM
To: Herrmann, Andreas <andreas....@tweag.io>, Yun Peng <pcl...@bazel.build>
Hi Andreas! Gentle ping -- any objections? I'm trying to finalize the doc before the end of the year.
----------
From: Herrmann, Andreas <andreas....@tweag.io>
Date: Thu, Dec 10, 2020 at 12:17 PM
To: Xudong Yang <w...@bazel.build>
Cc: Yun Peng <pcl...@bazel.build>
Hi Xudong,
Sorry for the late reply.
I hope all went well with the move and you're enjoying the new location!
Thanks for sharing your thoughts on conditional dependencies!
Unfortunately, we'd still want to check in the lock file to ensure reproducibility. E.g. dependency resolution via Coursier is not reproducible, so we require a lock file for a build with maven deps to be reproducible.
Multiplexing is a possible workaround: We could check in dedicated lock files per platform and the build would create a symlink from the canonical bzlmod lock file path to the checked in lock file for the current platform. The Nix repo rule would be a no-op on Windows and the scoop rule would be a no-op on Unix. However, at that point it would be easier if bzlmod just accepted a flag to define the lock file path. That would still maintain that boundary that you mention. It would just make the cross platform use-case a little bit more ergonomic.
Agreed, the failure mode is probably fine in most cases. Even without support from the rule author I would expect most errors to be indicating the platform mismatch relatively clearly.
One concern I have is that this puts the burden on rule authors to anticipate and support cross-platform use-cases of their rule sets. E.g. authors of scoop and nixpkgs rules must remember to a) be a no-op on unsupported platforms and b) ideally detect lock file platform mismatch and produce an appropriate error message. From a rule author's perspective that is surprising: If I'm writing rules for scoop I'm only expecting Windows users. I'm also not going to have CI pipelines set up for non-Windows use-cases.
In practice this means that cross platform users of Bazel will have to debug and patch upstream rule sets that are too platform specific. It's worth noting that this issue is transitive, any project with a transitive dependency on such a rule set will be affected. I fear that this can hinder Bazel adoption on platforms that are less well represented in the Bazel community, like Windows, and cross platform projects.
So far it seems like the no-op and lock file multiplexing approach still sacrifices a cross-platform lock file and offline builds, so it doesn't seem to improve much compared to direct support for conditional dependencies.
Another idea for a workaround on the user side instead of the rule implementor side: Will the --override_repository flag still be available on Bazel with the external dependency overhaul? If so, could the end user bundle platform specific dependencies and overrides in a local external repository and select the appropriate one for the current platform using --override_repository?
Best, Andreas
----------
From: Xudong Yang <w...@bazel.build>
Date: Thu, Dec 10, 2020 at 1:11 PM
To: Herrmann, Andreas <andreas....@tweag.io>
Cc: Yun Peng <pcl...@bazel.build>
Hi Andreas,
Thanks for your response!
> However, at that point it would be easier if bzlmod just accepted a flag to define the lock file path.
Yeah, this seems reasonable.
> E.g. authors of scoop and nixpkgs rules must remember to a) be a no-op on unsupported platforms and b) ideally detect lock file platform mismatch and produce an appropriate error message.
I think a) will happen naturally anyway. Surely, before you call Scoop, you need to verify that it's installed, or at least installable?
b) is indeed a bit more burdensome.
> I'm also not going to have CI pipelines set up for non-Windows use-cases.
I don't think the author of a Scoop rule set would need to set up CI for non-Windows. The logic there is literally "do nothing if this is not Windows".
Moreover, even if we had full-blown conditional dependencies, the situation wouldn't be improved -- how do you make sure that users only call Scoop rules when they're guarded by an "if os == windows" check? Would you then want to make sure Scoop rules fail gracefully if called on a non-Windows machine?
> [...] so it doesn't seem to improve much compared to direct support for conditional dependencies.
Indeed it's not an improvement; its value is more in that we don't need to open the Pandora's box that is a full-blown solution for conditional dependencies. Plus, setting up a cross-platform offline build when conditional dependencies are involved sounds like it would be very convoluted (if possible at all).
Summing up a bit, we're essentially evaluating the two choices: A) no-op + lock file multiplexing, vs B) full-on conditional dependencies. I'm arguing that A has the advantage of being much simpler, and that some of the (very valid) concerns you brought up are shared by both A and B (offline build, platform-specific rules being used on the wrong platform).
> Another idea for a workaround on the user side instead of the rule implementor side: Will the --override_repository flag still be available on Bazel with the external dependency overhaul? If so, could the end user bundle platform specific dependencies and overrides in a local external repository and select the appropriate one for the current platform using --override_repository?
I don't see why the flag couldn't be supported. However, I don't see it representing a full solution -- supposedly the user would need to manage the fetching themselves?
----------
From: Herrmann, Andreas <andreas....@tweag.io>
Date: Thu, Dec 10, 2020 at 3:30 PM
To: Xudong Yang <w...@bazel.build>
Cc: Yun Peng <pcl...@bazel.build>
On Thu, Dec 10, 2020 at 1:11 PM Xudong Yang <w...@bazel.build> wrote:
>> E.g. authors of scoop and nixpkgs rules must remember to a) be a no-op on unsupported platforms and b) ideally detect lock file platform mismatch and produce an appropriate error message.
> I think a) will happen naturally anyway. Surely, before you call Scoop, you need to verify that it's installed, or at least installable?
> b) is indeed a bit more burdensome.
Re a) it's true that a check whether Scoop is installed would be present in some form anyway. The difference, though, is how its absence is handled. Naively, the Scoop ruleset would just fail, telling the user to install Scoop and try again. Of course that would not work on Unix. The no-op approach instead requires an additional check for the OS, if we still want to fail hard on Windows and only make it a no-op on Unix. In rules_nixpkgs we added an attribute to the nixpkgs_package repository rule that lets the user decide whether they want to consider the absence of Nix an error or not:
https://github.com/tweag/rules_nixpkgs/#nixpkgs_package-fail_not_supported .
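A minimal usage sketch, roughly (the package and repository names are just examples):
```
load("@io_tweag_rules_nixpkgs//nixpkgs:nixpkgs.bzl", "nixpkgs_package")

nixpkgs_package(
    name = "nixpkgs_gnused",
    attribute_path = "gnused",
    repositories = {"nixpkgs": "@nixpkgs"},
    # Instead of failing on platforms without Nix (e.g. Windows), create a
    # stub repository whose targets only error out if they are actually built.
    fail_not_supported = False,
)
```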
>> I'm also not going to have CI pipelines set up for non-Windows use-cases.
> I don't think the author of a Scoop rule set would need to set up CI for non-Windows. The logic there is literally "do nothing if this is not Windows".
The logic for the Scoop fetch function would indeed be that simple. However, the no-op approach leaves some labels undefined, e.g. `@scoop//:some_tool` would be undefined. This means that BUILD files or other Bazel rules that use Scoop-provided targets must guard these labels with appropriate `select` expressions. Of course conditional dependencies also leave labels undefined. But in that case the user can disable the dependency on repositories that use such labels.
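For illustration (the config setting and the fallback label are made up):
```
# BUILD.bazel
alias(
    name = "some_tool",
    actual = select({
        "@platforms//os:windows": "@scoop//:some_tool",
        "//conditions:default": "@nixpkgs_some_tool//:some_tool",
    }),
)
```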
> Moreover, even if we had full-blown conditional dependencies, the situation wouldn't be improved -- how do you make sure that users only call Scoop rules when they're guarded by an "if os == windows" check? Would you then want to make sure Scoop rules fail gracefully if called on a non-Windows machine?
In that case it would simply be a user error to call a scoop rule on non-Windows. If they forget the `if os == "windows"` check they'll get an error, probably along the lines of "scoop.exe not found please install Scoop".
>> [...] so it doesn't seem to improve much compared to direct support for conditional dependencies.
> Indeed it's not an improvement; its value is more in that we don't need to open the Pandora's box that is a full-blown solution for conditional dependencies. Plus, setting up a cross-platform offline build when conditional dependencies are involved sounds like it would be very convoluted (if possible at all).
Agreed. Re the offline build, just to clarify, one has to be careful to distinguish cross-compilation projects from projects that can build on multiple platforms. The latter is what I mean by cross-platform projects in this discussion. A Windows toolchain wouldn't run on a Linux box, so there is little value in downloading the Windows toolchain on Linux in preparation for an offline build.
> Summing up a bit, we're essentially evaluating the two choices: A) no-op + lock file multiplexing, vs B) full-on conditional dependencies. I'm arguing that A has the advantage of being much simpler, and that some of the (very valid) concerns you brought up are shared by both A and B (offline build, platform-specific rules being used on the wrong platform).
Thanks for summarizing, yes, that sounds right. I'd add that a benefit of conditional dependencies is that the end user has the option to disable the entry point to any such platform-specific dependencies. Without conditional dependencies, every transitive platform-specific dependency instead has to be made cross-platform compatible.
We had an interesting example of this in the daml repository. On Windows we ran into some issues with rules_nodejs's TypeScript rules and eventually had to conclude that rules_nodejs effectively does not support Windows for our use-case. Some of these issues caused the repository rule yarn_install to fail, so we had to prevent it from running on Windows. Of course we then had to replace some of the generated .bzl files with dummies so that load statements in our BUILD files don't fail. The relevant code is here:
https://github.com/digital-asset/daml/blob/e7b3ac39b5a326fc1113ab8b54ee5517a10a5ed2/WORKSPACE#L799-L804
Without conditional dependencies we'd probably have to patch yarn_install itself to not run in that particular case and generate the dummy .bzl files instead. That's more complicated though, since we still have other instances of yarn_install that don't involve TypeScript, don't trigger this issue, and still need to work.
>> Another idea for a workaround on the user side instead of the rule implementor side: Will the --override_repository flag still be available on Bazel with the external dependency overhaul? If so, could the end user bundle platform specific dependencies and overrides in a local external repository and select the appropriate one for the current platform using --override_repository?
> I don't see why the flag couldn't be supported. However, I don't see it representing a full solution -- supposedly the user would need to manage the fetching themselves?
Right, we'd need the equivalent of --override_repository on bzlmod itself for this to work.
Best, Andreas
----------
From: Xudong Yang <w...@bazel.build>
Date: Mon, Dec 14, 2020 at 3:33 PM
To: Herrmann, Andreas <andreas....@tweag.io>
Cc: Yun Peng <pcl...@bazel.build>
Hey Andreas,
Just for the record, I haven't responded to your email not because I haven't read it, but because I found the points you raised very convincing. In particular the point about users not being able to correct oversights by module rule authors, except by applying patches.
So we're somewhat back to square one. If I think back to my objections to conditional deps in the first place, the biggest one is definitely that it would cause the lockfile to be platform-dependent, too. I thought that was avoidable since Bazel modules are essentially platform-independent (at least as far as fetching the source goes -- which is what bzlmod is concerned with). But given that situations such as "scoop on windows + nix on mac/linux" exist, and they can fundamentally never be covered by a platform-independent lockfile, I think that point is moot.
Directly related to the platform-independent lockfile is the use case of vendoring dependencies, i.e. checking fetched dependencies into the source tree. This, again, seems fundamentally at odds with conditional dependencies. I wonder if the only way out is to say "you can use one or the other, but if you try to vendor conditional dependencies, be ready for breakage".
---
Back to the logistics of conditional dependencies itself. I'm entertaining the idea of essentially allowing `if` statements in MODULE.bazel files and not doing anything else (specifically, nothing to manage the lockfile). The conditions allowed in the `if` would be limited to OS and CPU arch. (This still feels a bit dirty since someone will one day want the glibc version included in the set of allowed conditions...)
I wonder if this would be acceptable. Obviously, since the generated lockfile is now potentially platform-dependent, if the user carelessly checks it in and tries to build on another platform, they could get very cryptic errors. So that's not perfect, but the alternative would be that the lockfile records the platform on which it was generated (but only parts of the platform tuple that were used as conditions!), and bzlmod can smartly re-resolve if the current platform doesn't match the recorded platform. This alternative is very complex, especially if we want bzlmod to only partially re-resolve when a previous directive is invalidated by the platform change. We'd also need the resolve_fn of module rules to report the parts of the platform tuple that, when changed, would invalidate the current resolution result. And this is not even the most complex solution where bzlmod natively manages a selection of lockfiles.
That was mostly just me rambling... Did you have any thoughts about this prospect? Particularly about the conflict between vendoring and conditional dependencies?
Thanks,
Xudong
----------
From: Herrmann, Andreas <andreas....@tweag.io>
Date: Tue, Dec 15, 2020 at 5:06 PM
To: Xudong Yang <w...@bazel.build>
Cc: Yun Peng <pcl...@bazel.build>
Hi Xudong,
Thank you for the update!
On Mon, Dec 14, 2020 at 3:33 PM Xudong Yang <w...@bazel.build> wrote:
> So we're somewhat back to square one. If I think back to my objections to conditional deps in the first place, the biggest one is definitely that it would cause the lockfile to be platform-dependent, too. I thought that was avoidable since Bazel modules are essentially platform-independent (at least as far as fetching the source goes -- which is what bzlmod is concerned with). But given that situations such as "scoop on windows + nix on mac/linux" exist, and they can fundamentally never be covered by a platform-independent lockfile, I think that point is moot.
> Directly related to the platform-independent lockfile is the use case of vendoring dependencies, i.e. checking fetched dependencies into the source tree. This, again, seems fundamentally at odds with conditional dependencies. I wonder if the only way out is to say "you can use one or the other, but if you try to vendor conditional dependencies, be ready for breakage".
In the current repository rule world this could, for example, be achieved with a conditional `local_repository`.
In the overhaul proposal it's less clear how to handle this. I can see that `override_dep(..., local_path = ...)` is somewhat similar to `local_repository`. However, using an if-statement like `if os == "windows": override_dep(...)` doesn't really express the same thing. This would say: only override on Windows and use the original on other systems.
Then there is `workspace_settings.vendor_dir`. IIUC repositories inside `vendor_dir` are automatically imported and don't need to be listed in the module file. Is that correct? That makes it difficult to express a conditional dependency on these.
How do `override_dep` and `vendor_dir` interact? Maybe the following could work: Let's assume we have `vendor_dir = "third_party"` and we have `third_party/some_dep`. Then we could write `if os == "windows": override_dep(name = "some_dep", local_path = "third_party_windows/some_dep")` to override a vendored dependency on Windows. This is not so much a conditional dependency as it is a conditional replacement. However, if `third_party_windows/some_dep` is empty then it would effectively make `some_dep` a Unix only dependency. This does feel pretty hacky though. Also, IIUC `override_dep` doesn't apply to module rule dependencies, only Bazel dependencies.
> Back to the logistics of conditional dependencies itself. I'm entertaining the idea of essentially allowing `if` statements in MODULE.bazel files and not doing anything else (specifically, nothing to manage the lockfile). The conditions allowed in the `if` would be limited to OS and CPU arch. (This still feels a bit dirty since someone will one day want the glibc version included in the set of allowed conditions...)
Yes, I think this would cover the scoop/nixpkgs use-case as well as the mentioned yarn_install use-case. I suppose this would look something like this:
```
# MODULE.bazel
if os == "windows":
    scoop = bazel_dep(name = "rules_scoop")
    scoop.package(name = "toxiproxy")
else:
    nixpkgs = bazel_dep(name = "rules_nixpkgs")
    nixpkgs.package(name = "toxiproxy")

# BUILD.bazel
alias(
    name = "toxiproxy",
    actual = select({
        "//conditions/os:windows": "@scoop_toxiproxy//:toxiproxy.exe",
        "//conditions:default": "@nixpkgs_toxiproxy//:bin/toxiproxy",
    }),
)
```
```
# MODULE.bazel
if os == "windows":
    self.dummy(name = "yarn_packages", ...)
else:
    nodejs.yarn_install(name = "yarn_packages", ...)
```
The concern about the glibc version is unfortunately justified. We recently had just such an issue come up related to the update from Scala 2.12 to 2.13 with rules_scala. We have a Bazel repository that is built with Scala 2.12. However, we want to transition forward to Scala 2.13. But the JARs produced by the repository are consumed by other projects, internal and external, so we cannot transition everything from 2.12 to 2.13 in one fell swoop and need a transition period where the repository builds with both 2.12 and 2.13. rules_scala configures the Scala version in a repository rule and only supports one Scala version at a time. Further, Scala Maven dependencies contain the Scala version in their Maven coordinates, and some Maven dependencies are conditional on the Scala version. So, there are multiple places in the WORKSPACE file that are conditional on the Scala version we're targeting. We use a repository rule that looks for an environment variable that configures the Scala version. Together with the --repo_env flag this gives us a custom feature flag to switch between Scala versions. The corresponding code is visible here:
https://github.com/digital-asset/daml/pull/8271/files .
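The gist of the technique is roughly this (a simplified sketch, not the actual daml code):
```
# scala_version.bzl -- simplified sketch of the env-var based version switch.
def _scala_version_impl(repository_ctx):
    version = repository_ctx.os.environ.get("SCALA_VERSION", "2.12.12")
    repository_ctx.file("BUILD.bazel", "")
    repository_ctx.file("version.bzl", "scala_version = %r\n" % version)

scala_version_configure = repository_rule(
    implementation = _scala_version_impl,
    environ = ["SCALA_VERSION"],  # re-run this rule when the variable changes
)
```
Switching versions then comes down to `bazel build --repo_env=SCALA_VERSION=2.13.3 ...` and loading `scala_version` from the generated repository wherever the conditional choices are made.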
> I wonder if this would be acceptable. Obviously, since the generated lockfile is now potentially platform-dependent, if the user carelessly checks it in and tries to build on another platform, they could get very cryptic errors. So that's not perfect, but the alternative would be that the lockfile records the platform on which it was generated (but only parts of the platform tuple that were used as conditions!), and bzlmod can smartly re-resolve if the current platform doesn't match the recorded platform. This alternative is very complex, especially if we want bzlmod to only partially re-resolve when a previous directive is invalidated by the platform change. We'd also need the resolve_fn of module rules to report the parts of the platform tuple that, when changed, would invalidate the current resolution result. And this is not even the most complex solution where bzlmod natively manages a selection of lockfiles.
Also in the Scala example above we switch between lock files on the user side. In that case the lock file path is a repository rule attribute of rules_jvm_external, so we can select the appropriate path there. In the overhaul proposal there is only one global lock file, so OS-, architecture-, and feature-specific lock files would lead to a combinatorial number of lock files. As long as the number remains low, user-side switching between lock files is probably still acceptable, but this can quickly grow out of hand. I suspect in the daml project we'd have the OS axis (linux, macos, windows) and the Scala axis (2.12, 2.13), so we would need up to six lock files. For now we only build 2.13 on Linux, so we're only at four lock files. It's still feasible to check those in and point bzlmod at the right one based on the platform/configuration, but it's already unpleasant to have to manually regenerate this many lock files when a PR touches dependencies.
The lock file aspect in particular is quite tricky. Manually switching between lock files isn't pretty, but I'm not sure what a better solution would look like. Re error messages, I suppose bzlmod could record in the lock file whether any entries are conditional, along with their conditions, and then throw an error if the current platform/configuration doesn't match. At least that's up front and not a cryptic error later on in the build.
I hope this is helpful.
----------
From: Xudong Yang <w...@bazel.build>
Adding some more people in, since I no longer think this is a "quick check-in", or that we can even finalize this topic before EOY. (See the latter half of this email for the existential dread.)
re vendor_dir: It simply works by saying that the repo "foo" will be fetched into the directory $vendorDir/foo, instead of $magicBzlmodDir/foo. It does not automatically import all repos (subdirectories?) under $vendorDir -- it really only dictates where the contents of deps should be fetched. `override_dep` with `local_path` is a rather orthogonal feature that disables version resolution for the given dep and just says "use whatever's under this directory".
Your example reminds me, however, that I hadn't even thought about the potential problems introduced by allowing `if` statements. I didn't really intend `override_dep` to be conditional-able at all. It sounds like a nightmarish situation just waiting to be abused. (Worse, imagine if the `module` directive could be inside an `if` statement, and you could change the author of the module depending on the platform. ???) I really only intended module rule invocations to be guardable by `if` statements, but that sounds really hard to implement. On the other hand, I can't think of a more declarative syntax either (a la BUILD's `select`), given that module rule attrs can be anything...
re glibc/scala conditions: I don't have much to say here, other than "D:"... In any case, even user-defined flags wouldn't help with selecting between scala versions, right? Presumably you'd need to pass the --feature=scala2.13 flag manually if you're running on a platform with scala 2.13 installed?
re lockfile management: I came to the terrible realization that this means the WORKSPACE file also needs to be selected, since the list of available repos could change based on the platform. That sounds prohibitively cumbersome.
---
I feel like I have nothing useful to contribute in this email other than realization after realization of horror. So to make it worse, I'm going to admit that I feel like conditional dependencies are in many ways fundamentally incompatible with the current state of the new design.
Part of this impasse results from the fact that module rules can run any executable. I see you raising the examples of Cabal and Cargo and I think, "yeah, why can't we do it?" But I soon realize that Cabal and Cargo run on all the platforms they support (duh) so they can resolve "if os==linux" deps even when they're running on Windows. Whereas for us, if we're on Windows, we just absolutely don't know what to do with a Nix rule. Reproducibility across platforms is essentially unattainable.
I wonder if this warrants a drastic change in direction for the proposal: we'll try to create a new system that works better for Bazel module deps and cross-platform custom deps (module rules), but we can't support platform-dependent custom deps very well. This necessarily means that we can no longer claim that our new proposal supports all the use cases of today's Bazel, which in turn means that we can't ask everyone to migrate. This has far-reaching consequences for our migration strategy, since our current thinking is to disallow concurrent usage of bzlmod and today's WORKSPACE. We'd have to scratch that and accept a hybrid world where some deps are specified with bzlmod and some with WORKSPACE repo rules. Ugh.
I'll just continue to beat my brains out during the holidays.