Hi...
The XLA Python bindings aren't included in TensorFlow; JAX is their main user, and they are packaged on PyPI as jaxlib, as you have observed. They are really only in the TensorFlow source tree because XLA is in the TensorFlow tree. To my knowledge, there hasn't been any discussion about including them in TensorFlow itself. I also caution you that they aren't a completely stable API (just as XLA's C++ API isn't stable).
If the goal is just local testing via "bazel test" inside the TensorFlow tree, I believe this can work if you build with --config=monolithic; at some point in the past, xla_client_test_cpu in the same directory worked via "bazel test", although it's not a configuration we routinely test ourselves. The build rules may need updating, because we don't exercise this in our CI builds, and some details of linking differ in open source compared to the setup internal to Google, where we do run these tests.
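For concreteness, the invocation would look something like the sketch below. The exact target label is an assumption based on where xla_client_test_cpu lives in the source tree and may differ in your checkout:

```shell
# Run the XLA Python binding tests inside the TensorFlow tree.
# --config=monolithic links TensorFlow into a single shared object,
# which is what makes this work in the open-source build.
# The target path below is a guess; adjust it to match your checkout.
bazel test --config=monolithic //tensorflow/compiler/xla/python:xla_client_test_cpu
```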
It is possible to build a custom jaxlib that includes your modifications to XLA; see:
https://jax.readthedocs.io/en/latest/developer.html#building-jaxlib-from-source and note that you can change the JAX WORKSPACE file to point to a TensorFlow tree of your choice instead of a fixed GitHub hash. You may need to rebase your fork onto a more recent TensorFlow checkout for this to work, though. You can then access the Python bindings as jax.lib.xla_client.
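As a rough sketch of the WORKSPACE change, assuming JAX still imports TensorFlow under the repository name "org_tensorflow" (check your copy of the WORKSPACE file, as this may have changed):

```starlark
# In JAX's WORKSPACE file: replace the http_archive(name = "org_tensorflow", ...)
# stanza, which pins a fixed GitHub commit, with a reference to a local checkout.
# The repository name and path here are assumptions; adjust to your setup.
local_repository(
    name = "org_tensorflow",
    path = "/path/to/your/tensorflow",  # local tree containing your XLA changes
)
```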
We've also at points in the past talked about perhaps splitting the XLA Python bindings out of jaxlib into a pypi package of their own.
Hope that helps,
Peter