There's no documentation for this, but essentially:
* Check out the TensorFlow repository.
* Run ./configure, enabling XLA, and possibly GPU if that's of interest to you.
* You can now build and run targets that use XLA. For example, this runs one of the unit tests that invokes XLA directly from C++:
$ bazel test -c opt //tensorflow/compiler/xla/tests:array_elementwise_ops_test_cpu
You can write your own C++ binaries that target the XLA API similarly.
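For reference, here's a rough, untested sketch of what such a binary might look like, modelled on the unit tests under tensorflow/compiler/xla/tests. The header paths and client API names used here (ClientLibrary, ComputationBuilder, ExecuteAndTransfer) are assumptions based on how those tests are written, so check them against your checkout:

// Hypothetical minimal XLA client program: builds a small computation
// (elementwise add of two constant vectors), runs it on the local backend,
// and prints the result. Header paths and API names are assumptions based
// on the XLA unit tests and may not match your checkout exactly.
#include <iostream>

#include "tensorflow/compiler/xla/client/client_library.h"
#include "tensorflow/compiler/xla/client/computation_builder.h"
#include "tensorflow/compiler/xla/client/local_client.h"

int main() {
  // Grab the process-wide local client (CPU, or GPU if you built with it).
  xla::LocalClient* client = xla::ClientLibrary::LocalClientOrDie();

  // Describe the computation: result = {1,2,3} + {4,5,6}.
  xla::ComputationBuilder builder(client, "add_vectors");
  auto x = builder.ConstantR1<float>({1.0f, 2.0f, 3.0f});
  auto y = builder.ConstantR1<float>({4.0f, 5.0f, 6.0f});
  builder.Add(x, y);

  // Compile and run it, transferring the result literal back to the host.
  auto computation = builder.Build().ConsumeValueOrDie();
  auto result = client->ExecuteAndTransfer(computation, /*arguments=*/{})
                    .ConsumeValueOrDie();
  std::cout << result->ToString() << std::endl;
  return 0;
}

You'd build that with a cc_binary rule whose deps pull in the XLA client libraries (something like //tensorflow/compiler/xla/client:client_library and :computation_builder), mirroring the deps on the existing test targets.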
There's also a Python API that you could try, e.g.,
$ bazel test -c opt //tensorflow/compiler/xla/client/python:local_client_test
Peter