Hi XLA team,
I've seen a couple of approaches to running XLA computations, and I'd like to find out which is recommended. I can either use the functionality in
xla/client/client_library.h
xla/client/client.h
xla/client/local_client.h
or the functionality in xla/pjrt/. What's the difference between these, and is one recommended over the other? Are there other approaches I've missed?
I'd also like to make sure that I'm actually using XLA: when I run my program I see
"""
2022-04-01 15:48:43.635974: I tensorflow/compiler/xla/service/service.cc:171] XLA service 0x1d4a830 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2022-04-01 15:48:43.636093: I tensorflow/compiler/xla/service/service.cc:179] StreamExecutor device (0): Host, Default Version
"""
which, given the "(this does not guarantee that XLA will be used)" in the log, suggests I might not actually be using XLA.
Regards,
Joel