My end goal is to decide whether to implement an XLA backend for an accelerator rather than implementing a 'native' TF integration.

To decide that, I'm trying to better understand XLA 'under the hood' from various aspects. One of these aspects is the XLA-device vs. XLA-compilation-device question. It seems an implementation can take one of two directions: either implement/register an XLA device along with an XLA backend (like TPU does), or register a 'regular' device (like CPU or GPU) along with an XLA backend for that device and then somehow tell XLA that the device has a JIT compilation backend it can use as required.

If I understand correctly, the second direction would allow hybrid execution, where clustered regions of the graph go via the XLA path while unsupported ops take the native TF executor path on either that accelerator or the CPU.
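Concretely, for the second direction I imagine something along these lines (just a sketch of my current understanding; "A" and "XLA_A_JIT" are made-up names, and the field/enum names are taken from my reading of tensorflow/compiler/tf2xla/xla_op_registry.h, so they may not be exact):

// Sketch only: tell the autoclustering machinery that ops placed on an
// existing 'regular' device "A" can be compiled with an XLA JIT device.
// Names "A" and "XLA_A_JIT" are hypothetical; field/enum names follow my
// reading of tf2xla/xla_op_registry.h and may differ between TF versions.
#include "tensorflow/compiler/tf2xla/xla_op_registry.h"

namespace tensorflow {

void RegisterDeviceAForAutoclustering() {
  XlaOpRegistry::DeviceRegistration registration;
  // The XLA JIT device used to lower and compile clusters placed on "A".
  registration.compilation_device_name = "XLA_A_JIT";
  // Be conservative: only cluster when JIT is explicitly requested.
  registration.autoclustering_policy =
      XlaOpRegistry::AutoclusteringPolicy::kIfExplicitlyRequested;
  XlaOpRegistry::RegisterCompilationDevice("A", registration);
}

}  // namespace tensorflow

Is that roughly the intended hook, or is there more to it?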
Thanks,
Moshe

On Thursday, July 22, 2021 at 1:46:21 AM UTC+3 ches...@google.com wrote:

Hi Moshe,

Some of these questions sound like an XY problem. Could you describe what you would like to achieve, and we might be able to help? In particular, questions (1) and (4) seem to make incorrect assumptions.

George

On Thu, Jul 8, 2021 at 3:22 AM Moshe Maor <moshe...@gmail.com> wrote:

Hi all,

It seems, per the code, that the XLA_* devices are being deprecated and that the proper way to run through the XLA compiler is by triggering the autoclustering pass. A couple of questions regarding that:

1. How does the mark-for-compilation pass decide which XLA JIT device to use to lower and compile a cluster, if the XLA JIT devices are registered against the XLA_* devices during device registration?
2. It seems DeviceInfoCache::GetIdFor has a somewhat hacked way of re-assigning already registered JIT devices to existing CPU/GPU devices. Is that related?
3. If yes, how can that be supported for other XLA backends?
4. Given the deprecation of the XLA_* devices, what is the flow for eager execution via an XLA device?

Thanks,
Moshe
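(For reference, by "triggering the autocluster" in the quoted questions above I mean enabling the global JIT level in the session config, roughly like this; just a sketch using the TF1-style session API. The same thing can also be requested via the TF_XLA_FLAGS=--tf_xla_auto_jit=2 environment variable.)

// Sketch: enable XLA autoclustering globally for a TF1-style session by
// setting the global JIT level in the session's OptimizerOptions.
#include "tensorflow/core/protobuf/config.pb.h"
#include "tensorflow/core/public/session_options.h"

tensorflow::SessionOptions MakeJitEnabledSessionOptions() {
  tensorflow::SessionOptions options;
  options.config.mutable_graph_options()
      ->mutable_optimizer_options()
      ->set_global_jit_level(tensorflow::OptimizerOptions::ON_1);
  return options;
}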
Hi George,

A couple of follow-up questions:

1. Regarding the registration for a device "A" that I already have a 'native' integration for (i.e., I have a platform/device/StreamExecutor for device A): do I only need to call XlaOpRegistry::RegisterCompilationDevice("A", registration) with some A-specific DeviceRegistration definition?
2. Am I losing anything by not implementing a 'real' XLA device (like the XLA_GPU and XLA_CPU devices that were originally implemented) and instead hooking an XLA backend to a 'regular' device?
3. When using this flow, do I have to explicitly register the special XLA kernels for this 'A' device? I see CPU/GPU register _XlaCompile/_XlaRun/_XlaMerge; a sketch of what I have in mind is below. Are there any others required?
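Just to make question 3 concrete, this is roughly what I imagine the registration for device "A" would look like (a sketch only; the op kernel classes and HostMemory constraints are copied from my reading of the GPU registrations in tensorflow/compiler/jit/kernels/xla_ops.{h,cc}, so details may be off):

// Hypothetical sketch for device "A": register the JIT-related kernels the
// way xla_ops.cc does for CPU/GPU, so XlaLaunch/_XlaCompile/_XlaRun/_XlaMerge
// can be placed on "A". HostMemory constraints copied from the GPU variants.
#include "tensorflow/compiler/jit/kernels/xla_ops.h"
#include "tensorflow/core/framework/op_kernel.h"

namespace tensorflow {

REGISTER_KERNEL_BUILDER(Name("XlaLaunch")
                            .Device("A")
                            .HostMemory("constants")
                            .HostMemory("resources"),
                        XlaLocalLaunchOp);

REGISTER_KERNEL_BUILDER(Name("_XlaCompile")
                            .Device("A")
                            .HostMemory("constants")
                            .HostMemory("key")
                            .HostMemory("compilation_successful")
                            .HostMemory("resources"),
                        XlaCompileOp);

REGISTER_KERNEL_BUILDER(Name("_XlaRun").Device("A"), XlaRunOp);

REGISTER_KERNEL_BUILDER(Name("_XlaMerge").Device("A"), XlaMergeOp);

}  // namespace tensorflow

If these kernels instead get created through some other path when a 'regular' device is used, that would be good to know too.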