XLA is a domain-specific compiler for linear algebra that can accelerate TensorFlow and JAX models, with the goal of compiling these workloads efficiently for many platforms. MLIR is a novel approach to building reusable and extensible compiler infrastructure. MLIR aims to address software fragmentation, improve compilation for heterogeneous hardware, significantly reduce the cost of building domain-specific compilers, and aid in connecting existing compilers together.
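As a concrete illustration of the first point (a minimal sketch, not from the original post): in JAX, wrapping a function in `jax.jit` traces it and hands the traced linear-algebra ops to XLA, which compiles them for the active backend. The function name `affine` and the shapes here are just for the example.

```python
# Minimal sketch: jax.jit stages a Python function out to XLA,
# which compiles the linear-algebra ops for the target backend
# (CPU here; the same code compiles for GPU/TPU backends).
import jax
import jax.numpy as jnp

@jax.jit  # just-in-time compile via XLA
def affine(x, w, b):
    return jnp.dot(x, w) + b

x = jnp.ones((2, 3))
w = jnp.ones((3, 4))
b = jnp.zeros((4,))

# The first call triggers tracing and XLA compilation;
# subsequent calls with the same shapes reuse the compiled executable.
y = affine(x, w, b)
print(y.shape)  # (2, 4)
```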
We are using what we learned from building XLA to create MLIR, but MLIR has a different aim and focus (as you can see from the Discourse discussions and open design meetings). We are integrating the two, and you'll see MLIR codegen being used inside XLA and TensorFlow shortly. Integrating with an existing project at this level without disrupting users can be tricky, so it isn't, and won't be, a flip of a switch.
MLIR will be supported for a long time, and it is a community-driven project with many active contributors outside of the team & ML. Similarly, XLA is a very important component of codegen, optimization, and execution for TensorFlow and JAX, and as such it will keep evolving and be supported (it has some very satisfied users and sets a high bar). Part of the answer also depends on nomenclature: if you change the infrastructure and codegen approach of XLA to use MLIR, is it still XLA? For some, the answer depends on the philosophy/goal of the project; for others, it is the code, approach, or interface, and so the answer may vary. They do play nicely together, as David pointed out (updated link pointing to GitHub), and we will continue this tight collaboration.
Also, what won't change is that we are very eager to make compilation a primary element of TensorFlow and to enable solving compilation challenges in ML and beyond. We'll be evolving to meet the demands of this fast-changing field, but we will do our best to ensure everything keeps working as it does today.
Best,
Jacques