What is the relation between XLA and MLIR?


t kevin

Jun 16, 2020, 2:17:11 AM
to XLA development
hi folks,

It seems XLA and MLIR have some similarities, and both provide a mechanism for targeting custom hardware.

Will MLIR replace XLA completely, or will both of them be supported for the long term?

Thanks
Kevin

David Majnemer

Jun 18, 2020, 1:22:24 PM
to t kevin, XLA development
Hi,

XLA is not going anywhere.
XLA and MLIR play nicely together; see here for an example where XLA's HLO representation is translated into an MLIR dialect. That code also uses MLIR for kernel generation.
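
For concreteness, here is one way to look at that translation from the user side; a minimal sketch using JAX's lowering API (the function f is just an illustration, and the exact method names vary across JAX versions):

    # Sketch: inspect the MLIR form of a small computation via JAX's
    # lowering API (method names vary across JAX versions).
    import jax
    import jax.numpy as jnp

    def f(x, y):
        return jnp.dot(x, y) + 1.0

    lowered = jax.jit(f).lower(jnp.ones((4, 8)), jnp.ones((8, 2)))
    print(lowered.as_text())  # the computation as an MLIR module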

It is likely that XLA will use important MLIR technology where it is applicable and makes sense.

Best,
David




Jacques Pienaar

Jun 18, 2020, 1:54:08 PM
to David Majnemer, t kevin, XLA development
Hey Kevin,

David summarized it well; I'll expand slightly upon his answer. These projects do indeed overlap, and the teams sit (before work from home, at least) adjacent to one another. There are even ex-XLA folks on MLIR and ex-MLIR folks on the XLA team! So while there are overlaps, the futures of the two projects are even less separate than David's answer might suggest.

XLA is a domain-specific compiler for linear algebra that can accelerate TensorFlow and JAX models, with the goal of compiling these workloads efficiently for many platforms. MLIR is a novel approach to building reusable and extensible compiler infrastructure. MLIR aims to address software fragmentation, improve compilation for heterogeneous hardware, significantly reduce the cost of building domain-specific compilers, and aid in connecting existing compilers together.
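
As a concrete illustration of the XLA side, a model function can be explicitly compiled with XLA from TensorFlow; a minimal sketch (the flag is spelled experimental_compile=True in TF releases from around this thread's time and jit_compile=True in later ones):

    import tensorflow as tf

    # Ask TensorFlow to compile this function with XLA rather than run
    # it op-by-op; XLA can then fuse the matmul, add, and relu.
    @tf.function(jit_compile=True)
    def dense_layer(x, w, b):
        return tf.nn.relu(tf.matmul(x, w) + b)

    print(dense_layer(tf.ones((2, 3)), tf.ones((3, 4)), tf.zeros((4,))))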


We are using what we learned from building XLA to create MLIR, but MLIR has a different aim and focus (as you can see from the Discourse discussions and open design meetings). We are integrating the two, and you'll see MLIR codegen being utilized inside XLA and TensorFlow shortly. Integrating with an existing project at this level is tricky to do without disrupting users, so it isn't, and won't be, a flip of a switch.


MLIR will be supported for a long time, and it is a community-driven project with many active contributors outside of the team and outside of ML. Similarly, XLA is a very important component of codegen, optimization, and execution for TensorFlow and JAX, and as such it will keep evolving and be supported (it has some very satisfied users and sets a high bar). Part of the answer also depends on nomenclature: if you change the infrastructure and codegen approach of XLA to use MLIR, is it still XLA? For some, the answer depends on the philosophy/goal of the project; for others, it is the code, approach, or interface; and so the answer may vary. The two do play nicely together, as David pointed out (updated link pointing to GitHub), and we will continue this tight collaboration.


Also, what won't change is that we are very eager to make compilation a primary element of TensorFlow and to enable solving compilation challenges in ML and beyond. We'll be evolving to meet the demands of this fast-changing field, but we will do our best to ensure everything keeps working as it does today.


Best,


Jacques



t kevin

Jun 21, 2020, 8:51:05 PM
to Jacques Pienaar, David Majnemer, XLA development
Hi David and Jacques

Thank you for the clarification. It's a great help.
Two additional questions.
1. I googled XLA and found information saying that dynamic shapes are
not supported by XLA, as mentioned in this thread:
https://groups.google.com/forum/#!topic/xla-dev/Fznpp32YUa8
But that post is somewhat outdated.
I'm wondering whether dynamic shapes are supported by XLA now.
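
(To illustrate what I mean by dynamic shapes: with jax.jit, for example, every new input shape seems to trigger a fresh trace and compile; a minimal sketch, assuming JAX:)

    import jax
    import jax.numpy as jnp

    @jax.jit
    def double(x):
        print("tracing for shape", x.shape)  # runs only while (re)tracing
        return x * 2.0

    double(jnp.ones((4,)))  # traces and compiles for shape (4,)
    double(jnp.ones((4,)))  # cached; no retrace
    double(jnp.ones((8,)))  # new shape -> traces and compiles again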

2. From a hardware vendor's perspective, which path is recommended?
Say I have a new hardware accelerator and there is no legacy XLA code.
It looks to me that the XLA-HLO -> MLIR path is more suitable for TensorFlow?

Best regards
Kevin

Jacques Pienaar <jpie...@google.com> wrote on Fri, Jun 19, 2020 at 1:54 AM: