What is the roadmap for MHLO?


Yingwei Zhang

Feb 13, 2024, 1:50:30 PM
to OpenXLA Discuss
Hi community, I saw the message from Eugene about sunsetting the MLIR-HLO repository (https://groups.google.com/a/openxla.org/g/openxla-discuss/c/Mppuv1Edv1s/m/x_U0X0dqBgAJ). I want to verify the roadmap for the MHLO IR format. Will OpenXLA continue to support MHLO as a valid IR representation for most transformations and keep it updated? Or is there a plan to gradually replace MHLO with another format?

Thanks for the clarification.

Yingwei Zhang

James Rubin

Feb 13, 2024, 3:41:52 PM
to Yingwei Zhang, OpenXLA Discuss
CC: @Kevin Gleason and @Farid Zakaria for input



Kevin Gleason

Feb 13, 2024, 7:53:05 PM
to OpenXLA Discuss, James Rubin, OpenXLA Discuss, Yingwei Zhang
Hello!

For MLIR-HLO: This project existed to serve MLIR-based projects in the ML ecosystem: MLIR compilers, interchange to/from other IRs, and an IR that frontends could generate to leverage the OpenXLA ecosystem's feature set. We proposed that StableHLO should fill this need, and the plan to sunset the MLIR-HLO repo was approved by the community. Since then, many of the larger projects using MLIR-HLO have migrated to StableHLO (IREE, torch-mlir, onnx-mlir), but a few large projects still using it (byteir, torch-blade) need to migrate before the repo can be sunset. We haven't gotten to that work yet, but we still intend to.

For MHLO: There is no official plan to remove this dialect; it is actively maintained for XLA's use, and any change to that would require an RFC. Note that MHLO is an XLA-specific dialect: it contains XLA-specific operations that MHLO/HLO optimizations may introduce, and an MHLO dependency will soon require an XLA dependency. StableHLO, on the other hand, is the OpenXLA community dialect, which contains only hardware- and framework-agnostic ops. Uses of MHLO should be easily replaceable with StableHLO, and if anything is missing, a discussion on this forum is a good place to start!
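
To make the distinction concrete, here is a minimal sketch (my own example, not taken from either repo) of a StableHLO module; everything in it is portable, so any StableHLO consumer can ingest it:

// A minimal StableHLO module: every op here is hardware- and
// framework-agnostic.
func.func @add(%arg0: tensor<4xf32>, %arg1: tensor<4xf32>) -> tensor<4xf32> {
  %0 = stablehlo.add %arg0, %arg1 : tensor<4xf32>
  return %0 : tensor<4xf32>
}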


Best,
Kevin

Yingwei Zhang

Feb 13, 2024, 8:43:05 PM
to OpenXLA Discuss, Kevin Gleason, James Rubin, OpenXLA Discuss, Yingwei Zhang
Hi Kevin, thanks for the reply!

I have a follow-up question. Given that HLO/MHLO is defined in XLA (I assume this is synonymous with the TensorFlow version of XLA), and OpenXLA copied the IR definition and transformations from XLA, can I assume that the longer-term vision in the OpenXLA community is to move away from the XLA dependency and migrate existing HLO-based transformations to StableHLO? There are not many StableHLO-based transformations, so I am wondering whether StableHLO is designed for data exchange only or whether it can also be used for internal transformations and optimizations.

Thanks!

Yingwei

Kevin Gleason

Feb 14, 2024, 10:27:25 AM
to OpenXLA Discuss, Yingwei Zhang, Kevin Gleason, James Rubin, OpenXLA Discuss
> Given that HLO/MHLO is defined in XLA (I assume this is synonymous with the TensorFlow version of XLA), and OpenXLA copied the IR definition and transformations from XLA

For some history, XLA used to be more tightly coupled to TF, and the goal of OpenXLA was to open source the XLA compiler for multi-framework / multi-hardware support and to build a community around it. The term OpenXLA generally refers to the (large) component that was refactored into a separate repo (openxla/xla) as well as the tooling / input format (StableHLO) built around it. Now TF can be seen as taking a dependency on OpenXLA, which is where all development occurs.

> can I assume that the longer-term vision in the OpenXLA community is to move away from the XLA dependency and migrate existing HLO-based transformations to StableHLO? [...] I am wondering whether StableHLO is designed for data exchange only or whether it can also be used for internal transformations and optimizations.

No, this is not the plan of record. Some transformations are HLO/XLA-compiler specific and wouldn't make sense to move to StableHLO, which has non-XLA users. Other hardware-agnostic transformations may make sense to move, but only if they don't require MHLO/StableHLO round-tripping in the compilation pipeline (see the "Out of scope" section from this post, and Geoffrey/Mehdi's comments from this post, for context); if compilation requires a specific ordering of passes in MHLO, we don't want to require a round trip. If you are using XLA, we currently recommend that transformations go in MHLO. This is a piece of the story that certainly needs some work in the future, but for now I'd say "StableHLO is designed for data exchange" is the more accurate statement.
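
To spell out what "round-tripping" means here (my sketch, with abbreviated syntax): the same computation exists in both dialects, and a pipeline that interleaved MHLO and StableHLO passes would have to repeatedly convert between the two forms, e.g. via the StableHLO<->MHLO legalization passes that live in xla/mlir_hlo:

// The same add pretty-printed in each dialect. Legalizing back and forth
// between these forms mid-pipeline is the round trip we want to avoid.
%sum_mhlo = mhlo.add %x, %y : tensor<4xf32>         // XLA-internal dialect
%sum_shlo = stablehlo.add %x, %y : tensor<4xf32>    // portable interchange dialect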


Best,
Kevin

Kevin Gleason

Feb 14, 2024, 10:40:17 AM
to OpenXLA Discuss, Kevin Gleason, Yingwei Zhang, OpenXLA Discuss, apivo...@gmail.com
Also adding a slightly tangential message that was sent via "Reply to Author" (maybe unintentionally); regardless, I'd like to answer here, since others are likely interested in this question as well :)

+apivo...@gmail.com

---------

I noticed the following difference between the MHLO and StableHLO ops .td files:
- MHLO supports Variadic<Tensor> as an input for the AllGather, AllReduce, and AllToAll ops.
- StableHLO supports just Tensor for these ops.

Would it be beneficial to update StablehloOps.td to support Variadic<Tensor> for these ops?

### MHLO

https://github.com/openxla/xla/blob/main/xla/mlir_hlo/mhlo/IR/hlo_ops.td#1489

def MHLO_AllGatherOp
  let arguments = (ins
    Variadic<MHLO_Tensor>:$operands,

https://github.com/openxla/xla/blob/main/xla/mlir_hlo/mhlo/IR/hlo_ops.td#1539

def MHLO_AllReduceOp
  let arguments = (ins
    Variadic<MHLO_Tensor>:$operands,

### StableHLO

https://github.com/openxla/stablehlo/blob/main/stablehlo/dialect/StablehloOps.td#L1304

def StableHLO_AllGatherOp
  let arguments = (ins
    HLO_Tensor:$operand, /*all_gather_i1*/

https://github.com/openxla/stablehlo/blob/main/stablehlo/dialect/StablehloOps.td#L1340

def StableHLO_AllReduceOp
  let arguments = (ins
    HLO_Tensor:$operand, /*all_reduce_i1*/


Thank you
Alex 

--------------------------

These changes to MHLO were made after StableHLO split off and were done to benefit horizontal-scaling efforts. IMO, yes: we absolutely want to add these to StableHLO in the near future, as they're features that can be leveraged by multiple frameworks and compilers. Tracking for this is in openxla/stablehlo#1370 and linked tickets.
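
For anyone curious what the variadic form buys you, here is a rough, illustrative sketch (shapes, operand names, and attribute values made up for the example) of an MHLO all_gather over two operands, written in generic op syntax; the StableHLO op today takes a single operand, which is the gap openxla/stablehlo#1370 tracks:

// Illustrative sketch only: two tensors gathered by a single mhlo.all_gather.
// With one replica group of two replicas and all_gather_dim = 1, each result
// doubles dimension 1 relative to the corresponding operand.
%res:2 = "mhlo.all_gather"(%operand0, %operand1) {
  all_gather_dim = 1 : i64,
  replica_groups = dense<[[0, 1]]> : tensor<1x2xi64>
} : (tensor<8x2xf32>, tensor<8x4xf32>) -> (tensor<8x4xf32>, tensor<8x8xf32>)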

Contributions are also welcome! We had a slide in the community meeting on how to go about these changes (get a maintainer to sponsor, post an RFC, hold a governance meeting -- mostly outlined in CONTRIBUTING.md as well); I'd be happy to sponsor this effort and facilitate / review. Otherwise, I'd say this is something we plan to get to in the next 1-2 months.


Best,
Kevin

Alexander Pivovarov

Feb 15, 2024, 2:43:49 AM
to Kevin Gleason, OpenXLA Discuss, OpenXLA Discuss, Yingwei Zhang
Awesome! 
Thank you, Kevin, for the detailed reply!

Alex