XLA GPU Code Generation using MLIR


Mehdi AMINI

Jun 23, 2020, 11:24:34 PM
To: MLIR, Tim Shen
Hi all,

Here is some information we just published about one of the subprojects involving MLIR in TensorFlow land: https://www.tensorflow.org/mlir/xla_gpu_codegen

This document describes the high-level plan to integrate the MLIR codegen (using the Linalg/Affine/GPU dialects, etc.) with XLA in the short term. We are starting this transition by hooking in MLIR at the LHLO dialect level: this dialect models "HLO operating on buffers", as opposed to HLO's immutable tensor values. We convert the entire XLA module to an LHLO module after XLA completes buffer assignment. The main entry point at the moment for this HLO dialect -> LHLO dialect conversion involving XLA is https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/xla/transforms/xla_hlo_to_lhlo_with_xla.cc ; note that it is incomplete and actively under development, mainly by Tim (CC'd).
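To make the tensors-vs-buffers distinction concrete, here is a rough sketch of what the same addition looks like in each dialect (dialect/op names as they appeared in the tree around mid-2020; the shapes and SSA names are illustrative, not taken from the document):

```mlir
// HLO: ops consume and produce immutable tensor values (SSA results).
%sum = "xla_hlo.add"(%lhs, %rhs)
    : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>

// LHLO: after buffer assignment, the same op reads its operand buffers
// and writes into a preallocated output buffer; it produces no result.
"xla_lhlo.add"(%lhs_buf, %rhs_buf, %out_buf)
    : (memref<4xf32>, memref<4xf32>, memref<4xf32>) -> ()
```

The key consequence is that an LHLO module makes aliasing and memory reuse explicit, which is what lets XLA's existing buffer assignment drive the MLIR lowering.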

This is just one subproject that we documented here; hopefully we'll make progress on documenting other aspects of MLIR in TensorFlow. In particular, we also have some pieces to go from HLO -> LHLO directly, without using XLA buffer assignment, here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/xla/transforms/hlo_legalize_to_lhlo.cc

Cheers,

-- 
Mehdi

 

Mehdi AMINI

Jun 24, 2020, 1:10:36 AM
To: MLIR, xla...@googlegroups.com, Tim Shen