Opt tool for XLA

Manoj Madala

Jun 24, 2020, 10:34:16 PM
to XLA development
Hi all,

I am trying to understand the optimizations in XLA. I understand that I can set the optimization level for the LLVM optimizations using `env XLA_FLAGS=--xla_backend_optimization_level=0/1/2/3`. Is there an opt tool for XLA similar to LLVM's opt pass manager, or are there any leads on how each level operates to optimize the generated LLVM IR? I'd appreciate any help on this topic.

Thanks,
Manoj M

Sanjoy Das

Jun 24, 2020, 10:39:10 PM
to Manoj Madala, David Majnemer, Blake Hechtman, XLA development
On Wed, Jun 24, 2020 at 7:34 PM Manoj Madala <mma...@cs.stonybrook.edu> wrote:
Hi all,

I am trying to understand the optimizations in XLA. I understand that I can set the optimization level for the LLVM optimizations using `env XLA_FLAGS=--xla_backend_optimization_level=0/1/2/3`. Is there an opt tool for XLA similar to LLVM's opt pass manager, or are there any leads on how each level operates to optimize the generated LLVM IR? I'd appreciate any help on this topic.

I don't think so, but it should be easy to write.  The only non-trivial bit would be creating some sort of a "pass registry".
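To make that concrete, here is a minimal sketch of what I mean. Everything below is hypothetical; no such registry or xla-opt driver exists in XLA today. The only pieces assumed from the existing code base are `HloModule` and `HloPassInterface` (include paths and signatures as of TF 2.2):

```cpp
// Hypothetical sketch only: no HloPassRegistry or xla-opt driver exists in
// XLA today. The only pieces assumed from the real code base are HloModule
// and HloPassInterface (paths/signatures as of TF 2.2).
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <vector>

#include "tensorflow/compiler/xla/service/hlo_module.h"
#include "tensorflow/compiler/xla/service/hlo_pass_interface.h"
#include "tensorflow/core/platform/logging.h"

namespace xla {

// Maps a pass name (as it would appear on an xla-opt command line) to a
// factory producing a fresh instance of that pass.
class HloPassRegistry {
 public:
  using Factory = std::function<std::unique_ptr<HloPassInterface>()>;

  void Register(const std::string& name, Factory factory) {
    factories_[name] = std::move(factory);
  }

  // Runs the named passes in order; returns true if any pass changed the
  // module (errors are CHECK-failed here just to keep the sketch short).
  bool RunPasses(const std::vector<std::string>& names, HloModule* module) {
    bool changed = false;
    for (const std::string& name : names) {
      auto it = factories_.find(name);
      CHECK(it != factories_.end()) << "Unknown pass: " << name;
      changed |= it->second()->Run(module).ValueOrDie();
    }
    return changed;
  }

 private:
  std::map<std::string, Factory> factories_;
};

}  // namespace xla
```

An xla-opt style driver would then just parse a textual HloModule, look up the pass names given on the command line, and run them in order, similar to how LLVM's opt uses its own pass registry.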

+David Majnemer +Blake Hechtman: Am I missing something? Maybe there is a closed-source one that we can open source?

-- Sanjoy
 

Thanks,
Manoj M


Adrian Kuegel

Jun 25, 2020, 4:14:23 AM
to Sanjoy Das, Manoj Madala, David Majnemer, Blake Hechtman, XLA development
We have an opt tool for the new XLA mlir_gpu backend; a tool for the XLA GPU backend could potentially be written similarly.

Manoj Madala

Jun 26, 2020, 3:55:49 PM
to XLA development
Thanks akuegel and sanjoy,

I will look into these resources and use them as a starting point. Are there any leads on how each level operates to optimize the generated LLVM IR? For now I am trying to understand it through https://github.com/tensorflow/tensorflow/blob/v2.2.0/tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/gpu_backend_lib.cc. I'd appreciate any help on this.
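From a first read of that file, the level seems to be handed mostly to LLVM's standard -O pipelines through `llvm::PassManagerBuilder`. Below is a simplified sketch of my current understanding; it is a paraphrase, not a verbatim excerpt, and the helper names and exact settings in the real file may differ:

```cpp
// Simplified paraphrase (my reading, not a verbatim excerpt) of how a backend
// can translate an integer optimization level into LLVM's standard pipelines
// using the legacy llvm::PassManagerBuilder, which is the mechanism
// gpu_backend_lib.cc appears to rely on.
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Transforms/IPO.h"
#include "llvm/Transforms/IPO/AlwaysInliner.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"

void AddOptimizationPassesSketch(
    unsigned opt_level, unsigned size_level,
    llvm::legacy::PassManager* module_passes,
    llvm::legacy::FunctionPassManager* function_passes) {
  llvm::PassManagerBuilder builder;
  builder.OptLevel = opt_level;    // 0..3, i.e. -O0 .. -O3
  builder.SizeLevel = size_level;  // 0..2, i.e. -Os / -Oz style tuning

  // Higher levels enable the full inliner; at low levels only always_inline
  // functions get inlined.
  if (opt_level > 1) {
    builder.Inliner = llvm::createFunctionInliningPass(
        opt_level, size_level, /*DisableInlineHotCallSite=*/false);
  } else {
    builder.Inliner = llvm::createAlwaysInlinerLegacyPass();
  }

  // Vectorization is also keyed off the level.
  builder.LoopVectorize = opt_level > 0;
  builder.SLPVectorize = opt_level > 1;

  // PassManagerBuilder then fills the pass managers with LLVM's standard
  // -O<N> pipelines; the backend simply runs them over the generated IR.
  builder.populateFunctionPassManager(*function_passes);
  builder.populateModulePassManager(*module_passes);
}
```

If that reading is right, `--xla_backend_optimization_level` mainly controls how aggressive LLVM's own module and function pipelines are (inlining, loop and SLP vectorization, and so on) on the IR that XLA emits, rather than which XLA-level HLO passes run.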

Thanks,
Manoj M