Breaking of XLA Computation


Naveen Swamy

Aug 4, 2021, 8:16:02 PM
To: XLA development
I am trying to understand how TF/XLA groups operations into multiple XLA Computations; our backend expects a single HLO containing all of the operations. Is there a way to control this through a flag? I tried backtracking through the code, but the TF Python code is a tough maze, and I couldn't figure out where it decides to group the operations.

Appreciate your input.

Thanks, Naveen
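
One way to observe how the ops get grouped is to enable auto-clustering and dump the resulting HLO modules; the sketch below assumes the standard TF/XLA dump flags and a toy model, not the actual workload from this thread.

import os

# These flags must be set before TensorFlow is imported.
# --tf_xla_auto_jit=2 enables XLA auto-clustering for all eligible ops;
# --xla_dump_to writes each resulting HloModule to disk for inspection.
os.environ["TF_XLA_FLAGS"] = "--tf_xla_auto_jit=2"
os.environ["XLA_FLAGS"] = "--xla_dump_to=/tmp/xla_dump"

import tensorflow as tf

# Toy computation; auto-clustering may split it into several clusters,
# each of which shows up as a separate HloModule under /tmp/xla_dump.
@tf.function
def model(x):
    y = tf.nn.relu(tf.matmul(x, x))
    return tf.reduce_sum(y)

print(model(tf.random.normal([8, 8])))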

George Karpenkov

Aug 6, 2021, 1:44:55 PM
To: Naveen Swamy, XLA development
hi Naveen, 

How do you use XLA from Python?

George


George Karpenkov

Aug 6, 2021, 1:46:13 PM
To: Naveen Swamy, XLA development
Using jit_compile=True will create a single HloModule for the compiled block. A single HloModule does have multiple HloComputations though.
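
For illustration, a minimal sketch of that behaviour, assuming a toy function; the HLO-dump call at the end is only available in newer TF releases.

import tensorflow as tf

# jit_compile=True compiles the whole function as one XLA cluster, so the
# block becomes a single HloModule (which can still contain several
# HloComputations internally, e.g. for the reduction).
@tf.function(jit_compile=True)
def f(x):
    return tf.reduce_sum(tf.nn.relu(tf.matmul(x, x)))

x = tf.random.normal([4, 4])

# Print the HLO generated for this compiled block.
print(f.experimental_get_compiler_ir(x)(stage="hlo"))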

Naveen Swamy

Aug 6, 2021, 5:18:12 PM
To: George Karpenkov, XLA development
Hi George,

Thanks for your response. We are using a session with the XLA device. We are on TensorFlow 2.2.

Thanks, Naveen

Naveen Swamy

Aug 6, 2021, 5:22:03 PM
To: George Karpenkov, XLA development
and tf.device 
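
The session-plus-tf.device pattern being described presumably looks something like the sketch below; the XLA_CPU device string and the toy graph are assumptions, not the actual code in question.

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Place the graph on an XLA device so its ops are compiled by XLA.
with tf.device("/device:XLA_CPU:0"):
    x = tf.placeholder(tf.float32, shape=[4, 4])
    y = tf.reduce_sum(tf.nn.relu(tf.matmul(x, x)))

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0] * 4] * 4}))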

George Karpenkov

Aug 6, 2021, 5:32:10 PM
To: Naveen Swamy, XLA development
Could you provide a code snippet of how you use it?