Guidance is appreciated.
import tensorflow as tf

a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)

# Using JIT compilation via an explicit jit_scope
jit_scope = tf.contrib.compiler.jit.experimental_jit_scope
with jit_scope():
    add = tf.add(a, b)
    mul = tf.multiply(add, b)

with tf.Session() as sess:
    # Run each op, feeding values for the placeholders
    print("Addition with variables: %i" % sess.run(add, feed_dict={a: 2, b: 3}))
    print("Multiplication with variables: %i" % sess.run(mul, feed_dict={a: 2, b: 3}))
> Does tf.Graph include the XLA ops directly, and if so how can one invoke the XLA passes to embed the XLA ops as part of tf.Graph?

It should, yes. If they're not there, I think that probably means you're not using XLA in the end.

See my previous email:

> It sounds like you may want to write TensorFlow code such that it's all guaranteed to be compiled into XLA, so you can analyze the whole thing using XLA. This is not really possible at the moment, but it's something we're actively working on.

It's also possible that XLA isn't linked into your program, or something like that. I don't know; without seeing your code and concrete steps to reproduce, it's very hard to say.
On Tue, May 15, 2018 at 5:11 PM Hashim Sharif <hashim....@gmail.com> wrote:
> Yes. The op name is XlaLaunchOp.
For testing, I am using the mnist_softmax_xla.py script, which builds a simple TF computation and enables XLA compilation in the JIT session. After invoking sess.run(graph_output), I printed out the operation names (using the name field of tf.Operation) for the complete graph. However, I do not see any operations named "XlaLaunchOp". Does tf.Graph include the XLA ops directly, and if so, how can one invoke the XLA passes to embed the XLA ops as part of tf.Graph?
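Roughly, the inspection looks like the sketch below (the exact type string for the launch op, "XlaLaunch" here, is my assumption and may differ between versions):

import tensorflow as tf

# Sketch: enumerate ops in the default graph and look for an XLA launch op.
# The exact type string ("XlaLaunch" / "_XlaLaunch") is an assumption and
# may differ between TF versions.
for op in tf.get_default_graph().get_operations():
    print(op.name, op.type)
    if "XlaLaunch" in op.type:
        print("Found an XLA launch op:", op.name)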
-Hashim
> How are the ops clustered into XLA clusters?
I believe this occurs in mark_for_compilation_pass.cc.
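Roughly, my understanding (not checked against the current source) is that experimental_jit_scope attaches hint attributes such as _XlaCompile and _XlaScope to the ops it wraps, and mark_for_compilation_pass.cc then groups ops into clusters based on those hints plus auto-clustering heuristics. The hints are visible on the Python-level graph, e.g.:

import tensorflow as tf

jit_scope = tf.contrib.compiler.jit.experimental_jit_scope

a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)
with jit_scope():
    add = tf.add(a, b)

# _XlaCompile / _XlaScope are hints consumed later by the clustering pass;
# the attribute names are my assumption for this TF version and may differ.
print(add.op.get_attr("_XlaCompile"))  # True
print(add.op.get_attr("_XlaScope"))    # e.g. b'jit_scope_0'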
-Justin
The variables are inputs to the entry computation, and the outputs are the updated state of the variables, so the XLA representation still remains "pure": buffer allocation and assignment happen outside of the function and can be implementation specific.
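Roughly, the contract looks like the sketch below (names like entry_computation and run_step are purely illustrative, not real TF/XLA APIs): the compiled computation is a pure function of the variable values, and the runtime that owns the buffers feeds the current values in and writes the results back.

# Illustrative sketch only: how a variable update looks to the compiled
# computation. The function names here are hypothetical, not real APIs.
def entry_computation(weights, grad, lr):
    # Pure function: variable values come in as arguments,
    # updated values go out as results. No in-place mutation here.
    new_weights = weights - lr * grad
    return new_weights

def run_step(buffers, grad, lr):
    # The runtime, outside the compiled computation, owns the buffers:
    # it passes the current value in and stores the returned value back.
    buffers["weights"] = entry_computation(buffers["weights"], grad, lr)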
Are you suggesting that all variables in the TF Graph (for instance, all weights in the different layers of a DNN) are passed to the XLA entry computation at launch? Also, I am unaware of what an "XlaLaunchOp" is. Could you provide more context?