Why does TFRT implement kernels in C++ instead of writing all kernels in MLIR?


Kaiven Long

Mar 9, 2021, 8:32:47 PM
to TensorFlow Runtime

This section (https://github.com/tensorflow/runtime/blob/master/documents/tfrt_host_runtime_design.md#eager-execution-sketch) says that TFRT also uses a C++ fast path for ops. I want to understand the design philosophy. For example, if there is a fast path, maybe no one will pursue the general path, for performance reasons. Thanks in advance if someone can provide usage scenarios for the general path and the fast path.

Jing Dong

Mar 12, 2021, 6:27:18 PM
to TensorFlow Runtime, kaive...@gmail.com
As stated in the doc, the idea was to use the general path to handle "composite ops". The general path uses the compiler to decompose a composite op into a graph of lower-level ops. This allows end users to add ops by composing lower-level ops, without writing kernels for them.
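To make the two paths concrete, here is a minimal sketch of the dispatch idea, written in Python for brevity. This is not TFRT code; the names `FAST_KERNELS`, `DECOMPOSITIONS`, and `execute` are hypothetical. It only illustrates the split: ops with a hand-written kernel take the fast path, while composite ops are decomposed into a small graph of lower-level ops and executed through the general path.

```python
# Illustrative sketch only -- not the actual TFRT API.
# FAST_KERNELS and DECOMPOSITIONS are hypothetical registries.

# Fast path: hand-written kernels (C++ in TFRT, plain functions here).
FAST_KERNELS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

# General path: composite ops registered as decompositions into
# lower-level ops. "muladd(a, b, c)" = a * b + c needs no kernel of its own.
# Each step names a sub-op and which values it consumes: an integer is an
# index into the original arguments, "prev" is the previous step's result.
DECOMPOSITIONS = {
    "muladd": [("mul", (0, 1)), ("add", ("prev", 2))],
}

def execute(op, *args):
    """Dispatch: take the fast path if a kernel exists, else decompose."""
    if op in FAST_KERNELS:
        return FAST_KERNELS[op](*args)      # fast path
    prev = None
    for sub_op, arg_ids in DECOMPOSITIONS[op]:
        sub_args = [prev if i == "prev" else args[i] for i in arg_ids]
        prev = execute(sub_op, *sub_args)   # general path: run the sub-graph
    return prev
```

With this sketch, `execute("muladd", 2, 3, 4)` reaches the general path and returns `10`, while `execute("add", 1, 2)` takes the fast path directly. The trade-off the thread asks about is visible here: the fast path is quicker per op, but only the general path lets users add new ops (like `muladd`) purely by composition.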