Hi Nicolas,
Nice to see the discussion in the LLVM thread. It is interesting that Tensor support is being added to the Linalg dialect to ease the fusion work. One more thing deserves further discussion. Regarding the codegen path you mentioned:

-> Language / Framework
-> HLO + Linalg on tensors
-> LHLO + Linalg on buffers
   (note that buffer allocation in Linalg on tensors -> Linalg on buffers can be very progressive, intermixing ops on tensors and buffers arbitrarily)
-> Affine / StructuredControlFlow (still named Loops at the moment)
-> backends

There is an intermix of the HLO + Linalg and LHLO + Linalg dialects during the conversion process. One possible reason we need this intermix is a current limitation of Linalg: it may not yet support everything fusion requires, so for some sub-graphs we can leverage Linalg directly, while for the remaining sub-graphs we have to fall back on HLO/LHLO's own optimization support (see the tensor/buffer sketch below). What is your point of view?
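A rough sketch of the tensor/buffer contrast, assuming the early-2020 Linalg syntax (the args_in/args_out trait form, which has since evolved); the trait, map, and value names here are illustrative, not from the thread:

  // Trait shared by both forms: one parallel loop with identity indexing.
  #elemwise = {
    args_in = 2,
    args_out = 1,
    indexing_maps = [affine_map<(i) -> (i)>,
                     affine_map<(i) -> (i)>,
                     affine_map<(i) -> (i)>],
    iterator_types = ["parallel"]
  }

  // Linalg on tensors: pure SSA values, no memory; fusion is a dataflow rewrite.
  %sum = linalg.generic #elemwise %a, %b {
    ^bb0(%x: f32, %y: f32):
      %r = addf %x, %y : f32
      linalg.yield %r : f32
  } : tensor<?xf32>, tensor<?xf32> -> tensor<?xf32>

  // Linalg on buffers: the same computation after buffer allocation,
  // writing in place into the output memref %C.
  linalg.generic #elemwise %A, %B, %C {
    ^bb0(%x: f32, %y: f32, %z: f32):
      %r = addf %x, %y : f32
      linalg.yield %r : f32
  } : memref<?xf32>, memref<?xf32>, memref<?xf32>

Because both forms can coexist in one function during conversion, buffer allocation can proceed op by op rather than all at once.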
I'd like to work with you and Jacques on this. The dynamic shape topic has been around for a while, first in XLA and now in MLIR, and the unclear roadmap and opaque definitions make the discussion hard to bring to closure. We should use a public RFC to settle items such as:
1. the design principles
2. the scope: the ops and their types, with a clear statement of what is included, what is not, and why (see the static vs. dynamic type sketch after this list)
3. the implementation approach: in exported HLO, or lowering through MLIR dialects
4. the performance and functionality trade-offs for CPU, GPU, and possibly TPU
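For item 2, a minimal illustration of what "dynamic shape" means at the type level, assuming the 2020-era xla_hlo dialect name and MLIR's generic op syntax; the SSA value names are hypothetical:

  // The '?' in a tensor type marks a dimension unknown at compile time.
  // Static shapes: every dimension is a compile-time constant.
  %s = "xla_hlo.add"(%a, %b) : (tensor<4x8xf32>, tensor<4x8xf32>) -> tensor<4x8xf32>
  // Dynamic shapes: the same op, but the sizes are only known at runtime.
  %d = "xla_hlo.add"(%c, %e) : (tensor<?x?xf32>, tensor<?x?xf32>) -> tensor<?x?xf32>

An RFC could then state, op by op, which of these type combinations is in scope and which is not.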
I just want to clarify "lowering fully developed". When we say "the HLO dialect supports dynamic shapes for certain ops", does "HLO dialect" here mean exporting the HLO dialect? If yes, you would have to enhance the current XLA. If no, what is your proposal for HLO dialect lowering?
In my understanding, the HLO dialect means the definition, the conversion, and the XLA implementation.
On Fri, Jan 10, 2020 at 1:16 PM Xiaoyong Liu <xyli...@gmail.com> wrote:

> I just want to clarify "lowering fully developed". When we say "the HLO dialect supports dynamic shapes for certain ops", does "HLO dialect" here mean exporting the HLO dialect? If yes, you would have to enhance the current XLA. If no, what is your proposal for HLO dialect lowering?

Lowering refers to the conversion from TF dialect operations to HLO dialect operations.

> In my understanding, the HLO dialect means the definition, the conversion, and the XLA implementation.

XLA is not part of the HLO dialect definition. "HLO dialect" refers purely to the MLIR side, independently of XLA; we don't plan to change XLA itself. You can imagine a codegen path that is fully independent of XLA, using only MLIR components. The HLO dialect is a stepping stone that lets us rely on proven XLA techniques and experience while building an independent MLIR codegen path.

There is no detailed plan at the moment because our first milestones have focused on re-using the XLA codegen path and reaching parity with the existing bridge. The first components we're replacing are 1) the graph-transformation passes that extract a cluster of computation to be compiled with XLA, and 2) the set of kernels that emit HLO for each TensorFlow op.

As such, we haven't prioritized an end-to-end path that supports dynamic shapes, since none of the existing use cases for XLA requires it and we're limited by XLA anyway for our current milestones. However, some experimentation has been conducted using the LHLO -> Linalg conversion for now, and many folks are actively exploring alternatives in this domain (mostly targeting CPUs and GPUs right now).
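To make "lowering" concrete, a minimal hedged sketch of one TF dialect op being legalized to its HLO dialect counterpart, using the 2020-era tf / xla_hlo dialect names and MLIR's generic op syntax; the function name is illustrative:

  func @lowering_example(%arg0: tensor<4xf32>, %arg1: tensor<4xf32>) -> tensor<4xf32> {
    // Before TF -> HLO legalization, the body holds the TF dialect op:
    //   %0 = "tf.AddV2"(%arg0, %arg1) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
    // After legalization, it is rewritten to the HLO dialect equivalent:
    %0 = "xla_hlo.add"(%arg0, %arg1) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
    return %0 : tensor<4xf32>
  }

Note that this rewrite is purely dialect-to-dialect within MLIR; XLA itself is untouched, which is the separation described above.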
On Sun, Jan 12, 2020 at 11:18 AM Mehdi AMINI <joke...@gmail.com> wrote:

> [...]
> However, some experimentation has been conducted using the LHLO -> Linalg conversion for now, and many folks are actively exploring alternatives in this domain (mostly targeting CPUs and GPUs right now).

As you say in the last statement, I think it is important to be explicit that there are multiple "we"s here, even within Google.