I would like to share our plan to contribute an ONNX (Open Neural Network Exchange) dialect and a reference lowering process that converts the ONNX dialect to existing MLIR dialects (e.g., StandardOps, AffineOps, Linalg/StructuredOps).
ONNX is an open format for representing neural network models; the ONNX format contains a list of standard, framework-agnostic definitions of operations and their semantic specifications (https://github.com/onnx/onnx/blob/master/docs/Operators.md). Naturally, we can express ONNX operations as a dialect within MLIR and connect it to the rest of the MLIR infrastructure. To this end, we propose to contribute the following:
After discussing with stakeholders in both the MLIR and ONNX communities (prior GitHub discussion at https://github.com/onnx/onnx/issues/2499), we believe it would be beneficial to both communities to upstream these contributions under the MLIR/LLVM umbrella. In doing so, we hope to facilitate the usage and adoption of the ONNX standard/dialect in the MLIR community.
In this section, we present the rationale and key considerations for including the ONNX dialect as a core dialect in MLIR, following the developer guidelines.
What is the overall goal of the dialect? What is the first implementation milestone?
The goal is to provide an MLIR dialect implementation and a reference lowering of ONNX standard operations. The implications include the following, to name just a few:
The ONNX dialect enables ONNX converters to make use of MLIR infrastructure, which can help tremendously with model conversion to and from the ONNX format in areas such as verification and graph rewriting.
Reference lowering provides a set of IR definitions for ONNX operations. These IR definitions are low-level, testable, and self-contained by construction. Such IR definitions are instrumental for hardware vendors working to support ONNX.
Drawing on my experience working with the ONNX community on some of the converter efforts, the first meaningful milestone will be reached when we can support expressing and lowering the set of model tests embedded in the ONNX standard package. Reaching this milestone will demonstrate some end-to-end functionality and correctness of our contribution.
How does it fit into the MLIR dialect ecosystem?
Our contribution lowers the ONNX dialect to other built-in dialects such as the Affine and Linalg/StructuredOps dialects. Moreover, we leverage the efforts of the MLIR community to lower these intermediate representations further down to LLVM IR in order to generate test programs.
What is the community of users that it is serving?
We believe this dialect primarily serves the ONNX community. The benefits to the MLIR community are also substantial, given that hardware vendors will have access to a set of IR definitions of commonly used NN/ML operations as specified by an open, vendor-neutral, widely used standard.
Who are the future contributors/maintainers beyond those who propose the dialect?
The initial support will come from IBM Research. We expect members of the ONNX community to join our development and maintenance effort soon.
We are interested in your opinions on how to contribute some supporting components of our work. Some of them may not fit well under MLIR directly, and it may be a good idea to upstream them as an LLVM project providing driver code, essentially an ONNX frontend for MLIR. These include:
Thus, ideally, to merge our contribution into MLIR/LLVM, we are considering the following:
Incubate our contribution as an LLVM project, then upstream as much as possible to MLIR. Specifically, we may proceed in three steps:
We will try to upstream the ONNX dialect and reference lowering as early as possible so that upstreaming does not become a significant burden for MLIR reviewers. Such burden can be further mitigated by involving MLIR reviewers early in our review process as an LLVM project.
An alternative path is to upstream our contribution (minus the three components mentioned above) to MLIR as soon as possible and to iterate with community feedback entirely within upstream MLIR. The driver, Python unit tests, and model ingestion code would still be hosted elsewhere (again, preferably under llvm-project).
We would like to hear your opinions on how best to proceed with the contribution to the MLIR/LLVM community.
A preliminary version of our code repository has been pushed to https://github.com/clang-ykt/ONNF. In this section, we demonstrate the usage and utility of the ONNF (Open Neural Network Frontend) project.
To ingest an ONNX model protobuf add.onnx, which contains a single add operation, run the following:
./onnf --EmitONNXIR add.onnx
The output is:
module {
  func @main_graph(%arg0: tensor<10x10x10xf32>, %arg1: tensor<10x10x10xf32>) -> tensor<10x10x10xf32> {
    %0 = "onnx.Add"(%arg0, %arg1) : (tensor<10x10x10xf32>, tensor<10x10x10xf32>) -> tensor<10x10x10xf32>
    return %0 : tensor<10x10x10xf32>
  }
}
To see the built-in dialect representation of the ONNX model protobuf, run the following:
./onnf --EmitMLIR add.onnx
The output is:
#map0 = () -> (0)
#map1 = () -> (10)
module {
  func @main_graph(%arg0: memref<10x10x10xf32>, %arg1: memref<10x10x10xf32>) -> memref<10x10x10xf32> {
    %0 = alloc() : memref<10x10x10xf32>
    affine.for %arg2 = 0 to 10 {
      affine.for %arg3 = 0 to 10 {
        affine.for %arg4 = 0 to 10 {
          %1 = load %arg0[%arg2, %arg3, %arg4] : memref<10x10x10xf32>
          %2 = load %arg1[%arg2, %arg3, %arg4] : memref<10x10x10xf32>
          %3 = addf %1, %2 : f32
          store %3, %0[%arg2, %arg3, %arg4] : memref<10x10x10xf32>
        }
      }
    }
    return %0 : memref<10x10x10xf32>
  }
}
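The affine loop nest above is simply a triple loop computing an elementwise sum. In plain Python terms (an illustrative sketch of the semantics, not output generated by ONNF):

```python
def elementwise_add(a, b):
    """Elementwise sum of two 10x10x10 nested lists, mirroring the affine loops."""
    out = [[[0.0] * 10 for _ in range(10)] for _ in range(10)]  # alloc()
    for i in range(10):          # affine.for %arg2
        for j in range(10):      # affine.for %arg3
            for k in range(10):  # affine.for %arg4
                # two loads, one addf, one store
                out[i][j][k] = a[i][j][k] + b[i][j][k]
    return out

a = [[[1.0] * 10 for _ in range(10)] for _ in range(10)]
b = [[[2.0] * 10 for _ in range(10)] for _ in range(10)]
c = elementwise_add(a, b)
```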
This version of Open Neural Network Frontend is contributed by Tian Jin, Doru Bercea, Tung D. Le, Tong Chen, Haruki Imai, and Alex Eichenberger from IBM Research.
--
You received this message because you are subscribed to the Google Groups "MLIR" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mlir+uns...@tensorflow.org.
To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/mlir/CAEmcUquxidogLWi5y2YiFwjCsRztvCzWeTsD2AnEMQqmjhZSRg%40mail.gmail.com.
Hi Tian,

This proposal sounds exciting! Since MLIR code is a part of the LLVM monorepo as of today, I would suggest involving the broader LLVM community in the discussion of project layering and dependencies.
Personally, I think a case can be made for an ONNX dialect in MLIR and it would make sense to develop the dialect upstream.
On Tue, Dec 24, 2019 at 2:22 AM 'Alex Zinenko' via MLIR <ml...@tensorflow.org> wrote:
> This proposal sounds exciting! Since MLIR code is a part of the LLVM monorepo as of today, I would suggest involving the broader LLVM community in the discussion of project layering and dependencies.

Moving forward we'll use https://llvm.discourse.group/c/llvm-project/mlir to discuss Core MLIR proposals. This proposal does not seem to be a direct contribution to MLIR Core right now, though; it seems positioned more as a proposal for the LLVM Foundation to adopt a new subproject, so the llvm-dev@ mailing list seems like the best place to start such a discussion.

> Personally, I think a case can be made for an ONNX dialect in MLIR and it would make sense to develop the dialect upstream.

It isn't that clear to me: why wouldn't this be developed in ONNX itself? It seems like one possible producer of MLIR among many others, similar to the TensorFlow dialect. Are we going to import upstream every possible framework-specific dialect?
Supporting Video/ML is definitely one goal of ONNX (ONNX has had a couple of operators supporting traditional ML, by the way). Comments, proposals, and discussions about it are welcome :-)
Hi Jacques,

Thanks for your instructive comments! I've proposed the discussion. I assume TBD means to be determined by the MLIR team? Or do people usually propose the discussion with a specific date in mind?

I share your concern regarding the peripheral infrastructure around ONNX. However, the core merit of an ONNX dialect does not depend on these peripheral infrastructures. And, as per my response to Mehdi, including ONNX as a high-level core dialect brings substantial benefits in and of itself; the pros may well outweigh the cons we may incur in order to avoid burdening MLIR core with extra dependencies.
Hi Tian,

On Thu, Jan 2, 2020 at 8:17 PM Tian Jin <tjin...@gmail.com> wrote:
> [...]

I actually have a pretty conflicting view on this. I find starting with "well-established" frameworks to be very bad for building these types of *core* dialects. They generally bring unnecessary technical debt, political baggage, compatibility guarantees, external dependencies, etc., that actively inhibit developing the "best" solution. I can understand why adding an ONNX frontend would be beneficial from the perspective of the ONNX community, but unless LLVM/MLIR had full ability to add/change/remove operations at will, we will likely end up adding a similar-but-different dialect that we can control. What are the benefits of developing in /mlir vs. in the ONNX project itself? That location seems like a much more natural fit, IMO.
The thread "Roadmap about adding dynamic shape support in MLIR HLO dialect" should be relevant to this discussion, since HLO is known to be Turing-complete and TensorFlow, JAX, and PyTorch -> TPU all use it.
-- River