Hi there,
I’m running some experiments with MLIR and I couldn’t find a way to lower the tensor type all the way to the LLVM-IR dialect. The type is currently not supported by the ConvertToLLVMDialect pass. Is this type expected to be lowered to memref somewhere else, or is this lowering simply not supported at the moment?
Thanks!
Diego
From: 'Alex Zinenko' via MLIR [mailto:ml...@tensorflow.org]
Sent: Thursday, April 18, 2019 2:50 PM
Cc: MLIR <ml...@tensorflow.org>
Subject: Re: [mlir] Tensor type lowering
Hi Diego,
MLIR tensors currently don't have a runtime storage abstraction, so there is no direct way of representing them as memrefs or LLVM-level types. This is partially intentional: different tensor-level frameworks may want to use different storage structures, and we don't want to force a single solution on them. Eventually, we should have a conversion for TensorFlow tensors.

Until then, MLIR makes it relatively simple to implement a dialect conversion that lowers tensors into a combination of standard types supported by the conversions further down the stack, according to the specific tensor model (ownership, layout, device address space, etc.). The easiest way is to implement the conversion for the `tensor_alloc` operation, copying the data from the tensor into a (contiguous) memref regardless of the tensor's internal structure, as long as it can be indexed.
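To make the idea a bit more concrete, here is a minimal sketch of such a type conversion. It uses the present-day MLIR C++ dialect-conversion API (`TypeConverter::addConversion`), which postdates this thread, and the helper function name is made up for illustration; a real lowering would also need rewrite patterns for the ops that produce or consume the tensors.

```cpp
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

// Hypothetical helper (not an existing MLIR entry point): register type
// conversions that map ranked tensors to memrefs of the same shape and
// element type, leaving every other type untouched.
static void addTensorToMemRefConversions(TypeConverter &converter) {
  // Fallback: keep all other types as they are.
  converter.addConversion([](Type type) { return type; });
  // tensor<AxBxf32> -> memref<AxBxf32> with the default (contiguous) layout.
  converter.addConversion([](RankedTensorType type) -> Type {
    return MemRefType::get(type.getShape(), type.getElementType());
  });
}
```

The conversion patterns themselves would then decide where the memref comes from, e.g. by allocating it and copying the tensor's contents, which is exactly the role described for `tensor_alloc` above.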
Alex