tf.Placeholder instead of tf.VarHandleOp ops


Prashant Kumar

Jun 7, 2021, 3:50:04 AM6/7/21
to MLIR, joke...@gmail.com
module attributes {tf.versions = {bad_consumers = [], min_consumer = 0 : i32, producer = 723 : i32}} {
  func @main(%arg0: tensor<32x28x28xf32>) -> tensor<32x10xf32> attributes {llvm.emit_c_interface, tf.entry_function = {control_outputs = "", inputs = "x", outputs = "Identity"}} {
    %0 = "tf.Const"() {value = dense<[-1, 784]> : tensor<2xi32>} : () -> tensor<2xi32>
    %1 = "tf.Placeholder"() {device = "", shape = #tf.shape<>} : () -> tensor<!tf.resource>
    %2 = "tf.ReadVariableOp"(%1) {device = ""} : (tensor<!tf.resource>) -> tensor<128xf32>
    %3 = "tf.ReadVariableOp"(%1) {device = ""} : (tensor<!tf.resource>) -> tensor<784x128xf32>
    %4 = "tf.ReadVariableOp"(%1) {device = ""} : (tensor<!tf.resource>) -> tensor<10xf32>
    %5 = "tf.ReadVariableOp"(%1) {device = ""} : (tensor<!tf.resource>) -> tensor<128x10xf32>
    %6 = "tf.Reshape"(%arg0, %0) {device = ""} : (tensor<32x28x28xf32>, tensor<2xi32>) -> tensor<32x784xf32>
    %7 = "tf.MatMul"(%6, %3) {device = "", transpose_a = false, transpose_b = false} : (tensor<32x784xf32>, tensor<784x128xf32>) -> tensor<32x128xf32>
    %8 = "tf.BiasAdd"(%7, %2) {data_format = "NHWC", device = ""} : (tensor<32x128xf32>, tensor<128xf32>) -> tensor<32x128xf32>
    %9 = "tf.Relu"(%8) {device = ""} : (tensor<32x128xf32>) -> tensor<32x128xf32>
    %10 = "tf.MatMul"(%9, %5) {device = "", transpose_a = false, transpose_b = false} : (tensor<32x128xf32>, tensor<128x10xf32>) -> tensor<32x10xf32>
    %11 = "tf.BiasAdd"(%10, %4) {data_format = "NHWC", device = ""} : (tensor<32x10xf32>, tensor<10xf32>) -> tensor<32x10xf32>
    %12 = "tf.Identity"(%11) {device = ""} : (tensor<32x10xf32>) -> tensor<32x10xf32>
    return %12 : tensor<32x10xf32>
  }
}

The above MLIR is emitted with the new tf.function API. Do we need to add support for tf.Placeholder in the -tf-promote-resource-to-args pass (i.e., does tf.Placeholder replace the tf.VarHandleOp ops here)?
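For reference, a minimal sketch of what -tf-promote-resource-to-args normally produces when the reads go through tf.VarHandleOp ops (the promoted signature and the resource names "bias1", "kernel1", etc. are assumptions for illustration, not output generated from this module):

module {
  func @main(%arg0: tensor<32x28x28xf32>,
             %arg1: tensor<128xf32> {tf.resource_name = "bias1"},
             %arg2: tensor<784x128xf32> {tf.resource_name = "kernel1"},
             %arg3: tensor<10xf32> {tf.resource_name = "bias2"},
             %arg4: tensor<128x10xf32> {tf.resource_name = "kernel2"}) -> tensor<32x10xf32> {
    // The tf.VarHandleOp/tf.ReadVariableOp pairs are gone; the promoted
    // arguments stand in directly for the read results.
    %0 = "tf.Const"() {value = dense<[-1, 784]> : tensor<2xi32>} : () -> tensor<2xi32>
    %1 = "tf.Reshape"(%arg0, %0) : (tensor<32x28x28xf32>, tensor<2xi32>) -> tensor<32x784xf32>
    %2 = "tf.MatMul"(%1, %arg2) {transpose_a = false, transpose_b = false} : (tensor<32x784xf32>, tensor<784x128xf32>) -> tensor<32x128xf32>
    %3 = "tf.BiasAdd"(%2, %arg1) {data_format = "NHWC"} : (tensor<32x128xf32>, tensor<128xf32>) -> tensor<32x128xf32>
    %4 = "tf.Relu"(%3) : (tensor<32x128xf32>) -> tensor<32x128xf32>
    %5 = "tf.MatMul"(%4, %arg4) {transpose_a = false, transpose_b = false} : (tensor<32x128xf32>, tensor<128x10xf32>) -> tensor<32x10xf32>
    %6 = "tf.BiasAdd"(%5, %arg3) {data_format = "NHWC"} : (tensor<32x10xf32>, tensor<10xf32>) -> tensor<32x10xf32>
    return %6 : tensor<32x10xf32>
  }
}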

Uday Bondhugula

Jun 7, 2021, 4:06:24 AM6/7/21
to MLIR, pk5...@gmail.com, joke...@gmail.com

On a note slightly unrelated to the question: the `tf.Placeholder` op in the pasted snippet is missing the shape of the resource's tensor. The graphdef is considered invalid without it, because it doesn't say what exactly is stored inside the resource. The op is also missing the name of the resource, which the VarHandleOp is designed to carry.
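For comparison, a well-formed handle op carries the variable's name in shared_name and the stored tensor's shape/dtype in the resource subtype. A sketch (the name "dense/kernel" is made up for illustration):

%h = "tf.VarHandleOp"() {container = "", shared_name = "dense/kernel"} : () -> tensor<!tf.resource<tensor<784x128xf32>>>
%w = "tf.ReadVariableOp"(%h) : (tensor<!tf.resource<tensor<784x128xf32>>>) -> tensor<784x128xf32>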

Jacques Pienaar

Jun 7, 2021, 11:30:19 AM6/7/21
to Uday Bondhugula, MLIR, pk5...@gmail.com, joke...@gmail.com
On Mon, Jun 7, 2021 at 1:06 AM Uday Bondhugula <uday...@gmail.com> wrote:

On a note slightly unrelated to the question: the `tf.Placeholder` op in the pasted snippet is missing the shape of the resource's tensor. The graphdef is considered invalid without it, because it doesn't say what exactly is stored inside the resource. The op is also missing the name of the resource, which the VarHandleOp is designed to carry.

For the Placeholder here, the shape is specified as a scalar (well, modulo GraphDefs produced before version 21, where it could be unknown). So the shape is there. The dtype isn't, and that would make refinement difficult during shape inference, so it would seem something went wrong in this import (if you could share a reproducer, we could look at it).
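To make the contrast concrete, a sketch of what a dtype-carrying import could look like for one of the resources (picking the 784x128 kernel is an assumption, not actual reproducer output): the scalar shape attribute stays, and the resource subtype records what is stored inside:

%1 = "tf.Placeholder"() {device = "", shape = #tf.shape<>} : () -> tensor<!tf.resource<tensor<784x128xf32>>>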
 

On Monday, 7 June, 2021 at 1:20:04 pm UTC+5:30 pk5...@gmail.com wrote:
[The MLIR module from the first message, quoted here in full in the original thread.]

The above MLIR is emitted with the new tf.function API. Do we need to add support for tf.Placeholder in the -tf-promote-resource-to-args pass (i.e., does tf.Placeholder replace the tf.VarHandleOp ops here)?

This is the main function; one can't really promote here, as this is the main graph at this point. If this were really a function in the main graph, then one could. If this were imported as a function, I believe it would look different too.
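A hedged sketch of the contrast described above: inside a non-main function the resource arrives as a resource-typed argument, which is the form the promotion pass knows how to rewrite into a plain tensor argument (the function name @layer and the shapes are illustrative):

func @layer(%arg0: tensor<32x784xf32>, %arg1: tensor<!tf.resource<tensor<784x128xf32>>>) -> tensor<32x128xf32> {
  // The resource is read inside the function; promotion would turn %arg1
  // into a tensor<784x128xf32> argument and delete this read.
  %0 = "tf.ReadVariableOp"(%arg1) : (tensor<!tf.resource<tensor<784x128xf32>>>) -> tensor<784x128xf32>
  %1 = "tf.MatMul"(%arg0, %0) {transpose_a = false, transpose_b = false} : (tensor<32x784xf32>, tensor<784x128xf32>) -> tensor<32x128xf32>
  return %1 : tensor<32x128xf32>
}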
 


Prashant Kumar

Jun 9, 2021, 8:17:02 AM6/9/21
to MLIR, jpie...@google.com, MLIR, Prashant Kumar, joke...@gmail.com, Uday Bondhugula
I have shared the gist here: https://gist.github.com/pashu123/499ce45f6c73756d302645bebefd776c/revisions
After generating the graphdef, run: "tf-mlir-translate -graphdef-to-mlir --tf-input-arrays=x -tf-enable-shape-inference-on-import --tf-input-shapes=32,28,28 --tf-input-data-types=DT_FLOAT --tf-output-arrays=Identity graphdef.pbtxt | tf-opt -tf-standard-pipeline"
