which files are the source code of legalizing from TF dialect to chlo dialect?


mofheka

Mar 12, 2021, 9:07:49 AM
to MLIR
I could only find the legalize_hlo.cc(hlo to TF), chlo_legalize_to_hlo(chlo to hlo) and something else, but not TF to chlo.

Mehdi AMINI

Mar 12, 2021, 1:57:29 PM
to mofheka, MLIR

On Fri, Mar 12, 2021 at 6:07 AM mofheka <mofh...@gmail.com> wrote:
I could only find the legalize_hlo.cc(hlo to TF), chlo_legalize_to_hlo(chlo to hlo) and something else, but not TF to chlo.


mofheka

Mar 12, 2021, 9:13:06 PM
to MLIR, joke...@gmail.com, MLIR, mofheka

Thanks a lot! Also, do you have any idea how to fix this bug? It throws an error like this:
./tem_graphdef/tem_new_graph_def.tf_dialect.mlir:242:12: error: 'shape.cstr_broadcastable' op operand #1 must be shape or extent tensor, but got 'tensor<1xi32>'
    %239 = "tf.AddV2"(%223, %238) {device = ""} : (tensor<?xi32>, tensor<?xi32>) -> tensor<?xi32>
           ^
./tem_graphdef/tem_new_graph_def.tf_dialect.mlir:242:12: note: see current operation: %425 = "shape.cstr_broadcastable"(%424, %89) : (tensor<?xindex>, tensor<1xi32>) -> !shape.witness
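For reference, `shape.cstr_broadcastable` expects its operands to be `!shape.shape` values or extent tensors of index element type, which is why the `tensor<1xi32>` operand is rejected. A minimal well-typed sketch (the `%lhs`/`%rhs`/`%w` names are illustrative, not from my IR) looks like:

```mlir
// Both operands are extent tensors (tensor<?xindex>), typically
// produced by shape.shape_of; a tensor<1xi32> does not qualify.
%lhs = shape.shape_of %arg0 : tensor<?xi32> -> tensor<?xindex>
%rhs = shape.shape_of %arg1 : tensor<?x?xi32> -> tensor<?xindex>
%w = shape.cstr_broadcastable %lhs, %rhs : tensor<?xindex>, tensor<?xindex>
```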
But I found that there is a test for exactly this case, tf.AddV2 with dynamic shapes, in the file "/mlir/xla/tests/legalize-tf-binary-elementwise.mlir", like this:
// CHECK-LABEL: func @add_dynamic
func @add_dynamic(%arg0: tensor<?xi32>, %arg1: tensor<?x?xi32>) -> tensor<?x?xi32> {
// CHECK-DAG: %[[CSTR_LHS_SHAPE:.+]] = shape.shape_of %arg0
// CHECK-DAG: %[[CSTR_RHS_SHAPE:.+]] = shape.shape_of %arg1
// CHECK-NEXT: %[[WITNESS:.+]] = shape.cstr_broadcastable %[[CSTR_LHS_SHAPE]], %[[CSTR_RHS_SHAPE]]
// CHECK-NEXT: shape.assuming %[[WITNESS:.+]]
// CHECK-DAG: %[[LHS_SHAPE:.+]] = shape.shape_of %arg0
// CHECK-DAG: %[[RHS_SHAPE:.+]] = shape.shape_of %arg1
// CHECK-NEXT: %[[RESULT_SHAPE:.+]] = shape.broadcast %[[LHS_SHAPE]], %[[RHS_SHAPE]] : tensor<?xindex>, tensor<?xindex> -> tensor<?xindex>
// CHECK-NEXT: %[[RESULT_EXTENTS:.+]] = tensor.cast %[[RESULT_SHAPE]] : tensor<?xindex> to tensor<2xindex>
// CHECK-NEXT: %[[LHS_BCAST:.+]] = "mhlo.dynamic_broadcast_in_dim"(%arg0, %[[RESULT_EXTENTS]]) {broadcast_dimensions = dense<1> : tensor<1xi64>}
// CHECK-NEXT: %[[RHS_BCAST:.+]] = "mhlo.dynamic_broadcast_in_dim"(%arg1, %[[RESULT_EXTENTS]]) {broadcast_dimensions = dense<[0, 1]> : tensor<2xi64>}
// CHECK-NEXT: %[[RESULT:.+]] = mhlo.add %[[LHS_BCAST]], %[[RHS_BCAST]] : tensor<?x?xi32>
// CHECK-NEXT: shape.assuming_yield %[[RESULT]]
%0 = "tf.AddV2"(%arg0, %arg1) : (tensor<?xi32>, tensor<?x?xi32>) -> tensor<?x?xi32>
return %0: tensor<?x?xi32>
}
What is going wrong in my transformation from TF dialect to HLO dialect?