Why do I not have information on device and operations of an active compute graph?


Hugo Ferreira

Oct 13, 2022, 5:09:05 AM
to j...@tensorflow.org
I am trying to create some documentation on the use of the Java API via Scala as I explore it, and I am using an existing test as a starting point.

Based on the `getGraphFunctions()` test, I have:

import org.tensorflow.{ConcreteFunction, Graph, Signature}
import org.tensorflow.op.Ops
import org.tensorflow.types.TFloat32
import scala.jdk.CollectionConverters.*
import scala.util.Using

// Builds a Signature for a function that adds 5 to its input.
def plusFive(tf: Ops) =
  val input = tf.placeholder(classOf[TFloat32])
  val output = tf.math.add(input, tf.constant(5.0f))
  Signature.builder().key("plusFive").input("x", input).output("y", output).build()

// Prints what the Graph knows about itself: environment, device, functions, operations.
def printGraphInfo(g: Graph) =
  println(s"g.isEager()             = ${g.isEager()}")
  println(s"g.isGraph()             = ${g.isGraph()}")
  println(s"g.environmentType()     = ${g.environmentType()}")
  val scope = g.baseScope()
  println(s"scope.getDeviceString() = ${scope.getDeviceString()}")
  println(s"g.getFunctions()        = ${g.getFunctions().asScala.map(_.toString())}")
  println(s"g.operations()          = ${g.operations().asScala.map(_.toString())}")

def getGraphFunctions() =
  println("\n\ngetGraphFunctions")
  Using.resources(ConcreteFunction.create(plusFive), Graph()) {
    (function, g) =>
      val tf = Ops.create(g)
      tf.call(function, tf.constant(3f))

      val attached = g.getFunction(function.getDefinedName())
      assert(attached != null)

      val x = TFloat32.scalarOf(10f)
      val y = attached.call(x).asInstanceOf[TFloat32]
      assert(y.getFloat() == 15f)
      printGraphInfo(g)
  }


I get:

2022-10-13 09:54:11.663410: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
g.isEager()             = false
g.isGraph()             = true
g.environmentType()     = GRAPH
scope.getDeviceString() =
g.getFunctions()        = ArrayBuffer(Signature for "plusFive_arz3ws3F8jo":
Method: "plusFive_arz3ws3F8jo"
Inputs:
"placeholder": dtype=DT_FLOAT, shape=()
Outputs:
"add": dtype=DT_FLOAT, shape=()
)
g.operations()          = <iterator>

My question is: given that no resources have been released at the point where I access the graph, why do I not get any information on the device (I am running on a CPU) or on the operations (I assume the graph is populated with nodes by now)?
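
On the operations side, I suspect the `<iterator>` line is just an artifact of my own printing code: `g.operations()` hands back a Java iterator, the wrapped Scala `Iterator` is lazy, and interpolating it into a string only prints `<iterator>` rather than its elements. A minimal sketch of what I plan to try instead (the only change is materialising the iterator before printing):

    // Force the iterator into a List so the graph nodes are actually rendered,
    // instead of relying on Iterator.toString (which is just "<iterator>").
    val ops = g.operations().asScala.toList
    println(s"g.operations() = ${ops.map(_.toString())}")

On the device side, my working assumption is that `scope.getDeviceString()` only reflects a device that was explicitly requested while building the graph (via the scope's withDevice variant, if I understand the API correctly), and that the actual placement on my CPU is decided by the runtime when the session executes, so the base scope's string stays empty. Is that right?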

In another test (`createFunctionFromGraph`):

def createFunctionFromGraph() =
  println("\n\ncreateFunctionFromGraph")
  Using.resource(Graph()) {
    g =>
      val signature = plusFive(Ops.create(g))
      Using.resource(ConcreteFunction.create(signature, g)) {
        f =>
          val x = TFloat32.scalarOf(3.0f) // not AutoCloseable
          val r = f.call(x)
          val y = r.asInstanceOf[TFloat32]
          val got = y.getFloat()
          println(s"$got = ${x.getFloat()} + 5.0")
          assert(got == 8.0f)

          val attached = g.getFunction(f.getDefinedName())
          println(attached)
      }
  }


Here I cannot even retrieve the attached function: `g.getFunction(f.getDefinedName())` comes back null. Why is this?
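
My best guess is that `ConcreteFunction.create(signature, g)` builds the function from the graph's ops without registering the resulting FunctionDef back into `g`, whereas in the first test it is the `tf.call(function, ...)` that attaches the function to the graph. If that reading is right, calling the function through this graph's `Ops` first should make `getFunction` succeed. A sketch of what I intend to try, assuming my reading of the `getGraphFunctions()` test is correct:

    val tf = Ops.create(g)
    // Calling the function through the graph should (I believe) import its
    // FunctionDef into g, exactly as in the getGraphFunctions() test above.
    tf.call(f, tf.constant(3.0f))
    val attached = g.getFunction(f.getDefinedName()) // hopefully non-null now
    println(attached)

Is that the intended way to attach a function to a graph, or am I missing something?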

TIA

