2.7 converter compatibility


Daniel Situnayake

Jan 10, 2022, 8:10:23 PM
to SIG Micro
Hi SIG Micro,

I'm upgrading our codebase to use TF2.7. For certain models, the converter in 2.7 outputs a highly mutated graph compared to earlier versions (we were previously using 2.4). For example, the following simple model with a 96x96 RGB input:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Flatten

# input is 96x96x3
model = Sequential()
model.add(Dense(16, activation='relu'))
model.add(Flatten())
model.add(Dense(classes, activation='softmax'))  # `classes` = number of output classes

The resulting tflite graph is highly complex:


[image.png: attached visualization of the converted .tflite graph]

Some of the ops (such as ReduceProd) don't seem to be available in the latest TFLM; they are only in Lite.

This seems to happen whenever you follow a 2D tensor with more than one fully connected layer, even if you reshape/flatten down to a single dimension first.

Does anyone know if there is a way to force the converter to output a "simple" model that doesn't bring in these extra operators?

Warmly,
Dan

--
Daniel Situnayake
Founding TinyML Engineer, Edge Impulse

Jean-michel DELORME

Jan 11, 2022, 3:29:52 PM
to Daniel Situnayake, SIG Micro

Hi Dan,

 

In our case, you can use a concrete function to fix the input shape.

As illustrated below, the resulting graph is more efficient and suitable for TFLM 2.7. Otherwise, you always have the option of using the TOCO backend instead of the MLIR one.

 

Br,

JM

 

import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(96, 96, 3)))
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(12, activation='softmax'))

# Wrap the model in a tf.function and pin the input shape via a concrete function
run_model = tf.function(lambda x: model(x))
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([1, 96, 96, 3], model.inputs[0].dtype))

# Convert from the concrete function rather than from the Keras model directly
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tfl_model = converter.convert()

with open('sig_tfl_conv_2.tflite', "wb") as f:
    f.write(tfl_model)
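
For the TOCO fallback mentioned above, a minimal sketch, continuing from the snippet, might look like this (the `experimental_new_converter` flag switches between the MLIR converter and the legacy TOCO backend; the output filename is illustrative):

# Fall back to the legacy TOCO converter instead of the MLIR-based one
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.experimental_new_converter = False  # False selects the legacy TOCO backend
tfl_model_toco = converter.convert()

with open('sig_tfl_conv_toco.tflite', "wb") as f:
    f.write(tfl_model_toco)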

 

 

 

 

Jean-Michel Delorme | TINA: 041 5105 | Tel: +33 4 76 58 51 05 | Mobile: +33 6 72 81 98 66

MDG/MCD | AI Solution – Senior SW designer

 

 


Daniel Situnayake

Jan 11, 2022, 5:00:56 PM
to Jean-michel DELORME, SIG Micro
Hi Jean-Michel,

Thanks for your reply, I appreciate it! First up, my apologies—I specified 96x96 in my email but the actual model had a 160x160 input. That was a mistake on my part.

In terms of the output, the concern I have is that some ops (e.g. ReduceProd) do not seem to be available in the latest TFLM code at all—note that this is based on my reading of the source; I haven't tried to run it yet. Or am I incorrect, and the latest TFLM code will be able to run the graph I attached?

In my actual code I'm already converting from a concrete function that has the input shape defined, but I still get the complex output. From your email I understood that in your case, defining the input shape in a concrete function resulted in the simple graph. Let me know if I misunderstood.

Falling back to the original TOCO converter is the workaround I'm currently using, but I'd love to find a way to make the MLIR converter work if possible.

Warmly,
Dan

Jean-michel DELORME

Jan 12, 2022, 7:55:30 AM
to Daniel Situnayake, SIG Micro

Hi Dan,

 

Yes, I confirm: in my environment (TensorFlow 2.7.0, Python 3.7.9), with the code snippet above and an input shape of 160x160, the output is the same (without the ReduceProd op). Using the concrete function seems to help the MLIR converter avoid generating the subgraph with the ReduceProd op, which is not needed here and is not supported in the current TFLM baseline (though I have not really checked against the latest TFLM version).

However, you perhaps highlight a specific point: I have not yet found anything in the TF documentation (wiki or otherwise) about whether it is possible to specify constraints/options in the TFLiteConverter to produce a TFLite file that is more "compliant" with a TFLM runtime.

 

Warm regards,

JM


Daniel Situnayake

Jan 12, 2022, 1:13:22 PM
to Jean-michel DELORME, SIG Micro
Excellent, thank you for confirming—I'll try to reproduce on my side. I've been creating the ConcreteFunction a little differently (using the `serving_default` signature from a SavedModel and then calling `set_shape` on its input)—I'll try your exact approach and see how it goes.
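
Roughly, what I've been doing looks like the following sketch (the SavedModel path and the input index are illustrative):

import tensorflow as tf

# Load a SavedModel and grab its serving signature as a ConcreteFunction
saved_model = tf.saved_model.load('path/to/saved_model')
concrete_func = saved_model.signatures['serving_default']

# Pin the input shape on the signature's input tensor before conversion
concrete_func.inputs[0].set_shape([1, 160, 160, 3])

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()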

It would definitely be helpful to have a flag that forces the converter to only produce TFLM-compatible output, although I suppose the lack of fixed versioning for TFLM would be a challenge. Being able to give the converter a list of permitted ops would also help.
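
The closest existing knob I'm aware of is `target_spec.supported_ops`, which selects among predefined op sets rather than accepting an arbitrary op list (continuing from a converter built as in the sketch above):

# Restrict the output to built-in TFLite ops (no SELECT_TF_OPS / flex ops);
# this doesn't filter individual ops, but keeps the graph within the builtin set.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
tflite_model = converter.convert()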

Warmly,
Dan

Daniel Situnayake

Jan 12, 2022, 3:32:48 PM
to Jean-michel DELORME, SIG Micro
Update: this technique works, thank you very much! Interesting that it works where my previous approach did not. Definitely seems like a bug. If I have time I'll write up a notebook to reproduce and file a bug with the TF team.

Warmly,
Dan

Chris Knorowski

Jan 14, 2022, 12:50:46 PM
to Daniel Situnayake, Jean-michel DELORME, SIG Micro
Just to piggyback on this a bit, has there been any thought about forking the TensorFlow converter and including it as part of the TFLite Micro repo, to keep it more in line with the optimizations TFLite Micro would make versus TFLite? It might help with the versioning issues as well.
