TFLite converter fails to quantize?


menashe soffer

Jan 23, 2023, 2:58:18 AM
to TensorFlow Lite
I am trying to convert a Keras model to TFLite in order to run it on an edge device. Unfortunately, I suspect that some of the layers are not being quantized: when I examine the resulting .tflite file with Netron, I see many Quantize and Dequantize nodes inside the model.


Here is the code I am using:

    converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_funct])
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    # representative dataset for calibrating the quantization ranges
    converter.representative_dataset = dset
    # restrict the converter to integer-only builtin ops
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.int8
    tflite_model = converter.convert()
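
To confirm the suspicion, I also listed which tensors are still float32 using the TFLite interpreter (a quick check, assuming tflite_model from the code above):

    import numpy as np
    import tensorflow as tf

    # load the converted model and print every tensor that is still float32;
    # these should correspond to the layers that were not quantized
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    interpreter.allocate_tensors()
    for t in interpreter.get_tensor_details():
        if t['dtype'] == np.float32:
            print(t['name'])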

Blaine Rister

Feb 27, 2023, 9:20:42 PM
to TensorFlow Lite, msof...@gmail.com
I have seen this as well. Sometimes the converter keeps certain ops in floating point even when the op set is TFLITE_BUILTINS_INT8; the Quantize and Dequantize nodes it inserts convert to and from floating point around those ops. For example, this always seems to happen with 'tfl.while_loop' ops in RNNs.
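
If you want to see exactly which ops stayed in float, something like this can help (a rough sketch; Analyzer is an experimental API, so the exact call may differ between TF versions):

    import tensorflow as tf

    # print the op-level structure of the converted model; ops wrapped in
    # QUANTIZE / DEQUANTIZE pairs are the ones that stayed in floating point
    tf.lite.experimental.Analyzer.analyze(model_content=tflite_model)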

Regards,
Blaine