Didn't find op for builtin opcode 'QUANTIZE' version '1'

Audio Xiaomi

Dec 23, 2021, 2:01:11 AM
to SIG Micro
Hi TFLM team!

I am trying to run a quantized model in the local TFLM tests.

Problem description: In the local TFLM tests, the float model produces the correct result, but the quantized model reports an error.

The error message is:
Segmentation fault (core dumped)
tensorflow/lite/micro/examples/hello_world/hello_world_test_binary: FAIL - '~~~ALL TESTS PASSED~~~' not found in logs.
Testing LoadModelAndPerformInference
Didn't find op for builtin opcode 'QUANTIZE' version '1'

Failed to get registration from op code QUANTIZE

Failed starting model allocation.

The quantized HDF5 model and TFLite model were obtained as follows; I don't know at which step the problem occurred.

Step 1: Generate a quantization-aware model

import tensorflow as tf
from tensorflow import keras
import tensorflow_model_optimization as tfmot

model = XXmodel(input_shape=input_shape, num_classes=11)
# wrap the model to enable quantization-aware training
quantize_model = tfmot.quantization.keras.quantize_model
model = quantize_model(model)
optimizer = keras.optimizers.Adam()

Step 2: Convert the HDF5 model to a TFLite model

with tfmot.quantization.keras.quantize_scope():
    loaded_model = tf.keras.models.load_model('test.hdf5')
converter = tf.lite.TFLiteConverter.from_keras_model(loaded_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
q_tflite_model = converter.convert()
full_tflite_model_path = "./test.tflite"
with open(full_tflite_model_path, "wb") as f:
    f.write(q_tflite_model)

Step 3: Generate the model file
xxd -i test.tflite > model.cc

I also tried:
import numpy as np
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset_generator():
    for value in reference_data:
        yield [np.array(value, dtype=np.float32, ndmin=2)]

converter.representative_dataset = representative_dataset_generator
tflite_quant_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_quant_model)

and got the same error. My TFLM version is 2.4.2.

Audio Xiaomi

Dec 27, 2021, 8:26:11 AM
to wan...@google.com, mi...@tensorflow.org
Zhao Weixiao

Tiezhen Wang

Dec 27, 2021, 8:33:27 AM
to Audio Xiaomi, TF Lite Micro, mi...@tensorflow.org
+TF Lite Micro (adding the micro team). This sounds like an op support issue.

Advait Jain

Dec 29, 2021, 6:30:14 PM
to Audio Xiaomi, wan...@google.com, mi...@tensorflow.org
If you could try with tip-of-tree TFLM from the stand-alone TFLM GitHub repository, that would be great.

Please also make sure that you register the quantize op with the MicroMutableOpResolver (see this for one example).
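A minimal sketch of what that registration might look like. The symbol names (g_model from the xxd-generated model.cc), the arena size, and the op list here are assumptions; register exactly the ops your model actually contains, and note that the MicroInterpreter constructor signature differs slightly between TFLM 2.4.2 and tip of tree:

```cpp
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Assumed to come from the xxd-generated model.cc.
extern const unsigned char g_model[];

// Arena size is model-dependent; 8 KB is just a placeholder.
constexpr int kTensorArenaSize = 8 * 1024;
uint8_t tensor_arena[kTensorArenaSize];

void Setup() {
  const tflite::Model* model = tflite::GetModel(g_model);

  // A quantized model needs every op it uses registered, including
  // QUANTIZE/DEQUANTIZE, which the converter inserts around float I/O.
  // The template argument is the number of registered ops.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddQuantize();
  resolver.AddDequantize();
  resolver.AddFullyConnected();  // example ops; replace with your model's
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter.AllocateTensors();
}
```

The "Didn't find op for builtin opcode 'QUANTIZE'" message is exactly what the resolver prints when AllocateTensors() encounters an opcode in the flatbuffer that was never registered, so a missing AddQuantize() would produce the reported failure.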
