Hi TFLM team!
I am trying to run a quantized model in the TFLM local tests.
Problem description: in the TFLM local test, the float model produces the correct result, but the quantized model fails with an error.
The error message is:
Segmentation fault (core dumped)
tensorflow/lite/micro/examples/hello_world/hello_world_test_binary: FAIL - '~~~ALL TESTS PASSED~~~' not found in logs.
Testing LoadModelAndPerformInference
Didn't find op for builtin opcode 'QUANTIZE' version '1'
Failed to get registration from op code QUANTIZE
Failed starting model allocation.

The method I used to obtain the quantization-aware hdf5 model and the tflite model is as follows; I don't know at which step the problem occurred.
Step 1: Generate a quantization-aware model
import tensorflow as tf
from tensorflow import keras
import tensorflow_model_optimization as tfmot
...
model = XXmodel(input_shape=input_shape, num_classes=11)
# the next two lines enable quantization-aware training
quantize_model = tfmot.quantization.keras.quantize_model
model = quantize_model(model)
optimizer = keras.optimizers.Adam()
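# note: tfmot expects the quantized model to be compiled with this optimizer and
# briefly fine-tuned (model.compile(...) / model.fit(...)) before saving, so the
# fake-quant ranges get calibrated; that part of my script is omitted here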
model.save('test.hdf5')
Step 2: Convert hdf5 model to tflite model
with tfmot.quantization.keras.quantize_scope():
    loaded_model = tf.keras.models.load_model('test.hdf5')

converter = tf.lite.TFLiteConverter.from_keras_model(loaded_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
q_tflite_model = converter.convert()

full_tflite_model_path = "./test.tflite"
with open(full_tflite_model_path, "wb") as file:
    file.write(q_tflite_model)
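To see what this conversion actually produced, the tensor types at the model boundary can be inspected on the host with the reference interpreter (a quick check; with a QAT model and Optimize.DEFAULT alone, the model inputs/outputs typically stay float32, which is exactly what puts QUANTIZE/DEQUANTIZE ops into the graph):

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="./test.tflite")
interpreter.allocate_tensors()
# float32 here implies a QUANTIZE op at the input and a DEQUANTIZE op at the output
print(interpreter.get_input_details()[0]['dtype'])
print(interpreter.get_output_details()[0]['dtype'])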
Step 3: Generate model files
xxd -i test.tflite > model.cc

(Note: xxd names the emitted array after the input file, i.e. test_tflite / test_tflite_len, so it has to be renamed to match the g_model / g_model_len declarations that the hello_world example expects.)
I also tried post-training quantization with a representative dataset:
import numpy as np
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset_generator():
    for value in reference_data:
        yield [np.array(value, dtype=np.float32, ndmin=2)]

converter.representative_dataset = representative_dataset_generator
tflite_quant_model = converter.convert()
with open("converted_model.tflite", "wb") as file:
    file.write(tflite_quant_model)

This produced the same error. My TFLM version is 2.4.2.