Post-training integer quantization seems not to work on microcontroller


James B.

Aug 10, 2021, 3:53:21 PM
to SIG Micro
I'd like to print the inference time on a serial terminal for a Keras model (running on an STM32F401RE microcontroller) whose input sample is a tensor of shape (500, 1, 1). However, when I try to allocate tensors with the interpreter, the program hangs after entering the AllocateTensors() function. Could this be caused by the model being too large (the .tflite file is 67 KB and the .h file is 412 KB)? I tried post-training integer quantization through the TFLite Converter, but the situation doesn't change.
I attached the Keras model and the C++ model (in main.cpp).

Best regards.

Rocky Rhodes

Aug 10, 2021, 9:02:06 PM
to SIG Micro, James B.
I'd try increasing the tensor arena size. The AllocateTensors() call allocates all of the tensor buffers from that arena.

Advait Jain

Aug 11, 2021, 1:04:04 AM
to Rocky Rhodes, SIG Micro, James B.

Combining the threads on the same topic on two different mailing lists.
