Post-training integer quantization doesn't seem to work on a microcontroller


James B.

Aug 10, 2021, 3:53:21 PM
to SIG Micro
I'd like to print the inference time on a serial terminal for a Keras model running on an STM32F401RE microcontroller. The model takes an input tensor of shape (500, 1, 1). However, when I try to allocate tensors with the interpreter, the program hangs after entering the AllocateTensors() function. Could this be caused by the model being too large (the .tflite file is 67 KB and the .h file is 412 KB)? I tried post-training integer quantization through the TFLite Converter, but the situation doesn't change.
I've attached the Keras model (tcn_scratch.py) and the C++ code (main.cpp).

Best regards.
tcn_scratch.py
main.cpp
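
For context, the post-training full-integer quantization James mentions is typically done with the recipe sketched below. This is the generic TFLite Converter flow, not taken from the attached tcn_scratch.py; the model and the representative samples passed in are placeholders.

```python
import numpy as np
import tensorflow as tf

def quantize_model(model, representative_samples):
    """Convert a Keras model to a fully int8-quantized .tflite blob.

    representative_samples: a list of input arrays (with batch dimension),
    used by the converter to calibrate activation ranges.
    """
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    def representative_dataset():
        for sample in representative_samples:
            yield [np.asarray(sample, dtype=np.float32)]

    converter.representative_dataset = representative_dataset
    # Force integer-only ops so the model can run on int8-only kernels.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()  # returns the .tflite flatbuffer as bytes
```

Note that full-integer quantization shrinks the stored weights (roughly 4x versus float32), but the interpreter still needs RAM at runtime for activations, which is a separate concern from the file size.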

Rocky Rhodes

Aug 10, 2021, 9:02:06 PM
to SIG Micro, James B.
I'd try increasing the arena size.  The AllocateTensors() call allocates all the tensor space from there.
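
Rocky's suggestion in code form: a minimal TFLite Micro setup sketch (not the attached main.cpp; kTensorArenaSize, the op-resolver contents, and model_data are placeholders). The two key points are sizing the arena to fit within the F401RE's 96 KB of SRAM and checking the status AllocateTensors() returns instead of assuming it succeeds:

```cpp
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// The STM32F401RE has 96 KB of SRAM; leave headroom for the stack,
// other globals, and the HAL. Increase this if AllocateTensors() fails.
constexpr int kTensorArenaSize = 60 * 1024;
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

void setup_interpreter(const unsigned char* model_data) {
  const tflite::Model* model = tflite::GetModel(model_data);

  // Register only the ops the model actually uses, e.g.:
  // resolver.AddFullyConnected(); resolver.AddConv2D(); ...
  static tflite::MicroMutableOpResolver<8> resolver;

  // Older TFLite Micro releases also take an ErrorReporter argument here.
  static tflite::MicroInterpreter interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);

  // All tensor memory (activations, scratch buffers) is carved out of the
  // arena here. If the arena is too small this returns an error, which can
  // look like a hang when the error output goes nowhere on bare metal.
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    // Report the failure over the serial port and stop; then retry with a
    // larger arena if RAM allows.
  }
}
```

A practical way to tune the size is to start as large as the linker allows, confirm AllocateTensors() succeeds, then shrink until it fails and add a safety margin.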

Advait Jain

Aug 11, 2021, 1:04:04 AM
to Rocky Rhodes, tfl...@tensorflow.org, SIG Micro, James B.

Combining the threads on the same topic on two different mailing lists.


To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/micro/5535106b-c161-4e26-89d7-9760f5e5c703n%40tensorflow.org.