Post-training integer quantization seems not to work on microcontroller


James B.

Aug 10, 2021, 4:02:39 PM
to TensorFlow Lite
I'd like to print the inference time on a serial terminal for a Keras model running on an STM32F401RE microcontroller, whose input sample is a tensor of shape (500, 1, 1). However, when I try to allocate tensors with the interpreter, the program hangs inside the AllocateTensors() function. Could this be caused by the model being too large (the .tflite file is 67 KB and the .h file is 412 KB)? I tried post-training integer quantization through the TFLite Converter, but it made no difference.
I attached the Keras model (in tcn_scratch.py) and the C++ model (in main.cpp).


tcn_scratch.py
main.cpp

Jaesung Chung

Aug 10, 2021, 9:11:27 PM
to James B., Pete Warden, Advait Jain, TensorFlow Lite

On Wed, Aug 11, 2021 at 5:02 AM James B. <cristinaa...@gmail.com> wrote:
I'd like to print the inference time on a serial terminal for a Keras model running on an STM32F401RE microcontroller, whose input sample is a tensor of shape (500, 1, 1). However, when I try to allocate tensors with the interpreter, the program hangs inside the AllocateTensors() function. Could this be caused by the model being too large (the .tflite file is 67 KB and the .h file is 412 KB)? I tried post-training integer quantization through the TFLite Converter, but it made no difference.
I attached the Keras model (in tcn_scratch.py) and the C++ model (in main.cpp).


--
You received this message because you are subscribed to the Google Groups "TensorFlow Lite" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tflite+un...@tensorflow.org.
To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/tflite/57f0ba7b-523b-40cd-865d-768af3bd5cddn%40tensorflow.org.

Advait Jain

Aug 11, 2021, 1:04:05 AM
to Rocky Rhodes, tfl...@tensorflow.org, SIG Micro, James B.

Combining the threads on the same topic on two different mailing lists.


On Tue, Aug 10, 2021, 6:02 PM 'Rocky Rhodes' via SIG Micro <mi...@tensorflow.org> wrote:
I'd try increasing the arena size.  The AllocateTensors() call allocates all the tensor space from there.

On Tuesday, August 10, 2021 at 12:53:21 PM UTC-7 James B. wrote:
I'd like to print the inference time on a serial terminal for a Keras model running on an STM32F401RE microcontroller, whose input sample is a tensor of shape (500, 1, 1). However, when I try to allocate tensors with the interpreter, the program hangs inside the AllocateTensors() function. Could this be caused by the model being too large (the .tflite file is 67 KB and the .h file is 412 KB)? I tried post-training integer quantization through the TFLite Converter, but it made no difference.
I attached the Keras model (in tcn_scratch.py) and the C++ model (in main.cpp).

Best regards.

--
You received this message because you are subscribed to the Google Groups "SIG Micro" group.
To unsubscribe from this group and stop receiving emails from it, send an email to micro+un...@tensorflow.org.
To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/micro/5535106b-c161-4e26-89d7-9760f5e5c703n%40tensorflow.org.