int8 quantization is not working for LSTM op in TensorFlow Lite


Niranjan Yadla

Jun 3, 2021, 4:00:35 PM
to iree-discuss
We are using the LSTM op in our model and have verified accuracy with the TF and TFLite float32 models.
Please see the Colab below for more information.
https://colab.research.google.com/drive/1LIqNXYszOB4eUHFhqnbGLESdZs4Ipu_n?usp=sharing
However, when I generate a TFLite int8 model using post-training quantization, the int8 model doesn't work with my test data.
Note: if I use uint8 post-training quantization instead, it works fine.
For some reason int8 quantization is not working with the LSTM op.
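For reference, the int8 path I'm describing follows the standard TFLite full-integer post-training quantization recipe. A minimal sketch of it (the saved-model path, input shape, and calibration data below are placeholders, not my actual setup):

import numpy as np
import tensorflow as tf

# Load the model containing the LSTM op ("lstm_saved_model" is a placeholder path).
converter = tf.lite.TFLiteConverter.from_saved_model("lstm_saved_model")

def representative_dataset():
    # Yield a few calibration batches matching the model's input signature
    # (the shape here is a placeholder).
    for _ in range(100):
        yield [np.random.rand(1, 28, 28).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict conversion to int8 builtin kernels so unsupported ops fail loudly.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8    # the working uint8 variant uses tf.uint8 here
converter.inference_output_type = tf.int8   # and here

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)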

Geoffrey Martin-Noble

Jun 7, 2021, 3:07:41 PM
to Niranjan Yadla, iree-discuss
Hi Niranjan,

Maybe I'm missing something, but it seems like this is not connected to IREE. Maybe you want to direct your question to the TensorFlow and TFLite folks?

Best,
Geoffrey


Niranjan Yadla

Jun 7, 2021, 3:33:13 PM
to Geoffrey Martin-Noble, iree-discuss
Thanks, I will post to the TF and TFLite folks.

Thanks,
Niranjan
