How to save model's tensors after getting inference results to the original tflite model file?


Simon King

unread,
Apr 13, 2021, 10:23:41 PM4/13/21
to TensorFlow Lite
Hi tflite team and friends,

After using the tflite Python APIs to run inference, I would like to save the invoked tensor values back into the model file, overwriting the original tensors. Is that possible? Is there an API I can leverage?

To better explain my question, let's say I have a script containing the following Python code to run inference:

from tflite_runtime.interpreter import Interpreter
import numpy as np

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()  # required before set_tensor()/invoke()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.set_tensor(input_details[0]['index'],
                       np.array(3, dtype=input_details[0]['dtype']))
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])

After calling the invoke() method, the input and output tensors of each layer are updated, so I can read the inference result output_data. But once the Python script ends, those in-memory values are lost; the next run starts again from the original tensor values stored in the model file.

What I want is to save the invoked tensor states back into the model file, overriding the original tensors. Is there any way to achieve this?

Best, 
Simon

Jaesung Chung

unread,
Apr 13, 2021, 10:52:07 PM4/13/21
to Simon King, TensorFlow Lite
Hi Simon King,

We don't have such a feature available yet. The allocated tensor memory persists until the interpreter object's lifetime ends. All the tensor memory will always be available at the end of graph execution. For example, intermediate tensors can be freed at any time to lower the overall memory overhead. Only the input and output tensors are guaranteed to be accessible, as a contract.



Jaesung Chung

unread,
Apr 13, 2021, 10:53:30 PM4/13/21
to Simon King, TensorFlow Lite
Correction: 

We don't have such a feature available yet. The allocated tensor memory persists until the interpreter object's lifetime ends. Only a subset of tensors will be accessible at the end of graph execution; for example, intermediate tensors can be freed at any time to lower the overall runtime memory overhead. Only the input and output tensors are guaranteed to be accessible, as a contract.


Jaesung Chung

unread,
Apr 14, 2021, 6:10:24 AM4/14/21
to Simon King, TensorFlow Lite
Actually, we have an experimental flag to preserve all tensors for debugging purposes: 

interpreter = tf.lite.Interpreter(
    model_path="test.tflite",
    experimental_preserve_all_tensors=True)

# Run evaluation
interpreter.invoke()

# Look at all tensors, including intermediates.
print({t['name']: interpreter.get_tensor(t['index'])
       for t in interpreter.get_tensor_details()})

You can rely on this to store the tensor data back into the original flatbuffer file. However, those steps have to be done manually.
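A minimal sketch of the dumping side of that workflow, assuming TensorFlow >= 2.5 (the tiny inline matmul model and the preserved_tensors.npz file name are illustrative, not part of any real pipeline):

```python
import numpy as np
import tensorflow as tf

# Build a tiny model inline so the sketch is self-contained.
@tf.function(input_signature=[tf.TensorSpec([1, 3], tf.float32)])
def model(x):
    return tf.matmul(x, tf.ones([3, 2]))

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model.get_concrete_function()])
tflite_bytes = converter.convert()

interpreter = tf.lite.Interpreter(
    model_content=tflite_bytes,
    experimental_preserve_all_tensors=True)
interpreter.allocate_tensors()

interpreter.set_tensor(interpreter.get_input_details()[0]['index'],
                       np.ones((1, 3), dtype=np.float32))
interpreter.invoke()
output = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])

# Collect every preserved tensor, including intermediates.
preserved = {t['name']: interpreter.get_tensor(t['index'])
             for t in interpreter.get_tensor_details()}

# Persist the values; writing them back into the .tflite flatbuffer
# itself still has to be done by hand against the flatbuffer schema.
np.savez('preserved_tensors.npz', **preserved)
```

Mapping the saved arrays back onto buffer entries in the flatbuffer is the manual part Jaesung mentions.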

Simon King

unread,
Apr 15, 2021, 5:34:11 PM4/15/21
to TensorFlow Lite, Jaesung Chung, TensorFlow Lite, Simon King
Hi Jaesung,

I tried to enable the experimental_preserve_all_tensors=True argument on TensorFlow 2.4.1, but it fails with "TypeError: __init__() got an unexpected keyword argument 'experimental_preserve_all_tensors'".

Which version of tensorflow are you using?

Best,
Simon

Jaesung Chung

unread,
Apr 15, 2021, 5:38:34 PM4/15/21
to Simon King, TensorFlow Lite
Hi Simon,

The experimental_preserve_all_tensors flag is a new feature available since TensorFlow 2.5. See the release notes.
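A quick way to guard against older runtimes is to pass the flag only when the installed version supports it (a sketch; the version parse assumes the usual MAJOR.MINOR.PATCH string):

```python
import tensorflow as tf

# experimental_preserve_all_tensors was added in TF 2.5.
major, minor = (int(p) for p in tf.__version__.split('.')[:2])
kwargs = {}
if (major, minor) >= (2, 5):
    kwargs['experimental_preserve_all_tensors'] = True

# Then construct the interpreter with the optional flag, e.g.:
# interpreter = tf.lite.Interpreter(model_path="model.tflite", **kwargs)
```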

Best regards,
Jaesung

CRISTOBAL BELLES

unread,
Jul 29, 2021, 2:29:22 PM7/29/21
to TensorFlow Lite, Jaesung Chung, TensorFlow Lite, Simon King
Hi Jaesung,

Is experimental_preserve_all_tensors implemented in the TfLite C++ API?

Best regards,
Cristobal

Jaesung Chung

unread,
Jul 29, 2021, 8:29:56 PM7/29/21
to CRISTOBAL BELLES, TensorFlow Lite, Simon King

Sandeep V

unread,
Jul 30, 2021, 7:24:04 AM7/30/21
to TensorFlow Lite, Jaesung Chung, TensorFlow Lite
Sir, I used the experimental_preserve_all_tensors flag to fetch intermediate tensors from an 8-bit quantized tflite model. I did the same thing on both TensorFlow v2.5.0 and TensorFlow v2.6.0. Both of them dumped all tensors, but the dumped values are not exactly the same. Why is this so? Is the experimental_preserve_all_tensors feature different between TF 2.5 and TF 2.6?

Jaesung Chung

unread,
Jul 30, 2021, 8:01:19 AM7/30/21
to Sandeep V, TensorFlow Lite
I am not sure, as I have not looked thoroughly at the actual model, but it is possible that the implementations of some of the op kernels used by the model changed slightly between versions to fix issues in those kernels. That could affect the calculated results.
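When comparing dumps from two versions, a tolerance-based comparison is usually more meaningful than exact equality, especially for quantized models. A sketch with made-up values (the tensor names and tolerances are illustrative):

```python
import numpy as np

# Stand-ins for intermediate tensors dumped under two TF versions.
dump_v25 = {'conv1': np.array([0.1000, 0.2000]), 'fc1': np.array([1.50])}
dump_v26 = {'conv1': np.array([0.1004, 0.1998]), 'fc1': np.array([1.50])}

# Collect names whose values drift beyond the tolerance.
mismatches = [name for name in dump_v25
              if not np.allclose(dump_v25[name], dump_v26[name],
                                 rtol=1e-3, atol=1e-3)]
print(mismatches)
```

Tensors that only disagree within kernel-level rounding differences will not show up in the mismatch list.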

Sandeep V

unread,
Jul 30, 2021, 8:06:03 AM7/30/21
to TensorFlow Lite, Jaesung Chung, TensorFlow Lite, Sandeep V

Also, sir, I am currently running TF v2.6.0. I got intermediate tensors using the experimental_preserve_all_tensors flag.

I did the same process on another computer, which also has TF v2.6.0. All the tensors are there, but some values are different.
How is this possible? Why does the same model with the same inputs give different tensor outputs on the same TF version? Kindly help me out.

Jaesung Chung

unread,
Jul 30, 2021, 8:17:52 AM7/30/21
to Sandeep V, TensorFlow Lite
Hi Sandeep V,

If possible, it would be better to post a GitHub issue about this behavior on the TensorFlow GitHub with reproducible steps. It is really hard to determine the reason without more information. This kind of question can also be discussed on the GitHub forum.

Best regards,
Jaesung

2021년 7월 30일 (금) 오후 9:06, Sandeep V <sandeep...@gmail.com>님이 작성: