Re: How to convert ONNX to TFLite?


Tom Gall

unread,
May 10, 2022, 4:01:09 AM
to Rohini, TensorFlow Lite
Hi Rohini,

You could also export the model to tflite instead of converting it from ONNX.

See: https://github.com/ultralytics/yolov5/releases

Note the command: python export.py --include saved_model pb tflite tfjs
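In context, the full sequence is roughly the following (an untested sketch; it assumes the ultralytics/yolov5 repo layout, and yolov5n.pt is just a placeholder for whichever weights file you have):

```shell
# Untested sketch: export YOLOv5 weights directly to TFLite via the
# ultralytics/yolov5 repo. yolov5n.pt is a placeholder for your weights.
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt
python export.py --weights yolov5n.pt --include saved_model pb tflite tfjs
```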

On Tue, May 10, 2022 at 1:03 AM Rohini <rohi...@gmail.com> wrote:
>
> Hi,
>
> What are the steps to be followed to convert an ONNX model to TFLite Model? It looks like we need to first convert ONNX to Tensorflow and then Tensorflow to TFLite. Are there any standard tools/procedure to do this conversion? If so, what versions of TF and TFLite do I need to install and what are the steps to be followed? Thanks for your help.
>
> By googling, I figured out that we need to install TensorFlow 2.9 in order to make the onnx-tf command work, which converts ONNX to TF. But I am still wondering if onnx-tf is fully implemented.
>
> These are the warnings I see.
>
> After this command, how to convert the output of onnx-tf to TFLite? onnx-tf created saved_model.pb file and also a variables directory which has variables.data-00000-of-00001 and variables.index. The assets directory is empty.
>
> onnx-tf convert --infile ./yolov5n.onnx --outdir output
> 2022-05-09 16:41:21.470109: I tensorflow/core/util/util.cc:168] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
>
> /home/.../.local/lib/python3.8/site-packages/tensorflow_addons/utils/ensure_tf_install.py:53: UserWarning: Tensorflow Addons supports using Python ops for all Tensorflow versions above or equal to 2.6.0 and strictly below 2.9.0 (nightly versions are not supported).
>
> The versions of TensorFlow you are currently using is 2.9.0-rc2 and is not supported.
> Some things might work, some things might not.
> If you were to encounter a bug, do not file an issue.
> If you want to make sure you're using a tested and supported configuration, either change the TensorFlow version or the TensorFlow Addons's version.
>
> You can find the compatibility matrix in TensorFlow Addon's readme:
> https://github.com/tensorflow/addons
> warnings.warn(
> 2022-05-09 16:41:22,586 - onnx-tf - INFO - Start converting onnx pb to tf saved model
> 2022-05-09 16:41:22.965584: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
> Your kernel may have been built without NUMA support.
> [the two lines above repeat several more times during device enumeration]
> 2022-05-09 16:41:23.010737: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
> To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
> 2022-05-09 16:41:23.655955: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Could not identify NUMA node of platform GPU id 0, defaulting to 0. Your kernel may not have been built with NUMA support.
> 2022-05-09 16:41:23.656819: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1620 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3050 Ti Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6
> WARNING:absl:Found untraced functions such as gen_tensor_dict while saving (showing 1 of 1). These functions will not be directly callable after loading.
> 2022-05-09 16:41:30,450 - onnx-tf - INFO - Converting completes successfully.
> INFO:onnx-tf:Converting completes successfully.
>
> --
> You received this message because you are subscribed to the Google Groups "TensorFlow Lite" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to tflite+un...@tensorflow.org.
> To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/tflite/6e3de28e-129f-4d2c-aca0-621ae454dd90n%40tensorflow.org.



--
Regards,
Tom

Director, Vertical Technologies
Linaro.org │ Open source software for ARM SoCs
irc, slack, discord: tgall_foo

Tom Gall

unread,
May 11, 2022, 4:52:04 AM
to Rohini Jayaram, TensorFlow Lite
Hi Rohini,

The code I shared, as you've observed, is specific to YOLOv5.

In the past I've just resorted to a bit of Python to convert ONNX to
tflite: load the ONNX model, export it as a SavedModel (pb), then load
that SavedModel and go through the steps to output tflite. This
multistep process is important because tflite doesn't support all
operators; going through the TensorFlow framework lets the converter
pick the right subset of operators.

Being a bit lazy, here is a Stack Overflow answer that at a quick
glance looks right: https://stackoverflow.com/questions/63418506/converting-onnx-model-to-tensorflow-lite


On Tue, May 10, 2022 at 8:12 PM Rohini Jayaram <rohi...@gmail.com> wrote:
>
> Hi Tom,
>
> Thank you very much for your kind reply.
>
> I want to export an ONNX pretrained model (inference) to a TFLite model. Is this also possible with the command you have given? I guess this ultralytics repo is only for Yolo. What if we need to export a different model (say WDSR) from ONNX to TFLite?
>
> This is your command : python export.py --include saved_model pb tflite tfjs
>
> Thanks for your help.
> Rohini