Hi Rohini,
You could also export the model directly to TFLite instead of converting it from ONNX.
See:
https://github.com/ultralytics/yolov5/releases
Note the command: python export.py --include saved_model pb tflite tfjs
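If you'd rather stay on the ONNX route, the SavedModel directory that onnx-tf wrote (the one containing saved_model.pb) should be convertible with TensorFlow's own TFLiteConverter. A minimal sketch — the AddOne module here is just a hypothetical stand-in so the example is self-contained; in practice you would point from_saved_model at your ./output directory instead:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the SavedModel that onnx-tf produced;
# a tiny tf.Module keeps this sketch runnable on its own.
class AddOne(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def __call__(self, x):
        return x + 1.0

tf.saved_model.save(AddOne(), "demo_saved_model")

# For the real conversion, replace "demo_saved_model" with the
# directory onnx-tf created (the one containing saved_model.pb).
converter = tf.lite.TFLiteConverter.from_saved_model("demo_saved_model")
tflite_bytes = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)

# Quick sanity check: run the converted model with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones((1, 4), np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).tolist())  # → [[2.0, 2.0, 2.0, 2.0]]
```

The interpreter check at the end is an easy way to confirm the converted model actually runs before dropping it into an app.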
On Tue, May 10, 2022 at 1:03 AM Rohini <rohi...@gmail.com> wrote:
>
> Hi,
>
> What are the steps to convert an ONNX model to a TFLite model? It looks like we need to first convert ONNX to TensorFlow and then TensorFlow to TFLite. Are there standard tools or procedures for this conversion? If so, which versions of TF and TFLite do I need to install, and what are the steps? Thanks for your help.
>
> By googling, I figured out that we need to install TensorFlow 2.9 in order to make the onnx-tf command work, which converts ONNX to TF. But I am still wondering whether onnx-tf is fully implemented.
>
> These are the warnings I see.
>
> After this command, how do I convert the output of onnx-tf to TFLite? onnx-tf created a saved_model.pb file and also a variables directory containing variables.data-00000-of-00001 and variables.index. The assets directory is empty.
>
> onnx-tf convert --infile ./yolov5n.onnx --outdir output
> 2022-05-09 16:41:21.470109: I tensorflow/core/util/util.cc:168] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
>
> /home/.../.local/lib/python3.8/site-packages/tensorflow_addons/utils/ensure_tf_install.py:53: UserWarning: Tensorflow Addons supports using Python ops for all Tensorflow versions above or equal to 2.6.0 and strictly below 2.9.0 (nightly versions are not supported).
>
> The versions of TensorFlow you are currently using is 2.9.0-rc2 and is not supported.
> Some things might work, some things might not.
> If you were to encounter a bug, do not file an issue.
> If you want to make sure you're using a tested and supported configuration, either change the TensorFlow version or the TensorFlow Addons's version.
>
> You can find the compatibility matrix in TensorFlow Addon's readme:
>
> https://github.com/tensorflow/addons
> warnings.warn(
> 2022-05-09 16:41:22,586 - onnx-tf - INFO - Start converting onnx pb to tf saved model
> 2022-05-09 16:41:22.965584: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
> Your kernel may have been built without NUMA support.
> [the same could-not-open-NUMA-node message repeats several more times]
> 2022-05-09 16:41:23.010737: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
> To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
> 2022-05-09 16:41:23.655955: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Could not identify NUMA node of platform GPU id 0, defaulting to 0. Your kernel may not have been built with NUMA support.
> 2022-05-09 16:41:23.656819: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1620 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3050 Ti Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6
> WARNING:absl:Found untraced functions such as gen_tensor_dict while saving (showing 1 of 1). These functions will not be directly callable after loading.
> 2022-05-09 16:41:30,450 - onnx-tf - INFO - Converting completes successfully.
> INFO:onnx-tf:Converting completes successfully.
>
> --
> You received this message because you are subscribed to the Google Groups "TensorFlow Lite" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to tflite+un...@tensorflow.org.
> To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/tflite/6e3de28e-129f-4d2c-aca0-621ae454dd90n%40tensorflow.org.
--
Regards,
Tom
Director, Vertical Technologies
Linaro.org │ Open source software for ARM SoCs
irc, slack, discord: tgall_foo