As announced last quarter, the TensorFlow Lite converter now uses a new version by default. The new converter:
Enables conversion of new classes of models, including Mask R-CNN, Mobile BERT, and many more
Adds support for functional control flow (enabled by default in TensorFlow 2.x)
Tracks original TensorFlow node names and Python code, and exposes them if errors occur during conversion
Leverages MLIR, Google's cutting-edge compiler technology for ML, which makes it easier to extend to accommodate feature requests
Adds basic support for models with input tensors containing unknown dimensions
Supports all existing converter functionality
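To illustrate, no extra flags are needed to use the new converter from the Python API. Below is a minimal sketch using a hypothetical toy Keras model; any Keras model or SavedModel converts the same way:

```python
import tensorflow as tf

# Hypothetical toy model for illustration only.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# The new (MLIR-based) converter is the default; no flag is required.
tflite_model = converter.convert()  # serialized .tflite flatbuffer (bytes)

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting flatbuffer can be loaded directly with the TFLite Interpreter.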
The switch to use the new converter by default was submitted at git commit 06db91.
We’ve extensively tested correctness and runtime performance against a variety of models generated by the new converter. However, if we missed something and you observe an unexpected failure or regression:
Please create a GitHub issue with the component label “TFLiteConverter.” Please include:
Command used to run the converter or code if you’re using the Python API
The output from the converter invocation
The input model to the converter
If the conversion is successful but the generated model is wrong, describe what is wrong:
Produces wrong results and/or decreased accuracy
Produces correct results, but the model is slower than the one generated by the old converter
If you are using the allow_custom_ops feature, please read the Python API and Command Line Tool documentation
Switch to the old converter by setting --experimental_new_converter=false (with the tflite_convert command-line tool) or converter.experimental_new_converter = False (with the Python API)
Thanks,