What is the tflite-micro version?


Michael O'Cleirigh

unread,
Jul 27, 2021, 3:04:13 PM
to SIG Micro
Hello, 

I'm building a MicroPython API on top of the tflite-micro C++ API, and I include a version string for the TensorFlow part so the end user can know which version of TensorFlow they are using.

Before the creation of the tflite-micro repository, I was using TFLITE_VERSION_STRING, defined in tensorflow/lite/version.h, as the version.

It would return something like "2.6.0".

However, that file no longer exists post-migration; it defined both the version string and the schema version.

The schema version is now located here:

But the version string is missing.

Are there plans to version tflite-micro, or should I switch gears and try to version based on the git commit ID I build from?
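For illustration, here is a minimal sketch (hypothetical names, not part of the tflite-micro API) of the kind of version string a binding could expose: prefer a semantic version when one is defined, and fall back to a short git hash otherwise.

```python
def runtime_version(semver=None, git_hash=None):
    """Return a user-facing version string for the runtime.

    Prefers a semantic version (e.g. "2.6.0"); falls back to a
    g-prefixed short git hash (e.g. "g1a2b3c4") when no semantic
    version is available.
    """
    if semver:
        return semver
    if git_hash:
        # Abbreviate to 7 hex digits, git's conventional short form.
        return "g" + git_hash[:7]
    return "unknown"
```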

Thanks,

Michael

Pete Warden

unread,
Jul 27, 2021, 8:04:22 PM
to Michael O'Cleirigh, SIG Micro
Hi Michael,
That's a great question! TFL Micro has always followed the wider TensorFlow versioning process only loosely, so I think it might be more honest to use the git hash. Do you think that will be confusing to users? We're keen to support the MicroPython integration, as it's a common request, so we can discuss it in more depth if you think it will be problematic.
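To make the git-hash approach concrete, a hedged sketch (hypothetical helper, Python for illustration) of parsing `git describe --tags` output into the tag, the number of commits since that tag, and the short hash a binding could report:

```python
def parse_describe(describe):
    """Split `git describe --tags` output into components.

    "v2.6.0-45-g1a2b3c4" -> ("v2.6.0", 45, "1a2b3c4")
    "v2.6.0"             -> ("v2.6.0", 0, None)  # exactly on a tag
    """
    parts = describe.rsplit("-", 2)
    if len(parts) == 3 and parts[2].startswith("g") and parts[1].isdigit():
        return parts[0], int(parts[1]), parts[2][1:]
    # Exactly on a tag: no commit count or hash suffix present.
    return describe, 0, None
```

Using `rsplit` keeps the parse correct even when the tag itself contains hyphens.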

Pete

--
You received this message because you are subscribed to the Google Groups "SIG Micro" group.
To unsubscribe from this group and stop receiving emails from it, send an email to micro+un...@tensorflow.org.
To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/micro/6aef3cd8-b658-43a4-87c1-edc8e5c9ba8cn%40tensorflow.org.

Daniel Fauvarque

unread,
Oct 14, 2021, 4:07:10 AM
to SIG Micro, SIG Micro
I have a similar issue. Do you plan to tag stable releases of TensorFlow Lite Micro in line with the corresponding TensorFlow release tags?

As an integrator, I feel more comfortable taking matching release levels from both repositories and, of course, integrating what can be considered a stable release.

Regards
Daniel

Advait Jain

unread,
Oct 14, 2021, 4:50:17 PM
to Daniel Fauvarque, SIG Micro
Thanks for the requests.

A couple of comments to illustrate where we are w.r.t. releases:
  • There isn't any mapping between TF versions and TFLM. TFLM only supports a subset of the functionality in TF and that will continue to be the case. As such, the separation of TFLM into its own repository helped make this explicit.
  • While adding a tag to the TFLM repository is feasible, there aren't any additional tests that we would do (on top of what we have as part of CI).
  • We currently only make bug fixes on the main branch, so a tagged release will basically be a snapshot in time with no updates.
  • The optimized kernel implementations are a very important part of TFLM, and having a TFLM release would require a coordinated effort with the maintainers of the optimized kernel implementations, which is not resourced at this time.
We would be happy to learn more about what users would expect to see from a release as that might help change how we think about this in the future.

Daniel Situnayake

unread,
Nov 11, 2021, 6:17:50 PM
to SIG Micro, Advait Jain, SIG Micro, dfau...@gmail.com, Kwabena W. Agyeman
Hi SIG members,

Kwabena from OpenMV and I recently wrote up an argument for bringing versioning to TFLM. You can find it in this document:


We've been chatting with Pete and Advait on the TFLM team and it sounds like there aren't any plans to start versioning any time soon, but Advait recommended sharing our document here for broader discussion.

Advait mentioned the work around moving hardware-specific code out of the main repo as a reference point:


The idea is that we could put our integrations in a separate repo with CI that can be used to understand when changes cause problems. I think this is a good approach generally, but I'm not sure it will help with API breakage, since with the integration code in a different repo the TFLM team won't "feel the pain" of breakage until the changes have already been submitted.

Big thanks to Pete and Advait for engaging with our discussion so far. Would love to hear the community's thoughts on this issue!

Warmly,
Dan

Daniel Situnayake

unread,
Nov 18, 2021, 12:52:30 PM
to SIG Micro, Daniel Situnayake, Advait Jain, SIG Micro, dfau...@gmail.com, Kwabena W. Agyeman
Fredrik just added a great suggestion to the document:

  • Numerical consistency between TFL and TFLM runtimes. It would be great if an end user could be guaranteed that the TFL runtime of a given version matches (numerically) a certain version of the TFLM runtime. It would be sufficient to limit this numerical aspect to the behavior of reference kernels, since optimized kernels may behave differently. For example: “TFL x.y.z matches TFLM x.y.z”, or “TFL x.y.z matches TFLM a.b.c”
This is definitely something a lot of people have asked us about (e.g. which TFLM "version" is compatible with which TensorFlow/TF Lite version).
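As a sketch of what "numerically matches" could mean in practice (hypothetical helper, plain Python; a real check would run both interpreters on the same model and inputs and compare their output tensors):

```python
def outputs_match(ref_out, micro_out, atol=1e-5):
    """Compare two runtimes' flattened output tensors element-wise.

    Returns True when the outputs have the same length and every
    pair of elements differs by at most `atol`. Intended for
    reference-kernel comparisons; optimized kernels may
    legitimately differ by more.
    """
    if len(ref_out) != len(micro_out):
        return False
    return all(abs(a - b) <= atol for a, b in zip(ref_out, micro_out))
```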

Warmly,
Dan

Andrew Cavanaugh

unread,
Jun 29, 2022, 3:22:42 PM
to SIG Micro, d...@edgeimpulse.com, Advait Jain, SIG Micro, dfau...@gmail.com, Kwabena W. Agyeman
Are the reference kernels in the micro repo simply not versioned at all in the current framework? I cannot find that written down anywhere in the code the way it is in the main TF repo:


In the meantime, is there a best practice or workaround for figuring out which version of an operator the micro repo is on? I was hoping to update an op locally by following this guide and doing some cherry-picking:


There doesn't appear to be a well-labeled commit bumping the operator version, and even following git blames on the main TF repo doesn't tell me when CONV2D_v3 was added.

Am I missing something obvious here?

Advait Jain

unread,
Jul 22, 2022, 4:21:39 PM
to Andrew Cavanaugh, SIG Micro, d...@edgeimpulse.com, dfau...@gmail.com, Kwabena W. Agyeman
On Wed, Jun 29, 2022 at 12:22 PM Andrew Cavanaugh <andrewc...@gmail.com> wrote:
Are the reference kernels in the micro repo simply not versioned at all in the current framework? I cannot find that written down anywhere in the code the way it is in the main TF repo:


You are right. The kernels in the micro repository are not versioned.
 

In the meantime, is there a best practice or workaround for figuring out which version of an operator the micro repo is on? I was hoping to update an op locally by following this guide and doing some cherry-picking:


There doesn't appear to be a well-labeled commit bumping the operator version, and even following git blames on the main TF repo doesn't tell me when CONV2D_v3 was added.

Am I missing something obvious here?

For TFLM, inspecting the code and/or running a model through the interpreter is the only way to know what features are supported for an op at any given time. If some functionality is missing in micro that is supported in Lite, we are open to PRs that bring the micro kernel up to date (without any explicit version change).
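A hedged sketch of the "run a model through the interpreter" approach (hypothetical helper names; in practice `run_model` would wrap TFLM interpreter setup and invocation, and failures would surface as allocation or registration errors):

```python
def probe_op_support(run_model, test_models):
    """Probe runtime support by attempting to run candidate models.

    `run_model` is expected to raise on unsupported ops or missing
    features; each model name is mapped to True (ran successfully)
    or False (raised).
    """
    results = {}
    for name, model in test_models.items():
        try:
            run_model(model)
            results[name] = True
        except Exception:
            results[name] = False
    return results
```

Run once per toolchain update and diff the results to spot regressions in op coverage.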