Hi! My name is Michael, and I'm clueless about ML. :D (Well, not entirely; but I'm approaching this as an integrator / systems engineer, not an ML wizard.)
I'm prototyping a widget around the current generation of ST BLE SoCs, which are supported by a developer kit compatible with the current Keil MDK-ARM for Cortex-M0+. One way to tackle classification of some relevant sensor signals would be to run TFLite model inference on the SoC itself. So I'm tinkering with that, trying to get one of the TFLite-Micro examples (magic-wand) integrated against ST's SensorDemo example.
It seems like the sensible way to get the code-gen steps done is to build the tests on the dev host, so I did that (using MSYS2 / MinGW64; that was every bit as much fun as it sounds). Now I'm working on getting the example itself plugged into the SensorDemo build, after which I'm going to hook up the accelerometer data streams, which is sure to be a joyride unto itself...
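For the data-plumbing step, my rough plan is a ring buffer that batches accelerometer samples into the fixed-size window the model expects. This is just a sketch with made-up names and sizes (I'm assuming a 128-sample, 3-axis window here; the real thing would copy into the interpreter's input tensor rather than a caller-supplied array):

```c
#include <stdbool.h>
#include <string.h>

#define WINDOW_LEN 128   /* samples per inference window (assumed) */
#define AXES 3           /* x, y, z */

/* Circular buffer holding the most recent WINDOW_LEN samples. */
static float ring[WINDOW_LEN * AXES];
static int head = 0;     /* next write slot, in samples */
static int counted = 0;  /* samples seen so far, capped at WINDOW_LEN */

/* Called from the sensor callback/ISR with one x/y/z triple. */
void accel_push(float x, float y, float z) {
    float *slot = &ring[head * AXES];
    slot[0] = x; slot[1] = y; slot[2] = z;
    head = (head + 1) % WINDOW_LEN;
    if (counted < WINDOW_LEN) counted++;
}

/* Copy the window, oldest sample first, into dst (e.g. the model's
 * input buffer). Returns false until a full window has accumulated. */
bool accel_snapshot(float *dst) {
    if (counted < WINDOW_LEN) return false;
    int oldest = head;   /* once full, head points at the oldest sample */
    memcpy(dst, &ring[oldest * AXES],
           (size_t)(WINDOW_LEN - oldest) * AXES * sizeof(float));
    memcpy(dst + (WINDOW_LEN - oldest) * AXES, ring,
           (size_t)oldest * AXES * sizeof(float));
    return true;
}
```

The idea being that the sensor callback stays cheap (one write per sample) and the main loop polls `accel_snapshot()` to decide when to kick off an inference.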
... but before I deal with that, I need to at least get the TFLite-Micro library code integrated into the cross build. Some combination of ST and Keil have helpfully provided a set of "MDK Packs" corresponding to some snapshot of TFLite-Micro, which they have labeled version "0.4.0" (presumably corresponding to https://github.com/MDK-Packs/tensorflow-pack/tree/0.4, which looks to be vintage September 2021). Is this likely to be fresh enough to be usable? Or would I be better off trying to cobble together a "library release" of my own, either as "MDK Packs" or buckets of .o files? Or should I abandon the Keil toolchain altogether and use whatever the cool kids use to cross-compile for Cortex-M0+? (And cross-debug and cross-profile; that's the sticky wicket.)
More generally, are any of youse guys shipping products (that you can talk about) around this library yet? On actual low-memory MCUs, not Raspberry Pis? If so, how?