Thanks for sharing, Pete! This is exciting: I think moving to Python for most of the project generation infrastructure will be helpful.
I wanted to share an insight you may find interesting (but may just as easily be of no use whatsoever). One of the challenges we have at Edge Impulse is that we need to provide a single library of C++ code—including TensorFlow Lite—that can be compiled for multiple different targets.
Some of these devices may have special kernel support within TF Lite. For example, some Arm cores can benefit from CMSIS-NN kernels. Because we want our library to support both Arm and non-Arm cores, we have to include the source for both kernel implementations.
The way we do this right now is:
1. Generate a TF Lite project with reference kernels
2. Generate a TF Lite project with CMSIS-NN kernels
3. Append the contents of each CMSIS-NN kernel into the same file as its reference kernel equivalent
4. Insert a preprocessor conditional that looks at a macro (e.g. 'USE_CMSIS_NN') to determine at compile time which kernel implementation to use
5. In another file, define the USE_CMSIS_NN macro and gate it on something that is provided by the target's compiler (e.g. "defined(__TARGET_CPU_CORTEX_M0) || defined(__TARGET_CPU_CORTEX_M0PLUS) || defined(__TARGET_CPU_CORTEX_M3) || defined(__TARGET_CPU_CORTEX_M4) || defined(__TARGET_CPU_CORTEX_M7)")
The resulting code (packaged along with its dependencies) can then be compiled for any supported platform.
We've found this extremely helpful, although it may be rare that other folks want to be so general purpose. Either way, I thought it might be useful to share. Perhaps it would be nice if there were an option in the build system to create this type of multi-target output when a list of targets is provided instead of a single one?
Warmly,
Dan