Hello,
I'm working on a MicroPython implementation of the micro_speech example.
The out-of-the-box example requires the WAV-to-MFCC conversion to be done outside the tensor layers.
Within that code there is an op called 'AudioMicrofrontend':
There is also a C++ MFCC op defined in TensorFlow Lite:
There is also a C++ implementation in main TensorFlow:
Could someone comment on the differences between these three TensorFlow implementations?
Since they appear to exist at the regular, Lite, and Micro levels, why are they not in the out-of-the-box set of supported convertible ops?
At the moment, as a first step, I just want to pass through to the same call that the micro_speech example uses, but longer term I thought it could make sense to figure out how to support the MFCC tensor op directly.
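For reference, here is a rough sketch of what that external WAV-to-MFCC step involves, in plain NumPy. This is not the micro_speech frontend itself (that pipeline has its own window, filterbank, and quantization settings, and newer versions use filterbank energies rather than full MFCCs); the parameter values below are illustrative assumptions only.

```python
import numpy as np

def mel_filterbank(num_filters, fft_len, sample_rate,
                   low_hz=125.0, high_hz=3800.0):
    """Triangular mel filterbank of shape (num_filters, fft_len // 2 + 1)."""
    def hz_to_mel(hz):
        return 2595.0 * np.log10(1.0 + hz / 700.0)
    def mel_to_hz(mel):
        return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)
    mel_points = np.linspace(hz_to_mel(low_hz), hz_to_mel(high_hz),
                             num_filters + 2)
    bin_idx = np.floor((fft_len + 1) * mel_to_hz(mel_points)
                       / sample_rate).astype(int)
    fbank = np.zeros((num_filters, fft_len // 2 + 1))
    for i in range(num_filters):
        left, center, right = bin_idx[i], bin_idx[i + 1], bin_idx[i + 2]
        for b in range(left, center):          # rising edge of triangle
            fbank[i, b] = (b - left) / max(center - left, 1)
        for b in range(center, right):         # falling edge of triangle
            fbank[i, b] = (right - b) / max(right - center, 1)
    return fbank

def mfcc(signal, sample_rate=16000, frame_len=480, frame_step=320,
         num_filters=40, num_coeffs=13):
    """MFCC pipeline: frame -> window -> power spectrum -> mel -> log -> DCT-II.

    frame_len=480 / frame_step=320 correspond to 30 ms windows with a
    20 ms hop at 16 kHz (illustrative values, not necessarily what the
    micro_speech model was trained with).
    """
    num_frames = 1 + max(0, (len(signal) - frame_len) // frame_step)
    window = np.hanning(frame_len)
    fbank = mel_filterbank(num_filters, frame_len, sample_rate)
    # DCT-II basis used to decorrelate the log mel energies
    n = np.arange(num_filters)
    dct_basis = np.cos(np.pi * np.outer(np.arange(num_coeffs),
                                        2 * n + 1) / (2 * num_filters))
    coeffs = np.empty((num_frames, num_coeffs))
    for t in range(num_frames):
        frame = signal[t * frame_step:t * frame_step + frame_len] * window
        power = np.abs(np.fft.rfft(frame, n=frame_len)) ** 2
        log_mel = np.log(fbank @ power + 1e-6)  # small floor avoids log(0)
        coeffs[t] = dct_basis @ log_mel
    return coeffs
```

On a full MicroPython port this would of course need to be rewritten without NumPy (or against the Microfrontend C library directly); the sketch is just to pin down which computation the missing op would have to cover.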
Thanks for any help on this,
Michael