Well, yes and no!
I do think there is a place for ML in the Kivy context: on Android and iOS.
I don't think there is any point in reinventing the wheel. There are already TensorFlow and scikit-learn modules for Python, and Kivy is a Python module.
Kivy provides the UI for touch OSes such as Android and iOS, so it can serve as the control layer for ML on those OSes.
The thing about ML is that it needs horsepower. These Python modules are really written in C for speed, and that C is best executed on a GPU rather than a CPU, or on custom TensorFlow hardware as found in some Android phones. In general I don't think TensorFlow apps require a lot of float precision (is this correct?), so, speed aside, the single-precision float hardware in most phones may give sufficient functionality (?).
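To put a rough number on the precision point: single precision (float32) carries about 7 significant decimal digits versus roughly 16 for double, and in practice that is usually plenty for neural-network inference (many deployments go even lower, to float16 or int8). A quick stdlib-only sketch of the difference, round-tripping a value through 32-bit representation:

```python
import struct

def to_float32(x):
    # Round-trip a Python float (IEEE-754 double) through single precision
    return struct.unpack('f', struct.pack('f', x))[0]

x = 0.1
x32 = to_float32(x)
print(f"float64: {x:.17g}")
print(f"float32: {x32:.17g}")
# Relative error is on the order of 1e-8 -- about 7 decimal digits,
# typically enough headroom for model inference
print(f"relative error: {abs(x32 - x) / x:.3g}")
```

Whether that suffices for a given model is an empirical question, but it supports the hunch that phone float hardware is not the blocker.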
So, between reinventing the wheel and performance, I don't think a new ML module written in pure Python is something that will ring people's bells.
So I'd suggest the project be a compile of TensorFlow for Android/iOS. I imagine a build that accessed the custom hardware on a Pixel could possibly rapidly remove (both) socks, but that is just my imagination; it might even be true.
"pip3 search tensorflow" shows there is already a port for ARM64 but the version is way old. The Python/Kivy tools for Android/IOS compile have a "-requirements" mechanism for managing compiling Python modules that are implemented in C.
To get started:
1) Decide if it is feasible (and you are inspired). Read the TensorFlow build files and see if you can figure out the build options for ARM32/64 and TensorFlow hardware (keep Apple hardware in mind).
2) Either a) release compiled Python modules for these OSes and keep them updated, or b) find somebody in the Kivy developers group to explain the "--requirements" implementation.
That is my 2 cents, hope it helps you look at the issues from another angle. Full disclosure: I have a Pixel ;)