Great question. A MIDI note-based system can only learn from notes, but ML/Magenta can be extended to learn from the raw frequencies (pitch lines or f0 fundamentals),
or even from the raw audio.
The Onsets and Frames model, which is part of Magenta, extracts piano notes because it is trained on piano music, and so it 'quantizes' the pitches to piano keyboard notes.
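To make that concrete, here is a minimal sketch (assuming librosa is installed; the filename is just a hypothetical placeholder) of extracting a continuous f0 line from a recording and seeing how much detail is lost when each frame is snapped to the nearest keyboard note:

```python
# A minimal sketch, assuming a monophonic recording "performance.wav" exists.
import numpy as np
import librosa

y, sr = librosa.load("performance.wav", sr=None, mono=True)

# pYIN gives a frame-wise fundamental-frequency estimate (NaN where unvoiced).
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# 'Quantizing' to a keyboard: round each frame to the nearest semitone.
midi = librosa.hz_to_midi(f0)                 # continuous (fractional) MIDI values
quantized = np.round(midi)                    # what a note-only representation keeps
residual_cents = 100.0 * (midi - quantized)   # the gamaka detail that gets thrown away

print("mean absolute deviation from keyboard notes (cents):",
      np.nanmean(np.abs(residual_cents)))
```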
**
Regarding Carnatic gamakams, the most breathtaking rule-based implementation is Mr. Subramanian's Gaayaka software, which is freely available to try;
I came across it only a couple of weeks ago. (Pardon me if YOU ARE Mr. Subramanian.)
As you have correctly said, Indian classical music, when it chooses a scale, has a specific tuning (non-equal-tempered microtones; e.g., Anandha Bhairavi's
Ga and Shiva Ranjani's Ga may be different, though a keyboard player is forced to use the same note for both).
In addition to the microtones, the slides and ornamentations are also chosen selectively, so as not to disturb the mood.
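To put a number on the microtone point, here is a toy calculation (the specific ratios are my own illustrative assumptions, not the authoritative tunings of those ragas): two plausible just-intonation positions for a "minor third" Ga versus the single 300-cent note an equal-tempered keyboard offers.

```python
# Toy illustration: cents of two just-intonation minor thirds vs. 12-TET's 300 cents.
import math

def cents(ratio: float) -> float:
    """Interval size in cents above the tonic (Sa)."""
    return 1200.0 * math.log2(ratio)

candidate_gas = {
    "just minor third (6/5)":          6 / 5,    # ~315.6 cents
    "Pythagorean minor third (32/27)": 32 / 27,  # ~294.1 cents
}
keyboard_ga = 300.0  # the one equal-tempered minor third

for name, ratio in candidate_gas.items():
    c = cents(ratio)
    print(f"{name}: {c:6.1f} cents (keyboard error {c - keyboard_ga:+5.1f} cents)")
```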
What amazed me was Mr. Subramanian's excellent study of all this using spectrograms, and his software that can take a melody outline and embellish it into
a slide-y flute or veena rendering.
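In that spirit, here is a toy, rule-based sketch (emphatically not Gaayaka's actual algorithm, just my own illustration) of turning a bare note outline into a continuous, slide-y pitch curve by gliding between consecutive notes:

```python
# Rule-based toy: hold each note, then glide smoothly into the next over a short slide.
import numpy as np

def outline_to_pitch_curve(notes, slide_ms=80.0, frame_ms=10.0):
    """notes: list of (midi_pitch, duration_ms); returns a frame-wise pitch curve."""
    curve = []
    for i, (pitch, dur) in enumerate(notes):
        n_frames = max(1, int(dur / frame_ms))
        hold = np.full(n_frames, float(pitch))
        if i + 1 < len(notes):
            # Replace the tail of the hold with a raised-cosine glide to the next note.
            n_slide = min(n_frames, int(slide_ms / frame_ms))
            t = np.linspace(0.0, 1.0, n_slide)
            glide = pitch + (notes[i + 1][0] - pitch) * (1 - np.cos(np.pi * t)) / 2
            hold[-n_slide:] = glide
        curve.append(hold)
    return np.concatenate(curve)

# e.g. an S R G M outline on a C tonic (MIDI 60, 62, 64, 65), 400 ms per note
print(outline_to_pitch_curve([(60, 400), (62, 400), (64, 400), (65, 400)])[:12])
```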
**
With machine learning, it is possible to learn such a conversion routine. Here is how we would go about it:
1. From an audio rendering, extract f0 (using one of the excellent continuous pitch extraction algorithms).
2. Label the discrete note outlines as in SRGM notation.
3. Use supervised learning to learn the mapping from the note outlines to pitch outlines.
Definitely doable, if someone has the time; a rough sketch of step 3 follows below.
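Something like this, perhaps (a bare-bones sketch with PyTorch; the model, the shapes, and the stand-in data are all my own assumptions, not a tested recipe):

```python
# Learn a frame-wise mapping from a quantized note outline (step 2) to the
# continuous pitch contour extracted in step 1, both in fractional semitones.
import torch
import torch.nn as nn

class OutlineToContour(nn.Module):
    """Maps a note-outline sequence (one MIDI value per frame) to a pitch contour."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # predicted pitch per frame

    def forward(self, outline):                # outline: (batch, frames, 1)
        h, _ = self.rnn(outline)
        return self.head(h)                    # (batch, frames, 1)

model = OutlineToContour()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Stand-in batch: real training pairs would come from steps 1 and 2.
outlines = torch.randint(60, 72, (8, 200, 1)).float()
contours = outlines + 0.3 * torch.randn_like(outlines)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(outlines), contours)
    loss.backward()
    optimizer.step()
    if step % 20 == 0:
        print(f"step {step}: loss {loss.item():.3f}")
```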
**
With such a population, such talent in music, and such a rich, long tradition, it is a shame that the country cannot yet recognize, support, and build upon
rare work like the Rasika/Gaayaka software. I hope the time comes soon.
Ravi