Many of the generative models in Magenta.js require music to be input in a symbolic representation such as MIDI. But what if you only have audio?
We have just finished porting our piano transcription model, Onsets and Frames, to JavaScript using TensorFlow.js, and have added it to the Magenta.js library in v1.2. Now you can input audio of solo piano performances and have it automatically converted to MIDI in the browser.
Try out the demo app Piano Scribe shown below to see the library in action for yourself. If you don’t have recordings of a piano handy, you can try singing to it, and it will do its best!

Learn how to use the library in your own app in the documentation and share what you make using #madewithmagenta!
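
For reference, here is a minimal sketch of what transcription might look like in your own page with @magenta/music. The checkpoint URL, the transcribeFromAudioFile method, and the element id are from my reading of the Magenta.js docs and are worth double-checking against the current documentation:

import * as mm from '@magenta/music';

// Hosted checkpoint path as I recall it from the Magenta docs; verify
// before relying on it.
const CHECKPOINT =
    'https://storage.googleapis.com/magentadata/js/checkpoints/transcription/onsets_frames_uni';

async function transcribeFile(file: Blob): Promise<mm.NoteSequence> {
  const model = new mm.OnsetsAndFrames(CHECKPOINT);
  await model.initialize();  // downloads the model weights
  // Transcribe the audio into a NoteSequence, Magenta's symbolic
  // music representation, which can then be converted to MIDI.
  const ns = await model.transcribeFromAudioFile(file);
  model.dispose();  // free GPU memory held by TensorFlow.js
  return ns;
}

// Hypothetical usage: wire it to a file input on the page.
document.querySelector<HTMLInputElement>('#audio-input')!
    .addEventListener('change', async (e) => {
      const file = (e.target as HTMLInputElement).files?.[0];
      if (!file) return;
      const ns = await transcribeFile(file);
      console.log(`Transcribed ${ns.notes.length} notes`);
    });

If you want a downloadable .mid file, the resulting NoteSequence can be serialized to MIDI bytes with mm.sequenceProtoToMidi (again, check the docs for the exact helper).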
I don't have an answer here, just sharing that I'm experiencing something similar.
My runs show that, on my Windows device, it can only transcribe .wav files no longer than 8 seconds.
Here are the logs of my runs - https://docs.google.com/spreadsheets/d/1NS_IF38j0S3EY2O20ItYrzkH9ppMCtGoS5zQYh5pIDI/edit?usp=drivesdk
I am still trying different environments, and reading the paper to try to figure out how to make it work.
I haven't thoroughly read all the related documents, so forgive the lazy question, but I'd like to ask the app's dev team: does this web app have any minimum system requirements?