Idea for an AI Music Platform


Daniel Chung

Mar 10, 2022, 11:15:39 AM
to Magenta Discuss
Hi there!

I have an idea for an open-source application and commercialisable platform that builds on the work I've seen with projects like Magenta and MuseNet, and I was wondering to whom I might speak if I'd like to find help and/or collaborators in developing it.

Here's a high-level summary of my idea:

First off, I believe the advances we've seen in AI-generated music are pushing artists towards involving their audiences as active participants in their musical experiences.

How can this be done? Imagine a tool that empowers listeners to generate personalised music in the styles of their favourite artists. All while protecting copyrights and remuneration for copyrighted artists, by allowing listeners to subscribe to regular releases of training data by active artists, or to outright purchase training data from inactive artists.

What would be the first step? I'd like to build a proof-of-concept machine learning composer application with free training data from famous long-dead composers. Participants would choose the composers' styles they want in their music, and get unlimited music for free.

Thank you for your time,

Daniel C.


Mar 10, 2022, 12:44:47 PM
to Daniel Chung, Magenta Discuss
You just want to distribute training data?

Or, you want to distribute AI generated music?

You need to decide what format your training data will be in. Are you training on raw audio, sheet music encoded as text, MIDI…

Your training data should go into a database where it could be filtered by artist or genre… which means it should be labeled.
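That labeled, filterable store could be sketched with Python's built-in sqlite3. The schema and the token strings here are hypothetical placeholders, just to show the filtering step:

```python
import sqlite3

# Hypothetical schema: one row per training example, labeled so it
# can be filtered by artist or genre before assembling a training set.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE training_data ("
    " id INTEGER PRIMARY KEY,"
    " artist TEXT NOT NULL,"
    " genre  TEXT NOT NULL,"
    " tokens TEXT NOT NULL)"  # e.g. a serialized token sequence
)
conn.executemany(
    "INSERT INTO training_data (artist, genre, tokens) VALUES (?, ?, ?)",
    [
        ("Chopin", "romantic", "piano:64:60 piano:72:64"),
        ("Bach",   "baroque",  "piano:70:48 piano:70:55"),
    ],
)

# Filter by artist when building a training set.
rows = conn.execute(
    "SELECT tokens FROM training_data WHERE artist = ?", ("Chopin",)
).fetchall()
print(len(rows))  # 1
```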

It would be good to have a prototype algo for generating music.

I have not participated in this field for many years, but when I tried to generate music from Alicia Keys MIDI files using an RNN, the output sounded like an alien wrote it.

I’m suspicious that you could never get enough training data from a single artist.

Sent from my iPhone


Daniel Chung

Mar 10, 2022, 2:01:44 PM
to Magenta Discuss, Daniel Chung

Thank you for your response 😊 Alright, no nonsense, I'll do my best to address your excellent points!

Firstly, I want to distribute training data, not music. I have seen commercial projects that use stems, but I plan to distribute training data as MIDI-based tokens, as discussed in the MuseNet article (these tokens currently encode instrument, velocity, and pitch). Users would train their models on these data. I believe I could leverage the OpenAI API and their new GPT-3 technology, as the MuseNet team did with GPT-2. I will also look into Magenta's research on long-form composition to create 2-minute songs. I previously reached out to the OpenAPI team asking for guidance, and they said they couldn't help me because MuseNet isn't part of their API offering (understandably).
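To make the token idea concrete: instrument, velocity, and pitch can be folded into a single vocabulary index. This is only an illustration; the vocabulary sizes below are made up, and the actual MuseNet encoding differs:

```python
# Assumed vocabulary sizes (illustrative, not MuseNet's real ones).
N_INSTRUMENTS = 16   # instrument slots
N_VELOCITIES = 32    # MIDI velocity (0-127) bucketed into 32 bins
N_PITCHES = 128      # standard MIDI pitch range

def encode(instrument: int, velocity: int, pitch: int) -> int:
    """Fold (instrument, velocity bin, pitch) into one token id."""
    vel_bin = velocity * N_VELOCITIES // 128
    return (instrument * N_VELOCITIES + vel_bin) * N_PITCHES + pitch

def decode(token: int) -> tuple[int, int, int]:
    """Invert encode(), returning (instrument, velocity bin, pitch)."""
    pitch = token % N_PITCHES
    rest = token // N_PITCHES
    return rest // N_VELOCITIES, rest % N_VELOCITIES, pitch

tok = encode(instrument=0, velocity=100, pitch=60)
print(decode(tok))  # (0, 25, 60)
```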

I plan to keep a database of training sets organised primarily by artist. And my current thinking is that a user would select a few artists, and the data from these artists would be pooled without distinguishing between who's who in the set. (Which raises the question of whether differing proportions of training data yield substantially different results.)
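A toy sketch of that pooling step, with hypothetical per-artist weights as one way to experiment with differing proportions (names and sequences are placeholders):

```python
import random

# Token-sequence corpora for the artists a user selected (stand-ins).
corpus = {
    "artist_a": ["seq_a1", "seq_a2", "seq_a3"],
    "artist_b": ["seq_b1", "seq_b2"],
}

def pooled_sample(corpus, weights, k, seed=0):
    """Draw k sequences; each draw first picks an artist by weight,
    so the pool's proportions can be tuned without labeling the set."""
    rng = random.Random(seed)
    artists = list(corpus)
    probs = [weights[a] for a in artists]
    return [rng.choice(corpus[rng.choices(artists, probs)[0]])
            for _ in range(k)]

batch = pooled_sample(corpus, {"artist_a": 0.7, "artist_b": 0.3}, k=8)
print(len(batch))  # 8
```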

I have only just begun learning the basics of deep learning from the TensorFlow self-guided curriculum, and since I'm at least months away from building a prototype, I wanted to ask for help and/or collaborators. We probably aren't the first to have come up with this.

The paucity of data is one of my biggest concerns too! It seems like the MuseNet team needed an artist's lifetime corpus. But they could also generate music in the style of less prolific artists. The only workaround I can think of is to fill in the gaps with a basic training set or with other artists' data.

I also want to mention that I want everything to be open-source, because the real value of the platform is in the artists who choose to adopt it. And this is a passion project for me, because I want to help make the future!


Daniel C.

Daniel Chung

Mar 10, 2022, 2:08:54 PM
to Magenta Discuss, Daniel Chung
Correction: "I previously reached out to the OpenAI team asking for guidance" (not "OpenAPI").

Jonathan Belanger

Mar 10, 2022, 7:49:31 PM
to Magenta Discuss


I've been working on something similar. Not open source. Feel free to get in touch if you ever want to chat more. jbb at jbb (d0t) dev.

Re: the question about Alicia Keys and training... if we try to build a model from scratch, then yes, I agree... but methinks it is possible by using larger, more generic pre-trained models and running an artist's music through _that_ model. Think about the success t3x has had in transcribing multi-instrumental music... by leveraging a large text model!
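One way to picture the pre-trained-model route. Everything here is a stand-in: random data, a frozen random "feature extractor" in place of a large pre-trained network, and a least-squares head in place of real fine-tuning; the point is only that the big model's weights stay fixed while a small part is fit on the scarce single-artist data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained" features: never updated during fine-tuning.
W_base = rng.normal(size=(16, 8))
def features(x):
    return np.tanh(x @ W_base)

# A scarce single-artist dataset (toy placeholders).
X = rng.normal(size=(32, 16))  # inputs
y = rng.normal(size=(32, 8))   # targets

# Fit only a small head on top of the frozen features.
F = features(X)
W_head, *_ = np.linalg.lstsq(F, y, rcond=None)
print(W_head.shape)  # (8, 8)
```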

We're still scratching the surface... many possibilities!

Look forward to connecting with you all.

Daniel Chung

Mar 11, 2022, 11:51:35 AM
to Magenta Discuss, Daniel Chung
Hi jb,

Sounds promising, I'll be in touch shortly, but do let me know if I could be of help in your project! My background is in programming and IT, but not AI/machine learning. I'm somewhat active in the Sonic-Pi community, where in the past I've developed various open source algorithmic music projects exploring polyphony. I've also produced music for years as a hobby, recently incorporating Magenta Studio in my work.

I agree that such generic pre-trained models could mitigate the problem of scarce training sets. Definitely an idea to iterate upon, once there's a working prototype.

I foresee several problems in dealing with multi-instrumental music:
- With the exception of the piano, most instruments' MIDI data include "continuous controller" values for natural performances (the big CCs being expression, dynamics, and vibrato). The simple token format used by MuseNet won't cut it for realistic performances.
- Convincing orchestrations require high-quality virtual instruments, whether they be sample-based or model-based. Either way, they won't be cheap to license. VSCO 2 Community Edition would only be bearable for a proof of concept.
- Contemporary music makes extensive use of audio samples, and acquiring these samples would require negotiation with sample providers, if we can't find them for free.
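On the first point: a continuous-controller event is just three MIDI bytes, which a pitch/velocity-only token vocabulary has no slot for. A stdlib-only sketch (the controller-to-parameter mapping varies by instrument library):

```python
# Common controllers mentioned above (mappings vary by library):
CC_MOD_WHEEL = 1    # frequently used for vibrato or dynamics
CC_EXPRESSION = 11  # expression

def control_change(channel: int, controller: int, value: int) -> bytes:
    """Encode one MIDI Control Change message as raw bytes:
    status 0xB0 | channel, then controller number, then a 0-127 value."""
    assert 0 <= channel < 16 and 0 <= controller < 128 and 0 <= value < 128
    return bytes([0xB0 | channel, controller, value])

msg = control_change(channel=0, controller=CC_EXPRESSION, value=100)
print(msg.hex())  # "b00b64"
```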

The possibilities are indeed endless, and I think the time has come for this idea. An MVP should focus on the low-hanging fruit, like piano reductions. Do it well enough, and people might be willing to fund increasingly ambitious projects. Only then will artists sign up to be distributed.


Daniel C.

Daniel Chung

Mar 11, 2022, 12:32:51 PM
to Magenta Discuss, Daniel Chung
I just recalled that there may be an easier way to perform multi-instrumental works, one that might bypass CCs and expensive virtual instruments: a deal could be negotiated with NotePerformer to use their "Artificial Intelligence-based playback engine".

Trade Sharp

Apr 12, 2022, 6:54:59 AM
to Magenta Discuss
I've also had a similar idea lately. I actually know the owner of a fairly large music distributor and am in discussions with him on getting access to his artists' music for training data. It would be very interesting to see where this field can evolve.

My thought is: pay the artists royalties each time their music is used as training data. Then companies can wholesale the training data and retail high-quality AI songs generated from it.

Ultimately the project will require some talented AI engineers to get the right models made.

My ultimate goal is to have the world's 'best'-sounding AI music: commercial quality for licensing and resale.