Pre-trained models


Curtis "Fjord" Hawthorne

Sep 14, 2016, 8:04:21 PM
to Magenta Discuss
Hi all,

We've just released pre-trained checkpoint bundles for the 3 models we have published. These models have been trained on thousands of midi files and should make it easier to get started playing around with generating new midi sequences.

If you have Docker installed, generating some midi sequences is as easy as:

docker run -it -p 6006:6006 -v /tmp/magenta:/magenta-data tensorflow/magenta
bazel run //magenta/models/lookback_rnn:lookback_rnn_generate -- \
--bundle_file=/magenta-models/lookback_rnn.mag \
--output_dir=/magenta-data/lookback_rnn/generated \
--num_outputs=10 \
--num_steps=128 \
--primer_melody="[60]"

More details are in our main README, and you can find links to the pre-trained checkpoints on the documentation pages for each of the models: Basic RNN, Lookback RNN, Attention RNN.

Happy generating!

-Fjord

NetDFS Com

Sep 15, 2016, 3:24:22 AM
to Curtis Fjord Hawthorne, Magenta Discuss
Thanks for providing this, because training can be quite time-consuming.


Frank Brinkkemper

Sep 19, 2016, 11:11:34 AM
to Magenta Discuss
Hi,

Thanks for this! I got some nice-sounding melodies out of it, especially compared to my own trained model.

Using the same primer gives very different results, as expected, but the quality also differs greatly. In 10 runs, 2 or 3 produce a proper melody, while the rest don't really sound like a melody; they have long silences, for example.

Could this be because the conversion from MIDI files to the training set is currently not lossless (i.e. multiple NoteSequences generated from a single MIDI file, and other lossy transformations), so the training set is not really what was originally meant to be trained on?

I used the Attention RNN for this, by the way.



On Thursday, September 15, 2016 at 02:04:21 UTC+2, Curtis Hawthorne wrote:

Curtis "Fjord" Hawthorne

Sep 19, 2016, 12:45:15 PM
to Frank Brinkkemper, Magenta Discuss, ellio...@google.com
The process of translating MIDI files to training data is certainly not lossless. The different models have different encoding schemes, and we quantize the data before it goes into training. We have also noticed the issue of long silences or stretches of repeated notes in the output. We're not completely sure what causes that. It could be due to a problem with our input data or how we process it, or it could be due to the model itself.
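As a concrete illustration of one lossy step, here's a minimal sketch of onset quantization (the grid size and tempo are illustrative, not the exact settings we use):

STEPS_PER_QUARTER = 4  # 16th-note grid (illustrative)

def quantize_onset(onset_seconds, qpm=120.0, steps_per_quarter=STEPS_PER_QUARTER):
    # Snap a note onset (in seconds) to the nearest step on the grid;
    # any timing finer than the grid is discarded.
    seconds_per_step = 60.0 / qpm / steps_per_quarter
    return int(round(onset_seconds / seconds_per_step))

# Two distinct onsets collapse onto the same step, so they become
# indistinguishable in the training data.
print(quantize_onset(0.51), quantize_onset(0.55))  # both map to step 4 at 120 qpm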

+Elliot Waite do you have thoughts on this?


Kyle Kastner

Sep 19, 2016, 2:04:16 PM
to Curtis Fjord Hawthorne, Frank Brinkkemper, Magenta Discuss, ellio...@google.com
Repetition in the output is an extremely common 'failure mode' of RNN models run for generation. See for example this Twitter discussion about the best RNNLM in the game today (courtesy of Brain!): it has similar issues even though it is *by far* the best LM out there with respect to perplexity! https://twitter.com/tallinzen/status/776406902867578884

Usually you get around this in NLP with beam search or some kind of fanciness in the training or the decode (such as sequence-level training, https://research.facebook.com/publications/sequence-level-training-with-recurrent-neural-networks/), but in the case of Magenta that would be pretty difficult, I think.

One common trick used in other areas is to keep track of what has been put out over the last n timesteps (think moving windows of 1-, 2-, 3-, 4-, and 5-grams over the last T timesteps), and if the new output is a repetition of what happened recently, resample until something new happens, turn up the temperature, or apply any of a number of other heuristics. I think this kind of thing can be done fairly easily for specific applications, but doing it in a general enough way for Magenta users might be very tough.
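For example, a rough sketch of that resample-and-reheat heuristic (the model call logits_fn is a placeholder, and the window and n-gram sizes are just illustrative; nothing here is an existing Magenta API):

import numpy as np

def collect_recent_ngrams(events, max_n):
    # Every 1- to max_n-gram that occurs in the recent window.
    grams = set()
    for n in range(1, max_n + 1):
        for i in range(len(events) - n + 1):
            grams.add(tuple(events[i:i + n]))
    return grams

def sample_with_repetition_check(logits_fn, history, window=32, max_n=5,
                                 temperature=1.0, max_retries=10):
    # Sample one event; if it would reproduce an n-gram seen in the recent
    # window, resample with a slightly hotter temperature. Note the 1-gram
    # check bans any recently used event, which may be too strict for music.
    recent = list(history[-window:])
    grams = collect_recent_ngrams(recent, max_n)
    temp = temperature
    event = None
    for _ in range(max_retries):
        logits = np.asarray(logits_fn(history), dtype=np.float64)  # placeholder model call
        probs = np.exp((logits - logits.max()) / temp)
        probs /= probs.sum()
        event = int(np.random.choice(len(probs), p=probs))
        tail = recent + [event]
        repeats = any(tuple(tail[-n:]) in grams
                      for n in range(1, min(max_n, len(tail)) + 1))
        if not repeats:
            return event
        temp *= 1.2  # "turn up the temperature" and try again
    return event  # give up after max_retries and keep the last sample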

Another simple method that can also work is class reweighting during training and/or generation. Unfortunately this too is fraught with peril, as you are effectively changing the importance of different data. Sometimes it is used in conjunction with the above heuristics to avoid outputs that have already happened.
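A minimal sketch of reweighting at generation time, assuming an illustrative per-class weight vector (say, down-weighting a silence/no-event class); this isn't an existing Magenta option, just the general idea:

import numpy as np

def reweighted_sample(logits, class_weights, temperature=1.0):
    # Scale each class probability by its weight before sampling;
    # equivalent to adding log(weight) to the corresponding logit.
    logits = np.asarray(logits, dtype=np.float64) / temperature
    logits = logits + np.log(np.asarray(class_weights, dtype=np.float64))
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))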

One approach I explored a bit this summer for my polyphonic work was NPAD by Kyunghyun Cho (https://arxiv.org/abs/1605.03835), but an issue in music is that we have no follow-up metric for choosing the best "beam" from the group without some kind of hand-coded heuristic (NLP has BLEU, METEOR, and so on). I also wonder whether adding time-wise skip connections (cf. http://arxiv.org/abs/1602.08210) could help avoid some of this issue, or adding noise/dropout on specific parts of the recurrent connection.
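To make the missing-metric problem concrete, here is a rough beam-search skeleton; the model call and the final scorer are placeholders, and the "count distinct events" re-ranking is only there to show where a hand-coded heuristic would have to go:

import numpy as np

def log_softmax(logits):
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def beam_search(logits_fn, primer, num_steps, beam_width=8):
    # Each beam is (event sequence, cumulative log-probability).
    beams = [(list(primer), 0.0)]
    for _ in range(num_steps):
        candidates = []
        for seq, score in beams:
            log_probs = log_softmax(np.asarray(logits_fn(seq), dtype=np.float64))
            for event in np.argsort(log_probs)[-beam_width:]:
                candidates.append((seq + [int(event)], score + log_probs[event]))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    # Pure log-probability tends to favour repetitive beams, so the final pick
    # needs some hand-coded heuristic (here just the number of distinct events).
    return max(beams, key=lambda b: len(set(b[0])))[0]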

No answers really, but some stuff to think about!

Elliot Waite

Sep 19, 2016, 6:19:07 PM
to Kyle Kastner, Curtis Fjord Hawthorne, Frank Brinkkemper, Magenta Discuss
Also, the default script settings were used when extracting melodies from the MIDI files, which allowed gaps of silence of less than a bar within a melody. This means some of the melodies could even be just a single note per bar. Perhaps it would be a good idea to try a lower gap value, maybe only allowing half-bar or quarter-bar gaps, to filter the training dataset down to only the denser melodies. I'll try creating a new checkpoint with the quarter-bar gap limit to see how much it helps.
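Roughly, the kind of density filter I mean, assuming a step-level melody encoding with a no-event marker (the -2 value and 16 steps per bar are illustrative, not necessarily what the extraction script uses):

NO_EVENT = -2        # "nothing happens this step" marker (illustrative)
STEPS_PER_BAR = 16   # 4 steps per quarter note in 4/4 time (illustrative)

def longest_silent_gap(melody_events):
    # Length, in steps, of the longest run of no-event steps.
    longest = run = 0
    for event in melody_events:
        run = run + 1 if event == NO_EVENT else 0
        longest = max(longest, run)
    return longest

def keep_dense_melodies(melodies, max_gap_bars=0.25):
    # Keep only melodies whose longest silent gap is at most a quarter bar.
    max_gap_steps = int(max_gap_bars * STEPS_PER_BAR)
    return [m for m in melodies if longest_silent_gap(m) <= max_gap_steps]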



Johnny Lu

Mar 22, 2017, 7:54:43 PM
to Magenta Discuss
Hi everyone,

I was wondering if there's some kind of repository for sharing and trying out pre-trained models that people have created. I've been looking around a bit and have seen them scattered about, but I haven't found anywhere that does a good job of aggregating them.

Thanks in advance!