Yay, amazing! Can't wait to try it out.

Sean
3) Sample from the model.
# Construct a chord conditioning vector (skip if not using chord conditioning).
index = 1 # 1-12 = major, 13-24 = minor, 25-36 = augmented, 37-48 = diminished
c_input = np.zeros([TOTAL_LENGTH, NUM_CHORDS])
c_input[0, 0] = 1.0
c_input[1:, index] = 1.0
# Generate samples.
seqs = model.sample(1, TOTAL_LENGTH, temperature=TEMPERATURE, c_input=c_input)
# Write to MIDI.
mm.sequence_proto_to_midi_file(seqs[0], 'sample.mid')
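For reference, here is a minimal sketch of the setup the snippet above assumes; the config name, checkpoint path, and constant values are placeholder assumptions standing in for steps 1 and 2, which are not quoted in this message:

import numpy as np
import magenta.music as mm
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

# Placeholder values -- substitute whatever steps 1 and 2 produced.
CONFIG_NAME = 'hier-multiperf_vel_1bar_med_chords'  # assumed chord-conditioned config
CHECKPOINT_PATH = '/path/to/your/checkpoint'        # trained or downloaded checkpoint
TOTAL_LENGTH = 512   # sequence length the config expects
NUM_CHORDS = 49      # a no-chord class plus the 48 chord classes in the comment above
TEMPERATURE = 0.5    # softmax temperature used when sampling

# Load the trained model from the checkpoint.
model = TrainedModel(
    configs.CONFIG_MAP[CONFIG_NAME],
    batch_size=4,
    checkpoint_dir_or_path=CHECKPOINT_PATH)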
That should be enough to get you started. Let me know if things aren't working.
-Ian
Thanks!
tensorflow (1.8.0)
magenta (0.3.8)
!convert_dir_to_note_sequences \
--input_dir=RancidMidi \
--hparams=sampling_rate=1000.0 \
--output_file=tmp/notesequences_RancidMidi.tfrecord \
--recursive
!music_vae_train \
--config=hier-multiperf_vel_1bar_med \
--run_dir=/tmp/music_vae/ \
--hparams=batch_size=32,learning_rate=0.0005 \
--mode=train \
--examples_path=tmp/notesequences_RancidMidi.tfrecord
INFO:tensorflow:Reading examples from: tmp/notesequences_RancidMidi.tfrecord
INFO:tensorflow:Building MusicVAE model with HierarchicalLstmEncoder, HierarchicalLstmDecoder, and hparams:
{'learning_rate': 0.0005, 'decay_rate': 0.9999, 'use_cudnn': False, 'free_bits': 0.0, 'sampling_rate': 0.0, 'conditional': True, 'batch_size': 32, 'clip_mode': 'global_norm', 'residual_decoder': False, 'dec_rnn_size': [512, 512, 512], 'beta_rate': 0.0, 'grad_norm_clip_to_zero': 10000, 'dropout_keep_prob': 1.0, 'min_learning_rate': 1e-05, 'max_seq_len': 512, 'max_beta': 1.0, 'grad_clip': 1.0, 'enc_rnn_size': [1024], 'sampling_schedule': 'constant', 'residual_encoder': False, 'z_size': 512}
INFO:tensorflow:
Hierarchical Encoder:
  input length: 512
  level lengths: [64, 8]
INFO:tensorflow:Level 0 splits: 8
INFO:tensorflow:
Encoder Cells (bidirectional):
  units: [1024]
INFO:tensorflow:Level 1 splits: 1
INFO:tensorflow:
Encoder Cells (bidirectional):
  units: [1024]
INFO:tensorflow:
Hierarchical Decoder:
  input length: 512
  level output lengths: [8, 64]
INFO:tensorflow:
Decoder Cells:
  units: [512, 512, 512]
Traceback (most recent call last):
File "/opt/conda/bin/music_vae_train", line 11, in <module>
sys.exit(console_entry_point())
File "/opt/conda/lib/python3.5/site-packages/magenta/models/music_vae/music_vae_train.py", line 325, in console_entry_point
tf.app.run(main)
File "/opt/conda/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "/opt/conda/lib/python3.5/site-packages/magenta/models/music_vae/music_vae_train.py", line 321, in main
run(configs.CONFIG_MAP)
File "/opt/conda/lib/python3.5/site-packages/magenta/models/music_vae/music_vae_train.py", line 303, in run
task=FLAGS.task)
File "/opt/conda/lib/python3.5/site-packages/magenta/models/music_vae/music_vae_train.py", line 164, in train
optimizer = model.train(**_get_input_tensors(dataset, config))
File "/opt/conda/lib/python3.5/site-packages/magenta/models/music_vae/base_model.py", line 296, in train
input_sequence, output_sequence, sequence_length, control_sequence)
File "/opt/conda/lib/python3.5/site-packages/magenta/models/music_vae/base_model.py", line 260, in _compute_model_loss
x_input, x_target, x_length, z, control_sequence)[0:2]
File "/opt/conda/lib/python3.5/site-packages/magenta/models/music_vae/lstm_models.py", line 1121, in reconstruction_loss
hier_input = self._reshape_to_hierarchy(x_input)
File "/opt/conda/lib/python3.5/site-packages/magenta/models/music_vae/lstm_models.py", line 1081, in _reshape_to_hierarchy
perm.insert(num_levels, perm.pop(0))
AttributeError: 'range' object has no attribute 'insert'
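The final AttributeError looks like a Python 2 vs. Python 3 incompatibility rather than a problem with the data: the trainer is running from a python3.5 site-packages tree, and under Python 3 range() returns an immutable range object with no insert() or pop() methods, so the permutation code in _reshape_to_hierarchy fails. A minimal reproduction, independent of the magenta code:

perm = range(4)
try:
    # Works in Python 2, where range() returns a plain list.
    perm.insert(2, perm.pop(0))
except AttributeError as err:
    print(err)  # Python 3: 'range' object has no attribute 'insert'

# Converting the range to a list restores the Python 2 behaviour.
perm = list(range(4))
perm.insert(2, perm.pop(0))
print(perm)  # [1, 2, 0, 3]

Running the trainer under Python 2, or locally patching the offending line in lstm_models.py to build the permutation with list(range(...)), should get past this error; newer magenta releases address Python 3 compatibility.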