magenta - noise


doctor x

Jul 7, 2020, 2:04:29 AM
to Magenta Discuss
Hi, I doubt this will appeal to many of you because it is a noise experiment:

* I took 500 short WAV files of random noise & ran them through Onsets & Frames, converting them to MIDI drum patterns
* I prepared the data, trained a model & made a bundle file
* Used the bundle with drums_rnn to generate ten tracks (with a high QPM value, so the result would be more tonal)
* Wrote a little function that overlays the ten tracks to make a 'rough mix' (a sketch of that sort of function is below)
* Converted the MIDI to WAV (also sketched below)
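
For anyone curious, here is a minimal sketch of the kind of overlay function I mean, assuming the ten generated tracks are ordinary MIDI files on disk and that the pretty_midi package is available; the file names and output path are just placeholders, not the ones I actually used.

import glob

import pretty_midi


def overlay_tracks(midi_paths, out_path):
    # Merge every instrument (and its notes) from each input MIDI file
    # into one combined file. All tracks keep their original timing, so
    # they simply play on top of each other from time zero.
    mix = pretty_midi.PrettyMIDI()
    for path in midi_paths:
        track = pretty_midi.PrettyMIDI(path)
        mix.instruments.extend(track.instruments)
    mix.write(out_path)


# Hypothetical usage: ten generated files named drums_0.mid ... drums_9.mid
overlay_tracks(sorted(glob.glob("generated/drums_*.mid")), "rough_mix.mid")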
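
The MIDI-to-WAV step can be something like the sketch below, assuming FluidSynth (via pyfluidsynth) and the soundfile package are installed; as far as I know pretty_midi's plain synthesize() skips drum instruments, so a SoundFont render is the sensible route for drum tracks.

import numpy as np
import pretty_midi
import soundfile as sf


def midi_to_wav(midi_path, wav_path, sample_rate=44100, sf2_path=None):
    pm = pretty_midi.PrettyMIDI(midi_path)
    # Render with FluidSynth; pass sf2_path to use a specific SoundFont,
    # otherwise pretty_midi uses its default one.
    audio = pm.fluidsynth(fs=sample_rate, sf2_path=sf2_path)
    # Normalise so the file doesn't clip when written.
    peak = np.max(np.abs(audio)) if audio.size else 0.0
    if peak > 0:
        audio = audio / peak
    sf.write(wav_path, audio, sample_rate)


midi_to_wav("rough_mix.mid", "rough_mix.wav")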


If you are not familiar with noise, chances are you will find nothing of interest here.
If you are familiar with noise, chances are you will find nothing of interest here!

What interests me? 'Outsider music'. Noise. Sound. Computers making new forms of music & looking at how 'acceptable' that is to humans.



Adam Roberts

Jul 7, 2020, 7:51:25 AM
to doctor x, Magenta Discuss
Thanks for sharing!

Did you find the result to be different from other techniques you've used in the past?


doctor x

Jul 8, 2020, 1:54:03 PM
to Magenta Discuss, paulcla...@gmail.com
Hi,
Well, this was my first attempt at auto-generating a track as such, so process-wise it was fun and I learnt a few things. I'm quite happy with the result: layering the tracks (in a mindless way) created some nice bits, & because I built my own drum data-set (starting from non-drum sounds) it's minimalist, which I like.

Part of what I'm interested in is whether people will accept computer-generated music, what processes they use to make that judgement, & whether they use different processes to evaluate 'human music'.

I know a lot of music that I listen to has all kinds of strands: the group members, the group name, the group history, their image, what they were about. There's a whole backstory to get lost in. That's what I like. Maybe it's because I'm an old duffer who bought records. Would I still value those things if I had mainly consumed music via Spotify playlists?






Prashanthi Atukuri

Mar 16, 2021, 3:23:42 PM
to Magenta Discuss, doctor x
Hi Paul,

Can we connect sometime to discuss this in detail? We are presently working on a similar project and would value your input.

Thanks
Prashanthi
