Mapping data to music


L Haham

May 7, 2022, 10:14:42 PM
to 'Ian Simon' via Magenta Discuss
Would it be possible to create a mapping from a data source to a specific trained Magenta model?

I'm interested in training a mapping from data to music such that, as the data comes in, the music can be played in real time.

For example, traffic data that I train to map to a classical-music Magenta model. Then, as the data stream comes in, classical music plays based on the data.


Would it be a question of training a mapping from the data to the latent space? And if we use a model trained on electronic music, is that what will play in real time?
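
For concreteness, here is a rough Python sketch of the kind of thing I'm imagining, assuming a pre-trained MusicVAE checkpoint; the checkpoint path, the feature names, and the fixed random projection are just placeholders:

import numpy as np
import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

# Placeholder checkpoint: any pre-trained MusicVAE config/checkpoint would do.
config = configs.CONFIG_MAP['cat-mel_2bar_big']        # 512-dim melody latent space
model = TrainedModel(config, batch_size=1,
                     checkpoint_dir_or_path='/path/to/cat-mel_2bar_big.ckpt')

def features_to_latent(features, z_dim=512):
    # Map a handful of normalized (0..1) data features to a latent vector.
    # Here it is just a fixed random linear projection; a learned or
    # hand-tuned mapping would replace this in a real system.
    rng = np.random.RandomState(0)                     # fixed seed keeps the mapping stable
    projection = rng.randn(len(features), z_dim)
    return np.dot(np.asarray(features), projection)[np.newaxis, :]

# e.g. traffic speed, density, incident level, each scaled to 0..1
z = features_to_latent([0.7, 0.3, 0.1])
sequences = model.decode(z, length=32)                 # returns a list of NoteSequence protos
note_seq.sequence_proto_to_midi_file(sequences[0], 'out.mid')

The open question for me is what should replace that random projection.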


Thanks,
Lemuel

Adam Sporka

May 8, 2022, 4:58:01 AM
to L Haham, 'Ian Simon' via Magenta Discuss
I am certain that, from a technical standpoint, such a thing would be possible.

However, from the music design (dramaturgy) perspective, I would suggest remaining very conservative in the amount of information presented. Music, in order to be an aesthetic experience, has a lot of redundancy, which constrains the bandwidth available to transfer your payload (the sonified data).

It might be advisable to start with just a handful of parameters (three might already be too much), each with a small set of levels (low, medium, high, etc.) that do not change too often (at most a few times a minute), so that listeners are given enough musical cues to understand what's happening in the data.
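
As a toy illustration of that quantize-and-hold idea (the thresholds and hold time below are invented for the example):

import time

LEVELS = ['low', 'medium', 'high']

def quantize(value, thresholds=(0.33, 0.66)):
    # Map a normalized 0..1 data value to one of a few discrete levels.
    for i, threshold in enumerate(thresholds):
        if value < threshold:
            return LEVELS[i]
    return LEVELS[-1]

class SlowParameter:
    # Only lets a musical parameter change every `hold_seconds`, so listeners
    # have time to register each state before it moves on.
    def __init__(self, hold_seconds=20.0):
        self.hold_seconds = hold_seconds
        self.current = None
        self.last_change = 0.0

    def update(self, value):
        level = quantize(value)
        now = time.time()
        if self.current is None or (level != self.current and
                                    now - self.last_change >= self.hold_seconds):
            self.current = level
            self.last_change = now
        return self.current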

Good luck!
Adam


Konstantinos Vasilakos

May 8, 2022, 5:12:18 AM
to Adam Sporka, 'Ian Simon' via Magenta Discuss, L Haham
From what I understand, you want to modulate some synthesis parameters of a music system that sonifies data in real time. Sonification is an interesting topic in the field of generative art and computer music. There are many articles on specific projects dealing with this, but a collection that compiles a lot of essays can be found in The Sonification Handbook by Hermann et al.

In my experience, when it comes to the practical issues of mapping strategies, you should try what works for you and for the musical objectives of your project. Some ideas might come from implementing a one-to-one mapping or a one-to-many mapping, the latter having the benefit of also letting a set of parameters interact with each other. That said, range specification is one of the things you will experiment with the most, since not all synthesis parameters share the same optimal range (amplitude vs. frequency, for example). One idea I have found to work is adjusting these ranges dynamically, using approaches such as live coding and on-the-fly rearrangement of the mapping implementation.
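
For example, a one-to-many mapping with per-parameter range specs might look roughly like this in Python (the parameter names and ranges are arbitrary, not from any real project):

def linlin(x, out_min, out_max):
    # Linear range mapping of a 0..1 control (fine for amplitude-like parameters).
    return out_min + (out_max - out_min) * x

def linexp(x, out_min, out_max):
    # Exponential range mapping of a 0..1 control (better for frequency-like parameters).
    return out_min * (out_max / out_min) ** x

def one_to_many(x):
    # One incoming data value modulates several synthesis parameters at once,
    # each with its own range spec.
    return {
        'freq': linexp(x, 110.0, 1760.0),
        'amp': linlin(x, 0.1, 0.8),
        'cutoff': linexp(x, 200.0, 8000.0),
    }

print(one_to_many(0.5))

Adjusting those ranges on the fly, e.g. from a live-coding environment, is then just a matter of swapping out the min/max values while the system runs.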

Other than that, mapping is one of the really interesting questions in the field of interactive music, but one with more than one answer; the answers come from hearing the results and constantly modifying the mapping until something musically meaningful pops out.

Hope that helps a bit.

K
--
Save our in-boxes! http://emailcharter.org

L Haham

May 8, 2022, 11:11:06 AM
to Konstantinos Vasilakos, Adam Sporka, 'Ian Simon' via Magenta Discuss
Thanks for the ideas, especially the many-to-many mapping.
It doesn't have to be obvious what affects what, but I want musicality.

Could training a neural network to map the many input variables to the dimensions of the latent space (within their ranges) work?
The rate of the incoming data could be throttled as necessary while it runs.
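
Something like this is the shape I have in mind (the sizes and architecture are arbitrary; how to get training targets for it is exactly the open question):

import numpy as np
import tensorflow as tf

N_FEATURES = 8    # e.g. a handful of traffic measurements
Z_DIM = 512       # latent size of the chosen MusicVAE config

# Small network from data features to a latent vector, bounded by tanh so the
# outputs stay within a limited region of the latent space.
mapper = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(N_FEATURES,)),
    tf.keras.layers.Dense(Z_DIM, activation='tanh'),
])

# At run time the incoming data rate can be throttled, and each new feature
# vector produces a latent point for the decoder.
features = np.random.rand(1, N_FEATURES).astype(np.float32)   # stand-in for live data
z = mapper(features).numpy()                                   # shape (1, Z_DIM)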

Matthew Grossman

May 11, 2022, 5:20:50 PM
to L Haham, 'Ian Simon' via Magenta Discuss
I have done a similar kind of data-to-music mapping, in my case using brainwave data. I did not do it in Python, but Max/MSP or Pure Data is a good environment for this kind of work.

Matt

L Haham

May 14, 2022, 12:25:53 AM
to Matthew Grossman, 'Ian Simon' via Magenta Discuss
Sounds cool. Is it open source?

Konstantinos Vasilakos

May 14, 2022, 1:32:57 AM
to L Haham, 'Ian Simon' via Magenta Discuss, Matthew Grossman
Pure Data is; Max is not free.

You may also try SuperCollider, a state-of-the-art, open-source sound design language that is ideal for this kind of work.