Hi there! I thought several people on this list might be interested in the following task.
With apologies for cross-posting...
Call for participation: Patterns for Prediction task @ MIREX2018
http://www.music-ir.org/mirex/wiki/2018:Patterns_for_Prediction
We are looking for researchers whose computational models and/or algorithms can predict the next musical events (the continuation) from given, foregoing events (the prime). We are also interested in models that do not make explicit predictions, but can estimate the likelihood of several alternative continuations.
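For a concrete (if deliberately simple) picture of what such a model might look like, here is a sketch of a first-order Markov predictor over symbolic events. This is purely illustrative on our part, not a required submission interface:

    from collections import Counter, defaultdict

    class MarkovPredictor:
        """First-order Markov model over symbolic events (e.g., MIDI pitches)."""

        def __init__(self):
            # transitions[a][b] = number of times event b has followed event a
            self.transitions = defaultdict(Counter)

        def train(self, sequences):
            for seq in sequences:
                for prev, nxt in zip(seq, seq[1:]):
                    self.transitions[prev][nxt] += 1

        def predict_next(self, prime):
            """Most likely event to follow the last event of the prime."""
            counts = self.transitions[prime[-1]]
            return counts.most_common(1)[0][0] if counts else None

        def likelihood(self, prime, continuation):
            """Probability of a candidate continuation, given the prime."""
            p, prev = 1.0, prime[-1]
            for event in continuation:
                counts = self.transitions[prev]
                total = sum(counts.values())
                p *= (counts[event] / total) if total else 0.0
                prev = event
            return p

The same object covers both modes of participation: predict_next() produces an explicit continuation, while likelihood() scores alternative continuations.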
One facet of human nature is the tendency to form predictions about what will happen in the future. Music provides an excellent setting for the study of prediction, and we hope that this task will attract interest from fields such as psychology, neuroscience, music theory, music informatics, and machine learning.
Why "patterns" in "Patterns for Prediction"? This new task has emerged from an existing Pattern Discovery task, which ran 2013-17. The last five years have seen an increasing interest in discovering or generating patterned data, leveraging methods beyond typical (e.g., Markovian) limits. How might exact and inexact repetition, occurring over the short, medium, and long term in pieces of music, interact with expectations in order to form a basis for successful prediction?
MIREX stands for the Music Information Retrieval Evaluation eXchange. Since 2005, it has provided a forum for researchers to (1) train algorithms to perform specific, music-technological tasks on publicly available datasets, (2) submit algorithms that are run and evaluated by MIREX organizers on private datasets, (3) compare their work with one another and shed light on research questions informed by and informing diverse fields intersecting with music informatics.
The deadline for submitting to this task is Saturday, August 25, 2018. If you are interested in participating in this task but do not think you will have time until the 2019 iteration, please let us know so we can keep you in mind for next year.
For more details, please refer to the MIREX page: http://www.music-ir.org/mirex/wiki/2018:Patterns_for_Prediction
Your task captains for Patterns for Prediction are Iris Yuping Ren (yuping.ren.iris), Berit Janssen (berit.janssen), and Tom Collins (tomthecollins all at gmail.com). Feel free to copy-in all three of us if you have questions/comments.
Thanks for reading!
This is exactly what I'm interested in! I have an open-source project called The Amanuensis (https://github.com/to-the-sun/amanuensis) that uses an algorithm to predict where in the future beats are likely to fall. To describe the project further:

"The Amanuensis is an automated songwriting and recording system aimed at ridding the process of anything left-brained, so one need never leave a creative, spontaneous and improvisational state of mind, from the inception of the song until its final master. The program will construct a cohesive song structure, using the best of what you give it, looping around you and growing in real-time as you play. All you have to do is jam and fully written songs will flow out behind you wherever you go."

My algorithm right now is only rhythm-based and I'm sure it's not sophisticated enough to be entered into your contest, but I would be very interested in the possibility of using any of the algorithms that are, in place of mine in The Amanuensis. I wonder if any of your participants would be interested in some collaboration? What I can bring to the table is a real-world application for these algorithms, already set up for implementation.

The use case here would be making the program smarter about what it chooses to record from your jam and what it doesn't. At any given moment it can be predicting what it expects will come next, based on what you've already been playing. If you play something too out of line with this prediction, it would stop recording. But since good music requires establishing patterns and then also breaking away from them, it could also stop recording if you're being too repetitious. The ideal target might then be to stay within a range of, say, 50 to 75% "predictable".

That's just a basic outline of what I have in mind. Fundamental aspects like rhythm and the precise timing of note onsets might deserve particular emphasis.
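In code, the gate might look something like this rough sketch (the model.likelihood() call stands in for whichever prediction algorithm gets plugged in; the window size and the 50-75% band are placeholders, not tuned values):

    def should_record(model, events, lo=0.50, hi=0.75, window=16):
        """Keep recording only while the jam stays moderately 'predictable'."""
        if len(events) < 2:
            return True  # not enough context to judge yet
        start = max(1, len(events) - window)
        # Probability the model assigned to each recent event, given
        # everything played before it.
        probs = [model.likelihood(events[:i], [events[i]])
                 for i in range(start, len(events))]
        score = sum(probs) / len(probs)
        # Below lo: strayed too far from established patterns.
        # Above hi: too repetitious, never breaking away from them.
        return lo <= score <= hi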
I am also very interested in moving my project beyond MIDI into the realm of pure audio. Under "Seeking Contributions", when you say "We would like to evaluate against real (not just synthesized-from-MIDI) audio versions…", is it a question of deciding on an algorithm to judge the "correctness" of participants' generated audio, of compiling a large enough test set of audio recordings in the first place, or of something else?