Music note prediction: Computational modeling task


Tom Collins

Jul 25, 2018, 2:15:49 PM
to Magenta Discuss

Hi there! I thought several people on this list might be interested in the following task.


With apologies for cross-posting...


Call for participation: Patterns for Prediction task @ MIREX2018

http://www.music-ir.org/mirex/wiki/2018:Patterns_for_Prediction


We are searching for researchers whose computational models or algorithms can predict the next musical events (the continuation) from the foregoing events (the prime). We are also interested in models that may not make explicit predictions, but can estimate the likelihoods of several alternative continuations.


One facet of human nature is the tendency to form predictions about what will happen in the future. Music provides an excellent setting for the study of prediction, and we hope that this task will attract interest from fields such as psychology, neuroscience, music theory, music informatics, and machine learning.


Why "patterns" in "Patterns for Prediction"? This new task has emerged from an existing Pattern Discovery task, which ran from 2013 to 2017. The last five years have seen increasing interest in discovering or generating patterned data, leveraging methods beyond typical (e.g., Markovian) limits. How might exact and inexact repetition, occurring over the short, medium, and long term in pieces of music, interact with expectations in order to form a basis for successful prediction?


MIREX stands for the Music Information Retrieval Evaluation eXchange. Since 2005, it has provided a forum for researchers to (1) train algorithms to perform specific, music-technological tasks on publicly available datasets, (2) submit algorithms that are run and evaluated by MIREX organizers on private datasets, (3) compare their work with one another and shed light on research questions informed by and informing diverse fields intersecting with music informatics.


The deadline for submitting to this task is Saturday August 25th, 2018. If you are interested in participating in this task but do not think you will have time until the 2019 iteration, please let us know so we can keep you in mind for next year.


For more details, please refer to the MIREX page: http://www.music-ir.org/mirex/wiki/2018:Patterns_for_Prediction


Your task captains for Patterns for Prediction are Iris Yuping Ren (yuping.ren.iris), Berit Janssen (berit.janssen), and Tom Collins (tomthecollins all at gmail.com). Feel free to copy-in all three of us if you have questions/comments.


Thanks for reading!



Tom Collins, PhD
http://tomcollinsresearch.net
https://musicintelligence.co
Visiting Assistant Professor, Department of Computer Science, Lafayette College


To the Sun

Jul 26, 2018, 10:00:21 AM
to Magenta Discuss
This is exactly what I'm interested in! I have an open-source project called The Amanuensis (https://github.com/to-the-sun/amanuensis) that uses an algorithm to predict where in the future beats are likely to fall. To describe the project further:

"The Amanuensis is an automated songwriting and recording system aimed at ridding the process of anything left-brained, so one need never leave a creative, spontaneous and improvisational state of mind, from the inception of the song until its final master. The program will construct a cohesive song structure, using the best of what you give it, looping around you and growing in real-time as you play. All you have to do is jam and fully written songs will flow out behind you wherever you go."
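The beat-prediction idea mentioned above could be sketched roughly as follows. This is not The Amanuensis's actual algorithm, just a minimal illustration of one way to predict where a future beat is likely to fall, assuming a simple median inter-onset-interval model:

```python
from statistics import median

def predict_next_beat(onsets):
    """Predict the next beat time from a list of past onset times (seconds).

    A naive approach: assume the next onset falls one median
    inter-onset interval after the last observed onset.
    """
    if len(onsets) < 2:
        return None  # not enough history to form a prediction
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    return onsets[-1] + median(intervals)

# Example: steady 120 BPM playing (0.5 s between beats)
print(predict_next_beat([0.0, 0.5, 1.0, 1.5]))  # -> 2.0
```

A real-time system would of course need to handle tempo drift, syncopation, and noisy onset detection, which is exactly where the more sophisticated MIREX models could slot in.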

My algorithm right now is only rhythm-based and I'm sure it's not sophisticated enough to be entered into your contest, but I would be very interested in the possibility of using any of the algorithms that are, in place of mine in The Amanuensis. I wonder if any of your participants would be interested in some collaboration? What I can bring to the table would be a real-world application for these algorithms, already set for implementation.

The use case here would be in making the program smarter about what it chooses to record from your jam and what it discards. At any given moment it can predict what it expects to come next, based on what you've already been playing. If you play something too far out of line with this prediction, it would stop recording. But since good music requires establishing patterns and then also breaking away from them, it could also stop recording if you're being too repetitious. The ideal target might then be to stay in a range of, say, 50 to 75% "predictable".
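The gating rule described above could be sketched as follows. The function name and the idea of a scalar "predictability" score are hypothetical, chosen only to illustrate the 50-75% target band:

```python
def should_record(predictability, low=0.50, high=0.75):
    """Gate recording on how predictable the playing currently is.

    predictability: fraction in [0, 1] of recent events matching the
    model's predictions. The 50-75% band is the target range suggested
    above: too unpredictable and the music loses coherence; too
    predictable and it becomes repetitious.
    """
    return low <= predictability <= high

print(should_record(0.60))  # -> True  (in the sweet spot)
print(should_record(0.30))  # -> False (too out of line with predictions)
print(should_record(0.90))  # -> False (too repetitious)
```

How to compute that predictability score from an actual prediction model is the hard part, and the open question for the MIREX participants.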

That's just a basic outline of what I have in mind. Fundamental aspects such as rhythm and the precise locations of note onsets might deserve particular emphasis.

Any thoughts?

Tom Collins

Jul 27, 2018, 9:23:18 AM
to To the Sun, Magenta Discuss
Hi!

First off, thanks to everyone who has shown an interest in the task. We're excited to see the submissions over the coming weeks!

Second, if you've published papers or code in this area, feel free to let us know by copy/pasting a reference or link at this public doc and we'll check it out (e.g., for writing up the whole project):
Third, to answer To the Sun's reply, yes I think it's possible that one of the participants would be interested in collaborating with you on your Amanuensis package. We'll write something about Amanuensis into the Q & A section of the task description soon, so that readers who aren't on the magenta list see it too.

All best,
Tom


--
Magenta project: magenta.tensorflow.org
To post to this group, send email to magenta...@tensorflow.org
To unsubscribe from this group, send email to magenta-discu...@tensorflow.org
---
You received this message because you are subscribed to the Google Groups "Magenta Discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to magenta-discu...@tensorflow.org.

To the Sun

Jul 28, 2018, 9:39:13 AM
to Magenta Discuss, to_th...@gmx.com
That would be great, thank you!

To the Sun

Jul 30, 2018, 12:40:25 PM
to Magenta Discuss, to_th...@gmx.com
I am also very interested in moving my project beyond MIDI into the realm of pure audio. Under Seeking Contributions, when you say "We would like to evaluate against real (not just synthesized-from-MIDI) audio versions…" is it a question of deciding on an algorithm to judge the "correctness" of the participants' generated audio, compiling a large enough test set of audio recordings in the first place, or something else?

Tom Collins

Jul 30, 2018, 4:01:52 PM
to Magenta Discuss
Hi To the Sun!

Yes, one issue is to get hold of real audio data, which for this task ought to be synchronized to quality symbolic representations. One option, continuing with Colin Raffel's Lakh MIDI Dataset, would be to work with the 45,129 files mentioned under LMD-matched and LMD-aligned here.

The second issue, as you mention, is to agree on what constitutes a "correct" continuation, and all the shades of gray in between that and a completely "incorrect" continuation. Comparison of algorithm-output and true-continuation spectra might suffice, but would need to be substantiated/informed by (1) review of existing work on perception of audio similarity and (2) perceptual validation (e.g., a listening experiment).
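A crude version of that spectral comparison might look like the sketch below: cosine similarity between the magnitude spectra of a generated continuation and the true continuation. This is only an assumption about what such a measure could look like; as the message above notes, any real evaluation would need perceptually informed features and listening tests:

```python
import numpy as np

def spectral_similarity(audio_a, audio_b):
    """Cosine similarity between the magnitude spectra of two audio clips.

    Truncates both clips to the shorter length, takes the real FFT of
    each, and compares the magnitude spectra. Returns a value in [0, 1],
    where 1.0 means identical spectral content.
    """
    n = min(len(audio_a), len(audio_b))
    spec_a = np.abs(np.fft.rfft(audio_a[:n]))
    spec_b = np.abs(np.fft.rfft(audio_b[:n]))
    denom = np.linalg.norm(spec_a) * np.linalg.norm(spec_b)
    return float(spec_a @ spec_b / denom) if denom else 0.0

# A clip compared with itself scores 1.0.
t = np.linspace(0, 1, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
print(round(spectral_similarity(tone, tone), 3))  # -> 1.0
```

One obvious limitation: a global spectrum throws away timing, so a time-frequency representation (e.g., comparing spectrogram frames) would likely track perceived similarity better.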

That's too much for this year, but is certainly worth bearing in mind for 2019.

All best,
Tom



To the Sun

Aug 22, 2018, 1:33:54 PM
to Magenta Discuss
So with the Patterns for Prediction deadline drawing closer, is there a way I might be able to get hold of the participants directly? I subscribed to the EvalFest mailing list but I'm still waiting for moderator approval.

M4 speers

Aug 22, 2018, 7:29:52 PM
to To the Sun, Magenta Discuss
I've been reading these posts for a while now, but I'll finally break the ice:

So Cool... I'm a noob for code, fan of you rocket surgeons on this group...  but I wanted to give you all a shout out of praise for working on the tech that will bring my dream as a musician to life.
I have to use a looper now,  but someday you guys will let me be a 1 person, 3 bot band, improvising new songs as we go. 

Here is what it will look like when you finish building my cyborg clone band: Visualization of 4 joes: https://www.youtube.com/watch?v=531NNRIEJNI&t=65s
Thank you so much,  I can't wait to see what you do next!

Sorry if this was too off topic for the group...
Also please send me a bot for me to train to run my science classroom when I retire in 2040...
PEACE...
