No, the MusicDNS fingerprints were completely separate from the
"mixing" technology. When you wanted to create a new PUID, you had to
contribute to the database of musical features, but it was not
technically necessary for the fingerprinting part.
> This is a capability I would dearly hate to lose and am wondering if
> such a capability would be possible with acousticids? I'm still trying
> to figure out the questions, but here goes for now:
> 1) Is it possible to compare two acousticids and generate some idea of
> similarity?
> 2) If so, has it already been done?
No, it isn't, and it hasn't been done. :)
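For context, it may help to see what comparing raw fingerprints even means. A raw Chromaprint fingerprint is a sequence of 32-bit integers, and comparing two of them boils down to counting differing bits between aligned items. This is a minimal illustrative sketch (not the actual AcoustID matching code), and it only measures whether two fingerprints come from the same recording, not whether the music sounds similar:

```python
def bit_error_rate(fp_a, fp_b):
    """Fraction of differing bits between two aligned raw fingerprints.

    Each fingerprint is a list of 32-bit integers. 0.0 means the
    overlapping parts are identical; values near 0.5 mean unrelated audio.
    """
    n = min(len(fp_a), len(fp_b))
    if n == 0:
        return 1.0
    # XOR leaves a 1 bit wherever the two sub-fingerprints disagree.
    diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(fp_a[:n], fp_b[:n]))
    return diff_bits / (32 * n)
```

A low bit error rate indicates the same recording; it says nothing about two *different* songs sounding alike, which is why musical similarity needs separate features.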
> 3) If not what would it require to implement? What would the basic
> algorithm look like? What would be some of the issues involved? How
> would one get started?
The answers to these questions are the tricky part of the problem.
There is no "standard" way to do something like this. MFCC (timbre)
features are used a lot as the base information for many algorithms,
but you usually have to consider way more features. I've started
collecting some links here:
https://github.com/lalinsky/chromaprint/wiki/Music-Analysis
If you find something interesting, please add it to the page.
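To make the MFCC idea concrete, here is a sketch of one of the simplest baselines that appears in that literature (this is not anything Chromaprint implements): summarize each track by the mean of its MFCC frames and compare the resulting vectors with a simple distance. The feature vectors below are made-up numbers purely for illustration:

```python
import math

def cosine_distance(u, v):
    """Cosine distance between two feature vectors (0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Hypothetical per-track summaries: the mean of each track's MFCC frames.
track_a = [12.1, -3.4, 0.8, 2.2]
track_b = [11.8, -3.1, 1.0, 2.0]   # timbrally close to track_a
track_c = [-5.0, 9.9, -2.3, 0.1]   # very different timbre
```

Averaging frames throws away all temporal structure, which is exactly why real systems consider many more features (rhythm, tempo, chroma) on top of timbre.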
There is also a music similarity category of the MIREX contest, you
can find some details about it here:
http://www.music-ir.org/mirex/wiki/Audio_Music_Similarity_and_Retrieval
> Depending on the complexity and amount of time required, this is a
> project I would be interested in.
I recently got interested in this too, but I haven't done much
research yet due to lack of time. Given that I don't know that much
about the state-of-the-art research, my approach will be the same as
with the fingerprinting -- spend a few months of reading various
research papers before actually attempting to implement anything.
Lukas
Well, they didn't really develop their own fingerprinting technology.
They just used http://www.cs.cmu.edu/~yke/musicretrieval/ which I
understand was not their research.
Lukas
The former is much more challenging (hence all the research), but if it ever works out, it would allow for superior recommendations. This is effectively what Pandora does today, except that they have humans extract the information (which makes it expensive to scale). The latter approach is much simpler, and is what Amazon, Apple, and Last.fm do. But the biggest downside is that people have to play a song in relation to other songs often enough to make good (or any) recommendations, and that doesn't always happen.
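The usage-based approach described above can be sketched as a toy "people who played X also played Y" counter. This is a deliberately simplified illustration of item co-occurrence, not any particular company's algorithm:

```python
from collections import Counter
from itertools import combinations

def co_occurrence(histories):
    """Count how often each pair of songs appears in the same listening history."""
    pairs = Counter()
    for songs in histories:
        for a, b in combinations(sorted(set(songs)), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(seed, histories):
    """Rank songs by how often they co-occur with the seed song."""
    scores = Counter()
    for (a, b), count in co_occurrence(histories).items():
        if a == seed:
            scores[b] += count
        elif b == seed:
            scores[a] += count
    return [song for song, _ in scores.most_common()]
```

Note that a song nobody has played alongside anything else gets no score at all, which is exactly the downside mentioned above.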
Lukas
On Jun 30, 2011, at 11:35 AM, Lukáš Lalinský wrote:
There's all sorts of interesting questions here starting with just what
exactly is similarity? Key? Chord progression? Bass line? Tempo?
I do know that I was/am impressed with MusicMagicMixer's results. It may be a
case of hearing what you're listening for, but it seems that I can always find a
thread of similarity. For instance, selecting a Bruce Springsteen song as a
seed inevitably seems to result in a playlist containing a lot of Bruce
Springsteen songs (which generally are consistent in terms of key, chord
progressions and tempo).
And I agree that the best system will combine some sort of algorithmic
analysis of the waveform (or some projection thereof) along with human input.
I'll take a look at the papers you mention (though I suspect they will be way
beyond me).
It would also be great if, as resources, papers, etc. become available, you
would post them here.
-steve