One of my early, longstanding information retrieval passions is Music Information Retrieval — especially content-based music IR. I began research in MIR back in 1998 as a graduate student at UMass Amherst. In 2000, we organized the very first ISMIR conference, in Plymouth, Massachusetts.
Music IR hits a sweet spot among information retrieval, information seeking, pattern recognition, data mining and information extraction, and user information needs. It is a field full of rich but solvable problems. And it’s music… who doesn’t like that?
So one of the blogs that I have been reading for a number of years now is Paul Lamere’s. He started in 2004 with Duke Listens! and recently joined the Echo Nest, where he writes a new blog, Music Machinery. His 2nd-day-on-the-job post includes a description of the types of music searches that Echo Nest enables:
From this analysis of the social context, the user behavior and the actual audio, the Echo Nest gets a deep understanding of the entire world of music. It knows which artists are getting the most buzz, which artists are getting stale, how and why artists are related, what words are being used to describe the music. This data goes far beyond the “if you like Britney, you might like Christina” level. The Echo Nest understands enough about music to be able to answer queries such as “make me a playlist of songs with a tempo of 90 beats per minute by an unknown emo artist that sounds something like Dashboard Confessional, and has violins”. The really neat thing is that the Echo Nest is exposing a lot of this functionality in their developer API. This lets anyone who is building a music application tap into this large resource of music intelligence.
I cannot think of a better domain for research and development into exploratory search. Music is an area where users do not just want to look up one particular song (the useful Shazam application, which has been around since 2000-2001, notwithstanding). They want to engage with the music, to find new and interesting music that they might not have been aware of previously. There is less of a desire to have an “answer” handed to the searcher, and more of a desire to explore the space.
So giving the user access to all the various facets and aspects of music — from social/contextual facets (keywords, tags) to activity facets (buzz) to musicological facets (tempo, rhythm, timbre, etc.) — is valuable. It highlights the need for exploratory search.
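To make the idea concrete, here is a minimal sketch of faceted filtering over song metadata, in the spirit of the “90 BPM, unknown emo artist, with violins” query quoted above. Everything here is invented for illustration — the catalog, the field names, and the facet values do not come from the Echo Nest API or any real service.

```python
# Hypothetical faceted music search. All data, field names, and thresholds
# are made up for illustration; this is not the Echo Nest API.
from dataclasses import dataclass


@dataclass
class Track:
    title: str
    artist: str
    tags: set        # social/context facet: keywords, tags
    buzz: float      # activity facet: recent attention, 0..1
    tempo_bpm: float # musicological facet
    instruments: set # musicological facet


CATALOG = [
    Track("Song A", "Artist X", {"emo"}, 0.10, 90.0, {"violin", "guitar"}),
    Track("Song B", "Artist Y", {"pop"}, 0.90, 120.0, {"synth"}),
    Track("Song C", "Artist Z", {"emo"}, 0.05, 91.0, {"violin"}),
]


def faceted_search(catalog, tag=None, max_buzz=None, tempo=None,
                   tempo_tol=2.0, instrument=None):
    """Combine facets with AND semantics; any facet left as None is ignored."""
    results = []
    for t in catalog:
        if tag is not None and tag not in t.tags:
            continue
        if max_buzz is not None and t.buzz > max_buzz:  # "unknown" = low buzz
            continue
        if tempo is not None and abs(t.tempo_bpm - tempo) > tempo_tol:
            continue
        if instrument is not None and instrument not in t.instruments:
            continue
        results.append(t)
    return results


# "Emo songs near 90 BPM by low-buzz artists, with violins"
hits = faceted_search(CATALOG, tag="emo", max_buzz=0.2,
                      tempo=90.0, instrument="violin")
print([t.title for t in hits])  # → ['Song A', 'Song C']
```

The point of the sketch is the shape of the interaction, not the implementation: each facet narrows the space independently, and leaving a facet unspecified keeps that dimension open for exploration rather than forcing the user to commit to a single “answer” up front.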
One of my firm beliefs about innovation is that cross-fertilization of domains and tasks often leads to unexpected breakthroughs. Being aware of exploratory search approaches in the music domain can help inform the work we do in text, web, and other domains as well.
I agree with almost every word of Jeremy’s comment. (Really no surprise, since it was my grant Jeremy started working on in 1998, and I chaired ISMIR 2000; he and I saw eye-to-eye from the beginning.) Another reason why music is a good domain for studying exploratory search is the enormous variety of “facets and aspects” music has; cf. the draft of a paper of mine, Studying Music is Difficult and Important: Challenges of Music Knowledge Representation. And Don Swanson’s “Historical Note: Information Retrieval and the Future of an Illusion”, reprinted in Readings in Information Retrieval, suggests why cross-fertilization leads to unexpected breakthroughs. Finally, I’m working hard on a content-exploring system I call the General Temporal Workbench. It was originally the General Music Workbench, but I realized that music is so demanding that, if the system could really handle music content in a very general way, it could handle a lot more — and there’s so much potential for leverage from tools developed for one field being used in another. Unexpected breakthroughs, yes.