If you haven’t heard (which seems unlikely), Apple announced a number of upgrades to its iPod/iTunes product line today. It is interesting to me because I see more and more Music Information Retrieval making it into consumer products. Genius added smart playlisting a year or two ago (though from the reviews it doesn’t perform too well). And just today, the new iPod nano with a built-in FM receiver allows you to mark a song that is currently playing on the radio, and iTunes will identify that song for you the next time you sync.
Granted, this musical audio fingerprinting is one of the oldest forms of Music Information Retrieval out there. Fraunhofer had a working version at ISMIR 2000, nine years ago, and Avery Wang (the fellow behind Shazam) had also finished an implementation around that time. Countless companies have shipped implementations since then. Apple isn’t even doing the “hard” version of the problem, i.e. identification in a crowded, noisy bar with lots of audio interference. They’re instead using the pure radio signal. This new Nano has a built-in microphone; you’d think that they’d take advantage of that to do the noisy pub song ID thing, too. Nevertheless, it’s nice to see more Music IR work being integrated into consumer products.
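For readers unfamiliar with how this kind of matching works: Wang’s published Shazam approach pairs up spectrogram peaks into compact hashes, then looks for a consistent time offset between a snippet’s hashes and a song’s. Here’s a toy sketch of that idea in Python — illustrative only, not Apple’s or Shazam’s actual code, with all parameter values (window sizes, peak counts, fan-out) chosen arbitrarily:

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram via windowed FFT frames."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

def peak_landmarks(spec, n_peaks=3):
    """Keep the strongest frequency bins in each frame as landmarks."""
    return [(t, f) for t, frame in enumerate(spec)
            for f in np.argsort(frame)[-n_peaks:]]

def fingerprints(landmarks, fan_out=3):
    """Hash pairs of nearby peaks: (f1, f2, dt), anchored at time t1."""
    hashes = {}
    for i, (t1, f1) in enumerate(landmarks):
        for t2, f2 in landmarks[i + 1:i + 1 + fan_out]:
            hashes[(f1, f2, t2 - t1)] = t1
    return hashes

# Toy demo on synthetic audio: a snippet matches a database song when
# many of its hashes line up at one consistent time offset.
rng = np.random.default_rng(0)
song = rng.standard_normal(8000)
db = fingerprints(peak_landmarks(spectrogram(song)))
snippet = song[2048:5120]                 # 2048 samples = 16 hops in
q = fingerprints(peak_landmarks(spectrogram(snippet)))
offsets = [db[h] - t for h, t in q.items() if h in db]
# The most common offset reveals where the snippet sits in the song.
```

The “hard” noisy-bar version is hard precisely because interference destroys many of the peaks, so the matcher has to tolerate a low hit rate and still find that one dominant offset.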
If you’re interested in research on creating better playlists, and doing all sorts of interesting search and exploration of music information, check out this year’s ISMIR in Kobe, Japan. It will be the 10th one. Amazing!
I don’t think it’s even as advanced as audio fingerprinting. Seems like it’s actually taking the track name from the radio broadcast and storing it as text. The fine print says “iTunes Tagging is currently available only in the U.S. on radio stations that support iTunes Tagging.” iTunes Tagging doesn’t seem to be a new thing either: http://www.engadget.com/2007/09/10/hd-radio-rolls-out-itunes-tagging/
You’re absolutely right, Dave. Thanks for catching that detail!
That’s kinda.. lame. Apple pitches the feature on their website as: “You’re listening to the radio and you hear a song you like, but when you go to iTunes, you can’t remember the name or even who sings it. Enter iTunes Tagging.” That doesn’t even make sense anymore. How would you not remember the name or even who sings it.. when it’s right on the screen in front of you? So tagging is just “save to file” of that information?
Much more useful to me would have been the audio fingerprinting feature. I could use it not only for full songs on the radio, but for song snippets (as they occasionally play on NPR news and talk programs) and for the noisy pub scenario as well. Oh well.. Apple 2009 hasn’t even caught up to ISMIR 2000, I guess.
Then again, a much more useful feature to me would have been putting 32 GB into the nano, rather than a video camera.
I agree that it’s fun to see more and more MIR technologies released in the commercial world. I work at Zune, and we launched our SmartDJ feature this year. I was curious to see how people would react to it, given that Apple had beaten us to the market and that Last.fm and Pandora also do a good job with recommendations and playlists. My basic concern was that Apple, Last.fm, and Pandora all have far more data to power their CF algorithms. So could we compete?
Our feature is a little different than the others, and given the comments from our users, I’m glad to hear people like it. We didn’t have the luxury of conducting online experiments with it before launching, but in this case we threw it out there, and people love it. It’s very gratifying.
Incidentally, Zune had Buy from FM about a year ago, but it’s not much smarter than what the iPod does. Even though devices like the iPod and now the Zune HD have a lot of processing power to extract audio features and ship them to the cloud, it takes a Shazam or other focused third-party to see an opportunity and take the risk. Speaking from my experience in the product world, getting to market soon and cheaply often trumps investment in the state of the art.
Tom: I absolutely think that you should still compete, even though the volume of your user-interaction data is much smaller than Apple’s/Last.fm’s etc. Why? Because I think that there is more to recommendation than collaborative filtering. Collaborative filtering is basically user-action driven, and does not (imo) adequately capture the full range of recommendation possibilities. That is, it does not take automated audio analysis into account. See this paper for a good overview:
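To make the “user-action driven” point concrete: pure collaborative filtering reduces to arithmetic on an interaction matrix, with no reference to the audio at all. A minimal item-item sketch (invented toy data, not Zune’s or anyone’s actual algorithm) might look like:

```python
import numpy as np

# Toy play-count matrix: rows = users, columns = songs (made-up data).
plays = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

# Item-item cosine similarity over the song columns.
unit = plays / np.linalg.norm(plays, axis=0, keepdims=True)
sim = unit.T @ unit

def recommend(user, k=2):
    """Score unheard songs by their similarity to songs the user played."""
    scores = sim @ plays[user]
    scores[plays[user] > 0] = -np.inf   # exclude already-played songs
    return np.argsort(scores)[::-1][:k]
```

Notice that a brand-new song with no plays yet gets a zero column and can never be recommended — the cold-start gap that content-based audio analysis can fill.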
You may be correct in saying that it takes a Shazam or another focused third party to see these opportunities and take these risks, though. Back at SIGIR 2004, I approached Sue Dumais from Microsoft Research and proposed doing music retrieval/recommendation. She basically said no, because (she said) Microsoft had already solved that problem. I think she basically meant that MSR had solved the audio song fingerprinting problem.. as Chris Burges had done at MSR back in 1998 (beating Shazam by 2 years, actually). But there is so much more to MIR than audio fingerprinting.
And in 2004, that is when the space was really starting to heat up. Last.fm had only been founded a few months earlier. Pandora was around, but it wasn’t B2C yet. So back then was the right time that Microsoft should have put an initial offering into the marketplace. I tried my best to convince ’em.. but ultimately could not.