"The Music Recommendation System is an automated system that provides music recommendations specifically tailored to each user to find new music that they might like. This system, designed by students at the University of Illinois (Champaign-Urbana), operates by taking ratings from your own iTunes playlists and comparing them against other users who have used the recommendation system."
Sounds good, eh? So I went there, downloaded it, and submitted the few hundred tracks I've rated in my iTunes library. Then I got this:
"Your estimated time remaining for results is 2.23 day(s)."
And that was with only 7% of my "catalogue" hand-rated. Ouch. Waiting 2.23 days for results points to a question of scale, methinks.
I don't mean to knock a bit of work which is clearly at the prototype stage, but perhaps this is an exhibit B backing up Clay Shirky's intriguing argument about situated software. It looks like a case of 'Web School' software design: something built for mass use, trying to scale and failing. As I write, there are 2,727 users in their database and 527,494 songs, with approximately 322 songs per user. By proudly displaying these kinds of stats, the project team seem to be broadcasting the notion that more recommendations here would necessarily be better, given collaborative filtering's requirement for 'scale'. More is more. Their software isn't scaling, though, and I wonder whether their recommendations are really going to deliver either. Even when something scales extremely well (say, Amazon) I don't get particularly interesting recommendations.
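For the curious, here's roughly why collaborative filtering wants scale. A minimal sketch of the user-based flavour, with entirely hypothetical users and ratings: you can only predict my rating for a song via other users whose rated songs overlap with mine, so with few users the overlaps (and the recommendations) get thin.

```python
# User-based collaborative filtering, in miniature. All user names,
# song names, and ratings below are invented for illustration.
from math import sqrt

ratings = {
    # user -> {song: rating from 1 to 5}
    "me":    {"song_a": 5, "song_b": 1, "song_c": 4},
    "user1": {"song_a": 4, "song_b": 2, "song_c": 5, "song_d": 5},
    "user2": {"song_a": 1, "song_b": 5, "song_d": 1, "song_e": 4},
}

def similarity(a, b):
    """Cosine similarity over the songs both users have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[s] * b[s] for s in shared)
    norm_a = sqrt(sum(a[s] ** 2 for s in shared))
    norm_b = sqrt(sum(b[s] ** 2 for s in shared))
    return dot / (norm_a * norm_b)

def recommend(target, ratings):
    """Score each song I haven't rated by the similarity-weighted
    ratings of everyone else, highest predicted rating first."""
    mine = ratings[target]
    scores, weights = {}, {}
    for user, theirs in ratings.items():
        if user == target:
            continue
        sim = similarity(mine, theirs)
        for song, r in theirs.items():
            if song in mine:
                continue
            scores[song] = scores.get(song, 0.0) + sim * r
            weights[song] = weights.get(song, 0.0) + sim
    return sorted(
        ((s, scores[s] / weights[s]) for s in scores if weights[s] > 0),
        key=lambda pair: -pair[1],
    )

print(recommend("me", ratings))
```

With three users this runs instantly; with 2,727 users and half a million songs, every new submission means comparing against everyone else's catalogue, which is presumably where the 2.23 days comes from.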
Better to create a situated recommendations service built around my friends and colleagues, perhaps? No need to scale there. Amongst those I know, there are some I'd turn to for music recommendations, and indeed I do, all the time: a near-subconscious, complex level of filtering and reputation modelling, drawing on history, context, intent, and all the richness of pattern in close, often physical, human interaction. In the complex cultural field of music recommendation, less may be more. I guess a Shirkian model would be a 'local' application enabling and nurturing that level of recommending amongst these <150-strong social groups. Basically, amongst me and my mates. Even then, reputation within a knowledge domain would need handling in a more sophisticated fashion than simply "my default online social group", as I have good friends, online a lot, whose music I wouldn't go near with a barge-pole. So something a lot closer to the (culturally, physically) localised applications described in Clay's piece makes a lot more sense.
"Situated software isn’t a technological strategy so much as an attitude about closeness of fit between software and its group of users, and a refusal to embrace scale, generality or completeness as unqualified virtues." [Clay Shirky]
Compared with the alternative of staring at one of the biggest 'hourglasses' I've ever encountered, it might be worth pursuing. Moreover, I'm not convinced that the eventual recommendations will necessarily be superior to the ones I get from my social set anyway. So there could well be a better 'closeness of fit' to explore here. But I remain open-minded, and will let you know.
University of Illinois: Music Recommendation System
Clay Shirky: Situated Software