Following on from the previous post on the contextual information we could infer from increasingly smart, self-aware, social informational products - or, how devices learn - here's a pointer to Matt Webb's great observations on using the Shuffle, here and here. I particularly like his thoughts on navigation in this screenless, backgrounded, 'blinking' context.
"Or maybe a better interface would be this: The shuffle should have two slider controls: volume and more/less like this. Don't like a track? Hit Less Like This and the next track is more randomised. Like a track? Hit More Like This and the next track is more likely to be from the same genre--hit it again and it's more likely to be from the same artist, the same album, share a BPM."
There are several axes we might ultimately want to enable. The classic ones Webb describes - artist, genre, whether the album container is held inviolate or fluid, and so on. But also the ones in this old chestnut about music's rich facets.
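Webb's more/less-like-this control could be thought of as a biased shuffle: each press just changes how heavily the next pick is weighted towards tracks that share facets with the current one. A minimal sketch in Python - the facet names, the BPM tolerance and the exponential weighting are my own assumptions, purely to make the idea concrete:

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Track:
    title: str
    artist: str
    album: str
    genre: str
    bpm: int

def similarity(a: Track, b: Track) -> int:
    """Count shared facets; BPM counts as shared if within 5 beats."""
    score = 0
    if a.genre == b.genre:
        score += 1
    if a.artist == b.artist:
        score += 1
    if a.album == b.album:
        score += 1
    if abs(a.bpm - b.bpm) <= 5:
        score += 1
    return score

def pick_next(current: Track, library: list, bias: int, rng=random) -> Track:
    """bias=0 is a pure shuffle; each More Like This press raises bias,
    making similar tracks exponentially more likely to come up next."""
    candidates = [t for t in library if t is not current]
    weights = [(1 + similarity(current, t)) ** bias for t in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]
```

So hitting More Like This increments `bias`, Less Like This decrements it back towards zero - the two-slider interface Webb describes, reduced to a single weighting parameter.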
That was written more with navigation on the web in mind, but some of these axes may be worth considering here too. More research required on the contexts of use around music (unless you know of some already?). Is it all artist, album, genre? Or mood, function, utility, musical structure etc.? I suspect it's more the latter than we tend to think, but we've focused on the former due to the organisational characteristics of the music industry and, well, the fact that it's easier.
Anyway, given we have an axis of sorts to navigate along, how might we present choice to the user - in a non-visual, gestural sense - without massively impinging on their otherwise-engaged thought processes?
Perhaps an audio preview might be a way of doing it? An audible signal indicating you're about to enter preview mode, and then a 3-second burst of the three tracks which match the closest across various axes. Click back and forth - or rock the device back and forth, or tug up and down - to shuffle quickly through them and hit play to select?
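That preview interaction could be sketched as a tiny state machine: rank the library by shared facets, keep the three closest, and let a rock/tug gesture step through their 3-second bursts until play is hit. A sketch under stated assumptions - the facet-overlap measure, the dict representation and the class shape are all mine, just to show the flow:

```python
def facet_overlap(a: dict, b: dict) -> int:
    """Number of facets (artist, album, genre, mood...) two tracks share."""
    return sum(1 for k in a if k != "title" and a.get(k) == b.get(k))

class PreviewMode:
    """Shortlist the k nearest tracks; rock to step through bursts, play to pick."""
    def __init__(self, current: dict, library: list, k: int = 3):
        ranked = sorted((t for t in library if t is not current),
                        key=lambda t: facet_overlap(current, t), reverse=True)
        self.candidates = ranked[:k]
        self.index = 0

    def rock(self, direction: int = +1) -> dict:
        """Tug forward or back through the shortlist (wraps around).
        The device would play a 3-second burst of the returned track."""
        self.index = (self.index + direction) % len(self.candidates)
        return self.candidates[self.index]

    def play(self) -> dict:
        """Commit to the currently previewed track."""
        return self.candidates[self.index]
```

The audible signal marking entry into preview mode, and the bursts themselves, sit outside this sketch - it only captures the choreography of shortlist, cycle, select.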
A bit like how we used to - or still do - drop the needle randomly in the middle of a track on a piece of vinyl for a second or so, to 'hear a glimpse' of what the track was like, almost bouncing the needle progressively through some broadly representative grooves. (The physical record provides a great sense of the duration of a track, of course, as well as the boundaries between tracks and their sequence in the context of the album. Top interface, Mr Emile Berliner!)

We can effortlessly preview a side of vinyl very quickly, with tangible physical and aural feedback, before selecting a track. It's very gestural. While it does provide information about duration, sequence and position in the context of a larger whole, it doesn't provide information about genre, artist, mood etc. - which it didn't have to, given its characteristics. But it might be something to look at in the context of Matt's interesting thoughts. It may still be too much to ask for heavily backgrounded use, but then this is where predetermined choice - still based on active learning from user behaviour, and therefore with implied agency - could be useful. Hmmm.
It's akin to Fabio Sergio's ever-smart and ever-prescient comments about navigating rich media on mobile phones. We can't be asking people to actively navigate through virtual space while they are actively concentrating on navigating themselves through physical space. These kinds of casual, instinctive, gestural interactions would seem more in tune with music listening on mobile devices than the complex, rich agency afforded by browsing a 2-foot GUI on a large screen with a keyboard etc.