The tech lovers at last week's MEX Mobile User Experience conference in London were treated to all manner of fantastical visions of our further mobile empowered futures; big data, connected cars, smart homes, Internet of Things, gestural interfaces, personal mini-drones—the lot.
Few presentations this year are complete without at least a passing reference to the game-changing nature or dystopian social implications of the soon-to-be-unleashed Google Glass. Surprisingly, however, a couple of jaw-dropping demonstrations at MEX were enough to leave many attendees wondering whether we might be missing a slightly quieter revolution taking hold. Could immersive audio be about to come of age in mobile user experience?
Having played second fiddle to the visual interface for decades, so often the preserve of experimental art installations or niche concepts for the blind, audio has yet to find mass interaction application beyond alarms, alerts, ringtones and the occasional novelty bottle opener. All of this, however, could be set to change if the two fields of binaural sound and dynamic music can find their way into the repertoire of interaction designers.
Binaural Audio Spatializes Interaction
Hardly a new phenomenon (though not always a well-known one), Papa Sangre is regarded as the 'best video game with no video ever made.' Since its release back in 2011, the audio game for iOS has been a hit with both the visually impaired and the fully sighted. The game plunges players into a dark, monster-infested fantasy with only their ears to navigate the three-dimensional underworld and rescue the damsel in distress. The incredible 3D sound effects are achieved with headphones and binaural audio—an effect that replicates the experience of hearing a sound wave originating from a certain direction, hitting one ear before the other. Use of the screen is disconcertingly limited to a rudimentary compass-like dial (determining the player's virtual direction of movement) and two foot buttons, pressed to take steps into the darkness. Never has a computer game monster been so terrifying as when you can't actually see it.
The creators, London-based SomethinElse, developed the game by first mapping the experience of sound arriving from hundreds of directions using a binaural microphone—a stereo mic the exact shape and density of a human head, with pick-ups where the ear drums would be. The algorithmic engine this produced could then be put to work transforming any ordinary mono audio into a spatialised stereo output for listeners wearing headphones (with a fair dose of clever coding, of course).
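SomethinElse's engine is built on those measured binaural responses, but the core directional cue it reproduces (a sound reaching the far ear slightly later and slightly quieter than the near one) can be sketched in a few lines. The function below is a toy illustration of that principle, not the actual engine's method; the head-width figure and gain curve are rough assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, in air
HEAD_WIDTH = 0.18       # approximate distance between the ears, metres

def spatialise(mono, azimuth_deg, sample_rate=44100):
    """Pan a mono signal (a list of floats) to stereo using a crude
    interaural time difference (ITD) and level difference (ILD).
    azimuth_deg: 0 = straight ahead, +90 = hard right, -90 = hard left.
    Returns (left_channel, right_channel)."""
    az = math.radians(max(-90.0, min(90.0, azimuth_deg)))
    # The far ear hears the wavefront later...
    itd_seconds = (HEAD_WIDTH / SPEED_OF_SOUND) * math.sin(abs(az))
    delay = int(round(itd_seconds * sample_rate))
    # ...and slightly quieter (head shadowing), crudely modelled here.
    near_gain = 1.0
    far_gain = 0.6 + 0.4 * math.cos(az)
    padded = mono + [0.0] * delay          # near ear: no delay
    delayed = [0.0] * delay + mono         # far ear: delayed copy
    near = [s * near_gain for s in padded]
    far = [s * far_gain for s in delayed]
    # Positive azimuth puts the source on the right, so right is the near ear.
    return (far, near) if azimuth_deg >= 0 else (near, far)
```

At 44.1 kHz a source hard to one side arrives at the far ear roughly 23 samples late, about half a millisecond, which is all the brain needs to place it.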
Following a successful sequel, NightJar, developed with chewing gum manufacturer Wrigley for its 5 brand, SomethinElse took to the floor at MEX to announce the release of their new Papa Engine, which not only promises to improve the experience of their current and future audio games but will also provide "external developers access to the powerful 3D audio functionality of the library." Whilst this will likely mean a deluge of dubious-quality copycat games hitting app stores any day now, interaction-inclined minds might also be humming with ideas on how this gaming technology could be repurposed for the real world. Rather than relying on our terribly obtrusive Glass all the time (switching to 'coma face' mid-conversation as we consult our Silicon Valley spectacles), what if mobile devices could help headphoned urbanites navigate streets through spatialised sound? Could our ears be transported to the stands at a distant concert or sports event as we watch on screen, with all the noises of the crowd and action accurately recreated in three dimensions around us? Might recorded music become a spatial experience? Could we listen to what it might be like to stroll across the main stage at Glastonbury, or wander through an orchestra at the New York Philharmonic?
Whether binaural audio can be achieved through the bone conduction favoured by student headphone design concepts, and reportedly a feature of Glass, remains to be seen (if anyone does know, please enlighten us). A merry alliance between bone vibration and binaural audio might give sound the augmentative power that Glass is eagerly anticipated to bring to the visual realm.
Dynamic Music Is the Soundtrack to Your Life
Another old concept finding new mobile-tech application is recorded music that never sounds the same twice—varying randomly or with listener interaction—Brian Eno's generative music being the earliest and best-known example. Combining the skills of composition and programming, reactive and interactive music has been making waves in the App Store in recent years—Eno himself has released a number of soundscape-creating apps for touchscreens, to some acclaim. Bluebrain, an interactive-music two-piece, made headlines when they released location-specific music apps for the National Mall in D.C. and for Central Park: the music plays only when the listener is in the correct geo-location, the pieces progressing movement by movement as the listener walks through the outdoor environment.
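Bluebrain haven't published their trigger logic, but the basic mechanic of playing a movement only when the listener stands inside a zone reduces to a great-circle distance check against GPS coordinates. A minimal sketch, with purely illustrative coordinates:

```python
import math

EARTH_RADIUS_M = 6_371_000

def in_zone(lat, lon, zone_lat, zone_lon, radius_m):
    """Return True if (lat, lon) lies within radius_m metres of the
    zone centre, using the haversine formula. A hypothetical stand-in
    for a location-aware music app's geofence test."""
    dlat = math.radians(zone_lat - lat)
    dlon = math.radians(zone_lon - lon)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat)) * math.cos(math.radians(zone_lat))
         * math.sin(dlon / 2) ** 2)
    distance = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    return distance <= radius_m
```

An app polling the phone's location could start, crossfade or silence each movement as this check flips from zone to zone.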
With a passion for all things musically interactive, music technologist Yuli Levtov of Reactify wowed MEX attendees by sharing his vision of a future where recorded music is experienced as an augmentation of our everyday lives. With an incredible beat-dropping performance using his CTRL interactive music application, Levtov first demonstrated how music could be manipulated dynamically through input from a smartphone—devices with a plethora of sensors that quietly track our movements in our pockets, day in and day out. The young musician then went on to imagine an idealistic future where the majority of our music is dynamic—recorded, that is, as a set of compositional, programmed parameters rather than a static line of notes and beats—and available as an everyday alternative to conventional tracks. Video games already use such dynamic soundtracks to raise and lower suspense depending on the player's progress, but what if, he wonders, the music on our devices reacted and developed with our real-world movements, similarly to Bluebrain's Central Park experience? What if a track started playing slowly as you began your evening jog, the drums kicking in as you speed up and the bass dropping and volume boosting the moment you break into a full sprint (isn't it frustrating when ordinary music doesn't quite keep up with your need for exercise motivation)?
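The jogging example boils down to mapping one sensor value (pace) onto musical parameters: which recorded stems are playing, and how intense the mix is. A toy sketch of such a mapping, with the thresholds and stem names entirely hypothetical:

```python
def arrangement_for_pace(speed_kmh):
    """Map a runner's speed to the set of active stems and a mix
    intensity between 0 and 1, the way a dynamic track's parameters
    might respond. All thresholds are invented for illustration."""
    stems = ["pads"]            # ambient layer always plays
    if speed_kmh > 6:
        stems.append("drums")   # jogging pace brings in the beat
    if speed_kmh > 12:
        stems.append("bass")    # sprinting drops the bass
    intensity = min(1.0, speed_kmh / 15)
    return stems, round(intensity, 2)
```

A real implementation would smooth the speed signal and crossfade stems on beat boundaries rather than switching them instantly, but the compositional idea, parameters instead of a fixed waveform, is the same.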
Ambitious though this vision may be, could mass adoption of dynamic music also present a new business model for the code-savvy musician? Would a move from producing passively consumed commodities to interactive and intelligent experiences be something music fans would be more inclined, or indeed more tied into, paying for? Maybe. But maybe not.
What may be more worthy of a moment's imagining is how the much-heralded wearable technology movement might interface with music. What if dynamic music varied with our heart rates as we make the morning commute? Will a swipe of a braceleted or smart-watched arm change the volume, switch the track, drop a beat? Could playlists be influenced by our pedometers? If Nike aren't already all over this, somebody else no doubt is.