Imagine a wearable device that lets you physically interact with interfaces that appear in front of you on any surface, where and when you want them. You can watch a video on your newspaper's front page, navigate a map on your dining table, and flick through photos on any wall. The SixthSense system from Pattie Maes' Fluid Interfaces Group at the MIT Media Lab does all this through a prototype built from roughly $350 worth of off-the-shelf components. You can even take a photograph by simply holding your hands in the air and making a framing gesture.
Though the current prototype is a Frankenstein-like assembly of webcams, projectors, mirrors, fingertip color markers, and helmets, it's not hard to imagine a streamlined device that could be easily donned.
The SixthSense prototype consists of a pocket projector, a mirror, and a camera, coupled in a pendant-like wearable device. Both the projector and the camera are connected to a mobile computing device in the user's pocket. The projector projects visual information, enabling surfaces, walls, and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision techniques. The software processes the video stream captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) on the tips of the user's fingers using simple computer-vision techniques. The movements and arrangements of these fiducials are interpreted as gestures that act as interaction instructions for the projected application interfaces. The maximum number of tracked fingers is constrained only by the number of unique fiducials, so SixthSense also supports multi-touch and multi-user interaction.
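The fiducial tracking described above can be approximated with very simple color thresholding: find all pixels close to a marker's known color and take their centroid as the fingertip position. This is a minimal pure-Python sketch, not the actual SixthSense implementation; `find_marker`, the frame representation, and the tolerance are all illustrative assumptions.

```python
# Hypothetical sketch of colored-fiducial tracking by color thresholding.
# A frame is modeled as a 2D grid (list of rows) of (r, g, b) pixel tuples;
# a real system would operate on camera frames, e.g. via OpenCV.
def find_marker(frame, target, tol=30):
    """Return the (x, y) centroid of pixels within `tol` of `target`
    on every color channel, or None if no pixel matches."""
    tr, tg, tb = target
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if abs(r - tr) <= tol and abs(g - tg) <= tol and abs(b - tb) <= tol:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Running one such detector per unique marker color yields one tracked fingertip per fiducial, which is why the number of distinct colors bounds the number of tracked fingers.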
The SixthSense prototype implements several applications that demonstrate the usefulness, viability, and flexibility of the system. The map application lets the user navigate a map displayed on a nearby surface using hand gestures similar to those supported by multi-touch systems, zooming in, zooming out, or panning with intuitive hand movements. The drawing application lets the user draw on any surface by tracking the movements of the user's index fingertip. SixthSense also recognizes the user's freehand gestures (postures). For example, the system implements a gestural camera that takes photos of the scene the user is looking at by detecting the 'framing' gesture. The user can stop by any surface or wall and flick through the photos he or she has taken. SixthSense also lets the user draw icons or symbols in the air with the index finger and recognizes those symbols as interaction instructions. For example, drawing a magnifying glass symbol takes the user to the map application, while drawing an '@' symbol lets the user check email. The system also augments physical objects the user is interacting with by projecting additional information about them onto their surfaces. For example, a newspaper can show live video news, or dynamic information can be provided on a regular piece of paper. Drawing a circle on the user's wrist projects an analog watch.
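The map gestures above reduce to comparing tracked fingertip positions across frames: two fingertips moving apart means zoom in, moving together means zoom out, and moving in unison means pan. Here is a minimal sketch under those assumptions; `classify_gesture` and its threshold are illustrative names, not part of the SixthSense codebase.

```python
import math

# Hypothetical sketch: interpret two tracked fingertip positions across
# two consecutive frames as a multi-touch map gesture.
def classify_gesture(prev, curr, move_thresh=5.0):
    """prev and curr are ((x1, y1), (x2, y2)) fingertip pairs.
    Returns 'zoom_in', 'zoom_out', 'pan', or 'idle'."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d_prev = dist(*prev)   # finger separation in the earlier frame
    d_curr = dist(*curr)   # finger separation in the later frame
    if d_curr - d_prev > move_thresh:
        return 'zoom_in'       # fingers spread apart
    if d_prev - d_curr > move_thresh:
        return 'zoom_out'      # fingers pinched together
    if dist(prev[0], curr[0]) > move_thresh:
        return 'pan'           # separation unchanged, both fingers moved
    return 'idle'
```

The same comparison-over-frames idea extends to symbol recognition, where the fingertip's path over many frames is matched against stroke templates.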
The current prototype system costs approximately $350 to build.
Adam Greenfield, author of Everyware: The Dawning Age of Ubiquitous Computing and current head of design direction for service and user-interface design at Nokia, has a new manifesto on his blog called "Towards urban systems design," based on a talk he'll be delivering in November at the Pompidou. The abstract...
Weekly finds from the 3D world.

SolidWorks
The future of CAD 2019, as predicted by SolidWorks
Video: Design optimization in SW2010
Assembly tip: Showing configurations in imported parts
Surfacing tutorial: Making a mouse head
Eyecandy: Lots and lots and lots of pics of Blue Realm Studios' helmet models for Halo 3 ODST, modeled in SW (sample...
In an entry titled "The Looming Dark Horizon: When the IP Mess Hits Industrial Design & Co.," the reBang weblog envisions a file-sharing future that ought to terrify ID'ers, or at least their parent companies:
At some point, p2p networks won't have just mp3 files, they'll have CAD files. When they...
Although we found the Microsoft Research technologies demonstrated here mostly uninspiring (unlike last week's cool multi-touch demonstration by BumpTop), they will be of interest to anyone studying or working on interface design. Microsoft's five experimental mice use a variety of sensing technologies to translate user gestures into on-screen actions, giving...