We've always assumed that gesture control will be the wave of the future, if you'll pardon the pun. We also assumed it would be perfected by developers tweaking camera-based input. But now Elliptic Labs, a spinoff company from a research outfit at Norway's University of Oslo, has developed the technology to read gestures via sound. Specifically, ultrasound.
In a weird way this is somewhat tied to Norway's oil boom. In addition to the medical applications of ultrasound, Norwegian companies have been using ultrasound for seismic applications, like scouring the coastline for oil deposits. Elliptic Labs emerged from the Norwegian "ultrasonics cluster" that popped up to support industrial needs, and the eggheads at Elliptic subsequently figured out how to use echolocation on a micro scale to read your hand's position in space.
With Elliptic Labs' gesture recognition technology the entire zone above and around a mobile device becomes interactive and responsive to the smallest gesture. The active area is 180 degrees around the device, and up to 50 cm with precise distance measurements made possible by ultrasound... The interaction space can also be customized by device manufacturers or software developers according to user requirements.
Using a small ultrasound speaker, a trio of microphones and clever software, a smartphone (or anything larger) can be programmed to detect your hand's location in 3D space with a higher "resolution" (read: accuracy) than cameras, while using only a minuscule amount of power. And "Most manufacturers only need to install the ultrasound speaker and the software in their smartphones," reckons the company, "since most devices already have at least 3 microphones."
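The echolocation math itself is simple, even if the product engineering isn't: sound travels at roughly 343 m/s, so an echo's round-trip delay implies a distance, and multiple microphones let you triangulate. Here's a rough sketch of the idea, a 2D simplification that treats each microphone as measuring its own round-trip time to the hand (explicitly not Elliptic Labs' actual algorithm):

```python
# Toy echolocation sketch (not Elliptic Labs' actual algorithm).
# Each microphone is assumed to measure its own round-trip echo time
# to the hand; three such ranges pin down a 2D position.

SPEED_OF_SOUND = 343.0  # m/s in room-temperature air

def echo_distance(round_trip_s):
    """One-way distance implied by a round-trip echo delay."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def trilaterate_2d(mics, dists):
    """Recover (x, y) from three (mic_position, range) pairs by
    subtracting pairs of circle equations, which makes the system linear."""
    (x1, y1), (x2, y2), (x3, y3) = mics
    r1, r2, r3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # nonzero as long as the mics aren't collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

A 2 ms round trip, for instance, puts the hand about 34 cm away, comfortably inside the 50 cm active zone the company describes.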
The demo of the technology, which they're calling Multi Layer Interaction, looks pretty darn cool:
Once upon a time, industrial designers, animators, graphic designers and illustrators physically used acetate or mylar sheets as overlays on drawings. Newer generations of creatives now understand this concept as Photoshop layers, which can easily be clicked on and off digitally. But now a team of researchers has combined the physical and digital with "a new thin-film, transparent sensing surface" they're calling FlexSense.
Developed in collaboration between two Austria-based outfits—the Media Interaction Lab, which researches human-computer interaction, and the Institute for Surface Technologies and Photonics—and Microsoft Research, FlexSense appears to be nothing more than a good ol' acetate overlay, albeit embedded with thin sensors. But since the sheet can precisely sense the manner in which the user is deforming it, coupling it with clever software can lead to some interesting interactions. You can skip the first half of the video below, which is mostly egghead-speak, but be sure to tune in at 2:05 to see the proposed applications:
While the interface is probably too abstruse for your average consumer, it's easy to see applications that would be perfect for ID and other creative fields. I'd love to see Wacom buy this technology and incorporate it into their stuff.
If you want to call your friend Jim, you can say "Call Jim" into your phone and it dials him. Five years ago you'd click on the name "Jim" in your phone and it would dial him. Twenty-five years ago, you'd call Jim by punching his number into a touch-tone phone. Fifty years ago you'd dial Jim's number on a rotary dial.
Before that is where it gets interesting.
Sixty years ago, you'd lift your telephone receiver and be met with silence. (There was no such thing as "dial tone" yet.) You'd tap the hang-up mechanism a few times and an operator—an actual human being sitting in a room waiting for just this moment—would come on the line. You'd then say "Please connect me to [two-letter district code followed by five-digit phone number]." The operator would then plug freaking wires into a switchboard and connect you to Jim.
So when the Bell System started incorporating this amazing new interface called a "rotary dial" into their telephones, they needed to show consumers how to use them. Watch and be amazed:
The iPhone 6 and 6 Plus roll out today, and uptake will be massive. In addition to the insane sidewalk lines you'll shortly see on the news, Apple has racked up a staggering 4 million pre-orders. iOS app developers who upgrade their offerings will have a ready market, but they "can't just treat screens in the 5.5-inch range simply as a scaled-up version of a smaller phone," writes mobile products developer Scott Hurff, citing basic ergonomics. "[With the larger sizes] grips completely change, and with that, your interface might need to do so, as well."
To help app developers who haven't already made their bones on already-large Android devices, Hurff has released "Thumb Zone" maps on his blog. Research from Steven Hoober, author of Designing Mobile Interfaces, concluded that the majority of users prefer to use smartphones one-handed, and Hurff used Hoober's data to create visual representations of where your thumb can, can't, and can kind of reach on various models of iPhone:
Then he puts Thumb Zones for the 6 and 6 Plus side-by-side:
This is where you start to see a sharp difference brought about by a much larger screen size. The sheer width of the 6 Plus means the thumb can no longer naturally reach all the way to the left edge, while the different grip required to support the larger device also changes the shape of the "Natural" area.
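Hurff's maps come from Hoober's empirical data, but the underlying idea is easy to caricature in code: treat the thumb as sweeping from a pivot near the device's lower corner and bucket screen points by radial distance. The pivot placement and radii below are invented round numbers for illustration, not Hurff's or Hoober's figures:

```python
import math

# Toy "Thumb Zone" classifier. The pivot placement and radii are made-up
# illustrative values, not Hurff's or Hoober's measurements.

def thumb_zone(x_mm, y_mm, width_mm, height_mm,
               natural_mm=60.0, stretch_mm=90.0):
    """Classify a point (x, y), measured from the screen's top-left
    corner in millimeters, for a right thumb pivoting at the
    device's lower-right corner."""
    dist = math.hypot(x_mm - width_mm, y_mm - height_mm)
    if dist <= natural_mm:
        return "natural"
    if dist <= stretch_mm:
        return "stretch"
    return "hard"
```

On a screen roughly the 6 Plus's size (about 68 × 122 mm), the upper-left corner lands deep in the "hard" bucket for a right thumb, which is exactly the shift Hurff's side-by-side maps make visible.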
As designers, we find it amusing that there are Apple lovers who hate Samsung and vice versa. What the layperson doesn't seem to grasp is that the rivalry is good for the advancement of UI design. While Apple typically marches to their own drum, and reportedly had no interest in producing a smartphone with a larger screen, Samsung's dominance in that area has driven Cupertino to increase the size of the new iPhones they'll be announcing tomorrow; and in anticipation of that event, Samsung has attempted to steal a march by announcing their new Galaxy Note Edge last week.
At first glance the unusual, asymmetrical, curved-glass design of the Galaxy Note Edge just seems plain weird. But look at this video by Marques Brownlee demonstrating the intended functionality:
An interface design is not successful just because you can figure out how to work it. The true test is whether you can explain to your parents, over the phone, how to work it. For any of you who have served as de facto tech support for your folks in this manner, this spot-on video by comedian Ronnie Chieng will be the funniest thing you'll see all week:
YouTube is of course a Google product, and they've got a lot more to worry about than how to delete comments—namely, their Android mobile OS intended for the next generation of smartphones, tablets, smartwatches and Glass. To that end, the Google Design site aims to spread the gospel of their design approach while laying down guidelines for those looking to operate within the Googleverse.
They've coined their approach to interface design "Material Design." By this they mean that interface elements ought to mimic the behavior of a physical material. This does not refer to skeuomorphism, like Apple's scuttled faux-stitched-leather etc.; rather, they mean that physical materials have easily comprehensible properties, and that this predictability ought to be emulated. You can pick a piece of paper up, flip it over, fold it in half, write on one side, write on the other. It does not zoom around your desk on its own nor spontaneously change color, but instead obeys the laws of physics and your physical manipulations.
There's a good reason we are experiencing the rise of the so-called "visual web." Our minds are wired to be attracted to visuals over text—most of our brain's real estate is devoted to sight, with the visual cortex alone making up a third of the brain. And the emerging trend of curved screens for smartphones and TVs feeds right into our desire for awesome images.
There are a few concave screens already on the market, and some say the iPhone 6 will show up with a curve in its screen. Market research may have found that users feel curves make for a more immersive experience, but there are also scientific studies showing that we have a desire for curved things.
Such reports come from a relatively new field of science: neuroaesthetics, where neuroscience (the study of the brain) meets our appreciation for art and beauty.
A group from the University of Toronto recently studied how our brains react to rooms in a house. They had subjects look at photos of rooms while their brains were scanned in an fMRI (functional magnetic resonance imaging) machine.
And the scans revealed that the pleasure centers of their brains "lit up" when they looked at rooms with curved features, as opposed to the more typical sharp edges. Sharp-edged rooms, in fact, lit up areas of the brain normally associated with detecting threats.
Curved screens for digital hardware have long been constrained by manufacturing—but no longer.
One can't help but notice all of the experimentation going on in the wearable devices field. Nothing has gained ubiquitous traction, but that's not for lack of trying; the field includes Google Glass, Nike's Fuelband (R.I.P.), Jawbone's Up, a variety of bluetooth earpieces, Samsung's Galaxy Gear Smartwatch, and whatever Apple's forthcoming iWatch will be, to name a few.
There is of course a real estate issue with the human body, as there are only so many places you can park a device. With the eyes, ears, and wrists already being targeted, industrial design firm Whipsaw (like Autodesk before them) is looking to the fingers. Their Nod is a touchless gesture-control device meant to be worn as a ring:
I'm definitely among those who have been waiting for Minority Report-like gesturing to become a reality. While light beams on desks and walls seem close, it's not our hands manipulating objects in thin air. But now researchers at the University of Bristol have developed the starting point, called MisTable. And they're doing it with mist.
Words fail to properly describe the look of this thing: a tabletop computer system projects images onto a thick blanket of fog, where they appear as ghostly apparitions, much like R2-D2's projection of Princess Leia.
We can interact with the 3D images by sticking our hands into the 'objects' and moving them—maybe to the person sitting next to us. At this point it's simple stuff, but it still means moving something as if it were actually tangible. Check out the video:
It's an increasingly pressing question in this day and age, and one that has certainly seen some interesting responses—including this interdepartmental collaboration from Swiss design school ECAL—as an evolving dialectic between two closely related design disciplines. Exhibited in Milan's Brera District during the Salone del Mobile last week, "Delirious Home" comprises ten projects that explore the relationship between industrial design and interaction design. (Naoto Fukasawa, for one, believes that the former will eventually be subsumed into the latter as our needs converge into fewer objects thanks to technology.)
Both the Media & Interaction Design and the Industrial Design programs at the Lausanne-based school are highly regarded, and the exhibition at villa-turned-gallery Spazio Orso did not disappoint. In short, professors Alain Bellet and Chris Kabel wanted to riff on the "smart home" concept—the now-banal techno-utopian prospect of frictionless domesticity (à la any number of brand-driven shorts and films). But "Delirious Home" transcends mere parody by injecting a sense of humor and play into the interactions themselves. In their own words:
Technology—or more precisely electronics—is often added to objects in order to let them sense us, automate our tasks or to make us forget them. Unfortunately until now technology has not become a real friend. Technology has become smart but without a sense of humor, let alone quirky unexpected behavior. This lack of humanness became the starting point to imagine a home where reality takes a different turn, where objects behave in an uncanny way. After all, does being smart mean that you have to be predictable? We don't think so! These apparently common objects and furniture pieces have been carefully concocted to change and question our relationship with them and their fellows.
Thanks to the development of easily programmable sensors, affordable embedded computers and mechanical components, designers can take control of a promised land of possibilities. A land that until now was thought to belong to engineers and technicians. With Delirious Home, ECAL students teach us to take control of the latest techniques and appliances we thought controlled us. The students demonstrate their artful mastery of electronics, mechanics and interaction, developing a new kind of esthetic which goes further than just a formal approach.
The ultimate object—still missing in the delirious home—would be an object able to laugh at itself.
Photos courtesy of ECAL / Axel Crettenand & Sylvain Aebischer
Volvo recently introduced a trio of concept cars: the Concept Coupe, the Concept XC Coupe and the Concept Estate. It is the latter that has most caught our eye because it is, quite oddly to us Yanks, a two-door station wagon. In America, the station wagon has always been about families, but by omitting rear doors, Volvo seems to be aiming this concept at the childless couple that likes to ski.
The Concept Estate brings with it Volvo's bold new styling direction, both inside and out, that's a million miles (er, kilometers) away from the Swedish carmaker's designed-by-Etch-a-Sketch look that we grew up with:
It's been over a year since we've seen interactive restaurant tables in the news, but here comes a new one from Pizza Hut. Yes, the American fast food joint is hoping that if their deep-dish pizzas aren't enough to get you inside, perhaps their fee-yancy touchscreen table will be. Have a look:
What's interesting about this, from a business perspective, is that Pizza Hut is owned by Yum! Brands, which also owns KFC and Taco Bell. While the last interactive restaurant table we looked at was integrated into a one-off restaurant, Yum! Brands (God I hate typing that stupid exclamation point in their name) has some 40,000 restaurants in over 125 countries.
As for the actual interface design (which was done by creative firm Chaotic Moon), it still seems a bit cutesy to me; I'm not confident that people will want to do a two-finger drag to choose a pie size, for instance—I suspect they'd rather just hit an S, M or L button. But the visual representation of how large something is will probably prove popular. And once the balance between what the technology can do and what people actually want has been worked out, if Y!B decides to move ahead with this concept, we could see mass uptake in a relatively short time period, on account of their size. Presumably they've got the juice to require individual franchisees to integrate these units, handily spreading the costs out.
Someone has finally taken note that throughout the day, we use our smartphones in at least two different ways. There's the active way, where you're futzing around with an app and your thumbs are flying across the screen. Then there's the passive way, where you're glancing at it to reference some piece of information you need. And with that latter usage, it would be better if the information was persistently presented, not something you had to call up by doing a home-button-press/swipe/access-code-enter/app-button-press.
Thus Russian tech manufacturer Yota Devices produces the Yota Phone, billed as "The world's first dual-screen, always-on smartphone." One side sports the color touchscreen we're all familiar with; flip the thing over and there's a black-and-white, EPD electronic-ink-type display that draws no power once its pixels are in place. (The image or text will stay "burned" there even if the phone's battery dies.) In other words, you send whatever data you want to that second screen and it stays there, ready for immediate viewing when you pull the phone out of your bag, no button presses necessary. If I owned this phone I'd constantly avail myself of the convenience of having a grocery list, boarding pass, map snippet, reference dimensions, addresses and appointment times, etc.
Woman shopping for groceries in South Korea at a HomePlus display using her mobile phone
Earlier this month, Adaptive Path held the Service Experience conference in San Francisco, CA. The conference invited designers and business leaders who are out there 'in the trenches' to share insights, tips, and methods from their case studies in service design.
Service Design is an emergent area of design thinking that's been percolating in design circles for many years. Though corporate brands like Apple, Nike, P&G and Starbucks have built their success on the principles of good service design, it's an approach getting more serious consideration in countries like the U.S. after years of being developed in Europe.
Service Design, Service Experience, or Consumer Experience is a design approach that recognizes that the process by which a product is made, and the organization that produces it, not only affect the product but also define the experience of it. Service Design spans many ecosystems, including a company's own internal culture, its approach to production and development, and the context of the product as it exists in the day-to-day life of its users. Think about how Apple represents not only the product, but also customer service combined with the branded architectural experience of the Apple Store. Or how Tesla Motors is not only considering the product (an electric vehicle) but also mapping out a plan for a network of electric charging stations in California.
Service Design is a holistic approach that takes into consideration the end-to-end experience of a product, whether it be a car, a computer, a trip, or a book. It is invested in creating the infrastructure that supports and empathizes with human needs by prioritizing people and experiences over technology during the design process. It is an approach that can be applied across fields.
Swimming in Culture
A key perspective of Service Design is the ability to grasp organizational culture. Ever wonder why you had a great time working for one company and another company, not so much? Maybe it's not all 'in your head.' According to keynote speaker David Gray of Limnl, culture is a summation of the habits of a group; "people swim in culture the way fish swim in water," he says, illustrating the point with an analogy of dolphins and sharks.
Illustration from David Gray's presentation. (People may prefer to self-identify as a dolphin rather than a shark.)
In order to change culture, one must first find its foundation. Ask dumb questions, talk to the newbies, and gather evidence. The evidence (what you see) usually leads to levers (how and why decisions are made, and the protocols used), which lead to the company values (the underlying priorities and what's considered important), which in turn uncover foundational assumptions (how the organization views the way the world works, and the reasoning behind those values).
Although it launched nearly a year ago, I'm surprised that an app called How.Do didn't turn up on our radar—after all, an app for making quick'n'dirty how-to tutorials is right up our alley. Thankfully, co-founder Emma Rose Metcalfe reached out to us on the occasion of the launch of How.Do Two.Oh (Version 2.0, that is), released yesterday to coincide with iOS 7 and this weekend's World Maker Faire. (Her fellow co-founders, Nils Westerlund and Edward Jewson, round out the venture-backed, Berlin-based team.)
Viewable both through the free app and online, the Micro Guides are concise user-generated slideshows with audio, an ideal format for step-by-step tutorials and on-the-go reference guides. Insofar as the app hits a sweet spot in the maker/fixer/lifehacking movement, the How.Do team will be reporting from World Maker Faire tomorrow and Sunday, offering a unique window into the festivities at the New York Hall of Science—follow them on Twitter @HowDo_ to get the scoop!
Busy as she is this weekend, Metcalfe took a few moments to share her thoughts at this exciting time for the growing company.
Core77: What inspired you to create How.Do in the first place?
Emma Rose Metcalfe: How.Do is the intersection of my MFA research in sharing and distributing meaningful experiences and Nils' interest in the challenges of scaling projects for large communities. He had left SoundCloud to finish his studies at Stockholm School of Entrepreneurship, where the two of us met. Long story short, we came home from a design bootcamp in India wanting to work on something together. We shared the belief that knowledge is deeply personal. The space created between the emotional power of sound and the fantasy of image is incredibly profound—we wanted to harness that to make sharing and learning feel good.
As we recently saw, Ford has been experimenting with ways for drivers to use real-time vehicle information. Now competitor Chevrolet is also throwing their hat into this ring with a new, configurable dashboard display in the 2014 Corvette Stingray.
For the Fast & Furious set, the Stingray's dash can display acceleration and lap timers, as well as surprisingly techie stuff like a "friction bubble" displaying cornering force and a gauge showing you how hot the tires are. (Hot tires have better grip, which is why you see F1 drivers violently zigzagging on their way to the starting line; they're trying to get some heat on.)
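The cornering-force math behind such a display is just centripetal acceleration. The production car presumably reads this off accelerometers rather than computing anything, but as a quick sketch of the quantity a "friction bubble" plots:

```python
# Lateral g-force in a steady corner: a = v^2 / r.
# A sketch of what a "friction bubble" displays; the real car would
# read this from accelerometers rather than compute it from geometry.

G = 9.81  # standard gravity, m/s^2

def lateral_g(speed_kmh, radius_m):
    """Lateral acceleration, expressed in g, for a given speed and
    cornering radius."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return (v * v) / radius_m / G
```

At 90 km/h around a 50-meter-radius corner, that works out to 25² / 50 = 12.5 m/s², or about 1.27 g—which is exactly where hot, grippy tires start to matter.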
For drivers in less of a rush, the dash can be set to display more practical information like fuel economy, what the stereo's playing or navigational details. I think the latter one in particular is a good move, as having route guidance graphics front and center behind the steering wheel is a lot better than having to shift your gaze to the center of the entire dashboard.
There are 69 different pieces of information the system can display, divided into three main themes: Tour, aimed at commuters and long-distance driving; Sport, which provides a pared-down, classic-looking radial tachometer; and Track, which gives you the hockey-stick tach, shift lights and an enlarged gear indicator. "Each of these three themes," says Jason Stewart, General Motors interaction designer, "can also be configured so that drivers can personalize their experience in the Stingray."
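One way to picture the configuration model: each theme defines a base set of gauges, which the driver can then extend from the pool of 69 available readouts. The theme and gauge names below are illustrative stand-ins, not GM's actual identifiers:

```python
# Hypothetical sketch of a configurable-cluster theme model.
# Theme and gauge names are illustrative, not GM's real identifiers.

DASH_THEMES = {
    "Tour":  ["fuel_economy", "audio", "navigation"],
    "Sport": ["radial_tachometer"],
    "Track": ["hockey_stick_tach", "shift_lights", "gear_indicator"],
}

def gauges_for(theme, extras=()):
    """Base gauges for a theme plus any driver-selected extras,
    skipping duplicates."""
    base = list(DASH_THEMES[theme])
    return base + [g for g in extras if g not in base]
```

So a driver who wants the Sport theme but also cares about mileage would get the pared-down tach plus a fuel-economy readout, which is the kind of personalization Stewart describes.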
It's very strange that Google Glass is not mentioned once in this news segment. Researchers at Taiwan's Industrial Technology Research Institute (ITRI) have developed this eyeglass-based display, below, that uses images projected onto the lenses, and depth cameras focusing beyond the lenses, to create the functional illusion of operating a "floating touchscreen":
ITRI is simply the latest research group to use depth cameras to track our fingers, triggering a microprocessor to recognize an actionable "touch." Most recently we saw this with Fujitsu Labs' FingerLink Interaction System. So you might wonder why we're looking at this—isn't it just a combination of existing technologies that we've all seen before? It is, but so were the iPod, the iPhone and the iPad when they first came out.
Our friends at frog design recently released a short documentary on Industrial Design in the Modern World, a kind of iterative manifesto (the consultancy's first but certainly not their last), featuring several key players of the design team. We had a chance to catch up with Creative Director Jonas Damon on the broader message of the piece, as well as his thoughts on user experience and a possible revision to Dieter Rams' canonical principles of design.
Core77: Can you elaborate on the points you touch on in the opening monologue? Specifically, to what degree do 'traditional' (or outdated) forms and materials embody value or character? For example, I recently came across an iPod speaker in which the dock opens like a cassette tape deck, evoking a certain nostalgic charm despite being rather impractical (it was difficult to see the screen behind the plastic).
Jonas Damon: The opening monologue is about the physical constraints that have guided forms in the past vs. forms today, and the opportunities that arise from the absence of these constraints. 'Honesty' in design is a widely admired quality, and in the past that honesty was expressed by skillfully sculpting with and around a given product's physical conditions, rather than just hiding or disguising these. So when products were more mechanical, they had a more imposing DNA that informed their character; their mechanics largely defined their identities. Many product types came preconditioned with an iconic, unmistakable silhouette.
Today, most products in the consumer electronics space can be made with a rectangular circuit board, a rectangular screen, and a rectangular housing. Therefore, the natural expression of these products today is limited to a rectangle—not really a unique identity. Expression of character becomes more nuanced and malleable. With that newfound freedom, we have to be more sensitive, judicious and inventive. These days, 'honesty' is more complex and difficult to design for, as it's about the intangible aspects of the brand the product embodies.
Traditional forms and materials have cultural value because of their iconic, built-in character. The starting point for many contemporary consumer electronics forms is generic and sterile, so historical forms are often tapped to artificially trigger our memory-based emotions. It's been a popular fallback that we may be a little tired of these days, but on occasion it's been well executed, and even that can have merit.
Of course, the 'flat black rectangle' effect also implies a shift from traditional form-follows-function I.D. to a broader, UX-centric approach to design (i.e. some argue that Apple's focus on iOS7 is simply a sign that they've shifted from hardware innovation to the UX/software experience). What is the relationship between hardware and UX?
Hardware is an integral part of UX. A true "user experience" is multi-sensory: when you engage with something, don't you see, feel, hear, maybe even smell that which you are engaging with? (I'm not sure why anybody refers to solely screen-based interactions as "UX"; that notion is outdated.) As an Industrial Designer, I am a designer of User Experience. ID has gotten richer since we've started considering "living technology" as a material. By "living technology," I mean those elements that bring objects to life, that make them animate and tie them to other parts of the world around us: sensors, screens, haptics, connectivity, software, etc. By claiming these elements as part of our domain (or by tightly embedding their respective expert designers/engineers in our teams), we are able to create holistic designs that are greater than the sums of their parts.
In addition to unveiling their redesigned Mac Pro, yesterday Apple also previewed their forthcoming iOS 7. This is the one many an industrial designer has been waiting to see; we all know Jonathan Ive can do hardware, but iOS 7 will be the first real indication of what software will look like under Ive rule—and whether he'd be given free rein. Former Apple executive Scott Forstall was famously a proponent of skeuomorphism, the inclusion of real-world elements—stitched leather, lined legal pads, spiral bindings—that many in the design community found tacky and backwards-looking. Following his ouster, Ive was placed in charge of iOS design, and he's made it no secret that he intended to Think Different.
Well, based on what we're seeing, we're happy to report that it seems Ive's creative control is complete.
The first thing users will likely note is the change in typography. Just as Forstall's beloved word "skeuomorphism" has an unusual sequence of three vowels in a row, Ive has switched the font to what looks to be Helvetica Neue Ultra Light, which has an equally foreign sequence of contiguous vowels. The resultant look is undoubtedly more modern (though your correspondent prefers thicker fonts for legibility's sake).
"Flatness" is the watchword of the day, and the new iOS has it in spades. In the past decade and a half, icons have spun steadily out of control; what were once simple representations of objects, necessarily drawn in low-res due to computing constraints, unpleasantly evolved into overcomplicated, miniaturized portraits. Ive's flat design approach returns to the roots of the graphic icon, eschewing 3D shading and instead using line to tell the tale. With the exception of a couple of icons—the Settings gears and Game Center's balloons—shading is completely absent. The cartoonish highlights on the text message word bubbles are gone. Background gradations are the only non-flat visual variation allowed.
Interestingly enough, the keypad now looks like something graphically designed by the Braun of yore...
Among the many criticisms leveled against New York City's new bikeshare program, I'm particularly perplexed by the notion that the stations are a blight upon Gotham's otherwise pristine streetscapes—at worst, they're conspicuously overbranded, but, as many proponents have pointed out, they're no worse than any other curbside eyesore. Although the city is making a conscious effort to reduce the visual overstimuli at street level, it's only a matter of time before static signage simply won't suffice.
While Maspeth Sign Shop continues to crank out aluminum signs, BREAKFAST proposes an entirely novel concept for interactive, real-time wayfinding fixtures. "Points" is billed as "the most advanced and intelligent directional sign on Earth," featuring three directional signs with LEDs to dynamically display relevant information. However, "it's when the arms begin to rotate around towards new directions and the text begins to update that you realize you're looking at something much more cutting-edge. You're looking at the future of how people find where they're headed next."
Here in NYC we've got a billionaire mayor, and you've probably heard of the device that made him rich, the Bloomberg Terminal. For those of you who haven't, it's an integrated computer system and service feed offering real-time financial data and trading.
For finance peeps, Bloomberg Terminals are like potato chips, in that you can't have just one. Your average user rocks a two-, four- or six-monitor set-up....
Color me impressed! I figured the next generation of designer-relevant input devices would come from Apple or Wacom, but surprise—it's Adobe. The software giant is venturing into hardware, and their resultant Project Mighty looks pretty damn wicked so far.
The Adobe Mighty Pen is designed for sketching on tablets, and it's got at least two brilliant features integrated with their drawing app: Since the screen can distinguish between the pen's nib and your mitts, you can draw with the pen, then erase with your finger. No more having to click a submenu to change the tool. And when you do need a submenu, you click a button on the pen itself to make it appear on-screen.
The truly awesome device, however, is the pen's Napoleon Ruler. Adobe's VP of Product Experience Michael Gough was trained as an architect, and wanted to bring the efficacy of sketching with a secondary guiding tool—like we all once did with our assortment of plastic triangles, French curves and the like—to the tablet experience. What the Napoleon does is so simple and brilliant, you've just got to see it for yourself:
Presumably they're still working out the kinks, as the release date is TBD.
This mind-boggling interface design from MIT Media Lab's Fluid Interfaces Group essentially adds another layer of interactivity over your physical life. What I mean by that is: Right now, in real life, you look at your desk and see a bunch of objects. With the F.I.G.'s "Smarter Objects" system, you pick up a tablet, look at the objects on your desk "through" your tablet, as if through a window, and the tablet's screen shows you virtual overlays on the very real objects on your desk. You can then alter the functionality of these Wi-Fi-enabled "smarter objects" on the screen, then go back to manipulating them in the real world. Tricky to explain in print, but you'll grasp it right away by watching their demo video:
The work was done by researchers Valentin Heun, Shunichi Kasahara, and Pattie Maes, and as they point out, none of the things in the demo video are the result of effects added in post; everything you see is working and happening in real time.
One commenter on the video suggested this interface design be adapted to Google Glass, but I think the tablet is a necessary intermediary, as you can tap, drag and slide your fingers across it. Your thoughts?
I send text messages less frequently with my iPhone than I did in the T9 days. I get so frustrated trying to tap out a text that I often wait until I get to a computer to switch to e-mail and a proper keyboard. The interface just sucks, and I cannot remember the last time I was able to send a text without backspacing repeatedly.
One part of the problem is the tiny buttons. Another part of the problem might be the QWERTY layout itself. Ideally what you want is "two-thumb tapping," where the keyboard's letters are divided in such a way that you're alternating between right- and left-thumbs for each keystroke; a group of international researchers reckons this increases efficiency and reduces errors. With that in mind they've created KALQ, a split keyboard with a new layout.
KALQ is a split keyboard for touchscreen devices. The position of the keyboard on the display and the assignment of letters to keyslots were informed by a series of studies conducted with the aim of maximizing typing performance. KALQ is used by gripping the device from its corners. Trained users achieved an entry rate of 37 wpm (5% error rate). This is an improvement of 34% over their baseline performance with a standard touch-QWERTY system. This rate is the highest ever reported for two-thumb typing on a touchscreen device.
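The alternation idea at the heart of KALQ is easy to make concrete. Here's a minimal sketch of how one might score a split layout by how often consecutive letters fall on opposite thumbs—note that the left/right split below is purely illustrative, not the published KALQ key assignment:

```python
# Sketch of the "two-thumb alternation" metric behind split-layout
# optimization. LEFT_THUMB is a made-up split, NOT the actual KALQ
# assignment; the point is the metric, not the letters.

LEFT_THUMB = set("qwasdezxcr")  # hypothetical left-half keys
# (every other letter is assumed to sit on the right half)

def alternation_rate(text: str) -> float:
    """Fraction of consecutive letter pairs typed by opposite thumbs."""
    thumbs = ["L" if c in LEFT_THUMB else "R"
              for c in text.lower() if c.isalpha()]
    if len(thumbs) < 2:
        return 0.0
    switches = sum(a != b for a, b in zip(thumbs, thumbs[1:]))
    return switches / (len(thumbs) - 1)
```

A layout optimizer like the one the researchers describe would search over key assignments to maximize a score like this (alongside thumb-travel costs) across a large text corpus.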
Tokyo is futuristic, but maybe not this futuristic... yet.
I spent a little time in and around the Roppongi neighborhood during my first trip to Tokyo last June, but (as is the case with most work-related travel), I didn't have much time to explore the city on my own. Given the diverse texture of the city and the overflowing stimuli of a new and different urban setting, it didn't occur to me that Roppongi Hills is a relatively new construction, some $4 billion and three years in the making. Centered on the 54-story, Kohn Pedersen Fox-designed Mori Tower—named after the developer behind the entire project—the 27-acre megaplex opened its doors in April 2003... which means that this week marks its tenth anniversary.
To commemorate the milestone, Mori Building Co., Ltd., has commissioned Creative Director Tsubasa Oyagi to create a digital experience, the very first project for his new boutique SIX. Working with a team of media production all-stars, Oyagi created "TOKYO CITY SYMPHONY," an interactive web app that combines projection mapping with a simple music composition engine to create user-generated ditties with brilliant visuals.
"TOKYO CITY SYMPHONY" is an interactive website, in which users can experience playing with 3D projection mapping on a 1:1000 miniature model of the city of Tokyo. The handcrafted model is an exact replica of the cityscape of Tokyo in every detail.
Three visual motifs are projected onto the city in sync with music: "FUTURE CITY," conjuring futuristic images; "ROCK CITY," which playfully transforms Roppongi Hills into colorful musical instruments and monsters; and "EDO CITY," or "Traditional Tokyo," which portrays beautiful Japanese images. Users can play a complex, yet exquisitely beautiful harmony on the city by pressing the keys on the computer keyboard. Each key plays a different beat along with various visual motifs, creating over one hundred different sound and visual combinations. Each user is assigned a symphony score of eight seconds, which can be shared via Facebook, Twitter, and Google+. The numerous symphony scores submitted by the users are put together online to create an infinite symphony.
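The key-per-beat scheme is straightforward to picture in code. A minimal sketch, with invented key bindings and clip names standing in for the real site's assets (the actual web app's mappings aren't published in this form):

```python
# Illustrative sketch of the key-to-beat mapping described above.
# The bindings and motif clips are invented stand-ins, not the actual
# TOKYO CITY SYMPHONY assets.

SCORE_LENGTH_S = 8.0  # each user-generated score runs eight seconds

# Hypothetical mapping: each key triggers a sound plus a visual motif.
KEY_BINDINGS = {
    "a": ("FUTURE CITY", "synth_pulse"),
    "s": ("ROCK CITY", "guitar_stab"),
    "d": ("EDO CITY", "taiko_hit"),
}

def record_score(keypresses):
    """Keep only the (timestamp, key) events that land inside the
    eight-second window and are bound to a motif."""
    return [
        (t, *KEY_BINDINGS[k])
        for t, k in keypresses
        if 0.0 <= t < SCORE_LENGTH_S and k in KEY_BINDINGS
    ]

# A user's mashing, as (seconds-from-start, key) pairs; the late press
# and the unbound key are dropped.
events = [(0.5, "a"), (2.0, "s"), (7.9, "d"), (8.2, "a"), (3.0, "x")]
score = record_score(events)
```

Stringing many such eight-second scores end to end is, in effect, the "infinite symphony" the site assembles from user submissions.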
Cyclepedia on-the-go! (NB: Mounting an iPad with a Turtle Claw is not advised.)
We covered Michael Embacher's Cyclepedia back in 2011, when it made its debut in print, and the Viennese architect/designer's enviable bicycle collection was exhibited behind glass, so to speak, shortly thereafter. Although the iPad app—developed by Heuristic Media for publisher Thames & Hudson—originally came out in December 2011, they've since launched a new version on the occasion of the 2012 Tour de France, with substantially more content beyond the 26 new bikes that bring the total to 126.
The bikes themselves are indexed by Year, Type, Make and Name, Country of Origin, Materials and (perhaps most interestingly) Weight, for which the thumbnails neatly arrange themselves around the circular dial of a scale. Different users will find different options more useful than others, though the small size of the thumbnails makes it difficult to differentiate between about 75% of the bikes, which are distinguished by more fine-grained details. (The lack of a search feature is also a missed opportunity, IMHO.)
That said, the photography is uniformly excellent—the 360° views alone are composed of over 50 images each, as evidenced by the lighting on the chrome Raleigh Tourist—and the detail shots are consistently drool-worthy. Each bike has been polished to perfection for the photo shoot, yet the perfectly in-focus photos also capture telltale signs of age—minor dings, paint chips and peeling decals that suggest that the bicycle has been put to good use. (The rather gratuitous bike porn is accompanied by descriptions that are just the right length for casual browsing, as well as technical details such as date, weight and componentry.)