UX

The Core77 Design Blog

Posted by Ray  |  14 Apr 2014  |  Comments (0)

ECAL-DeliriousHome_HERO.jpg

It's an increasingly pressing question in this day and age, and one that has certainly seen some interesting responses—including this interdepartmental collaboration from Swiss design school ECAL—as an evolving dialectic between two closely related design disciplines. Exhibited in Milan's Brera District during the Salone del Mobile last week, "Delirious Home" comprises ten projects that explore the relationship between industrial design and interaction design. (Naoto Fukasawa, for one, believes that the former will eventually be subsumed into the latter as our needs converge into fewer objects thanks to technology.)

ECAL-DeliriousHome-exterior.jpg

Both the Media & Interaction Design and the Industrial Design programs at the Lausanne-based school are highly regarded, and the exhibition at villa-turned-gallery Spazio Orso did not disappoint. In short, professors Alain Bellet and Chris Kabel wanted to riff on the "smart home" concept—the now-banal techno-utopian prospect of frictionless domesticity (à la any number of brand-driven shorts and films). But "Delirious Home" transcends mere parody by injecting a sense of humor and play into the interactions themselves. In their own words:

Technology—or more precisely electronics—is often added to objects in order to let them sense us, automate our tasks or to make us forget them. Unfortunately until now technology has not become a real friend. Technology has become smart but without a sense of humor, let alone quirky unexpected behavior. This lack of humanness became the starting point to imagine a home where reality takes a different turn, where objects behave in an uncanny way. After all; does being smart mean that you have to be predictable? We don't think so! These apparently common objects and furniture pieces have been carefully concocted to change and question our relationship with them and their fellows.
Thanks to the development of easily programmable sensors, affordable embedded computers and mechanical components, designers can take control of a promised land of possibilities. A land that until now was thought to belong to engineers and technicians. With Delirious Home, ECAL students teach us to take control of the latest techniques and appliances we thought controlled us. The students demonstrate their artful mastery of electronics, mechanics and interaction, developing a new kind of esthetic which goes further than just a formal approach.
The ultimate object—still missing in the delirious home—would be an object able to laugh at itself.

ECAL-DeliriousHome-COMP.jpg
Photos courtesy of ECAL / Axel Crettenand & Sylvain Aebischer

"Delirious Home" was easily a highlight of this year's Fuorisalone and was duly recognized with a Milano Design Award. The video, which features all of the projects, is well worth watching in full:

Additional details and images of each project below.

continued...

Posted by hipstomp / Rain Noe  |  20 Mar 2014  |  Comments (2)

0volvoconceptestate-001.jpg

Volvo's recently introduced a trio of concept cars: The Concept Coupe, the Concept XC Coupe and the Concept Estate. It is the latter that has most caught our eye because it is, quite oddly to us Yanks, a two-door station wagon. In America, the station wagon has always been about families, but by omitting rear doors, Volvo seems to be aiming this concept at the childless couple that likes to ski.

0volvoconceptestate-002.jpg

The Concept Estate brings with it Volvo's bold new styling direction, both inside and out, that's a million miles (er, kilometers) away from the Swedish carmaker's designed-by-Etch-a-Sketch look that we grew up with:

0volvoconceptestate-003.jpg

continued...

Posted by hipstomp / Rain Noe  |   6 Mar 2014  |  Comments (9)

0pizzahutinttabl.jpg

It's been over a year since we've seen interactive restaurant tables in the news, but here comes a new one from Pizza Hut. Yes, the American fast food joint is hoping that if their deep-dish pizzas aren't enough to get you inside, perhaps their fee-yancy touchscreen table will be. Have a look:

What's interesting about this, from a business perspective, is that Pizza Hut is owned by Yum! Brands, which also owns KFC and Taco Bell. While the last interactive restaurant table we looked at was integrated into a one-off restaurant, Yum! Brands (God I hate typing that stupid exclamation point in their name) has some 40,000 restaurants in over 125 countries.

As for the actual interface design (which was done by creative firm Chaotic Moon), it still seems a bit cutesy to me; I'm not confident that people will want to do a two-finger drag to choose a pie size, for instance—I suspect they'd rather just hit an S, M or L button. But the visual representation of how large something is will probably prove popular. And once the balance between what the technology can do and what people actually want has been worked out, if Y!B decides to move ahead with this concept, we could see mass uptake in a relatively short time period, on account of their size. Presumably they've got the juice to require individual franchisees to integrate these units, handily spreading the costs out.
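
Purely as a thought experiment (not anything from Chaotic Moon's actual build), here's what the two selection models boil down to, sketched in TypeScript; the type names and pixel thresholds are invented for illustration:

// Hypothetical sketch: two ways to pick a pie size on a touch table.
// Everything here (PieSize, sizeFromSpread, the thresholds) is invented.
type PieSize = "S" | "M" | "L";

// Gesture model: map the distance between two fingers (in pixels) to a size.
function sizeFromSpread(fingerDistancePx: number): PieSize {
  if (fingerDistancePx < 150) return "S";
  if (fingerDistancePx < 300) return "M";
  return "L";
}

// Button model: the user just taps one of three labeled targets.
function sizeFromButton(label: PieSize): PieSize {
  return label;
}

// The gesture version needs continuous feedback (say, a circle drawn at the
// current diameter under the fingers); the buttons need none, which is part
// of why plain S/M/L targets feel like the safer default.
console.log(sizeFromSpread(120), sizeFromSpread(280), sizeFromButton("L"));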

Posted by hipstomp / Rain Noe  |  27 Feb 2014  |  Comments (1)

0yotaphone-001.jpg

Someone has finally taken note that throughout the day, we use our smartphones in at least two different ways. There's the active way, where you're futzing around with an app and your thumbs are flying across the screen. Then there's the passive way, where you're glancing at it to reference some piece of information you need. And with that latter usage, it would be better if the information was persistently presented, not something you had to call up by doing a home-button-press/swipe/access-code-enter/app-button-press.

Thus Russian tech manufacturer Yota Devices produces the Yota Phone, billed as "The world's first dual-screen, always-on smartphone." While one side has the familiar color touchscreen, flip the thing over and there's a black-and-white, EPD electronic-ink-type display that draws no power once its pixels are in place. (The image or text will stay "burned" there even if the phone's battery dies.) In other words, you send whatever data you want to that second screen and it stays there, ready for immediate viewing when you pull the phone out of your bag, no button presses necessary. If I owned this phone I'd constantly avail myself of the convenience of having a grocery list, boarding pass, map snippet, reference dimensions, addresses and appointment times, etc.

0yotaphone-003.jpg

First gen: Square-ish


continued...

Posted by Gloria Suzie Kim  |  21 Oct 2013  |  Comments (0)

Homeplus-viaBrandSugar.jpg
Woman shopping for groceries in South Korea at a HomePlus display using her mobile phone

Earlier this month, Adaptive Path held the Service Experience conference in San Francisco, CA. The conference invited designers and business leaders who are out there 'in the trenches' to share insights, tips, and methods from their case studies in service design.

Service Design is an emergent area of design thinking that's been percolating in design circles for many years. Though corporate brands like Apple, Nike, P&G and Starbucks have built their success on the principles of good service design, it's an approach getting more serious consideration in countries like the U.S. after years of being developed in Europe.

Service Design, Service Experience, or Consumer Experience is a design approach that understands that the process by which a product is made and the organization that produces it not only affect the product, but also define the experience of the product. Service Design is made up of many ecosystems, including a company's own internal culture, their approach to production and development, as well as the context of the product as it exists in the day-to-day life of the users. Think about how Apple represents not only the product, but also customer service combined with the branded architectural experience of the Apple store. Or how Tesla Motors is not only considering the product (an electric vehicle) but also mapping out a plan for a network of electric charging stations in California.

Service Design is a holistic system that takes into consideration the end to end experience of a product, whether it be a car, a computer, a trip, or a book. It is invested in creating the infrastructure that supports and empathizes with human needs by prioritizing people and experiences over technology during the design process. Service design is a design approach that can be applied across fields.

Swimming in Culture

A key perspective of Service Design is the ability to grasp organizational culture. Ever wonder why you had a great time working for one company and another company, not so much? Maybe it's not all 'in your head': According to keynote speaker David Gray of Limnl, culture is a summation of the habits of a group, and "people swim in culture the way fish swim in water," an analogy he illustrated with dolphins and sharks.

culturemap3.jpg
Illustration from David Gray's presentation. (People may prefer to self-identify as a dolphin rather than a shark.)

In order to change a culture, you first have to find its foundation. Ask dumb questions, talk to the newbies and gather evidence: the evidence (what you see) usually leads to levers (how and why decisions are made and the protocol used), which lead to the company's values (the underlying priorities and what's considered important), which in turn uncover the foundational assumptions (how the organization thinks the world works and the reasoning behind those values).

dividedcompany.jpg
Sketchnote courtesy of Kate Rutter / intelleto.com

continued...

Posted by Ray  |  20 Sep 2013  |  Comments (0)

HowDo-Steps.jpg

Although it launched nearly a year ago, I'm surprised that an app called How.Do didn't turn up on our radar—after all, an app for making quick'n'dirty how-to tutorials is right up our alley. Thankfully, co-founder Emma Rose Metcalfe reached out to us on the occasion of the launch of How.Do Two.Oh (Version 2.0, that is), released yesterday to coincide with iOS 7 and the World Maker Faire this weekend. (Supported by venture capital, her fellow co-founders Nils Westerlund and Edward Jewson round out the Berlin-based team.)

Viewable both through the free app and online, the Micro Guides are concise user-generated slideshows with audio, an ideal format for step-by-step tutorials and on-the-go reference guides. Insofar as the app hits a sweet spot in the maker/fixer/lifehacking movement, the How.Do team will be reporting from World Maker Faire tomorrow and Sunday, offering a unique window into the festivities at the New York Hall of Science—follow them on Twitter @HowDo_ to get the scoop!

As busy as they are this weekend, Metcalfe took a few moments to share her thoughts at this exciting time for the growing company.

Core77: What inspired you to create How.Do in the first place?

Emma Rose Metcalfe: How.Do is the intersection of my MFA research in sharing and distributing meaningful experiences and Nils' interest in the challenges of scaling projects for large communities. He had left SoundCloud to finish his studies at Stockholm School of Entrepreneurship, where the two of us met. Long story short, we came home from a design bootcamp in India wanting to work on something together. We shared the belief that knowledge is deeply personal. The space created between the emotional power of sound and the fantasy of image is incredibly profound—we wanted to harness that to make sharing and learning feel good.

continued...

Posted by hipstomp / Rain Noe  |  15 Aug 2013  |  Comments (0)

corvette-display-01.jpg

As we recently saw, Ford has been experimenting with ways for drivers to use real-time vehicle information. Now competitor Chevrolet is also throwing their hat into this ring with a new, configurable dashboard display in the 2014 Corvette Stingray.

For the Fast & Furious set, the Stingray's dash can display acceleration and lap timers, as well as surprisingly techie stuff like a "friction bubble" displaying cornering force and a gauge showing you how hot the tires are. (Hot tires have better grip, which is why you see F1 drivers violently zigzagging on their way to the starting line; they're trying to get some heat on.)

corvette-display-02.jpg

For drivers in less of a rush, the dash can be set to display more practical information like fuel economy, what the stereo's playing or navigational details. I think the latter one in particular is a good move, as having route guidance graphics front and center behind the steering wheel is a lot better than having to shift your gaze to the center of the entire dashboard.

corvette-display-03.jpg

There are 69 different pieces of information the system can display, divided into three main themes: Tour, aimed at commuters and long-distance driving; Sport, which provides a pared-down, classic-looking radial tachometer; and Track, which gives you the hockey-stick tach, shift lights and an enlarged gear indicator. "Each of these three themes," says Jason Stewart, General Motors interaction designer, "can also be configured so that drivers can personalize their experience in the Stingray."

Here's a video look at the system:

continued...

Posted by hipstomp / Rain Noe  |   5 Aug 2013  |  Comments (3)

gglass-ipad.jpg

It's very strange that Google Glass is not mentioned once in this news segment. Researchers at Taiwan's Industrial Technology Research Institute (ITRI) have developed this eyeglass-based display, below, that uses images projected onto the lenses, and depth cameras focusing beyond the lenses, to create the functional illusion of operating a "floating touchscreen":

ITRI is simply the latest research group to use depth cameras to track our fingers, which then triggers a microprocessor to recognize that as an actionable "touch." Most recently we saw this with Fujitsu Labs' FingerLink Interaction System. So you might wonder why we're looking at this—isn't this just a combination of existing technologies that we've all seen before? It is, but so was the iPod, the iPhone and the iPad when they first came out.
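
For the curious, the core trick (register a "touch" when the depth camera sees a fingertip within a few millimeters of the surface) is simple enough to sketch. This is a generic illustration, not ITRI's or Fujitsu's actual pipeline, and the threshold and data shapes are assumptions:

// Generic sketch of depth-camera touch detection (not ITRI's actual code).
// Assume we already have the fingertip position and a calibrated depth map
// of the empty surface; a "touch" fires when the fingertip comes within a
// few millimeters of that surface.
interface Fingertip { x: number; y: number; depthMm: number; }

const TOUCH_THRESHOLD_MM = 10; // assumed tolerance

function isTouching(tip: Fingertip, surfaceDepthAt: (x: number, y: number) => number): boolean {
  const surfaceMm = surfaceDepthAt(tip.x, tip.y);
  // A fingertip hovering just above (or pressing into) the surface counts as a touch.
  return surfaceMm - tip.depthMm <= TOUCH_THRESHOLD_MM;
}

// Example: flat surface 800 mm from the camera, fingertip at 795 mm.
const flatSurface = (_x: number, _y: number) => 800;
console.log(isTouching({ x: 120, y: 340, depthMm: 795 }, flatSurface)); // true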

continued...

Posted by frog  |  22 Jul 2013  |  Comments (4)

frogID-COMP.jpg

Our friends at frog design recently released a short documentary on Industrial Design in the Modern World, a kind of iterative manifesto (the consultancy's first but certainly not their last), featuring several key players of the design team. We had a chance to catch up with Creative Director Jonas Damon on the broader message of the piece, as well as his thoughts on user experience and a possible revision to Dieter Rams' canonical principles of design.

Core77: Can you elaborate on the points you touch on in the opening monologue? Specifically, to what degree do 'traditional' (or outdated) forms and materials embody value or character? For example, I recently came across an iPod speaker in which the dock opens like a cassette tape deck, evoking a certain nostalgic charm despite being rather impractical (it was difficult to see the screen behind the plastic).

Jonas Damon: The opening monologue is about the physical constraints that have guided forms in the past vs. forms today, and the opportunities that arise from the absence of these constraints. 'Honesty' in design is a widely admired quality, and in the past that honesty was expressed by skillfully sculpting with and around a given product's physical conditions, rather than just hiding or disguising these. So when products were more mechanical, they had a more imposing DNA that informed their character; their mechanics largely defined their identities. Many product types came preconditioned with an iconic, unmistakable silhouette.

Today, most products in the consumer electronics space can be made with a rectangular circuit board, a rectangular screen, and a rectangular housing. Therefore, the natural expression of these products today is limited to a rectangle—not really a unique identity. Expression of character becomes more nuanced and malleable. With that newfound freedom, we have to be more sensitive, judicious and inventive. These days, 'honesty' is more complex and difficult to design for, as it's about the intangible aspects of the brand the product embodies.

Traditional forms and materials have cultural value because of their iconic, built-in character. The starting point for many contemporary consumer electronics forms is generic and sterile, so historical forms are often tapped to artificially trigger our memory-based emotions. It's been a popular fallback that we may be a little tired of these days, but on occasion it's been well executed, and even that can have merit.

Of course, the 'flat black rectangle' effect also implies a shift from traditional form-follows-function I.D. to a broader, UX-centric approach to design (i.e. some argue that Apple's focus on iOS7 is simply a sign that they've shifted from hardware innovation to the UX/software experience). What is the relationship between hardware and UX?

Hardware is an integral part of UX. A true "user experience" is multi-sensory: when you engage with something, don't you see, feel, hear, maybe even smell that which you are engaging with? (I'm not sure why anybody refers to solely screen-based interactions as "UX"; that notion is outdated) As an Industrial Designer, I am a designer of User Experience. ID has gotten richer since we've started considering "living technology" as a material. By "living technology," I mean those elements that bring objects to life, that make them animate and tie them to other parts of the world around us: sensors, screens, haptics, connectivity, software, etc. By claiming these elements as part of our domain (or by tightly embedding their respective expert designers/engineers in our teams), we are able to create holistic designs that are greater than the sums of their parts.

continued...

Posted by hipstomp / Rain Noe  |  11 Jun 2013  |  Comments (9)

ios-7-01.jpg

In addition to unveiling their redesigned Mac Pro, yesterday Apple also previewed their forthcoming iOS 7. This is the one many an industrial designer has been waiting to see; we all know Jonathan Ive can do hardware, but iOS 7 will be the first real indication of what software will look like under Ive rule—and whether he'd be given free rein. Former Apple executive Scott Forstall was famously a proponent of skeuomorphism, the inclusion of real-world elements—stitched leather, lined legal pads, spiral bindings—that many in the design community found tacky and backwards-looking. Following his ouster, Ive was placed in charge of iOS design, and he's made it no secret that he intended to Think Different.

Well, based on what we're seeing, we're happy to report that it seems Ive's creative control is complete.

The first thing users will likely note is the change in typography. Just as Forstall's beloved word "skeuomorphism" has an unusual sequence of three vowels in a row, Ive has switched the font to what looks to be Helvetica Neue Ultra Light, which has an equally foreign sequence of contiguous vowels. The resultant look is undoubtedly more modern (though your correspondent prefers thicker fonts for legibility's sake).

ios-7-02.jpg

ios-7-03.jpg

"Flatness" is the adjective of the day, and the new iOS has it in spades. In the past decade-and-a-half icons have spun steadily out of control; what were once simple representations of objects, necessarily drawn in low-res due to computing constraints, unpleasantly evolved into overcomplicated, miniaturized portraits. Ive's flat design approach returns to the roots of the graphic icon, eschewing 3D shading and instead using line to tell the tale. With the exception of a couple of icons—the Settings gears and Game Center's balloons—shading is completely absent. The cartoonish highlights on the text message word bubbles are gone. Background gradations are the only non-flat visual variation allowed.

ios-7-04.jpg

Interestingly enough, the keypad now looks like something graphically designed by the Braun of yore...

continued...

Posted by Ray  |   6 Jun 2013  |  Comments (1)

BREAKFAST-Points-1.jpg

Among the many criticisms leveled against New York City's new bikeshare program, I'm particularly perplexed by the notion that the stations are a blight upon Gotham's otherwise pristine streetscapes—at worst, they're conspicuously overbranded, but, as many proponents have pointed out, they're no worse than any other curbside eyesore. Although the city is making a conscious effort to reduce the visual overstimuli at street level, it's only a matter of time before static signage simply won't suffice.

While Maspeth Sign Shop continues to crank out aluminum signs, BREAKFAST proposes an entirely novel concept for interactive, real-time wayfinding fixtures. "Points" is billed as "the most advanced and intelligent directional sign on Earth," featuring three directional signs with LEDs to dynamically display relevant information. However, "it's when the arms begin to rotate around towards new directions and the text begins to update that you realize you're looking at something much more cutting-edge. You're looking at the future of how people find where they're headed next."

See "Points" in action:

continued...

Posted by hipstomp / Rain Noe  |  16 May 2013  |  Comments (11)

bloomberg-terminal-01.jpg

Here in NYC we've got a billionaire mayor, and you've probably heard of the device that made him rich, the Bloomberg Terminal. For those of you that haven't, it's an integrated computer system and service feed offering real-time financial data and trading.

For finance peeps, Bloomberg Terminals are like potato chips, in that you can't have just one. Your average user rocks a two-, four- or six-monitor set-up....

bloomberg-terminal-02.jpg

bloomberg-terminal-03.jpg

bloomberg-terminal-04.jpg

...though that can get out of control.

bloomberg-terminal-05.jpg

continued...

Posted by hipstomp / Rain Noe  |   9 May 2013  |  Comments (8)

Adobe-project-mighty.jpg

Color me impressed! I figured the next generation of designer-relevant input devices would come from Apple or Wacom, but surprise—it's Adobe. The software giant is venturing into hardware, and their resultant Project Mighty looks pretty damn wicked so far.

The Adobe Mighty Pen is designed for sketching on tablets, and it's got at least two brilliant features integrated with their drawing app: Since the screen can distinguish between the pen's nib and your mitts, you can draw with the pen, then erase with your finger. No more having to click a submenu to change the tool. And when you do need a submenu, you click a button on the pen itself to make it appear on-screen.
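
The routing logic behind that pen-draws, finger-erases behavior is delightfully simple. The sketch below is not Adobe's code; it just borrows the web-style pointerType distinction to show the idea:

// Hypothetical sketch of "draw with the pen, erase with your finger" routing.
// Not Adobe's implementation; the event shape mimics the web PointerEvent
// model purely for illustration.
type Tool = "brush" | "eraser";

function toolForPointer(pointerType: "pen" | "touch" | "mouse"): Tool {
  // The screen can tell the nib from a fingertip, so no submenu is needed:
  // pen input draws, finger input erases.
  return pointerType === "pen" ? "brush" : "eraser";
}

function onPointerDown(ev: { pointerType: "pen" | "touch" | "mouse"; x: number; y: number }) {
  const tool = toolForPointer(ev.pointerType);
  console.log(`begin ${tool} stroke at (${ev.x}, ${ev.y})`);
}

onPointerDown({ pointerType: "pen", x: 10, y: 20 });   // begin brush stroke
onPointerDown({ pointerType: "touch", x: 10, y: 20 }); // begin eraser stroke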

The truly awesome device, however, is the pen's Napoleon Ruler. Adobe's VP of Product Experience Michael Gough was trained as an architect, and wanted to bring the efficacy of sketching with a secondary guiding tool--like we all once did with our assortment of plastic triangles, French curves and the like--to the tablet experience. What the Napoleon does is so simple and brilliant, you've just got to see it for yourself:

Presumably they're still working out the kinks, as the release date is TBD.

Posted by hipstomp / Rain Noe  |   3 May 2013  |  Comments (1)

mitsmarterobjects.jpg

This mind-boggling interface design from MIT Media Lab's Fluid Interfaces Group essentially adds another layer of interactivity over your physical life. What I mean by that is: Right now, in real life, you look at your desk and see a bunch of objects. With the F.I.G.'s "Smarter Objects" system, you pick up a tablet, look at the objects on your desk "through" your tablet, as if through a window, and the tablet's screen shows you virtual overlays on the very real objects on your desk. You can then alter the functionality of these Wi-Fi-enabled "smarter objects" on the screen, then go back to manipulating them in the real world. Tricky to explain in print, but you'll grasp it right away by watching their demo video:

The work was done by researchers Valentin Heun, Shunichi Kasahara, and Pattie Maes, and as they point out, none of the things in the demo video are the result of effects added in post; everything you see is working and happening in real time.

One commenter on the video suggested this interface design be adapted to Google Glass, but I think the tablet is a necessary intermediary, as you can tap, drag and slide your fingers across it. Your thoughts?

Posted by hipstomp / Rain Noe  |  24 Apr 2013  |  Comments (6)

KALQ-01.jpg

Are these the keys to easier texting?

I send text messages less frequently with my iPhone than I did in the T9 days. I get so frustrated trying to tap out a text that I often wait until I get to a computer to switch to e-mail and a proper keyboard. The interface just sucks, and I cannot remember the last time I was able to send a text without backspacing repeatedly.

One part of the problem is the tiny buttons. Another part of the problem might be the QWERTY layout itself. Ideally what you want is "two-thumb tapping," where the keyboard's letters are divided in such a way that you're alternating between right- and left-thumbs for each keystroke; a group of international researchers reckons this increases efficiency and reduces errors. With that in mind they've created KALQ, a split keyboard with a new layout.
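
To make the "alternating thumbs" goal concrete, here's a toy metric in TypeScript that scores a split layout by how often consecutive letters switch sides. The left/right assignment is illustrative, not KALQ's actual layout, and a real optimizer would also weigh thumb travel and error rates:

// Rough sketch of the idea behind two-thumb layouts: score a layout by how
// often consecutive letters of typical text alternate between thumbs.
// The left/right split below is purely illustrative, not KALQ's layout.
const LEFT_THUMB = new Set("qwertasdfgzxcv".split(""));

function alternationRate(text: string): number {
  const letters = text.toLowerCase().replace(/[^a-z]/g, "").split("");
  let alternations = 0;
  for (let i = 1; i < letters.length; i++) {
    const prevLeft = LEFT_THUMB.has(letters[i - 1]);
    const currLeft = LEFT_THUMB.has(letters[i]);
    if (prevLeft !== currLeft) alternations++;
  }
  return letters.length > 1 ? alternations / (letters.length - 1) : 0;
}

// A layout optimizer would shuffle the letter-to-thumb assignment to push
// this number (and other costs) in the right direction.
console.log(alternationRate("the quick brown fox jumps over the lazy dog"));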

KALQ-02.jpg

KALQ is a split keyboard for touchscreen devices. The position of the keyboard on the display and the assignment of letters to keyslots were informed by a series of studies conducted with the aim of maximizing typing performance. KALQ is used by gripping the device from its corners. Trained users achieved an entry rate of 37 wpm (5% error rate). This is an improvement of 34% over their baseline performance with a standard touch-QWERTY system. This rate is the highest ever reported for two-thumb typing on a touchscreen device.

continued...

Posted by Ray  |  23 Apr 2013  |  Comments (0)

TOKYOCITYSYMPHONY-Future.jpg
Tokyo is futuristic, but maybe not this futuristic... yet.

I spent a little time in and around the Roppongi neighborhood during my first trip to Tokyo last June, but (as is the case with most work-related travel) I didn't have much time to explore the city on my own. Given the diverse texture of the city and the overflowing stimuli of a new and different urban setting, it didn't occur to me that Roppongi Hills is a relatively new construction, some $4 billion and three years in the making. Centered on the 54-story, Kohn Pedersen Fox-designed Mori Tower—named after the developer behind the entire project—the 27-acre megaplex opened its doors in April 2003... which means that this week marks its tenth anniversary.

TOKYOCITYSYMPHONY-screen.jpg

To commemorate the milestone, Mori Building Co., Ltd., has commissioned Creative Director Tsubasa Oyagi to create a digital experience, the very first project for his new boutique SIX. Working with a team of media production all-stars, Oyagi created "TOKYO CITY SYMPHONY," an interactive web app that combines projection mapping with a simple music composition engine to create user-generated ditties with brilliant visuals.

"TOKYO CITY SYMPHONY" is an interactive website, in which users can experience playing with 3D projection mapping on a 1:1000 miniature model of the city of Tokyo. The handcrafted model is an exact replica of the cityscape of Tokyo in every detail.

TOKYOCITYSYMPHONY-Edo.jpg

TOKYOCITYSYMPHONY-Rock.jpg

Three visual motifs are projected onto the city in sync with music: "FUTURE CITY," conjuring futuristic images; "ROCK CITY" that playfully transforms Roppongi Hills into colorful musical instruments and monsters; and "EDO CITY," or "Traditional Tokyo," which portrays beautiful Japanese images. Users could play a complex, yet exquisitely beautiful harmony on the city by pressing the keys on the computer keyboard. Each key plays a different beat along with various visual motifs, creating over one hundred different sound and visual combinations. Each user is assigned a symphony score of eight seconds, of which could be shared via Facebook, twitter, and Google+. The numerous symphony scores submitted by the users are put together online to create an infinite symphony.

continued...

Posted by Ray  |  22 Apr 2013  |  Comments (0)

IllAdvised.jpg
Cyclepedia on-the-go! (NB: Mounting an iPad with a Turtle Claw is not advised.)

We covered Michael Embacher's Cyclepedia back in 2011, when it made its debut in print, and the Viennese architect/designer's enviable bicycle collection was exhibited behind glass, so to speak, shortly thereafter. Although the iPad app—developed by Heuristic Media for publisher Thames & Hudson—originally came out in December 2011, they've since launched a new version on the occasion of the 2012 Tour de France, with substantially more content beyond the 26 new bikes that bring the total to 126.

The bikes themselves are indexed by Year, Type, Make and Name, Country of Origin, Materials and (perhaps most interestingly) Weight, for which the thumbnails neatly arrange themselves around the circular dial of a scale. Different users will find different options more useful than others, though the small size of the thumbnails makes it difficult to differentiate between about 75% of the bikes, which are distinguished by more fine-grained details. (The lack of a search feature is also a missed opportunity, IMHO.)

CyclepediaApp-Lotus.jpg

CyclepediaApp-CapoEliteEis.jpg

That said, the photography is uniformly excellent—the 360° views alone are composed of over 50 images each, as evidenced by the lighting on the chrome Raleigh Tourist—and the detail shots are consistently drool-worthy. Each bike has been polished to perfection for the photo shoot, yet the perfectly in-focus photos also capture telltale signs of age—minor dings, paint chips and peeling decals that suggest that the bicycle has been put to good use. (The rather gratuitous bike porn is accompanied by descriptions that are just the right length for casual browsing, as well as technical details such as date, weight and componentry.)

CyclepediaApp-ColnagoCrank.jpg

continued...

Posted by hipstomp / Rain Noe  |  18 Apr 2013  |  Comments (4)

fujitsu-fingerlink-02.jpg

I've simplistically assumed we would advance from "dumb" paper with things printed on it to some smarter variant, where every sheet of paper is an iPad. But as researchers at Fujitsu Laboratories demonstrate here, there's still plenty of room to design new interfaces that are between those two extremes.

fujitsu-fingerlink-01.jpg

By combining an ordinary webcam, a computer and an off-the-shelf projector, Fujitsu's "FingerLink Interaction System" provides a new user interface that effectively turns a "dumb" piece of paper, and the table it's sitting on, into a touchscreen. Check out how they did it, and peep the CAD demo starting around 2:43:
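
The heart of any camera-plus-projector rig like this is a calibration step that maps fingertip positions seen by the webcam onto the paper/projector plane, typically with a 3x3 homography. Here's a minimal sketch (not Fujitsu's code; the matrix is a stand-in for what calibration would produce):

// Generic sketch of the coordinate mapping in camera+projector systems
// (not Fujitsu's actual code): a 3x3 homography H, found during calibration,
// maps a fingertip in camera pixels onto the paper/projector plane.
type Mat3 = [number, number, number, number, number, number, number, number, number];

function applyHomography(H: Mat3, x: number, y: number): { x: number; y: number } {
  const w = H[6] * x + H[7] * y + H[8];
  return {
    x: (H[0] * x + H[1] * y + H[2]) / w,
    y: (H[3] * x + H[4] * y + H[5]) / w,
  };
}

// With an identity homography the mapping is a no-op; a real H would come
// from matching projected calibration markers in the camera image.
const identity: Mat3 = [1, 0, 0, 0, 1, 0, 0, 0, 1];
console.log(applyHomography(identity, 320, 240)); // { x: 320, y: 240 }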

continued...

Posted by hipstomp / Rain Noe  |  16 Apr 2013  |  Comments (4)

google-glass-stats.jpg

As Google Glass gets closer to its launch date, the search giant has released specs on what users can expect from the production models. The onboard camera will record 720p video and be able to shoot 5MP stills; audio will be piped into your dome via bone conduction; it will have Bluetooth and 802.11b/g WiFi; you'll have 12GB of storage; and the battery will reportedly last for "one full day of typical use." The 640×360 resolution of the video is claimed to be "the equivalent of a 25 inch high definition screen from eight feet away," but we'll need to see that in action.
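
A quick back-of-the-envelope check of that screen-equivalence claim, assuming a 16:9 panel (the aspect ratio is our guess, not Google's spec): a 25-inch screen at eight feet spans roughly 13 degrees of your visual field, which would put Glass's 640 horizontal pixels at around 49 pixels per degree.

// Back-of-envelope check of the "25-inch HD screen from eight feet" claim,
// assuming a 16:9 panel (our assumption, not Google's published spec).
const diagonalIn = 25;
const distanceIn = 8 * 12; // eight feet
const aspect = 16 / 9;

const heightIn = diagonalIn / Math.sqrt(1 + aspect * aspect); // ~12.3 in
const widthIn = heightIn * aspect;                            // ~21.8 in

const horizontalFovDeg = 2 * Math.atan(widthIn / 2 / distanceIn) * (180 / Math.PI);
console.log(horizontalFovDeg.toFixed(1));         // ~13 degrees of visual field
console.log((640 / horizontalFovDeg).toFixed(0)); // ~49 pixels per degree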

Which we will, if we head out to San Francisco or Los Angeles. Word on the street (and by "street," we mean Buzzfeed) is that Google will be opening up their own retail stores, starting with California's big cities. The physical storefronts will be meant to push not only Glass, but Android- and Chromebook-related products as well. There's no word on what the stores will look like or who will be designing them, but given that Apple's got the likes of Norman Foster on their stores/HQ and Facebook's got Gehry on "Facebook West," we'd be surprised if Google didn't go with a big-ticket architect/designer for the prestige.

continued...

Posted by hipstomp / Rain Noe  |  16 Apr 2013  |  Comments (2)

cursor-game-01.jpg

During the end of their lifetimes as useful interfaces, no one threw a party for the rotary dial, the skeleton key or the crank people once used to manually start their Model T's. But Amsterdam-based design firm Studio Moniker, certain that we're "nearing the end of the humble computer cursor" presumably due to touchscreens, is celebrating the little left-leaning arrow with an interactive video project.

This is a little tricky to describe, but what they're doing is creating a crowdsourced interactive experience. You click on a link and are presented with a screen featuring not only your cursor, but also the cursors of users all around the world, recently recorded by the studio doing exactly what you are—following a series of onscreen prompts that guide your cursor in specific directions.
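
Mechanically, the record-and-replay part is straightforward; a minimal browser sketch (invented names, not Studio Moniker's code) might look like this:

// Minimal sketch of the record-and-replay idea (not Studio Moniker's code):
// log your cursor as timestamped samples, then play back other visitors'
// recordings alongside the live one.
interface CursorSample { t: number; x: number; y: number; }

function recordCursor(target: Window, out: CursorSample[]): void {
  const start = Date.now();
  target.addEventListener("mousemove", (e: MouseEvent) => {
    out.push({ t: Date.now() - start, x: e.clientX, y: e.clientY });
  });
}

function replayCursor(samples: CursorSample[], draw: (x: number, y: number) => void): void {
  for (const s of samples) {
    setTimeout(() => draw(s.x, s.y), s.t);
  }
}

// Usage (browser only): record the visitor, then replay a stored recording
// by drawing a ghost cursor at each sample.
// recordCursor(window, myRecording);
// replayCursor(someoneElsesRecording, (x, y) => { /* move a ghost cursor to (x, y) */ });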

cursor-game-02.jpg

It's a lot more fun than it sounds, and we highly recommend you try it out by clicking here (NSFW). Your cursor's movements will then be recorded and integrated into future iterations of the video that new people will click on and experience.

The website Creative Applications has more info on the project here.

Posted by hipstomp / Rain Noe  |   9 Apr 2013  |  Comments (1)

missfeldt-google-glass-01.jpg

Martin Missfeldt is a Berlin-based artist with a sense of humor, known for posting gags like asserting the Google Glass team is working on an X-ray-spec-like application (and that Apple is countering it with asbestos-lined underwear). However, Missfeldt has also released an earnest infographic showing "How Google Glass Works," based on his study of both the patent and several write-ups.

The bulkiest parts are the battery riding on the right ear and the projector, though these things will presumably shrink over time. (On the battery front, have a look at LG Chem's wire-like battery tech and UCLA's developments in supercapacitors.) The image is bounced off of a prism and focused directly onto the wearer's retina. Interestingly, the fine-tuning of the focus is apparently achieved in a primitive way: By physically adjusting the distance of the prism from the eye.

missfeldt-google-glass-02.jpg

"The biggest challenge for Google will now be to make the Google Glass also usable for people with normal glasses," writes Missfeldt. That's no trivial matter, as by his reckoning that's more than 50% of the population in some countries; by your correspondent's observation, countries like South Korea and cities like Hong Kong have an insanely high percentage of children wearing eyeglasses.

"In this case the Google Glass has to be placed ahead of normal glasses—which doesn't [work well]. Or Google has to manufactor [sic] individual customized prisms, but this would be considerably more expensive than the standard production."

missfeldt-google-glass-03.jpg

Click here to see the full-sized graphic.

Posted by Sam Dunne  |   3 Apr 2013

digital-intern.jpg
Could games like Papa Sangre pave the way for other mobile audio experiences?

The tech lovers at last week's MEX Mobile User Experience conference in London were treated to all manner of fantastical visions of our further mobile-empowered futures: big data, connected cars, smart homes, the Internet of Things, gestural interfaces, personal mini-drones—the lot.

Few presentations this year were complete without at least a passing reference to the game-changing nature or dystopian social implications of the soon-to-be-unleashed Google Glass. Surprisingly, however, a couple of jaw-dropping demonstrations were enough to leave many of those attending wondering whether we might be missing a slightly quieter revolution taking hold. Could immersive audio be about to come of age in mobile user experience?

Having played second fiddle to the visual interface for decades, so often the preserve of experimental art installations or niche concepts for the blind, audio has yet to find mass interaction application outside of alarms, alerts, ringtones and the occasional novelty bottle opener. All of this, however, could be set to change if the two fields of binaural sound and dynamic music can find their way into the repertoire of interaction designers.

Binaural Audio Spatializes Interaction

Hardly a new phenomenon (though not always well known), Papa Sangre is regarded as the 'best video game with no video ever made.' Since its release back in 2011, the audio app game for iOS has been a hit with both the visually impaired and the fully sighted. The game plunges players into a dark, monster-infested fantasy with only their ears to navigate the three-dimensional underworld and rescue the damsel in distress. The incredible 3D sound effects are achieved with headphones and binaural audio—an effect that replicates the experience of hearing a sound wave originating from a certain direction, hitting one ear before the other. Use of the screen is disconcertingly limited to only a rudimentary compass-like dial (determining the player's virtual direction of movement) and two feet buttons, pressed to take steps into the darkness. Never has a computer game monster been so terrifying as when you can't actually see it.

papasangre_screen2.png
In the dark: screenshot of immersive audio game Papa Sangre

The creators, London-based SomethinElse, developed the game by first mapping out the experience of sound from hundreds of directions using a binaural microphone—a stereo mic with the exact shape and density of a human head, with pick-ups where the eardrums would be. The algorithmic engine this produced could then be put to work transforming any ordinary mono audio into a spatialised stereo output for listeners wearing headphones (with a fair dose of clever coding, of course).
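
For a feel of the underlying idea (a sound off to one side reaches the far ear slightly later and slightly quieter), here's a toy spatializer in TypeScript. It is emphatically not SomethinElse's engine, which convolves audio against responses measured on the dummy head; the head width and gain roll-off below are rough assumptions:

// Toy spatializer sketching the core binaural intuition: a source off to one
// side reaches the far ear slightly later and quieter than the near ear.
const SAMPLE_RATE = 44100;
const HEAD_WIDTH_M = 0.18;      // assumed ear-to-ear distance
const SPEED_OF_SOUND = 343;     // m/s

// azimuthRad: 0 = straight ahead, +PI/2 = hard right.
function spatialize(mono: Float32Array, azimuthRad: number): { left: Float32Array; right: Float32Array } {
  const itdSec = (HEAD_WIDTH_M / SPEED_OF_SOUND) * Math.sin(azimuthRad); // interaural time difference
  const delaySamples = Math.round(Math.abs(itdSec) * SAMPLE_RATE);
  const nearGain = 1.0;
  const farGain = 1.0 - 0.4 * Math.abs(Math.sin(azimuthRad)); // crude level difference

  const near = new Float32Array(mono.length);
  const far = new Float32Array(mono.length);
  for (let i = 0; i < mono.length; i++) {
    near[i] = mono[i] * nearGain;
    if (i + delaySamples < mono.length) far[i + delaySamples] = mono[i] * farGain;
  }
  // Positive azimuth = source on the right, so the right ear is the near ear.
  return azimuthRad >= 0 ? { left: far, right: near } : { left: near, right: far };
}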

MEX_binaural_mic.png
Binaural microphone with the exact dimensions and density of a human head

continued...

Posted by Ray  |  27 Mar 2013  |  Comments (0)

GabrieleMeldaikyte-MultiTouchGestures.jpg

A couple months ago, I posted about "Curious Rituals," a research project by a team of designers at the Art Center College of Design, which I discovered on Hyperallergic. In his post, editor Kyle Chayka also drew a connection to another project concerning touchscreen gestures IRL, "Multi-Touch Gestures" by Gabriele Meldaikyte, who is currently working towards her Master's in Product Design at RCA.

GabrieleMeldaikyte-MultiTouchGestures-zoom.jpg

GabrieleMeldaikyte-MultiTouchGestures-scroll.jpg

Where Richard Clarkson's "Rotary Smartphone" concept incorporated an outdated dialing mechanism into a contemporary mobile phone, Meldaikyte explores interaction design by effectively inverting that approach to achieve an equally thought-provoking result. The five objects are somehow intuitive and opaque (despite their transparent components) at the same time, transcribing the supposedly 'natural' gestures to mechanical media.

There are five multi-touch gestures forming the language we use between our fingers and iPhone screens. This is the way we communicate, navigate and give commands to our iPhones.

Nowadays, finger gestures like tap / scroll / flick / swipe / pinch are considered to be 'signatures' of the Apple iPhone. I believe that in ten years or so these gestures will completely change. Therefore, my aim is to perpetuate them so they become accessible for future generations.

I have translated this interface language of communication into 3D objects which mimic every multi-touch gesture. My project is an interactive experience, where visitors can play, learn and be part of the exhibition.

GabrieleMeldaikyte-MultiTouchGestures-button.jpg

continued...

Posted by Ray  |  21 Mar 2013  |  Comments (6)

Minuum-lead.jpg

We've seen plenty of variations on the now-canonical input device known as a keyboard, from touchscreen interfaces and, um, exterfaces to a tactile surface treatment (currently available on Kickstarter). However, a new keyboard concept has more in common with so-called index typewriters—as seen in hipstomp's typewriter round-up—than these superficial keyboard treatments, at least to the extent that it offers a more economical layout.

Merritt-viaOfficeMuseum.jpg
source

Specifically, Minuum improves on the concept of a linear arrangement of letters: screen-based UI and predictive text allow a QWERTY layout to be transposed into a single line of letters. (It's worth noting that index typewriters were initially developed as a less expensive, more portable alternative to keyboard-based typewriters, though they were reportedly slower than handwriting in most instances.)

Minuum is a tiny, one-dimensional keyboard that frees up screen space while allowing fast, accurate typing. Current technology assumes that sticking a full typewriter into a touchscreen device is the best way to enter text, giving us keyboards that are error-prone and cover up half the usable screen space (or more) on most smartphones and tablets.

Minuum, on the other hand, eliminates the visual clutter of archaic mobile keyboards by adapting the keyboard to a single dimension. What enables this minimalism is our specialized auto-correction algorithm that allows highly imprecise typing. This algorithm interprets in real time the difference between what you type and what you mean, getting it right even if you miss every single letter.
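
The decoding idea is easier to grasp with a toy example: treat each sloppy tap as a noisy observation along the one-dimensional letter line and score dictionary words by how well they explain the taps. The layout string, noise model and dictionary below are our assumptions, not Minuum's:

// Toy sketch of decoding sloppy taps on a one-dimensional keyboard (not
// Minuum's algorithm). Each letter sits at a position along a line; a tap is
// a noisy observation, and candidate words are scored by how well their
// letters explain the tap positions.
const LINE = "qwertyuiopasdfghjklzxcvbnm"; // illustrative ordering, not Minuum's
const pos = (ch: string) => LINE.indexOf(ch) / (LINE.length - 1); // 0..1 along the line
const SIGMA = 0.08; // assumed tap noise

function logLikelihood(word: string, taps: number[]): number {
  if (word.length !== taps.length) return -Infinity;
  let score = 0;
  for (let i = 0; i < taps.length; i++) {
    const d = taps[i] - pos(word[i]);
    score += -(d * d) / (2 * SIGMA * SIGMA); // Gaussian log-likelihood, constants dropped
  }
  return score;
}

function decode(taps: number[], dictionary: string[]): string {
  return dictionary.reduce((best, w) =>
    logLikelihood(w, taps) > logLikelihood(best, taps) ? w : best);
}

// Even if every tap is a little off, the dictionary pulls the answer back:
const dict = ["cat", "cut", "can", "dog"];
const sloppyTaps = ["c", "a", "t"].map((ch) => pos(ch) + 0.05);
console.log(decode(sloppyTaps, dict)); // "cat"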

Minuum-Tablet.jpg

The video is, as they say, a must-see:

Yes, the last bit is cool, but nota bene: it's currently an alpha-stage prototype, and Will Walmsley & co. are seeking funding on IndieGoGo. Suffice it to say that we'll be keeping an eye on this one... if all of the hypothetical wearable implementations become a reality, we could see the emergence of a new set of curious rituals.

Minuum-gestures.jpg

Hat-tip to Nik Roope

Posted by hipstomp / Rain Noe  |  20 Mar 2013  |  Comments (1)

4moms-01.jpg

Why would a company that creates baby products have roboticists on staff? Well, check out what 4Moms' Origami stroller can do:

How awesome is that? In addition to the physical features it has—the onboard storage and the peekaboo window that I'd imagine are de rigueur—it's the technical aspects that most impress me. Having a generator in the wheel that automatically charges your cell phone seems particularly brilliant.

4moms-02.jpg

Then there's the LCD dashboard, which sounds gimmicky at first but proves useful on closer inspection: While you might be able to do without the speedometer, an odometer tells you how far you've traveled, and the current ambient temperature is displayed, helping you decide whether you ought to throw another layer on your tyke.

4moms-03.jpg

And of course, there's that crazy power folding/unfolding operation. (And yes, it's got baby sensors, so it cannot accidentally be activated while the child is onboard.)

4moms-04.jpg

continued...

Posted by hipstomp / Rain Noe  |  15 Mar 2013  |  Comments (0)

samsung-galaxy-s-4.jpg

Now that the "Who owns the glass rectangle" smartphone wars are thankfully fading into the background of the news cycle, competition in interaction designs is coming to the forefront. Apple arguably kicked it off in '11 by integrating Siri, introducing voice control; as we saw yesterday, Google may push into backside touch; and now Samsung is introducing a host of different interaction designs with their latest model.

Unveiled last night, Samsung's new Galaxy S4 has "Smart Pause," which stops and starts videos depending on whether your eyes are looking at the screen (they are presumably tracked by the camera). "Smart Scroll" advances screen content when the user tilts the phone to one side or the other. "Air Gesture" allows users to manipulate the phone without actually touching it, but rather by hovering a finger over the screen, or using a broader gesture like a hand wave to advance photographs. (And it works while wearing gloves.) Lastly, "S Translator" enables you to speak one language into the phone, and have the phone speak back a translation into a different language.
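
Described that way, something like Smart Pause reduces to a small state machine: pause when the front camera stops detecting the viewer's gaze, resume when it returns. Here's a hypothetical sketch (not Samsung's implementation; the debounce value is a guess):

// Illustrative sketch of the "Smart Pause" behavior described above (not
// Samsung's implementation): pause the video when the front camera stops
// seeing the viewer's eyes, resume when they come back. The 500 ms debounce
// is a guess to avoid flicker from momentary detection dropouts.
interface Player { pause(): void; play(): void; }

function makeSmartPause(player: Player, debounceMs = 500) {
  let lastSeen = Date.now();
  let paused = false;

  // Called every camera frame with the eye-tracker's verdict.
  return function onGazeSample(eyesOnScreen: boolean) {
    const now = Date.now();
    if (eyesOnScreen) {
      lastSeen = now;
      if (paused) { player.play(); paused = false; }
    } else if (!paused && now - lastSeen > debounceMs) {
      player.pause();
      paused = true;
    }
  };
}

// const onGaze = makeSmartPause(videoElement); // feed it detector output per frame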

While none of these features are a magic bullet that will instantly win the smartphone war, that's not the relevant point to us. What we're glad of is that heated competition is producing a range of experimental ways that we can interact with devices. Apple's steady, measured development process is very different from Samsung's "throw it at the wall and see what sticks" approach, with Google somewhere in the middle, and we can't say which methodology is superior, but either way it's an exciting time for interaction design, and it is the end user who stands to win from all of these companies duking it out.