UX

The Core77 Design Blog

Posted by Ray  |  22 Apr 2013  |  Comments (0)

Cyclepedia on-the-go! (NB: Mounting an iPad with a Turtle Claw is not advised.)

We covered Michael Embacher's Cyclepedia back in 2011, when it made its debut in print, and the Viennese architect/designer's enviable bicycle collection was exhibited behind glass, so to speak, shortly thereafter. Although the iPad app—developed by Heuristic Media for publisher Thames & Hudson—originally came out in December 2011, the developers have since launched a new version on the occasion of the 2012 Tour de France, with substantially more content beyond the 26 new bikes that bring the total to 126.

The bikes themselves are indexed by Year, Type, Make and Name, Country of Origin, Materials and (perhaps most interestingly) Weight, for which the thumbnails neatly arrange themselves around the circular dial of a scale. Different users will find different options more useful, though the small size of the thumbnails makes it difficult to differentiate between the roughly 75% of the bikes that are distinguished only by finer details. (The lack of a search feature is also a missed opportunity, IMHO.)

That said, the photography is uniformly excellent—the 360° views alone are composed of over 50 images each, as evidenced by the lighting on the chrome Raleigh Tourist—and the detail shots are consistently drool-worthy. Each bike has been polished to perfection for the photo shoot, yet the perfectly in-focus photos also capture telltale signs of age—minor dings, paint chips and peeling decals that suggest the bicycles have been put to good use. (The rather gratuitous bike porn is accompanied by descriptions that are just the right length for casual browsing, as well as technical details such as date, weight and componentry.)

continued...

Posted by hipstomp / Rain Noe  |  18 Apr 2013  |  Comments (4)

I had simplistically assumed we would advance from "dumb" paper with things printed on it to some smarter variant, where every sheet of paper is an iPad. But as researchers at Fujitsu Laboratories demonstrate here, there's still plenty of room to design new interfaces that fall between those two extremes.

By combining an ordinary webcam, a computer and an off-the-shelf projector, Fujitsu's "FingerLink Interaction System" provides a new user interface that effectively turns a "dumb" piece of paper, and the table it's sitting on, into a touchscreen. Check out how they did it, and peep the CAD demo starting around 2:43:
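
Fujitsu hasn't published FingerLink's internals, but the general recipe for this kind of system is well understood: detect the fingertip in the webcam frame, then map its camera coordinates into the projector's coordinate space via a homography computed during a one-off calibration. Here's a minimal sketch of that pipeline in Python with OpenCV; the skin-color threshold and the calibration corner points are illustrative assumptions, not Fujitsu's values.

    # Sketch of a camera-plus-projector "touch" pipeline in the spirit of
    # FingerLink. Not Fujitsu's implementation: the skin-color range and
    # calibration corners below are assumptions for illustration.
    import cv2
    import numpy as np

    # Homography from camera pixels to projector pixels, computed from four
    # corresponding points found during calibration.
    CAM_CORNERS = np.float32([[102, 80], [538, 92], [530, 410], [95, 398]])
    PROJ_CORNERS = np.float32([[0, 0], [1280, 0], [1280, 800], [0, 800]])
    H = cv2.getPerspectiveTransform(CAM_CORNERS, PROJ_CORNERS)

    def fingertip(frame):
        """Topmost point of the largest skin-colored blob, or None."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))  # rough skin range
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        hand = max(contours, key=cv2.contourArea)
        x, y = min(hand[:, 0, :], key=lambda p: p[1])  # topmost contour point
        return float(x), float(y)

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        tip = fingertip(frame)
        if tip is not None:
            # Convert the camera-space fingertip into projector space:
            pt = cv2.perspectiveTransform(np.float32([[tip]]), H)[0][0]
            print("touch at projector coords", pt)  # drive the projected UI here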

continued...

Posted by hipstomp / Rain Noe  |  16 Apr 2013  |  Comments (4)

As Google Glass gets closer to its launch date, the search giant has released specs on what users can expect from the production models. The onboard camera will record 720p video and shoot 5MP stills; audio will be piped into your dome via bone conduction; it will have Bluetooth and 802.11b/g WiFi; you'll have 12GB of storage; and the battery will reportedly last for "one full day of typical use." The 640×360 display is claimed to be "the equivalent of a 25 inch high definition screen from eight feet away," but we'll need to see that in action.
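
That claim is easy to sanity-check with a little trigonometry. Assuming the comparison screen is 16:9 (Google doesn't say), a 25-inch diagonal works out to about 21.8 inches wide, which subtends roughly 13 degrees at eight feet, putting Glass's 640 horizontal pixels at around 49 per degree:

    # Back-of-envelope check of the "25 inch screen from eight feet" claim.
    # Assumes the comparison screen is 16:9; Google doesn't specify.
    import math

    diag_in, distance_in = 25.0, 8 * 12  # 25" diagonal viewed from 8 feet
    width_in = diag_in * 16 / math.hypot(16, 9)              # ~21.8" wide
    angle_deg = 2 * math.degrees(math.atan(width_in / 2 / distance_in))
    print(f"screen width: {width_in:.1f} in")                # ~21.8
    print(f"horizontal field of view: {angle_deg:.1f} deg")  # ~12.9
    print(f"pixel density: {640 / angle_deg:.0f} px/deg")    # ~49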

Which we will, if we head out to San Francisco or Los Angeles. Word on the street (and by "street," we mean Buzzfeed) is that Google will be opening up their own retail stores, starting with California's big cities. The physical storefronts will be meant to push not only Glass but Android- and Chromebook-related products as well. There's no word on what the stores will look like or who will be designing them, but given that Apple's got the likes of Norman Foster on their stores/HQ and Facebook's got Gehry on "Facebook West," we'd be surprised if Google didn't go with a big-ticket architect/designer for the prestige.

continued...

Posted by hipstomp / Rain Noe  |  16 Apr 2013  |  Comments (2)

At the end of their lifetimes as useful interfaces, no one threw a party for the rotary dial, the skeleton key or the crank people once used to manually start their Model T's. But Amsterdam-based design firm Studio Moniker, certain that we're "nearing the end of the humble computer cursor" (presumably due to touchscreens), is celebrating the little left-leaning arrow with an interactive video project.

This is a little tricky to describe, but what they're doing is creating a crowdsourced interactive experience. You click on a link and are presented with a screen featuring not only your own cursor but the recently recorded cursors of users around the world who did exactly what you're doing: following a series of onscreen prompts that guide your cursor in specific directions.

It's a lot more fun than it sounds, and we highly recommend you try it out by clicking here (NSFW). Your cursor's movements will then be recorded and integrated into future iterations of the video that new visitors will click on and experience.
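
Studio Moniker hasn't detailed the mechanics, but the core of a piece like this is a record-and-replay loop: sample each visitor's cursor as timestamped coordinates, store the recording, and play stored recordings back alongside the live cursor for the next visitor. A rough sketch of that idea; the 30 Hz sampling rate and the (time, x, y) data shape are our assumptions:

    # Generic record-and-replay loop for a crowdsourced cursor piece.
    # An illustration of the idea, not Studio Moniker's actual code.
    import time

    def record(get_cursor, duration_s=5.0, rate_hz=30):
        """Sample a live cursor into a list of (elapsed_s, x, y) tuples."""
        samples, start = [], time.time()
        while (elapsed := time.time() - start) < duration_s:
            x, y = get_cursor()
            samples.append((elapsed, x, y))
            time.sleep(1.0 / rate_hz)
        return samples

    def ghosts_at(recordings, elapsed_s):
        """Where each past visitor's cursor was at this point in the
        experience; these get drawn alongside the live cursor."""
        out = []
        for rec in recordings:
            past = [s for s in rec if s[0] <= elapsed_s]
            if past:
                out.append(past[-1][1:])  # latest (x, y) at or before t
        return out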

The website Creative Applications has more info on the project here.

Posted by hipstomp / Rain Noe  |   9 Apr 2013  |  Comments (1)

Martin Missfeldt is a Berlin-based artist with a sense of humor, known for posting gags like asserting the Google Glass team is working on an X-ray-spec-like application (and that Apple is countering it with asbestos-lined underwear). However, Missfeldt has also released an earnest infographic showing "How Google Glass Works," based on his study of both the patent and several write-ups.

The bulkiest parts are the battery riding on the right ear and the projector, though these components will presumably shrink over time. (On the battery front, have a look at LG Chem's wire-like battery tech and UCLA's developments in supercapacitors.) The image is bounced off a prism and focused directly onto the wearer's retina. Interestingly, the fine-tuning of the focus is apparently achieved in a primitive way: by physically adjusting the distance of the prism from the eye.

"The biggest challenge for Google will now be to make the Google Glass also usable for people with normal glasses," writes Missfeldt. That's no trivial matter, as by his reckoning that's more than 50% of the population in some countries; by your correspondent's observation, countries like South Korea and cities like Hong Kong have an insanely high percentage of children wearing eyeglasses.

"In this case the Google Glass has to be placed ahead of normal glasses—which doesn't [work well]. Or Google has to manufactor [sic] individual customized prisms, but this would be considerably more expensive than the standard production."

Click here to see the full-sized graphic.

Posted by Sam Dunne  |   3 Apr 2013

Could games like Papa Sangre pave the way for other mobile audio experiences?

The tech lovers at last week's MEX Mobile User Experience conference in London were treated to all manner of fantastical visions of our further mobile-empowered futures: big data, connected cars, smart homes, the Internet of Things, gestural interfaces, personal mini-drones—the lot.

Few presentations this year will be complete without at least a passing reference to the game-changing nature or dystopian social implications of the soon-to-be-unleashed Google Glass. Surprisingly, however, a couple of jaw-dropping demonstrations were enough to leave many attendees wondering whether we might be missing a slightly quieter revolution taking hold. Could immersive audio be about to come of age in mobile user experience?

Having played second fiddle to the visual interface for decades, so often the preserve of experimental art installations or niche concepts for the blind, audio has yet to find mass interaction application outside of alarms, alerts, ringtones and the occasional novelty bottle opener. All of this, however, could be set to change if the two fields of binaural sound and dynamic music can find their way into the repertoire of interaction designers.

Binaural Audio Spatializes Interaction

Hardly a new phenomenon (though not always well known), Papa Sangre is regarded as the 'best video game with no video ever made.' Since its release back in 2011, the audio app game for iOS has been a hit with both the visually impaired and the fully sighted. The game plunges players into a dark, monster-infested fantasy with only their ears to navigate the three-dimensional underworld and rescue the damsel in distress. The incredible 3D sound effects are achieved with headphones and binaural audio—an effect that replicates the experience of hearing a sound wave originating from a certain direction, hitting one ear before the other. Use of the screen is disconcertingly limited to a rudimentary compass-like dial (determining the player's virtual direction of movement) and two foot buttons, pressed to take steps into the darkness. Never has a computer game monster been as terrifying as when you can't actually see it.

In the dark: screenshot of the immersive audio game Papa Sangre

The creators, London-based SomethinElse, developed the game by first mapping out how sound is experienced from hundreds of directions using a binaural microphone—a stereo mic with the exact shape and density of a human head, and pick-ups where the eardrums would be. The algorithmic engine this produced could then be put to work transforming ordinary mono audio into spatialised stereo output for listeners wearing headphones (with a fair dose of clever coding, of course).
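
SomethinElse hasn't open-sourced that engine, but the basic effect of placing a mono sound to one side can be approximated with just two cues: an interaural time difference (the sound reaches the near ear first) and a level difference (the far ear hears it slightly quieter). A minimal numpy sketch of the idea; the head-radius constant and attenuation curve are rough assumptions, and a real engine would use full head-related transfer functions:

    # Crude binaural panner: delay and attenuate the far ear to place a
    # mono source at an azimuth. A sketch of the idea, not SomethinElse's
    # engine; the head radius and level curve are ballpark assumptions.
    import numpy as np

    SAMPLE_RATE = 44100
    HEAD_RADIUS_M = 0.0875   # roughly an average human head
    SPEED_OF_SOUND = 343.0   # m/s

    def binauralize(mono, azimuth_deg):
        """Return (left, right) channels with the source at azimuth_deg
        (0 = straight ahead, +90 = hard right)."""
        theta = np.radians(azimuth_deg)
        # Woodworth's approximation of the interaural time difference:
        itd = HEAD_RADIUS_M / SPEED_OF_SOUND * (theta + np.sin(theta))
        delay = int(round(abs(itd) * SAMPLE_RATE))      # samples of lag
        gain_far = 1.0 - 0.3 * abs(np.sin(theta))       # crude level drop
        near = mono
        far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * gain_far
        return (far, near) if azimuth_deg >= 0 else (near, far)

    # A one-second test tone placed 60 degrees to the right:
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    left, right = binauralize(np.sin(2 * np.pi * 440 * t), 60)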

A binaural microphone with the exact dimensions and density of a human head

continued...

Posted by Ray  |  27 Mar 2013  |  Comments (0)

A couple months ago, I posted about "Curious Rituals," a research project by a team of designers at the Art Center College of Design, which I discovered on Hyperallergic. In his post, editor Kyle Chayka also drew a connection to another project concerning touchscreen gestures IRL, "Multi-Touch Gestures" by Gabriele Meldaikyte, who is currently working towards her Master's in Product Design at RCA.

Where Richard Clarkson's "Rotary Smartphone" concept incorporated an outdated dialing mechanism into a contemporary mobile phone, Meldaikyte effectively inverts this approach, to equally thought-provoking effect. The five objects are somehow intuitive and opaque (despite their transparent components) at the same time, transcribing the supposedly 'natural' gestures into mechanical media.

There are five multi-touch gestures forming the language we use between our fingers and iPhone screens. This is the way we communicate, navigate and give commands to our iPhones.

Nowadays, finger gestures like tap / scroll / flick / swipe / pinch are considered to be 'signatures' of the Apple iPhone. I believe that in ten years or so these gestures will completely change. Therefore, my aim is to perpetuate them so they become accessible for future generations.

I have translated this interface language of communication into 3D objects which mimic every multi-touch gesture. My project is an interactive experience, where visitors can play, learn and be part of the exhibition.

continued...

Posted by Ray  |  21 Mar 2013  |  Comments (6)

We've seen plenty of variations on the now-canonical input device known as a keyboard, from touchscreen interfaces and, um, exterfaces to a tactile surface treatment (currently available on Kickstarter). However, a new keyboard concept has more in common with so-called index typewriters—as seen in hipstomp's typewriter round-up—than these superficial keyboard treatments, at least to the extent that it offers a more economical layout.

Specifically, Minuum improves on the concept of a linear arrangement of letters: screen-based UI and predictive text allow a QWERTY layout to be transposed into a single line of letters. (It's worth noting that index typewriters were initially developed as a less expensive, more portable alternative to keyboard-based typewriters, though they were reportedly slower than handwriting in most instances.)

Minuum is a tiny, one-dimensional keyboard that frees up screen space while allowing fast, accurate typing. Current technology assumes that sticking a full typewriter into a touchscreen device is the best way to enter text, giving us keyboards that are error-prone and cover up half the usable screen space (or more) on most smartphones and tablets.

Minuum, on the other hand, eliminates the visual clutter of archaic mobile keyboards by adapting the keyboard to a single dimension. What enables this minimalism is our specialized auto-correction algorithm that allows highly imprecise typing. This algorithm interprets in real time the difference between what you type and what you mean, getting it right even if you miss every single letter.
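
Minuum's production algorithm isn't spelled out beyond that description, but the underlying disambiguation idea can be sketched simply: treat each sloppy tap as a noisy observation of a position on the one-dimensional letter strip, and score dictionary words by how well their letters' positions explain the taps. In the toy version below, the layout, the Gaussian noise model and the four-word dictionary are all illustrative assumptions:

    # Toy disambiguator for a one-dimensional keyboard, in the spirit of
    # Minuum's pitch. Not their algorithm: layout, noise model and the
    # mini-dictionary are illustrative assumptions.
    import math

    LAYOUT = "qwertyuiopasdfghjklzxcvbnm"    # letters on a single line
    POS = {ch: i / (len(LAYOUT) - 1) for i, ch in enumerate(LAYOUT)}
    TAP_SIGMA = 0.08                          # how sloppy taps may be
    DICTIONARY = ["hello", "help", "gulp", "hound"]

    def score(word, taps):
        """Log-likelihood of a word given tap positions in [0, 1]."""
        if len(word) != len(taps):
            return float("-inf")
        return sum(-((POS[ch] - t) ** 2) / (2 * TAP_SIGMA ** 2)
                   for ch, t in zip(word, taps))

    def decode(taps):
        return max(DICTIONARY, key=lambda w: score(w, taps))

    # Five imprecise taps roughly where h-e-l-l-o sit on the strip:
    print(decode([0.58, 0.10, 0.70, 0.72, 0.33]))  # -> "hello"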

The video is, as they say, a must-see:

Yes, the last bit is cool, but nota bene: it's currently an alpha-stage prototype, and Will Walmsley & co. are currently seeking funding on IndieGoGo. Suffice it to say that we'll be keeping an eye on this one... if all of the hypothetical wearable implementations become a reality, we could see the emergence of a new set of curious rituals.

Hat-tip to Nik Roope

Posted by hipstomp / Rain Noe  |  20 Mar 2013  |  Comments (1)

Why would a company that creates baby products have roboticists on staff? Well, check out what 4Moms' Origami stroller can do:

How awesome is that? In addition to the physical features—the onboard storage and the peekaboo window that I'd imagine are de rigueur—it's the technical aspects that most impress me. Having a generator in the wheel that automatically charges your cell phone seems particularly brilliant.

Then there's the LCD dashboard, which sounds gimmicky at first but proves useful on closer inspection: while you might be able to do without the speedometer, an odometer tells you how far you've traveled, and the ambient temperature display helps you decide whether you ought to throw another layer on your tyke.

And of course, there's that crazy power folding/unfolding operation. (And yes, it's got baby sensors, so it cannot accidentally be activated while the child is onboard.)

continued...

Posted by hipstomp / Rain Noe  |  15 Mar 2013  |  Comments (0)

Now that the "Who owns the glass rectangle" smartphone wars are thankfully fading into the background of the news cycle, competition in interaction designs is coming to the forefront. Apple arguably kicked it off in '11 by integrating Siri, introducing voice control; as we saw yesterday, Google may push into backside touch; and now Samsung is introducing a host of different interaction designs with their latest model.

Unveiled last night, Samsung's new Galaxy S4 has "Smart Pause," which stops and starts videos depending on whether your eyes are looking at the screen (they are presumably tracked by the camera). "Smart Scroll" advances screen content when the user tilts the phone to one side or the other. "Air Gesture" allows users to manipulate the phone without actually touching it, but rather by hovering a finger over the screen, or using a broader gesture like a hand wave to advance photographs. (And it works while wearing gloves.) Lastly, "S Translator" enables you to speak one language into the phone, and have the phone speak back a translation into a different language.

While none of these features is a magic bullet that will instantly win the smartphone war, that's not the relevant point to us. What we're glad of is that heated competition is producing a range of experimental ways to interact with devices. Apple's steady, measured development process is very different from Samsung's "throw it at the wall and see what sticks" approach, with Google somewhere in the middle, and we can't say which methodology is superior; but either way it's an exciting time for interaction design, and it is the end user who stands to win from all of these companies duking it out.

Posted by Sam Dunne  |  15 Mar 2013  |  Comments (0)

'Dare We Do It Real Time' by body>data>space (photo by Jean-Paul Berthoin)

Over an intensive two days at the end of the month, 100 delegates at MEX 2013—the international forum for mobile user experience, in its 12th iteration this year—will gather in central London to discuss and attempt to envision the development and future impact of mobile technology.

Speakers at last year's forum included Dale Herigstad, the four-time Emmy-award-winning creator of the iconic Minority Report conceptual user interfaces, as well as connected-car experts from Car Design Research; this year's event boasts inspiring input from the likes of Facebook content strategist Melody Quintana, WhatUsersDo UX research guru Lee Duddell and Ghislaine Boddington, creative director of experimental connected performance outfit body>data>space.

Right in the fallout from SXSW, and amidst mounting debate surrounding the launch of Google's Glass project, the MEX forum will explore six 'Pathways', each focusing on a particularly pertinent issue in the world of mobile UX:

Insight - How should we improve understanding of user behaviour and enhance how that drives design decisions?
Diffusion - What are the principles of multiple touch-point design and the new, diffused digital experiences?
Context - How can designers provide relevant experiences, respect privacy and adapt to preferences?
Sensation - What techniques are there for enhancing digital experience with audible and tactile elements?
Form - How can change in shapes, materials or the abandonment of physical form be used to excite users?
Sustainability - How can we enable sustainable expression in digital product choices? Can we harness digital design to promote sustainable living?

Sam Dunne, Design Strategist at Plan and Core77 UK Correspondent, will be reporting live from the event.

MEX, Mobile User Experience
Wallacespace St. Pancras
22 Duke's Road
London, WC1H 9PN
March 26–27, 2013

A small number of tickets are still available here.

Posted by hipstomp / Rain Noe  |  14 Mar 2013  |  Comments (3)

Way back in '07, we learned Apple had patented touchscreens with interactive backs, meaning you could perform on-screen manipulations while keeping your finger out of the way. By 2010 we were calling it "backtouch" and (incorrectly) predicting the iPad would have it. Just as we'd given up hope on this UI technology ever hitting the market, Google is getting our hopes up once more (even though we're afraid to love again).

Patent Bolt reports that Google has patented "Simple backside device touch controls."

We thought the whole point of a patent was that they aren't awarded to duplicate technologies, but apparently there's something in Google's secret sauce that makes it different. From a user standpoint, though, the benefits appear the same: you tap the back of your phone or tablet, and that registers a hit on-screen, enabling you to manipulate apps or perhaps type.

We're curious as to how ergonomically sound this is, as the opposable thumbs my dog always complains about not having seem more agile than the fingers we'd use to access the back of a device. I just picked up my phone and spent a few minutes pretending to type on the back versus actually typing on the front, and while the former feels a little awkward, I already suck at the latter. (One sure benefit, though: backtouch would leave fewer fingerprints on the glass.) Try it yourself, assuming you're not out in public and don't want to look like a tool, and let us know if you think backtouch has legs.

Posted by hipstomp / Rain Noe  |  11 Mar 2013  |  Comments (1)

When we last looked in on interaction designer Jinha Lee, he was developing the See-Through 3D Desktop for the Microsoft Applied Sciences Group. Last week Lee, who's pursuing a doctorate at MIT Media Lab's Tangible Media Group, posted a video showing a potential retail application for the set-up: Called WYCIWYW, for "What You Click is What You Wear," the interface would allow the user to virtually try on wristwatches and jewelry.

continued...

Posted by hipstomp / Rain Noe  |   8 Mar 2013  |  Comments (5)

Are the Home button's days numbered?

Apple notoriously applies for tons of patents, very few of which will make it into actual products. This one is interesting from a UI perspective.

You could argue either way, but let's say it's an ergonomic necessity to have an easy-to-locate Home button, as now exists on the iPhone. That button cuts into screen real estate. Is there a way for Apple to get rid of it, growing the screen, while still somehow offering the Home button's functionality?

The answer may lie in a patent Apple has secured involving measuring the electrical capacitance of a product's body. As an example, if this were incorporated into an all-screen iPhone, the user could simply squeeze the phone's housing as a means of input. That doesn't mean the body of the phone itself would have to deform; it just means that the phone's body would register the change in capacitance coming from a squeeze and turn that into some sort of command. Software would sort out whether the squeeze was purposeful or accidental.
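
How that software might separate purposeful from accidental isn't specified, but the standard recipe for this kind of noisy one-dimensional signal is a slowly adapting baseline plus a threshold and a minimum hold time, so that a brief bump from picking up the phone doesn't register as a command. A hedged sketch of that logic, with every constant invented for illustration:

    # Sketch of "was that squeeze on purpose?" filtering over a capacitance
    # signal. Thresholds and timings are invented for illustration; the
    # patent doesn't disclose Apple's actual heuristics.

    def detect_squeezes(samples, sample_hz=100, threshold=15.0, min_hold_s=0.25):
        """Yield sample indices where a deliberate squeeze begins.

        A squeeze counts as deliberate only if capacitance stays above the
        running baseline by `threshold` for at least `min_hold_s` seconds.
        """
        baseline = samples[0]
        min_hold = int(min_hold_s * sample_hz)
        run_start = None
        for i, value in enumerate(samples):
            if value - baseline > threshold:
                if run_start is None:
                    run_start = i
                if i - run_start + 1 == min_hold:
                    yield run_start            # held long enough: deliberate
            else:
                run_start = None
                # Slowly track drift (temperature, grip changes) while idle:
                baseline += 0.01 * (value - baseline)

    # A brief bump (ignored) followed by a sustained squeeze (detected):
    signal = [100] * 50 + [130] * 10 + [100] * 50 + [130] * 40 + [100] * 20
    print(list(detect_squeezes(signal)))  # -> [110]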

Apple Insider is speculating that the technology mentioned in the patent, which was granted several years ago, may pop up in the forthcoming iWatch.

Posted by hipstomp / Rain Noe  |  28 Feb 2013  |  Comments (6)

Well folks, looks like 2013 is shaping up to be the year gesture control finally becomes available to the masses.

First up, the Leap motion controller that caused such a blog stir (we covered it here and here) will start shipping on May 13th, just about a year after they began taking pre-orders.

Hot on their heels—or forearms, I should say—is the Myo controller, an armband that you wear well above your wrist but below the elbow. Why the weird position? The Myo actually reads the electrical activity in your muscles, rather than relying on a camera.
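
Myo's makers haven't described their recognition pipeline, but a common approach to muscle-signal (EMG) gesture classification is to window the multi-channel signal, reduce each window to a simple feature such as per-channel RMS energy, and match it against stored gesture templates. A toy sketch of that approach; the channel count, window size and template values are all assumptions:

    # Toy EMG gesture classifier: windowed per-channel RMS features matched
    # against stored templates. A generic sketch of how muscle-signal
    # gesture recognition often works, not the Myo's actual pipeline.
    import numpy as np

    CHANNELS = 8   # electrodes around the forearm (assumed)
    WINDOW = 200   # samples per classification window (assumed)

    def rms_features(window):
        """One RMS value per electrode channel; shape (CHANNELS,)."""
        return np.sqrt(np.mean(window ** 2, axis=0))

    # Templates: mean feature vector per gesture, learned from examples.
    templates = {
        "fist":   np.array([0.9, 0.8, 0.3, 0.2, 0.2, 0.3, 0.7, 0.9]),
        "spread": np.array([0.3, 0.4, 0.8, 0.9, 0.9, 0.8, 0.4, 0.3]),
        "rest":   np.full(CHANNELS, 0.05),
    }

    def classify(window):
        f = rms_features(window)
        return min(templates, key=lambda g: np.linalg.norm(f - templates[g]))

    # Fake a burst of activity that loads the channels like a fist clench:
    rng = np.random.default_rng(0)
    burst = rng.normal(0, templates["fist"], size=(WINDOW, CHANNELS))
    print(classify(burst))  # -> "fist"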

This seems like a pretty smart approach, as the Myo can decipher complex finger gestures, flicks and rotations without requiring line-of-sight. That suddenly opens up a new world of interactivity that doesn't require the user to be sitting in front of a camera-equipped computer, or dancing around in front of a Kinect. Peep this:

Looks amazing, no? If it works as advertised, it will have a much broader range of applications than the stationary Leap, and the Myo's price reflects that: The Leap's going for $80, while the Myo will run you $150. It's up for pre-order now and they're claiming it will ship later this year.

Posted by Jeroen van Geel  |  26 Feb 2013  |  Comments (2)

Image courtesy of smashingbuzz

Jeroen van Geel was invited to participate in the Redux at Interaction 13 in Toronto. Speakers were invited to reflect upon the conference content on the last day of the conference. This is part of his reflection, combined with some afterthoughts.

Interaction design is a young field. At least, that's what we as interaction designers keep telling ourselves. And of course, in comparison to many other fields we are relatively young. But I get the feeling that we use it more as an excuse to permit ourselves to have an unclear definition of who we are—and who we aren't.

At this year's Interaction Design Association (IxDA) conference, Interaction 13, you got a good overview of the topics that are of interest to interaction designers. And I can tell you that, as long as it has something to do with human behaviour, it seems to be of interest. In four days' time there were talks and discussions around data, food design, social, health, gaming, personas, storytelling, lean, business and even changing the world. The topics ranged from the very specific task of creating attributes to having an impact on a global scale. It shows that interaction designers have great curiosity and want to understand many aspects of life. When we think we understand how things work, we feel we can impact everything. Of course this is great, and we all know that curiosity should be stimulated, but at the same time this energy and endless search for knowledge can be a curse. Before we know it we become jacks of all trades, masters of none. Interaction designers already have a lot of difficulty explaining their exact value. But where does it end? I don't know the answer, because I myself understand this endless curiosity and see how it helps me improve my skills. Maybe the question is: are we becoming more a belief than a field?

The theme of Interaction 13 was 'social innovation with impact,' and several presentations focused on the role of interaction designers in making the world a better place. Almost all designers in general, and every interaction designer specifically, want to have this kind of impact. Over the last few years I've seen quite a few presentations at 'User Experience' conferences where a speaker enthusiastically put his fist in the air and proclaimed that the time has come for the interaction designer to make the world more livable. Everybody cheered; interaction designers rallied with their Sharpies and thought they could solve every possible wicked problem. They enthusiastically went back to their huge corporations or agencies in the hope that the next day they would finally get this world-changing assignment from their boss. But of course it didn't work that way.

continued...

Posted by hipstomp / Rain Noe  |  21 Feb 2013  |  Comments (6)

As Google's Project Glass moves closer to completion, they're making a two-pronged push to draw eyeballs, both figuratively and literally. For the former, they've released a video with actual footage captured through actual Glass prototypes:

It's funny how quickly I've become accustomed to the fineness of GoPro's smooth footage; Google's comparatively primitive video quality leaves no doubt that the footage is real.

Viewing the footage, we see Google pushing several applications:

1. Exciting athletic or action-packed POV footage, à la GoPro.
2. Voyeuristic or "memory-making" POV footage, as with the ballerina about to hit the stage, folks playing with their children and dogs.
3. Practical real-time referencing, as with the ice sculptor pulling up images of his subject.
4. Hands-free photography.
5. Real-time sharing, à la Facetime, as with the man sharing footage of a snake with (presumably) his wife and child.
6. Real-time navigation.
7. Real-time translation (though I think tonal Asian languages like Chinese and Thai will present some implementation challenges).

What's interesting is that Glass promises such a broad range of applications—quite a different tack from Apple's approach of making their devices do a few things well. For us designers, the video raises questions of interface design: Glass presumably taps into Wi-Fi, so how do we select a network and enter passwords? Will the voice control work on a crowded sidewalk or in a noisy train station? How fine is the camera's voice-prompted shutter timing? How, and how often, do users charge the device? And how do users get footage off of it?

continued...

Posted by hipstomp / Rain Noe  |  15 Feb 2013  |  Comments (3)

While it doesn't appear to be an outright Apple-Chevrolet partnership, Chevy has announced that their new Sonic and Spark models will offer integration with Apple's Siri. Called "Eyes Free Integration," Chevy's system will enable iPhone-toting drivers to initiate and answer phone calls, interact with their calendars, play music, hear transcriptions of incoming text messages, and compose outgoing text messages all by voice.

Given the context in which it's meant to be used, one of the system's touches purposely violates a cardinal rule of user interface design: visual feedback. With Eyes Free, the phone avoids lighting up when interacted with, instead remaining dark to counteract your tendency to look at things that suddenly illuminate, so that you'll keep your peepers on the road.

Two Eyes-Free-compatible apps/hacks we'd like to see:

1. The KITT voice mod, which continually refers to you as "Michael" no matter what your name is.

2. An app that enables you to call out the license plate of the car in front of you that just cut you off. It automatically dials that driver's phone, and you can tell them exactly what you think of them without needing to roll the window down to yell it.

"Where did you learn to DRIVE, you #@#$%*?

Posted by hipstomp / Rain Noe  |  11 Feb 2013  |  Comments (7)

When I was growing up in the '70s, all of New York City had the same area code; I could call from Queens to Manhattan, or vice versa, without having to dial "212" first. When "718" was finally assigned to the outer boroughs, there was a sort of bizarre pride that people took in having a "212" area code, which we from the outer boroughs of course thought was silly.

Interestingly enough, the number sequence "212" wasn't chosen randomly, but was a direct result of the design of the input device of the time: the rotary dial.

Touch-tone phones may have debuted in the early '60s thanks to John E. Karlin, but I grew up in a house that used rotary phones all the way into the '80s. It was only after we got our first touch-tone phone that I realized how slow the dial was—numbers with a 9 or 0 in them seemed to take forever, and maybe one out of ten times you'd screw up the dialing and have to start over. But "212" was always easy to dial.

As you can guess, when the North American Numbering Plan of the 1940s went about assigning area codes, "212" went to New York City because it was a center of business, businesspeople are by definition busy, and "212" is the fastest possible area code to dial: due to the way the switching equipment worked, the first digit couldn't be a 0 or 1, and the second digit had to be a 1 or a 0. So "212," at a total of five clicks on the dial, was the fastest.
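
The click-count arithmetic is easy to verify: on a rotary dial, digit N sends N pulses, except 0, which sends ten. A quick tally for a few area codes:

    # Rotary-dial "cost" of an area code: digit N takes N pulses, 0 takes ten.
    def pulses(code: str) -> int:
        return sum(10 if d == "0" else int(d) for d in code)

    for code in ["212", "718", "213", "202", "907"]:
        print(code, pulses(code))
    # 212 -> 5 pulses; 718 -> 16; 213 -> 6; 202 -> 14; 907 -> 26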

Of course, after the addition of "718," it was only you Manhattanites who still enjoyed the speed of "212"—we in the outer boroughs had to wait for the "7" and "8" to go all the way around the dial. Lucky you.

Posted by hipstomp / Rain Noe  |  11 Feb 2013  |  Comments (2)

You may not know his name, but you know his work. John E. Karlin, who passed away in late January, essentially invented the touch-tone keypad. We take that ubiquitous input device for granted—it's on everything from cell phones to alarm systems to microwave ovens—but there was a time when that interface didn't exist, and no one knew what the "correct" design for quickly inputting numbers ought to be.

An industrial psychologist, Karlin was working for Bell Labs (AT&T's R&D department) in the 1940s when he convinced them to start a dedicated human factors department. By 1951 he himself was the director of Human Factors Engineering. In the late 1950s they sought a faster alternative to rotary dialing, and Karlin and his group developed the configuration we know so well today.

During the process they examined different options, of course. Aren't you glad we didn't wind up with this?

You might think Karlin simply took the calculator keypad and placed the smaller numbers up top. Nope—take a look at what calculators looked like at the time:

continued...

Posted by Ray  |   7 Feb 2013  |  Comments (2)

We've all seen it: the teenagers with one earbud in, feigning interest in conversation; iPad users brandishing the device like a radiation barrier to snap a photo; the veritable hypnosis of the "cell trance." In fact, maybe you're reading these very words on your smartphone, killing time in line while you wait for the next express train or your double-shot skinny latte. No shame in that—we all do it.

These behaviors, and over 20 other digital gestures, are duly catalogued in a research project conducted at the Art Center College of Design by Nicolas Nova, Katherine Miyake, Nancy Kwon and Walton Chiu in July and August of last year. The four published their findings on our gadget-enabled society in an ongoing blog and a book [PDF] as of last September. "Curious Rituals" is nothing short of brilliant: a comprehensive index of gestures, tics and related epiphenomena, organized into seven categories of vaguely anthropological rigor. (The authors also extrapolated their findings into a short film of several hypothetical not-so-distant-future scenarios, which I found rather less compelling than the book.)

While the blog illustrates their process—along with related videos and imagery—the final report, published under a Creative Commons Attribution-NonCommercial 3.0 Unported License, offers an incisive examination of "gestures, postures and digital rituals that typically emerged with the use of digital technologies."

Regarding digital technologies, [this endeavor shows] how the use of such devices is a joint construction between designers and users. Some of the gestures we describe here indeed emerged from people's everyday practices, either from a naïve perspective (lifting up one's finger in a cell phone conversation to have better signal) or because they're simply more practical (watching a movie in bed with the laptop shifted). Even the ones that have been "created" by designers (pinching, taps, swipes, clicks) did not come out from the blue; they have been transferred from existing habits using other objects. The description of these postures, gestures and rituals can then be seen as a way to reveal the way users domesticate new technologies.

Dan Hill of City of Sound contributes a number of his own observations in his fluent introductory essay. The designer/urbanist/technologist sets the stage by taking a casual inventory of gestures, from the "wake-up wiggle" (impatiently jostling a mouse to awaken a sleeping computer) to iPad photography (which "feels awkward and transitional") and the instant-classic iPhone compass calibration (later referred to as the "angry monkey"). I'd add that this last gesture looks something like twirling an invisible baton or fire dancing—or, incidentally, 'skippable rope' from Art Hack Day.

continued...

Posted by hipstomp / Rain Noe  |  31 Jan 2013  |  Comments (2)

How do you project moving images onto water? That was the challenge faced by Red Paper Heart, a Brooklyn-based collective of designers and coders. Tasked by nightlife tracker UrbanDaddy with creating an event featuring "a memorable interactive experience in water," RPH decided to "create animations that partygoers could swim through."

Sixty-five thousand ping pong balls later, they had their solution:

Posted by hipstomp / Rain Noe  |   4 Jan 2013  |  Comments (4)

The forthcoming Touchscreen Cafe Table we posted on has had some good follow-up, and unsurprisingly, Moneual aren't the only ones to have visualized such a thing. Fans of the seminal '90s Japanese anime Cowboy Bebop may remember Spike and Jet ordering dishes off of a touchscreen restaurant table that presented holographic images of the dishes, and Core77 readers have chimed in with more examples. SCAD grad and interaction designer Clint Rule (update your Coroflot page please!) worked up a touchscreen cafe table concept video a couple of years ago, and at least one restaurant in London currently has something similar in operation. Whereas I was thinking of the table's potential purely as a transactional device, both Rule and London's Inamo eatery have taken it further.

To start with, Rule's concept integrates social features:

Inamo, an Asian-fusion restaurant in London's Soho district, opts for projection rather than a touchscreen. Their system was created by a London-based company called E-Table Interactive, and though it's projection-based, it still incorporates some type of hand-tracking mechanism that provides touchscreen-like functionality.

continued...

Posted by hipstomp / Rain Noe  |   3 Jan 2013  |  Comments (9)

Aside from cell phones and tablets, we've seen touchscreens integrated into voting machines, vending machines and musical devices. Perceptive Pixel's Jeff Han integrated one into an industrial designer's dream set-up, and Adam Benton proposed this desk I'd kill for. But if you look at the physical properties of a proper touchscreen (it's a flat surface that we can use for communications, and the "buttons" disappear when we don't need them), perhaps its true killer app is in restaurants.

At least, that's what Korean electronics company Moneual is hoping, with the rumored forthcoming release of their touchscreen cafe table. With a touchscreen integrated into a table, restaurants could do away with paper menus, instead displaying dish descriptions and photos on demand. Diners would never have to flag a waiter down. And with the NFC technology that Moneual will reportedly integrate into the table, you could pay the bill without having to wait for the check. You'd still need a runner to dole out the chow and a busboy to clean up afterwards, but as a former waiter myself, I'd wholeheartedly vote for an object that made the waiter obsolete.

The rumor mill says Moneual will pull the wraps off of the table at this year's CES, where it just so happens Core77 will be. We'll keep you posted if we come across it.

Posted by hipstomp / Rain Noe  |  31 Dec 2012  |  Comments (2)

Whether or not you're interested in videogames, this device is kind of fascinating from an industrial design/interface design point of view. The PhoneJoy Play is essentially a portable input device with a slick mechanical design: The two holdable halves can spread sideways, connected by a telescoping mechanism. Your smartphone or mini-tablet can then be "docked" in the middle, and the variety of buttons and motion pads interact with your device wirelessly.

continued...