You sit on a chair in a kitchen. In front of you on the table is a black, glossy cube. The cube is about the size of your hand and vividly reflects its environment. Sharp edges meet at sharp corners. There are no visible means of manufacture, no split lines, no changes of material, no textures. It is clearly man-made, but what is it for? You lift the object and turn it over in your hands. The cube is cool to the touch, hard, glassy and non-compliant. It feels dense, although not solid. A little tap reveals no vibrations, no internal voids or structures. Viewing all six sides you note that there are no buttons, knobs or dials, no perforations for a microphone or speaker, no means of inserting electricity or media.
Suddenly a piezo-electric buzz sounds from within the cube and an LED glows beneath its surface. At once it becomes clear: this is a device.
I want to write a little about affordances today, and how we understand the objects around us. The term 'affordance' was coined by the American psychologist J. J. Gibson in his work on ecological psychology (a field he shares with Roger Barker), and developed most fully in his 1977 essay 'The Theory of Affordances'. Plainly put, an affordance is an environment's way of letting you know what is possible. During infancy we press, poke, pull and pick at objects, codifying their behaviours and building a library of reactions. Over time we become adept at estimating probable outcomes from interactions with certain shapes. Small bumps are to be pushed, cylinders are to be rotated, toggles are to be pulled. Every designer uses these codes when designing products. As we enable physical objects with digital capabilities, transforming 'things' into 'devices,' we begin to meddle with their core affordances. I've been spinning a thought experiment over in my head for a while now and want to share it with you.
Your hammer has four components: the wooden handle, the iron head, the steel wedge and a printed logo sticker. After many years of use, the hammer breaks and needs replacing. You visit the hardware store and see that new hammers have a hollow, reinforced plastic handle. Five years pass, the plastic hammer breaks, and you return to the store. The latest hammers have a small sensor inside the hollow plastic handle, which measures force and stroke. By pairing the hammer with your phone, you are able to see usage information, and the hammer is always easy to find. The manufacturer is able to access this information to chart how often hammers are used, and to track how and when they break. Five years pass. The next generation of hammers have the ability to circumvent the phone and upload information directly to the internet. The manufacturer has created an online service offering discounts on further purchases, based on your habits and frequency of use. You log in to this service via Facebook.
At this point, your hammer knows about your affair.
I'm not sure what to do with the hammer story, or what it means exactly, but it troubles me. A hammer is the archetypal blunt instrument, yet after a brief (but credible) escalation of technology the core affordances of a hammer become blurred. Despite the overall physical form of the object remaining somewhat constant, the addition of sensors, connectivity, user information and interpretation makes the object more difficult to comprehend. Our core understanding of 'hammer-ness' is challenged. It's still for hitting nails, but it has access to a whole host of other information, which may encroach on our approach to it as an object. Our internal library of handles, levers, knobs, toggles and sliders is ill-equipped to make sense of the emerging population of 'devices' where 'things' once lived. Increasingly I find the question of purpose difficult to answer. I'm not sure I can define what a phone is for, what a laptop is for, or what any number of convergent digital things are for.
The temptation to build flexible platforms is strong: to collect a family of sensors and interactions together in one place and hope for the best. To allow an army of developers access to the device and to deliver parcels of functionality into it seems smart, but we must be cautious of such approaches. As we add more functionality to an object, the object itself becomes necessarily flexible, bland even. This leads to the type of industrial design we see in the world of smartphones: anonymous, cookie-cutter devices which don't announce any functionality from their physical form.
Film still from 2001: A Space Odyssey
Kubrick's 2001: A Space Odyssey introduces a deliberately affordance-free object in the form of the monolith, which strikes a chord of unease for many reasons. We ask 'What is it?', 'What is it for?', 'How does it work?' Our brain tries to make sense of it, scratching at it to find purpose. Kubrick naturally offers no explanations, and we are left with hanging questions, such is our desire to make sense of the world. In our landscape of devices, we risk creating ever more monoliths, devoid of any exterior evidence of use, whilst simultaneously cramming the entire knowledge of mankind into them.
Convergent devices such as smartphones will continue to have a very important role in the future of objects, but we are set to enter an emerging and complex world driven by decreasing hardware costs, the emergence of software SIMs and significant improvements in connectivity infrastructure. We will soon exist in a world where adding connectivity (and by extension a profile and history) to an object will become trivial and commonplace. A standardized sub-assembly of power, antennae and processors will appear, allowing manufacturers to add network connectivity to their products with the same level of consideration as they currently give to an LED.
These collections of electronics are platforms by nature. With a little work, it's feasible that a Garmin bicycle computer could call me an Uber, or that I could send a Facebook message from my coffee machine. Whilst these things are not trivial, the relevant hardware exists in these focussed devices to muddy the waters of their core function. Over time we will see simple connected objects fall prey to feature creep and bloat as designers and engineers realize new capabilities gifted by the electronics within.
As it becomes simpler to embed connectivity into an increasing number of objects, we will need, more than ever, to practice restraint in the creation of those objects. If we are happy that a Garmin doesn't need to do email, or order an Uber, then a small screen with no keyboard seems sensible. But why a screen at all? Why has that become the default? Any smart product manager will point to the opportunity to add features at a later date, to offer more modes, to give some sort of choice... but just because you can doesn't mean that you should. As the British comedy duo Fry and Laurie beautifully illustrated in their sketch 'The Cause':
"…at least you've got the choice now. I mean they may be complete crap but you've got the choice ... that's what's important, the choice …"
There's something elegant about doing one thing well, of not being distracted by choice, of not developing a platform. It's difficult to resist offering options and to focus instead on the essence of an object or activity. That's much, much harder than leaving things open-ended, but it's much, much better for the user in the long run. Clarity of purpose is a sure-fire way to generate a strong bond between user and object. If the product does the same thing it always has, then you can be assured that it will continue to do so. When your hammer now needs a login that it didn't need yesterday, that clarity breaks down. The core affordances of a hammer become obscured. A good exercise when designing your new thing is to write its purpose on the studio wall. At every moment of feature development, get the team to look back at this statement and judge every decision accordingly.
So let's say you've managed to show restraint in designing the feature set for your thing. What now? Far from restricting design opportunities, you are now free to explore the glorious world of mechanical interactions: dials, buttons, levers, bezels, sliders, toggles and switches. These interactions only make sense when we restrict the functional requirements of an object, but they offer up a host of benefits. Interaction design has been so myopically focussed on the high-density pixel rectangle that the conversation rarely steps out of that arena. Tangible controls telegraph their use from a mile away; they stir our inner understanding of the world.
The first piece of interaction design a toddler understands on an iPhone is the home button. It clearly affords pressing: it's concave, it has a split around its perimeter, and it clicks when you push it. It really clicks! Touch screens offer none of these affordances. Where once sat a button, there is now a person's face, or a piece of text, or a candy to be crushed. Our brains need to constantly keep up with the ever-changing landscape of interactions which take place in the necessarily complicated world of the smartphone.
But it needn't be like that for a focussed device such as a bicycle computer. The function list, in truth, is small. Sure you can add features, but should you? By adding extra features, we add complexity. By adding complexity, we reduce the options for interaction, and likely resort to touchscreen and soft keys. We arrive at a place where the primary method of interaction is unsuitable for the core use case.
I want to close by directing you to a new product, which I feel displays this wonderful balance of technology, restraint and mechanical interaction. Omata, a small company from Los Angeles, are on the cusp of launching the world's first analog GPS bicycle speedometer. [Editor's Note: Read more coverage on the Omata from our bike editor here.] This is not a retro product, nor is it whimsical; it's the result of a lifelong obsession with cycling and a purity of vision for the place of technology within that sport. Current bicycle computers are bewildering devices, which have reduced cycling to a metrics race through the addition of features and data.
Omata bicycle computer
The addition of LCD screens has allowed manufacturers to add layers of analytics and feedback to cycling, which some would argue improves the experience. But does it really? Omata don't think so. Their product encourages you to enjoy the ride, and displays only speed, distance, elevation and time whilst riding. That seems like a sensible amount of data to handle whilst out and about. The product does still track all of those other things cyclists might want, but it does so internally, for interrogation at a later date. By restricting the feature list, Omata have created a beautifully simple object with a mechanical, bezel-driven interaction which stirs the tool-seeking part of your simian brain, rather than another screen surrounded by silicone dome buttons whose interactions must be learned on the couch.
Nick Foster is a post-discipline designer with specialisms. He has over fifteen years' experience in the design industry as an engineer, industrial designer and futurist for companies such as Dyson, Seymourpowell, Sony and Nokia. He received his MA from the Royal College of Art in London and currently lives in Oakland, where he is Staff Industrial Designer at Google. He is also a partner at the Near Future Laboratory, pioneering work in the field of Design Fiction.