Lapka is a set of "artisan electronic devices" for gathering data about one's immediate surroundings: each of the four building-block-like sensors can be attached to one's iPhone through the standard headphone jack. Coupled with a free app, they can provide detailed information on radiation, organic matter, electromagnetic fields and humidity—interesting features in themselves, enhanced by the product's quasi-organic, vaguely totemic form factor.
To complement Lapka's effort to make the product look more like jewelry or tabletop sculptures than gadgets, Burgopak notes that "The products themselves are luxury tools that convey their connection with nature. The packaging, we felt, should do the same."
From the beginning this was not intended to feel like an 'Apple' product. It is intended to disrupt preconceived expectations about consumer electronics. Brown kraft board, single colour print and incredibly limited product information were all intentional features.
The devil, as they say, is in the details; using precise, harmonious proportions (derived from the product), Burgopak created a simple tray to protect and frame the product. This was wrapped in a sleeve with an integrated lock and finished with a single tamper-evident seal.
The acronym "P.O.S." always struck me as somewhat ironic: most folks who have worked in retail know that it's short for Point Of Sale, but it also has a pejorative meaning in common parlance. When it launched in 2010, Square's register app marked a digital solution to the former—precisely because extant payment gateways so often might be characterized as the latter.
Today, they announced a major upgrade from the now-iconic card reader.
Square, the company making commerce easy for everyone, today announced Square Stand, beautiful new hardware for brick and mortar businesses that turns an iPad into a complete point of sale. With local businesses increasingly tearing out their old point of sale systems to run Square Register, Square Stand gives merchants a remarkable new way to manage and grow their business, all for the price of a cash register.
"Local business owners take as a given that they need an ugly, slow, expensive, and complicated point of sale system cluttering their counter," said Jack Dorsey, co-founder and CEO of Square. "Square Stand is elegant, fast, affordable, and easy to use. Whether you're selling cupcakes, cardigans, or cappuccinos, running your business with Square has never been easier."
Designed by Ammunition Group in collaboration with Square, the simple swiveling stand works as an all-in-one system. The card reader is discreetly integrated into the base, providing a larger and more stable slot for swiping.
They've also managed to cast a young Julianne Moore in the role of a lifetime:
Water and electricity don't mix, at least not where safety's concerned. But artist Antonin Fourneau, while in residence with the French R&D and prototyping collective DigitalArti, devised a safe and spectacular way for even children to activate LED lights with water.
Fourneau's proprietary hack, called "Water Light Graffiti," is a traveling installation that will next touch down at the Grohe showroom during New York Design Week. It consists of a grid of thousands of LED bulbs that light up as soon as water hits them. "You can use a paintbrush, a water atomizer, your fingers or anything damp to sketch a brightness message or just to draw," DigitalArti explains. "Water Light Graffiti is a wall for ephemeral messages in the urban space... A wall to communicate and share magically in the city."
Check it out:
Water Light Graffiti will go live in New York City on May 13th, at the Grohe Live! Center at 160 Fifth Ave; RSVP required.
As part of MoMA PS1's forthcoming EXPO 1: New York exhibition, a "large-scale festival exploring ecological challenges," the contemporary art center is bringing rAndom International's "Rain Room" to its sister organization in Midtown Manhattan.
Rain Room is a hundred square metre field of falling water through which it is possible to walk, trusting that a path can be navigated, without being drenched in the process. As you progress through The Curve, the sound of water and a suggestion of moisture fill the air, before you are confronted by this carefully choreographed downpour that responds to your movements and presence.
The digitally-inclined art/design collective is pleased to bring "Rain Room" to MoMA following its debut at the Barbican Centre in their hometown last fall, where it recently closed after a five-month engagement. We can only assume that some of our readers have already had the pleasure of seeing the installation in London, but we're definitely looking forward to experiencing it in person.
So it seems like software developers have all the cool toys—and they are really good at sharing. Design software, on the other hand, can be a little clunky and, as anyone who has tried to share a Rhino file with a classmate can tell you, it can be difficult to collaborate directly in most 3D modeling software.
Enter Sunglass, a collaboration tool you can use in conjunction with most 3D modeling software to share, review and access your (and your coworkers') files from anywhere. Sunglass stores all of your 3D files in the cloud, allowing both private and public access through the open API. Their tagline, "Think GitHub for 3D," is a powerful statement for those familiar with the web-based hosting service for software development projects. Sunglass has generated quite a buzz in the start-up realm, but designers are the ones who will really benefit from the browser-based software.
Revisions are logged and can be seen by any of the contributors
MIT-educated founders Kaustuv DeBiswas and Nitin Rao developed the platform for sharing and syncing 3D model files over the cloud, allowing access for clients, coworkers and contractors all over the world. As design studios spread further across the globe—not to mention manufacturing moving to every corner of the universe—the software seems like a touch of brilliance in terms of keeping track of workflow.
Sunglass offers plugins to sync with 3D modeling software
Sunglass offers a free version (allowing unlimited public projects) on their site that is great for group projects in design school. The entire platform operates with plug-ins to interface with a wide variety of 3D modeling software (including all our old friends: Rhino, SolidWorks, Autodesk and SketchUp, among others). The professional version, available by subscription for $20/month, adds private projects and additional features.
Review tools offer ability to comment and accept changes
It's been just over seven months since the Morpholio Project debuted their Trace app to much acclaim. By January of this year, they had added several new tools for designers beyond the original audience of architects, and now, just a few months later, they're pleased to announce a suite of new tools that constitute a major release. "The App Store's number one portfolio app re-imagines the portfolio as a design utility, moving it into the fast, flexible, at-your-fingertips device era. The project seeks to advance the ways that creatives access, share, discuss, and get feedback on their work from a global community of users."
By combining production and presentation software with web-enabled tools for sharing and critique, the app offers a fully-integrated platform for production and collaboration. To hear Morpholio's Anna Kenoff tell it, "Aside from making design production easier, we wanted to know if better tools could make it smarter by integrating the wisdom of crowds and capitalizing on the power of the touchscreen to capture feedback."
To achieve this, Morpholio had to become very sophisticated about all the ways that designers communicate—not just through language, but most importantly through their eyes and hands. Over the past year, the team of architects and programmers has collaborated with experts from various disciplines to build a robust design-centric workspace that could be used by anyone—from fashion designers to photographers, architects and automotive designers, even tattoo artists. It builds on research into human-computer interaction to deliver innovations like a tool for image analytics called "EyeTime" and virtual "Crits" where collaborators can share images and comment on each other's work via notes or sketches. Human behavior data-mining is essential to offering these forms of powerful feedback, letting you know how your followers are interacting with your work.
When you think about what you might encounter at the Museum of Modern Art in New York, Pac-Man and Tetris are generally not first on the list... if they're on the list at all. Last month, MoMA opened the doors on their new exhibition 'Applied Design,' showcasing a range of designed objects, interfaces and interactions dealing with nearly every facet of society. One of the major highlights of the show is the controversial addition of 14 video games to MoMA's permanent collection. The acquisition seems to toe the line between obvious and ridiculous, but we have to admit, MoMA is right on target for envisioning the modern museum collection of the digital age.
The 14 games, showcasing an array of videogames—from traditional arcade and single-player fantasy titles to MMOGs (massively multiplayer online games)—were selected not for their graphic quality or aesthetics, but as exemplary pieces of interaction design.
Applied Design is the brainchild of Senior Curator of Architecture and Design Paola Antonelli, who is no stranger to stretching the boundaries of the contemporary art museum (she was responsible for such shows as Talk to Me and Design and the Elastic Mind; for years she has been pushing to include a Boeing 747 in the permanent collection). The physical museum display of the games feels a little strange, appearing to transform part of the gallery into an arcade. Of the 14 in the collection, about half are playable by museum guests. Games with longer narratives (Myst and The Sims, among others) are displayed with pre-recorded footage to convey their scope without letting guests interact directly.
The process by which such unconventional works are selected and acquired for our collection can take surprising turns as well, as can the mode in which they're eventually appreciated by our audiences. While installations have for decades provided museums with interesting challenges involving acquisition, storage, reproducibility, authorship, maintenance, manufacture, context—even questions about the essence of a work of art in itself—MoMA curators have recently ventured further.
Most of us are losing our hearing for one reason or another, whether to poorly reproduced sound from cheap earbuds or to old age. Millennials seem destined to be shouting to hear each other in just a few short decades (if they aren't already). While most of us are interested in noise-cancelling headphones for the airplane or subway, advancements in customized audio tech could improve a number of different markets, from field equipment for military personnel to custom headphones.
Born out of the labs at MIT, Lantos Technologies formed in 2009 and developed a way to 3D map the ear canal. We've seen a lot of 3D scanning equipment recently, but in contrast to projects like the Photon that are fuzzy on the actual application, the ability to visualize the ear canal is an innovation likely to be a huge leap not only for audiologists, but for designers of audio gear and medical equipment alike. Likewise, we owe a nod of appreciation to Boston Device Development for a nicely executed form and geometry for the handheld instrument.
The world's first Intra-Aural 3D scan system uses the "intensity measurement of two different wavelength bands of fluorescent light as they travel through an absorbing medium, capturing images and stitching them together with elegant algorithms, the system generates a highly accurate 3D map."
Essentially, the hand-held device has a probe that goes into the ear canal, fills with a liquid and then takes a series of photos that are combined to create the 3D model—all in less than 60 seconds. The ear scan raises a few thoughts: first, it's sort of ugly in there; second, this could be huge for customized audio equipment. You also have to wonder: if modeling the interior of the ear canal is now possible, advancements in 3D mapping must have a myriad of other medical applications. Lantos recently received FDA clearance to market the scanning system in the United States later this year.
BBC Future recently invited Conran's Jared Mankelow to rethink the camera for their series on "redesigning the everyday," Imagineering, in which "top designers rethink common objects and offer 21st Century solutions." The Senior Designer at Sir Terence's venerable company did away with the screen-based interface, hearkening back to the "retro joys of analogue photography"—namely, "that old-school feeling of waiting for your photographs to be developed before seeing how they turned out."
Mankelow's concept consists of a simple square, roughly the size of a Post-It pad, featuring a distinctive central aperture that serves as the lens and viewfinder, "with two rings at the front for the imaging sensors (black) and a ringflash (white)."
The square snapper may only be a mock-up—made by the UK's Complete Fabrications—but it includes many of the attributes Mankelow would like in a finished product. Firstly there is the weight—the design's reassuring heaviness harks back to the chunky character of models from the 1970s, when old-school film cameras arguably reached their golden age.
The lack of screen, of course, is the most radical departure from existing digital camera design. Noting the availability of wireless screens—smartphones, tablets, etc.—Mankelow has opted to relegate preview images to mobile devices via Bluetooth instead of in the camera itself. Not only does this add the element of surprise, as in film photography, but it also serves to reduce battery usage.
The Photon 3D Scanner we mentioned last week has been overfunded by $140,000. The Photon, you'll recall, will allow you to inexpensively scan things on your desk.
A team of Heriot-Watt University researchers in Scotland, however, has developed a 3D scanner with a very different reach: it scans objects up to 325 meters away, and will reportedly be able to scan at a distance of 10 kilometers in the future. The researchers documented the results achieved with their functioning prototype in an optics journal, and according to 3Ders,
The new system works by sweeping a low-power infrared laser beam rapidly over an object. It then records, pixel-by-pixel, the round-trip flight time of the photons in the beam as they bounce off the object and arrive back at the source. The system can resolve depth on the millimeter scale over long distances using a detector that can "count" individual photons.
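The round-trip timing principle described above is simple enough to sketch numerically. Here's a minimal illustration (the timing value is a worked-backward assumption for demonstration, not a figure from the Heriot-Watt paper):

```python
# Time-of-flight depth calculation: distance is half the round-trip
# path traveled by a photon moving at the speed of light.
C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(t_seconds):
    """Distance to the surface, given the photon's round-trip time."""
    return C * t_seconds / 2.0

# A photon returning after ~2.168 microseconds implies a target
# roughly 325 meters away -- the prototype's reported range.
print(round(depth_from_round_trip(2.168e-6)))  # 325
```

Note the implied timing precision: resolving depth at the millimeter scale means distinguishing round-trip differences of only a few picoseconds, which is why a photon-counting detector is needed.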
However, you'll notice that while the mannequin scanned with something approaching fidelity, the face of the Asian gentleman (one of the co-authors of the research paper) is severely distorted:
This would seem to indicate that Asian people are immune to laser beams. For their part the researchers claim that human skin and perspiration muck with the scanning technology, but I think we can all agree that my explanation is more compelling.
As for applications, the team forecasts that their long-range 3D scanner could be used to scan large natural environments, like the side of a mountain, for example. They estimate that "a lightweight, fully portable scanning depth imager is possible and could be a product in less than five years."
More proof that the future never shakes out like you think it will: If you asked any of us during our childhoods what laser weapons would look like, our Lucasfilm-fueled imaginations would have described bolts of light that blast out of gun barrels like tracer rounds. We have a tendency to map current technology onto future technologies, which is why futuristic flying car concepts from the 1950s all look like '57 Chevy Impalas with no wheels.
Well, the U.S. Navy is now testing a shipborne laser cannon, and its actual application looks a lot less like blasting TIE fighters out of the sky and a lot more like burning ants with a magnifying glass. Observe:
That's called the Laser Weapon System—also known by the somewhat lame acronym LaWS—and it was developed by the U.S. Naval Research Laboratory. As you can see in the video, it works by tracking targets and painting them with the laser until they catch fire. The concept had been tested against a small boat prior to torching the flying drone you see in the video, and will reportedly be used in the future to counter things like incoming missiles. However, there's no word yet on whether Navy engineers will be able to surmount the ultimate technical hurdle: Ensuring that it makes a really cool noise when you fire it.
The Museum of Modern Art and open hardware startup littleBits are pleased to unveil a new collaboration, on display in the windows of MoMA Design Store locations in Midtown and Soho as of today, April 9, 2013. Developed in conjunction with Brooklyn design studio Labour, the "4'-tall kinetic sculptures [are] made of wood, cardboard and acrylic, [brought to life] with 'Bits' measuring less than 1 inch square."
Although littleBits have been billed as "LEGO for the iPad generation," founder Ayah Bdeir notes in her TED Talk (embedded below) that the transistor has been around since 1947—predating the iPad by over six decades. Rather, the modular bits comprise a full ecosystem of input/output functionality, such that littleBits cannot be classified strictly as a construction toy or an electronic one. Bdeir elaborates:
The idea behind littleBits is that electronics should be like any other material, paper, cardboard, screws and wood. You should be able to pick up 'light,' 'sound,' 'sensing,' etc., and embed it into your creative process just like you do foam and glue. We sit at the border between electronics, design, craft, art and mechanical engineering, and we are constantly negotiating those boundaries. I believe the most interesting things happen at the intersection of disciplines and the borders need to become more porous for us to see the most incredible uses of electronics in the world. littleBits is a library. We now have three kits and over 35 Bits and are working on the next 30, so this is literally just the beginning.
We had the chance to catch up with Bdeir, an interactive artist and engineer by training, about the past, present and future of littleBits.
Core77: I understand it's been roughly a year and a half since you originally launched littleBits. Have you been surprised by the response? What achievement or milestone are you most proud of thus far?
Ayah Bdeir: The response has been incredible. When I first started the company in September 2011, I knew that we already had fans who were waiting for the product, but I had no idea the response would be what it was. We sold the first products on our site on December 20th of that year and we sold out within three weeks. [In 2012, we grew over] a series of events: we won Best of Toy Fair, I gave a TED talk that got a great response, we had a documentary on CNN, and at every juncture, demand shot up. It was really incredible to see people from all over the world—parents, teachers, kids, designers, artists, hackers—getting excited about littleBits for different reasons.
I think my proudest milestone is that despite all I heard about the toy industry being competitive, jaded and without mercy, we won 14 toy awards in less than eight months (including Dr. Toy 10 Best Educational Products, Academics' Choice Brain Toy, etc.)—in some cases, we bested some of the most popular toy companies in the world.
Could games like Papa Sangre pave the way for other mobile audio experiences?
The tech lovers at last week's MEX Mobile User Experience conference in London were treated to all manner of fantastical visions of our further mobile empowered futures; big data, connected cars, smart homes, Internet of Things, gestural interfaces, personal mini-drones—the lot.
Few presentations this year will be complete without at least a passing reference to the game-changing nature or dystopian social implications of the soon-to-be-unleashed Google Glass. Surprisingly, however, a couple of jaw-dropping demonstrations were enough to leave many attendees wondering whether we might be missing a slightly quieter revolution taking hold. Could immersive audio be about to come of age in mobile user experience?
Having played second fiddle to the visual interface for decades—so often the preserve of experimental art installations or niche concepts for the blind—audio has yet to find mass interaction application outside of alarms, alerts, ringtones and the occasional novelty bottle opener. All of this, however, could be set to change if the two fields of binaural sound and dynamic music can find their way into the repertoire of interaction designers.
Binaural Audio Spatializes Interaction
Hardly a new phenomenon (though not always well known), Papa Sangre is regarded as the 'best video game with no video ever made.' Since its release back in 2011, the audio game for iOS has been a hit with both the visually impaired and the fully sighted. The game plunges players into a dark, monster-infested fantasy with only their ears to navigate the three-dimensional underworld and rescue the damsel in distress. The incredible 3D sound effects are achieved with headphones and binaural audio—an effect that replicates the experience of hearing a sound wave originating from a certain direction, hitting one ear before the other. Use of the screen is disconcertingly limited to a rudimentary compass-like dial (determining the player's virtual direction of movement) and two feet buttons, pressed to take steps into the darkness. Never has a computer game monster been as terrifying as when you can't actually see it.
In the dark: screenshot of immersive audio game Papa Sangre
The creators, London-based SomethinElse, developed the game by first mapping out the experience of sound from hundreds of directions using a binaural microphone—a stereo mic the exact shape and density of a human head, with pick-ups where the eardrums would be. The algorithmic engine this produced could then be put to work transforming any ordinary mono audio into a spatialised, stereo output for listeners wearing headphones (with a fair dose of clever coding, of course).
Binaural microphone with the exact dimensions and density of a human head
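The core directional cue that binaural rendering reproduces—a sound arriving at one ear slightly before the other—can be sketched with the classic Woodworth approximation of interaural time difference. This is a textbook illustration using typical values, not a peek inside SomethinElse's engine:

```python
import math

# Woodworth approximation of interaural time difference (ITD):
# the extra time a sound wave takes to wrap around the head and
# reach the far ear. Head radius and sound speed are standard
# textbook values, assumed here for illustration.
HEAD_RADIUS = 0.0875     # meters
SPEED_OF_SOUND = 343.0   # m/s

def itd_seconds(azimuth_rad):
    """ITD for a source at the given azimuth
    (0 = straight ahead, pi/2 = directly to one side)."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(azimuth_rad) + azimuth_rad)

# A source directly to the side arrives about 0.66 ms earlier
# at the near ear -- the strongest cue the brain uses to localize it.
print(round(itd_seconds(math.pi / 2) * 1e6))  # 656 (microseconds)
```

Delays this small—under a millisecond—are all the brain needs to place a sound in space, which is why headphone-delivered binaural audio can feel so uncannily three-dimensional.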
Anyone who's seen James Cameron's Aliens cannot forget the images of 1.) Ripley in a cargo-loader exoskeleton, and 2.) Vasquez prowling the corridors with that body-mounted machine gun on the swing arm. That was back in 1986; now it's 2013, and not only have these designs actually come to pass, but they've been combined.
As we previously reported, Lockheed Martin licensed technology from a company called Ekso Bionics to develop the HULC, or Human Universal Load Carrier. It's got the power-assist legs and the body-supported gun mount:
While Ekso Bionics is targeting the consumer market, enabling paraplegics to walk again, Lockheed has initially gone military. However, they're reportedly creating a version of the HULC called the Mantis, for industrial applications. As Bloomberg News reports,
The machines may follow a classic arc from Pentagon research project to fixture on an assembly line, similar to the development of lasers, said Paul Saffo, managing director of foresight at investment advisory firm Discern in San Francisco. "The medical devices get the most attention, the military funds it and the first mass application is industrial," Saffo said in a telephone interview.
[Mantis is aimed at] any industry in which workers must hold heavy equipment that can cause fatigue and back injuries.... Mantis has a mechanical extension for a wearer's arm and absorbs the strain from hefting a grinder or sander, [Lockheed business development manager Keith] Maxwell said. Tests found productivity gains of more than 30 percent, he said, and wearers showed their Macarena footwork to demonstrate the suits' flexibility.
"It turns workers away from being a weightlifter and into a craftsman," Maxwell said.
I'm all for Construction Worker Exoskeletons—as long as the power tools are not integrated, but remain separate objects that you pick up. Because once they start replacing the user's hands with built-in angle grinders and magazine-fed nail guns, we're going to have a problem. Last year, I watched a construction worker fight a cabdriver in front of my building; the hack didn't stand a chance. The last thing I want to see is an angry frame carpenter tramping off the jobsite in one of these things, ready to settle someone's hash with his Forstner-bit fingers and chopsaw hands.
While debate rages in the U.S. over drone surveillance of its citizens, drones were pressed into service over London on Saturday for a less contentious purpose: To promote the upcoming Star Trek movie. Ars Electronica Futurelab, an Austria-based media art lab, collaborated with German quadrocopter manufacturer Ascending Technologies to give Paramount Pictures publicity via "spaxels."
Thirty autonomous, LED-equipped "Hummingbird" drones took to London's evening skies, then self-assembled into the Star Trek logo, which then rotated as a whole. If that sounds simple, it sure ain't; Futurelab's software has to keep the drones from crashing into each other while they take off and find their positions, and the matter was complicated by both wind and snow, the former affecting the navigation and the latter affecting the drone-to-drone communication. Nevertheless, they were able to pull it off:
It's starting to seem inevitable that we will end up on the bandwagon that is self-tracking, whether we like it or not. While most of the recent tech-enhanced products seem to focus on logging fitness data, you might be wondering, "What about other things I could be tracking?" Well, if there happens to be room in your cloud after an onslaught of Nike Fuel Band data, CubeSensors are a set of environmental sensors that allow you to keep tabs on your indoor spaces.
The CubeSensors record interior conditions and store them in the cloud for access from any mobile device. A cleverly designed app sends you notices and suggestions about how you might better your indoor environment for greater productivity or comfort. Likewise, in contrast to the number of wearable tech items, the cubes are being pitched as an addition to both the home and the office. Essentially, they appear to give you the option of blaming your environment—not your boring PowerPoint presentation—for low employee productivity.
Specifically, Minuum improves on the concept of a linear arrangement of letters: screen-based UI and predictive text allow a QWERTY layout to be transposed into a single line of letters. (It's worth noting that index typewriters were initially developed as a less expensive, more portable alternative to keyboard-based typewriters, though they were reportedly slower than handwriting in most instances.)
Minuum is a tiny, one-dimensional keyboard that frees up screen space while allowing fast, accurate typing. Current technology assumes that sticking a full typewriter into a touchscreen device is the best way to enter text, giving us keyboards that are error-prone and cover up half the usable screen space (or more) on most smartphones and tablets.
Minuum, on the other hand, eliminates the visual clutter of archaic mobile keyboards by adapting the keyboard to a single dimension. What enables this minimalism is our specialized auto-correction algorithm that allows highly imprecise typing. This algorithm interprets in real time the difference between what you type and what you mean, getting it right even if you miss every single letter.
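The disambiguation idea—scoring dictionary words by how close their letters fall to the user's imprecise taps—can be sketched in a few lines. This is a toy illustration of the general technique, not Minuum's actual algorithm; the letter ordering and scoring are assumptions:

```python
# Toy sketch of one-dimensional keyboard disambiguation.
# Letters sit at positions along a single line; a candidate word is
# scored by how far the user's taps land from its letters, and the
# closest dictionary word wins -- even if every tap missed its key.
LINE = "qwertyuiopasdfghjklzxcvbnm"  # assumed flattened QWERTY order
POS = {ch: i for i, ch in enumerate(LINE)}

def score(word, taps):
    """Sum of squared distances between taps and the word's letters."""
    return sum((POS[ch] - t) ** 2 for ch, t in zip(word, taps))

def disambiguate(taps, dictionary):
    """Return the dictionary word whose letters best match the taps."""
    candidates = [w for w in dictionary if len(w) == len(taps)]
    return min(candidates, key=lambda w: score(w, taps))

# Taps that miss 'c', 'a' and 't' by one key each still resolve
# to "cat" rather than "dog" or "car".
taps = [POS['v'], POS['s'], POS['y']]
print(disambiguate(taps, ["cat", "dog", "car"]))  # cat
```

A real system would weight candidates by a language model as well as tap distance, but the principle is the same: trade keyboard precision for statistical inference.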
The video is, as they say, a must-see:
Yes, the last bit is cool, but nota bene: it's currently an alpha-stage prototype, and Will Walmsley & co. are seeking funding on Indiegogo. Suffice it to say that we'll be keeping an eye on this one... if all of the hypothetical wearable implementations become a reality, we could see the emergence of a new set of curious rituals.
A driver's license is meant to be impossible to duplicate, so you might ask why on Earth New York State has decided to switch the headshots from color to black and white. Surely a greyscale image is easier to knock off than a color one? That's true with printed images, but the headshots on the new licenses will use a more esoteric production method: lasers.
In a bid to eliminate forgery, the NYS Department of Motor Vehicles will still capture your image with a conventional camera—but a high-end laser engraver will then burn your mug onto a polycarbonate sheet. While the official language is understandably vague, it appears polycarbonate was chosen because it can essentially be fused shut—unlike earlier, laminated versions of driver's licenses. As anyone who's ever owned a skateboard or abused a piece of plywood knows, laminated layers can be separated. In the case of licenses, that separation allowed tampering that a polycarbonate material would preclude. A host of other identifying measures not subject to public scrutiny are also to be embedded within the material.
One thing we're curious about is how thick the cards are, and how much the new, stiffer material will flex inside a wallet you're sitting on. "The new cards are so stiff," the Times reports, "that they sound like a compact disc when dropped."
'Dare We Do It Real Time' by body>data>space (photo by Jean-Paul Berthoin)
Over an intensive two days at the end of the month, 100 delegates at MEX 2013—the international forum for mobile user experience, in its 12th iteration this year—will gather in central London to discuss and attempt to envision the development and future impact of mobile technology.
With speakers at last year's forum including Dale Herigstad, four-time Emmy-award-winning creator of the iconic Minority Report conceptual user interfaces, as well as connected-car experts from Car Design Research, this year's event boasts inspiring input from the likes of Melody Quintana, content strategist at Facebook; Lee Duddell, UX research guru of WhatUsersDo; and Ghislaine Boddington, creative director at experimental connected performance outfit body>data>space.
Insight - How should we improve understanding of user behaviour and enhance how that drives design decisions?
Diffusion - What are the principles of multiple touch-point design and the new, diffused digital experiences?
Context - How can designers provide relevant experiences, respect privacy and adapt to preferences?
Sensation - What techniques are there for enhancing digital experience with audible and tactile elements?
Form - How can change in shapes, materials or the abandonment of physical form be used to excite users?
Sustainability - How can we enable sustainable expression in digital product choices? Can we harness digital design to promote sustainable living?
Sam Dunne, Design Strategist at Plan and Core77 UK Correspondent, will be reporting live from the event.
MEX, Mobile User Experience
Walllacespace St. Pancras
22 Duke's Road
London, WC1H 9PN
March 26–27, 2013
When we last looked in on interaction designer Jinha Lee, he was developing the See-Through 3D Desktop for the Microsoft Applied Sciences Group. Last week Lee, who's pursuing a doctorate at MIT Media Lab's Tangible Media Group, posted a video showing a potential retail application for the set-up: Called WYCIWYW, for "What You Click is What You Wear," the interface would allow the user to virtually try on wristwatches and jewelry.
I understand that robots and drones are going to play a big role in our future lives, but why are the more advanced ones always so creepy?
A Swiss outfit known as the Laboratory of Intelligent Systems, or LIS, has created a drone that can map and navigate unknown spaces. In theory, this could be quite useful for, say, taking stock of the interior of a collapsed structure. The AirBurr, as it's called, flies around the space crashing into things, like a fly or mosquito, and then uses those collisions to mark where the obstructions are:
(Is it just me, or does that narrator need to clear his throat for the entire video?)
Not to be outdone in the freak factor department, robo-overlords Boston Dynamics have tricked out their BigDog robot so that the thing can now hurl cinderblocks, presumably as a means of expressing rage:
It will get worse before it gets better. Gizmag is reporting that Italian Institute of Technology researchers are toying with the idea of a quadrupedal robot kitted out with a pair of arms to provide a measure of manual dexterity. I'm interpreting that to mean they are creating a robot centaur, which I just don't think is a good idea. Let's look at a non-robotic centaur:
Now picture him made out of metal and imbued with powerful emotions, and ask yourself, do you want to fight that thing? Yeah, I didn't think so.
Hit the jump to see more of what's in store for our futures.
The tech blogs have been aTwitter with news of a potential portable energy breakthrough. Heralded as possibly bringing "The End of Batteries," researchers at UCLA have succeeded in creating high-energy-density graphene micro-supercapacitors with a ridiculously cheap fabrication tool: The laser in an off-the-shelf DVD burner.
Let me back up a sec and cut through the tech jargon. What these supercapacitors can do is quickly absorb, store, and release energy. If they could be produced inexpensively—which they now can—they promise to do the same things batteries do, but way better. A supercapacitor-equipped cell phone would charge in seconds, not minutes. If scaled up to integrate with an electric car, overnight top-ups would become a thing of the past.
As for the impact on product design: supercapacitors are made of graphene, which is thin, flexible and super-strong. Battery real estate is one of the big constraints in the design of portable objects, whether we're talking about cell phones, cordless drills or emergency lighting; imagine the freedom of form possible when that barrier is minimized.
But the greatest benefit of supercapacitors would be realized not just by individuals, but by the planet. Batteries use up metal and contain nasty chemicals required to create portable juice, and creating/disposing of these things causes environmental problems. Graphene, however, is just carbon—biodegradable and compostable. When you're done with the thing, throw it outside and let it go back into the Earth.
Here are the two researchers, professor Richard Kaner and grad student Maher El-Kady, explaining the possibilities:
Time is money, and James Dyson has proven he's willing to spend both in search of product design breakthroughs. The British inventor famously spent years perfecting his Dyson Vacuum through the '70s and '80s, and has since invested a considerable amount in R&D; since 1999 the company has quietly spent more than US $160 million researching digital motors alone.
That motor research has yielded fruit, as the company is now able to produce digitally controlled motors so small and absurdly powerful—their V4 motor, which powers the Dyson Airblade, lives in a housing just 85mm wide yet can go from 0–92,000rpm in less than 0.7 seconds—that they've invested $79 million in a new hi-tech motor manufacturing facility in Singapore.
While Dyson has already been producing their own motors in Singapore for ten years, they'd done it in partnership with a local firm, something like what Apple does with Foxconn. By opening their own facility they're not only doubling their capacity, they're gaining the ability to produce in absolute privacy.
We first reported on Canon noodling around with augmented reality, which they branded "Mixed Reality," back in '09. Their system had this clunky two-handed viewfinder:
By last year they'd refined it into a smaller set of goggles, though they artfully added some protruding antennae to prevent the user from looking too cool:
The system was pitched specifically for industrial design applications, getting our hopes up, however guardedly. We've seen plenty of pie-in-the-sky technologies promised and never delivered, but this morning Canon announced the system is ready for roll-out. While some media sources are reporting a March 1st launch, Canon's saying it's available now. And if their press photo is to be believed, they've done away with the dorky antennae:
I've tried to remain silent on this topic, but there are only so many times I can hear people's ridiculous ignorance of industrial design before I've gotta pipe up. The general, misguided statement I see people making is Well, all smartphones now look like the iPhone. It's impossible to design it any further. Here's the latest assertion made along these lines:
[Apple vs.] Samsung and other hardware manufacturers over who owns the rounded corner has only served to reinforce why hardware design is steadily becoming homogenous. There are only so many things you can do with a thin glass rectangle....
...As hardware evolves itself into invisibility we're well on our way to a time when the only thing that differentiates how something feels will be its software.
Yeah, I disagree. Statements like this show a real lack of imagination, along the lines of Henry Ellsworth—U.S. Commissioner of Patents in the 1840s—saying "The advancement of the arts, from year to year, taxes our credulity and seems to presage the arrival of that period when human improvement must end." That statement, by the way, is often twisted sideways and misattributed to future Commissioner Charles H. Duell supposedly having said "Everything that can be invented has been invented."
This statement that "There are only so many things you can do with a thin glass rectangle" isn't meaningful, because the person who made it is locked into the idea of thin glass rectangles and cannot conceive that the technology will change. One of the stories recounted to us in design school was of a wire manufacturer that made a fortune during the telephone boom of the 20th Century. They grew complacent, saying, "Well, people will always need telephones, and telephones need wires, so we'll always make good money." They could not conceive of the advent of the cell phone, and as the shift began, the company's fortunes waned. (They were saved, interestingly enough, by shifting into another emerging technology where fine wiring was needed: The manufacturing of meshes for airbags.)
There's been talk of Apple developing curved glass, and if that comes to pass, the form factor of phones will change. If further developments materialize and the phone is something flexible that can be rolled up, the form factor will change again. If holography becomes affordable, we'll see yet another change. But to me, the notion that "hardware [will evolve] itself into invisibility" is absurd.
We've all seen our share of TED Talks, and when the camera pans over the audience, most people are paying attention; maybe one or two have their heads down and are presumably texting something. But how do you ensure every pair of eyes in the house is totally riveted on your project? What object, technology or idea could designer Markus Fischer possibly demonstrate such mastery of that the audience is roused into a standing ovation at just three minutes into his talk?