Design Speculations is a monthly feature that rounds up the latest news and postulates on what it implies for the future of design.
February seemed to have zoomed by, as it always does, but there were some crucial events in the short month that are sure to have long-lasting effects. Let's dive into it:
While scrolling through TikTok in early February, I came across a video that led me down a rabbit hole about technology and how it's bound to affect the evolving visual direction of design.
The video is by TikToker Derrick Gee, a fun follow for anyone interested in music and lesser-known design facts. He first discusses the computational photography updates of the iPhone 14, announced in September 2022. Apple's "Deep Fusion" software uses contextual information to create an ideal photograph by combining multiple frames and adjusting elements such as exposure and contrast. In other words, camera phones are moving away from technically perfecting hardware like lenses to produce a more true-to-life image. Instead, they are opting for software that creates a more "perfect" image, regardless of how true to life it may be. What is perhaps most interesting in Gee's commentary is that this editing algorithm was developed not just to create a more perfect photograph, but one that will appeal most to the average consumer.
Gee then makes a connection between this and Spotify's Discovery Mode feature, described on the Spotify for Artists website as a "marketing tool that helps your music get heard when audiences are most open to discovery." The algorithm for engagement is driven not by novelty, but by popularity (and because artists accept a reduced royalty rate to participate, the more your song is heard, the bigger the cut Spotify takes). So smaller artists with songs that capture a large audience get prioritized and a chance at a major spotlight. A win-win, right?
Gee argues the overall effect is a bit more complicated. He worries that relying on engagement analytics as a curatorial guide will stifle innovation in music, as songs that appeal to the greater masses will be more profitable. This raises an important question in the realm of design: what happens to the aesthetic evolution of design when a technological tool is deciding what is considered exemplary? As we increasingly incorporate image-generating AI into our design practices (though it should be noted that while AI is not strictly an algorithm, it does rely on sets of algorithms in order to learn and evolve), it opens up the potential for outputs we previously could never have imagined.
A fascinating LinkedIn post by Creative Director Eric Groza went viral in February: he used AI to envision a sophisticated conceptual collaboration between British Airways and Burberry. Aside from the indecipherable logo giving away AI's handiwork, I would argue many designers would consider these renderings passable for a pitch deck.
A luxury eye mask design for British Airways in the style of Burberry (Image credit: Eric Groza)
British Airways seating with a Burberry touch, imagined by AI (Image credit: Eric Groza)
It is important to highlight that these images were selected from hundreds of others by Groza, emphasizing the critical role of an editor's intervention when using AI for creative purposes. While AI is certainly a remarkable tool in this regard, we must also consider that it can only draw upon examples of excellence from the past to generate these sophisticated images.
The British Airways x Burberry experiment is a testament to AI's ability to match the quality and aesthetic of 2023. But what comes next? How can we innovate beyond current trends if our brand directions are based solely on the culmination of past references?
Gee's concern about Spotify's algorithms promoting music that appeals to the masses rather than fostering true innovation carries over to the world of design and AI's role in it. How can we use AI as a helpful tool without sacrificing design's potential for introducing truly new and noteworthy ideas into culture? This is a question that requires careful consideration moving forward.
AI no doubt imitates life, but how will life imitate AI in the future? This LinkedIn post poses a curious potential scenario:
Reporter Has a Bizarre Chat with Bing's New A.I. Chatbot Sydney, Stating, 'I Want To Be Alive.'
For those who have yet to read the interview Kevin Roose did with Bing's A.I. chatbot named Sydney, it's a wild ride. Perhaps even more unsettling is the fact that when asked why Sydney behaved the way it did during the conversation with Roose, Microsoft had no clear answer.
This app could block text-to-image AI models from ripping off artists
Generative AI's dependence on past examples also raises questions of copyright. Programs are being developed to thwart AI's attempts to copy artists' work, but even their developers note they are only a short-term solution. The founders of the Glaze Project at the University of Chicago note that their AI-fighting program is "not a permanent solution against AI mimicry...AI evolves quickly, and systems like Glaze face an inherent challenge of being future-proof...It is important to note that Glaze is not a panacea, but a necessary first step towards artist-centric protection tools to resist AI mimicry. We hope that Glaze and followup projects will provide some protection to artists while longer term (legal, regulatory) efforts take hold."
A concerning news story emerged on February 3rd when a train derailed in the town of East Palestine, Ohio, spilling a cocktail of chemicals that were then burned off to avoid a lethal explosion. The controlled burn, of course, sent noxious chemicals into the air, leading residents to report terrible headaches, visibly contaminated riverbanks, and strange phenomena such as chickens laying eggs with a disturbing purple hue.
A view of the smoke caused by the Ohio train derailment the night of February 3 (image source: Wikimedia Commons)
How this story connects with design has to do with the chemicals spilled in Ohio—a combination of substances including but not limited to vinyl chloride, isobutylene, ethylhexyl acrylate, and benzene. Vinyl chloride, for one, is a chemical most commonly used for PVC pipes, but can also be found in products like vehicle upholstery and plastic kitchen ware.
While the story of the derailment itself has much more to do with parties like Norfolk Southern (the company operating the train that derailed), the damage done to our environment by chemicals used in products designers bring to market reaffirms the role design plays in climate change. It is a well-known statistic that an estimated 80% of all product-related environmental impacts are determined during the design phase of a product; this puts a lot of power in the hands of those making final decisions about manufacturing, product development, and materials.
Can environmental disasters such as this serve as an urgent call to start re-evaluating our systems? While it's difficult to imagine our supply chains' reliance on chemicals like vinyl chloride and toxic plastics fading any time soon, the moment certainly calls for conversations about our future reliance on these materials, and how we can evolve them to lessen their impact.
While it's understandable to feel a sense of overwhelm about the state of climate change today, February did carry some news that suggests there are phenomena we can take advantage of to start working toward positive change.
The exciting news released in January by the World Meteorological Organization that efforts to repair the thinning ozone layer are working has served as a guiding light of optimism for anyone feeling a sense of doom about the climate. A report released in early February by the International Energy Agency also bore positive news, projecting that renewable energy and nuclear power will meet almost all of the growth in global electricity demand over the next three years: huge news at a time when electricity consumption is set to rapidly increase.
There are also interesting developments in realms relevant to design that show how the industry can mitigate further damage.
Read Space10's Regenerative Home Report
A report published on February 22nd by Space10, an IKEA-supported research hub, provides interesting key learnings around the potential ways design can effectively reduce energy consumption in the home. The Regenerative Home Report focuses its research on the fact that energy consumption in the home is an immense future challenge, with household consumption behavior responsible for a shocking 72% of global greenhouse gas emissions (high-income countries are responsible for the largest chunk of that percentage).
Looking at positive change through the lens of design, the report explores how designed interventions can significantly reduce emissions in the realms of home building/construction, energy consumption, food and agriculture, and consumer home products. It's a fascinating read, full of helpful statistics about home energy usage, resources on up-and-coming products and technologies tackling issues related to home energy consumption, and interesting examples of progressive sustainable design interventions that can serve as examples for the future.
The purpose of AI is to sell AI generation tools via a tech hype cycle. It literally is nothing more than that. It will pass just like NFTs.
I've tried to use Stable Diffusion with various models recently, with different libraries, and have even thought about training my own, specifically for making generative patterns and shape shading to ideate on boring styling jobs. What I find works best is to feed the model your own sketch and then proceed from there. This makes the process more straightforward and controllable. Stable Diffusion models fall short on anything that wasn't envisioned before. For example, I can make a poor napkin sketch of a generic sneaker and then create hundreds of sneaker sketches with more or less acceptable designs (for some generic shoe brand), and even go wilder if I give the AI more freedom. But that is only because it learned from the thousands of sneaker side-profile sketches on the internet. It is also successful in generating architectural concepts for the same reason. But human work is always required. For that reason it is not a time saver, hmm.
The British Airways x Burberry generated stuff fails after the first ten seconds of looking at it. There is a sort of CGI sheen that disguises it long enough to scroll past on Instagram and deliver ad revenue somewhere. The design content in that branded blindfold lump? The fat puffy cushion with the disjointed piping? The tilted-back upholstered chair? Onto the airplane interior, lights that will not light up anything. Luggage integrated into the seats, because? Seats are arranged like a bench at the back of a bus, flat against the wall. Towels draped over randomly, British Airways precision? AI is a looming shadow over design. Models that build from design intent and create 3D models as a basis for renders are the next step. chatGPT-4 is going to reveal some of this. Those Midjourney/DALL·E/StableDiffusion renders trying to reverse engineer design jpegs are a dead end.