There's been much talk lately about ChatGPT and its uncanny ability to mimic human prose. As we learn more, burning questions inevitably start to gnaw at us: will we still need human writers at all? Have we created something destined to replace us?
That's why it's refreshing to see examples that remind us AI can help us enhance and highlight our own creativity. Take late 2022, when Teenage Engineering, hybrid think tank and design studio MODEM, and creative studio Bureau Cool collaborated to create a visual experience powered by AI. The result aims to faithfully reflect the visual experience of synesthesia, based on published theories by neurologist Richard E. Cytowic, M.D., psychologist Stephen Palmer, and composer Olivier Messiaen. The teams connected Teenage Engineering's OP-Z synthesizer to Stable Diffusion to generate live images and colors in reaction to music.
The OP-Z Stable Diffusion setup works by responding to key elements of the music, such as pitch, note density, tempo, and individual notes. Each element is associated with a color scheme according to synesthesia theory: high pitch matches bright tones, low pitch matches dark tones. The music drives this script and creates "prompts," which are then delivered to a Stable Diffusion cloud API to generate an image from each prompt. These responses converge into a complex light show of image and color that undulates with the music, something generated by machine that maintains an emotive, human feel.
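The mapping described above can be sketched in a few lines of code. This is a hypothetical illustration, not the project's actual implementation: the function name, thresholds, and feature inputs are all assumptions made for clarity, standing in for the real music-analysis and API-delivery pipeline.

```python
# Illustrative sketch (not the project's code): musical features such as
# pitch, note density, and tempo are translated into a text prompt that
# an image-generation API could consume. Thresholds are invented.

def features_to_prompt(pitch_hz: float, notes_per_beat: float, tempo_bpm: float) -> str:
    # Synesthesia-theory mapping as described in the article:
    # high pitch -> bright tones, low pitch -> dark tones.
    palette = "bright, luminous colors" if pitch_hz > 440.0 else "dark, muted colors"
    # Assumed mapping: faster tempo -> sharper, more energetic imagery.
    texture = "sharp, angular shapes" if tempo_bpm > 120.0 else "soft, flowing forms"
    # Assumed mapping: denser passages -> busier compositions.
    density = "densely layered" if notes_per_beat > 4.0 else "sparse, minimal"
    return f"abstract visual, {palette}, {texture}, {density} composition"

if __name__ == "__main__":
    # A high, fast, dense passage yields a bright and busy prompt.
    print(features_to_prompt(pitch_hz=880.0, notes_per_beat=6.0, tempo_bpm=140.0))
```

In the real system, a prompt like this would be sent to the image-generation API once per musical event, so the imagery continuously shifts with the performance.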
A visualization of how the OP-Z Stable Diffusion software model works and responds to musical directives.
While the current model works from predetermined image and color prompts, it's not hard to imagine how this could be "hacked" by artists who want to generate their own imagery to accompany music. It's near impossible to explain the feeling of synesthesia to someone who hasn't experienced it. But what if you could offer this technology to any artist to toy with, regardless of skill? The creative potential is endless.
This project is worth highlighting for its optimistic perspective on how AI can serve the creative process. MODEM co-founder Bas van de Poel describes his biggest takeaway from the project as a sense of potential for future technologies to aid in the "augmentation of human capability." I can't think of a more hopeful way to describe the relationship we're beginning to form with artificial intelligence. "I think the role of creative people is shifting more towards a curator," van de Poel says. "Where instead of generating all the final images yourself, you're selecting the ones that fit best."
AI is a powerful tool for creating new media, as it can draw on millions of existing data points to generate new content. However, it's important to remember that humans still have an essential role to play in the realm of creativity. "Machines don't know what a good picture is," van de Poel notes. "And maybe at some point, from a technical and aesthetic point of view, they're able to define what a good picture is. But it's still based on [the best examples out there], so it's not able to create new trends." For now, humans remain the true creative trailblazers.
Let's just admit the reality: the only reason companies keep trying to make AI generation for creative works a thing is because creative works require skilled labor and, consequently, commensurate compensation, and capital hates that fact. It has absolutely nothing to do with "new realms of creativity" or "an innovative new tool for artists (read: low-paid workers who no longer have to be skilled, feeding it prompts)" or whatever horseshit was in the ad copy for this thing.
I think there's a bit more to it than that. This creative stuff, it's kinda hard to do, like, it takes a certain state of mind, and if we could do it without bothering with all that, how much better would THAT be?
Ryan Cee: absolutely. Glad to see people are starting to point these things out