Welcome back, once again, to Project Medusa. This final installment in our three-part How-To series aims to illuminate the last phase of any design research project: what do you do with all the information that results from your brilliant effort? How do you decide what's relevant and what's not? Needless to say, it can be a bit complicated. Many of the considerations introduced earlier are also helpful at this stage: remember your goals, and understand your audience (which now shifts to whoever you're preparing the research results for). Confused? Visit Part 1 for a more thorough introduction. If you recall Part 1 but missed Part 2, now's your chance to catch up.
While there are no right or wrong answers in design research, not all data is equal. Assuming you've carefully prioritized your goals and outreach, it's now time to prioritize results. At Ziba, we use a four-part process to synthesize the data research yields.
1. Aggregate the data.
This could mean digitizing handwritten responses, stacks of sticky notes stuck to a wall, or dozens of printed photos: whatever works for you and your material. You'll need to be able to see the data—and ideally search through it efficiently—before you can plunge ahead.
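Once notes are digitized, even a tiny script can make them searchable. Here's a minimal sketch in Python, assuming your transcribed responses live as plain-text files in a folder (the file layout and field names are our invention, not a prescribed format):

```python
from pathlib import Path

def load_responses(folder):
    """Collect every .txt note in a folder into one searchable list."""
    responses = []
    for path in sorted(Path(folder).glob("*.txt")):
        text = path.read_text(encoding="utf-8").strip()
        if text:  # skip empty files
            responses.append({"source": path.name, "text": text})
    return responses

def search(responses, keyword):
    """Return every response whose text mentions the keyword, case-insensitively."""
    return [r for r in responses if keyword.lower() in r["text"].lower()]
```

The point isn't the tooling; it's that aggregation should end with all your data in one place, in one shape, queryable.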
2. Sort for theme(s).
Like goes with like, and making logical groupings of related information will help you identify the trends and anomalies within your data set. Embrace the granular: this is most likely the only time you'll look at each and every survey question, listen to every minute of recorded discussion, and squint at all those doodles. Stop worrying about your goals, momentarily, and evaluate your results as honestly and objectively as possible. Everything is allowed to be interesting at this stage. If, on the other hand, you feel overwhelmed by the amount of data you're confronted with, the sorting process will help you reduce complexity.
Themes emerge as you connect the strongest trends in the data to your hypothesis or hypotheses. Think of it as a naming exercise, if you're stumped: with the data sorted into buckets, each bucket needs a concise handle. There may be some hard choices—fascinating but quirky individual responses sometimes need to be cast aside if they fail to play well with other, larger groups of more typical answers. Force yourself to make decisions about what's meaningful and what can actually have an impact on the work to come.
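If your responses are digitized, a rough first pass at bucketing can even be automated before the real judgment calls begin. A minimal sketch in Python; the theme handles and keywords are hypothetical, and a naive keyword match is no substitute for actually reading your data:

```python
def bucket_by_theme(responses, theme_keywords):
    """Sort text responses into named theme buckets by keyword match.

    theme_keywords maps a concise theme handle to the keywords that
    signal it. Unmatched responses land in "unsorted" for a second
    look rather than being discarded.
    """
    buckets = {name: [] for name in theme_keywords}
    buckets["unsorted"] = []
    for response in responses:
        text = response.lower()
        for name, keywords in theme_keywords.items():
            if any(k in text for k in keywords):
                buckets[name].append(response)
                break  # first matching theme wins in this naive pass
        else:
            buckets["unsorted"].append(response)
    return buckets
```

Treat the "unsorted" bucket as a to-do list, not a trash can; the quirky outliers live there, and a few of them may deserve a theme of their own.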
3. Organize and frame.
The next step, assessing themes as a group, may lead to some new considerations about your hypothesis. Perhaps there's something consistent emerging from people's responses you hadn't considered? Or were you right all along? Arrange the themes into a hierarchy, prioritizing with the entire outcome in mind. Don't be afraid to shuffle and reshuffle the deck, either; there's still more slicing and dicing to come. Resist getting lost in details at this stage: as tiring as this may be, it's important to keep higher-order needs, values and outcomes in mind as much as possible. As always, document your process so nothing of value is lost. We like sticky notes and the cameras on our phones for this.
Mapping your organized themes onto a matrix allows contextualization, or framing, of results. Don't worry, this isn't as bad as it sounds... number four is the real tough nut. It can be as simple as a pair of axes (think of New York Magazine's weekly Approval Matrix) that you plunk grouped, themed and organized data points down onto. Perhaps a Venn diagram might be more suitable, or any other format that helps visualize the story you're trying to tell. Experiment with different representations that make sense for your data and see what happens to the resulting constellations of themes.
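If you score each theme along two axes, even the plunking-down can be mechanized. Here's a minimal sketch in Python, assuming each theme gets a score from -1 to 1 on each axis (the theme names and scores below are invented for illustration):

```python
def quadrant(x, y):
    """Name the quadrant a point falls into on a two-axis matrix."""
    horizontal = "right" if x >= 0 else "left"
    vertical = "top" if y >= 0 else "bottom"
    return f"{vertical}-{horizontal}"

def map_themes(scored_themes):
    """Group (name, x, y) tuples by quadrant to reveal constellations."""
    grid = {}
    for name, x, y in scored_themes:
        grid.setdefault(quadrant(x, y), []).append(name)
    return grid
```

Swapping which axes you score against is exactly the "shuffle the deck" experiment described above: the same themes, re-mapped, can tell a very different story.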
4. Insights define story.
Ah, narrative, the king of content. Using the most relevant contextual landscape you arrived at during framing as a stage, it's time to make that data dance. Extracting insights and shaping them into a logical, meaningful story is obviously the most individual aspect of the entire process, which makes it difficult to generalize. It's safe to say, however, that if you've been diligent throughout, preparing and executing carefully, your work should now bear perceptible fruit. What you discover may be subtle, or it could be shattering, in accordance with the scope of your hypothesis and the size of your data set. Not satisfied with your outcome? Return to step 1, 2, or 3, depending on how fundamental the problems seem, and parse your information differently.
Conclusion: Whip it. Whip it good.
Making the most of what you've got applies to presenting results every bit as much as it did when you were scoping, in Part 1, and preparing tools, in Part 2. Knowing the audience for your final insights will shape how you prioritize assets. Will your team respond best to a video? A white paper? A book of photographs and infographics? Each of these communication tools can articulate a narrative, but each will have a different impact. The ultimate form your results will take should also guide how you assess the data as you go through it. Whatever the means of transmission, be sure to isolate your "hero moments"—where the clouds part and a ray of sunshine streams down onto a genuinely useful, surprising insight—and celebrate the crap out of each and every one. Use people's real hopes to identify what the ideal solution to any problem might be, and then work backwards, supported by your data, to achieve as much as you can.
We've found that sharing results out loud and visually at the same time helps drive adoption. Brevity is very hard, and reducing complex stories to pictograms and a few choice sentences is never easy. But this level of clarity makes your conclusions shareable, which is vital for socialization and adoption. Your research was supposed to do something, right? Well, it can't if it stays locked in a drawer or buried in the text of a PowerPoint. Share, share and share again. With feedback, you may find yourself returning to steps 3 or 4, and rephrasing outcomes to better suit the needs of your audience.
As for Project Medusa, the results of an online survey might have been easier to manage, but not necessarily simpler to extract meaning from. Less direct, more engaging workshop activities were used to get the information AIGA needed. Ziba asked designers to imagine AIGA as a superhero: what are its powers? What about a nemesis? (Technology Boy was a popular reply.) Then they wrote obituaries for AIGA, which directly elicited what designers would miss if the organization ceased to exist. These metaphorical tools and projection activities allowed a degree of intellectual remove and also capitalized on designers' creativity. The aim of Project Medusa, remember, was to define a vision of AIGA's future, with a special focus on younger designers' wants and needs. Binary answers clearly wouldn't cut it. With meaty, emotional responses, we were able to identify weak points and gather tangible insights on how to strengthen AIGA.
To summarize: know your audience, and be sure to engage them. Experiment and take good risks. And, most importantly, understand that the hard work is actually just beginning: now it's time to apply research outcomes to design, and make something new or make something better. Applying the results of design research to make change happen is what really matters. Good luck!