Erika Hall Knows How to Fix Your Design Process (But You’re Probably Not Going to Like It)
The Mule Design co-founder and Just Enough Research author on why doing good research scares us, experience design is a misnomer, and real creativity requires logging off.
Chances are, you’ve bought into one of these pervasive tech-industry myths. And even if you haven’t, your company probably has:
A complicated strategy is a more sophisticated strategy.
Quantitative data is more objective data.
If we “do user-centered design,” we’ve done enough to protect our users.
We can all pick up “hard skills”—but some of us are just better at “soft skills.”
Erika Hall wants to fight these biases we’ve mistaken for facts and argues it’s time for design to return to the fundamentals.
That means doing the work that’s “simple” but not “easy”—talking to people, stepping away from our monitors, and admitting we don’t have all the answers.
We sat down with Erika and discussed where more ethical, more innovative, and more creative design solutions come from—and what ingrained fears we need to fight before they can emerge.
dscout: I’m curious about what initially got you interested in design. Or what made you such an advocate for research in design.
Erika: Graduating in a recession with a degree in philosophy led me to work on “the internet.” It was a surprisingly good fit. With philosophy, you’re taught to deal with complex, abstract systems. And some of it is directly comparable to information architecture. You’re saying: "Okay. Here's a system of concepts and we have to categorize them in a way that's defensible."
Beyond that, philosophy is conversational in nature. The Socratic method is hugely important to design work, because ethical decision making requires questioning and a capacity to think through all the potential ramifications.
Sometimes there’s an even more explicit connection. I remember taking an upper-division seminar on “trolley problems,” and I thought, "This will never come up in my life." And now with autonomous vehicles and other machine learning systems—I see trolley problems everywhere.
Wait, trolley problems?
Yeah, like: “You have a trolley. You have tracks. You have a Nobel Prize winner tied to one side, and five children on the other. Which way do you send the trolley?” The choices are ridiculous. But they hit on the fact that nothing is purely good or bad.
Ouch. Do you feel like we’re designing to “hit the right people”—or can we avoid the trolley problem altogether? Are we always going to need to ask, “How do I minimize the damage?”
It sounds pessimistic, but I think one of the reasons we're in this ethical mess with the commercial internet is a reticence to confront consequences.
Your choices are going to have consequences, often unintentional consequences. And the more that you do a trolley-problem-like thought experiment, the more you really think through what might happen. Because in reality there’s no design that’s purely good; everything you do is bad for something or someone.
In tech, we’ve gotten into this pattern of thinking like Buckminster Fuller. He looked at the world and believed that the only thing holding us back from abundance—from everyone having what they need—was an engineering solution. But human nature has some issues that you can't just engineer away. Refusing to question consequences leads to blind optimism. And you often find that because you didn’t ask, the implications are way worse than if you’d taken time to talk about what might happen.
I guess that makes sense—that we can’t have the mindset of “I’ll just make it perfect, and then it’ll hurt no one.”
Yeah, exactly.
In the early days of the web, you couldn’t kill somebody with a web page. But now, these systems are so complicated that there are real, life-or-death, consequences for what we build.
I read your piece on hazard mapping in design, and I thought that was a really interesting approach to mitigating some of “what might happen” when we put new tech out into the world. Are there “tried-and-true” techniques for navigating those risks?
All of a sudden, so many people are talking about ethics as a reaction to the fact that we weren’t talking about ethics. Really, it comes down to acknowledging risks as a part of your design process.
I’ve been interested in ways we can address this with artifacts. For example, graphing user needs against business success, looking at them together as a system, and asking, “Where are the dangers?”
I’ve looked at other industries that are more mature in terms of baking hazards or ethical considerations into the practice for a model. Architecture is really great for that. So is medicine. Because architects and structural engineers and doctors understand that if they don’t pay attention to hazards, people will die.
How do you get people to make “acknowledging the dangers” a part of their process?
I think designers need to get really interested in business. There are always going to be conflicting forces within an organization, but you can't put an ethical interface on top of an unethical business model.
And you don't want to be in the position of designing a casino—of building something that’s fun and pleasant, but that entices people into a system that's problematic at its core.
I think the technique for avoiding that can be really simple, which is why it's so difficult to get people to do it. Everyone wants fancy, expensive strategies because we have this bias that tactics are more useful, or more effective, if they're more complicated.
But really: you just need to sit in a room with people representing different disciplines: researchers, designers, writers, business leaders, technologists. And then you need to ask: "What does everybody hope happens? What does everybody fear? What could go wrong from each party’s perspective? What are the contexts we need to consider? What do we have a reasonable certainty about, and where are we unsure?"
If you work in design today, 80% of your job should be that. It should be talking to people. And then once you’ve done that, and a concept emerges, you can spend a little time making stuff.
That must feel like a huge shift if you're used to producing things. It might feel like you're "wasting time."
Yeah, there's a lot of pressure. I often go into organizations and hear them say, "We're a delivery-driven organization." Which means they judge their success based on whether or not they're shipping code regularly. So when you go in and say, "You need to be doing a lot of work, with a lot of people, that will have no visible output"—that's a hard sell.
Any advice on how you “sell it”?
I think it comes back to artifacts—to mapping something that will feel tangible. Or to having more pauses in your process—getting people together to recap: "Okay, are we still clear on our goals? Do we have the same goals? Are we clear on the business requirements, what our customers need, what's going on in the world, and what our internal capacity is?"
Organizations have this fear of research: business people, technologists, designers. But they don’t come out and say: "I'm afraid." They say: "We just don’t have the time."
And there's a lot of pressure now that everyone's on agile. People start to think, "We're successful as long as we're shipping software that works." But they aren't asking, "What if we build the wrong thing?" We're not yet measuring the success of not doing things.
Do you feel like that's changing at all—that user research is growing to be more prioritized?
Yeah, definitely. A lot more organizations have embraced research or think research is important. However, the other thing that's happened is that we’ve put too much emphasis on processes and tools, without considering the fundamentals.
Organizations don't stop to think about what they really need to know before rushing to what they’re going to do. We choose a technique before we’ve even narrowed in on what we need to ask.
It goes back to what you said about complexity and bias. “We’ll come up with a bunch of complex processes and use a bunch of complex tools. That’ll be better than just sitting together in a room and talking.”
Exactly. And then there's a lot of confusion that never gets cleared up—like when to use quantitative versus qualitative research. Researchers still come up against questions like, "Why did you only talk to 12 people?"
That’s partially because managers are really keen on things that fit into a spreadsheet. And now there are so many life processes that happen online that we can track and measure. It breeds the idea that “all we have to do is measure things”—even if that doesn’t really get at what you need to know.
So there’s a lack of understanding around fundamentals: do we need to measure something, or do we need to describe something? They’re two different types of data.
And if you use the “measurement data” to do the work of “description data,” you're just backing yourself into a weird corner.
Yeah. Plus, if somebody sees something expressed as a number they think: "That's a fact." But numbers aren’t facts. How you access and interpret quantitative data can be hugely biased.
We’ve built all these complicated tools that extract data and get us off the hook from just talking to people. But in the end, if you want to do good research or design work, you have to talk to people. Even if it may feel insufficient, or not “science-y.”
Why do you think, then, that people are so resistant to talking to each other? Is it social anxiety? General nervousness? They just don't feel like that's what their job should be?
All of the above. I think it gets back to the fact that so much of our work culture comes from the Cold War military.
Wait, really?
I mean, so many people have forgotten that the internet even exists because of the Cold War. And that’s just the start. A lot of unexamined concepts and techniques come from the military—and we haven’t looked at the history and examined it. For example, look at the concept of “hard skills” versus “soft skills.”
That’s where that’s from?
Yeah. The whole categorization goes back to the US Army in the ’60s. They were doing a report on measurable skills, and they called those skills “hard skills.” A little later, for convenience’s sake, they decided to call communication skills “soft skills”—to contrast them with hard skills. And then these concepts got out into the wild.
Now, everyone uses this dichotomy. Companies use it. Job postings use it. But the categories aren’t real—they’re just appealing because people love dichotomies. And it’s weirdly gendered, too. Hard skills are associated with things men do. A “hard skill” sounds more difficult, more masculine, and more important. Soft skills seem fluffy.
I've been reading Behave by the neuroscientist Robert Sapolsky. He argues that people put things in categories, just for convenience—and even if that category is arbitrary, they forget that it is. They treat it like it's something that's real.
So what was originally an arbitrary classification of "skills we can measure and skills we can't" is now taken very seriously.
And it’s a problem because what we call "soft skills" are generally so essential. And we have this strange association that soft skills can't really be taught. If you ask someone if they think welding is something people can learn, they're likely to say yes. You get a different response if you ask someone about communicating well, or managing your time.
All this is a roundabout way to say that in our work culture "hard skills"—visual design, coding, etc.—are really valued. And we totally undervalue the work of communication.
What do we do about that?
Teach people those skills early in life, and start valuing conversation—particularly conversations where we feel the stakes are high, or we're forced to talk about what we don't know. What I've found in all my years of consulting is that if you go to any organization, everyone is terrified.
They’re terrified of their colleagues, of the people who report to them, of the people above them. They're terrified of looking like they don't know what they're doing. And it’s all because nobody has been taught to talk to people, or negotiate with people—despite the fact that these things are really learnable.
But because they’re “soft skills,” they’re discussed like they can’t be trained. And their existence in opposition to “harder skills” is treated like a fact.
Are there other things within the field people treat as a fact, that you find yourself pushing back against?
There are a lot. A big one is that quantitative data is more important and less biased than qualitative data. That, if you can't measure it, it doesn't matter.
It's funny. It sounds like we just lean on measurement every time we're scared of something. We lean on “hard skills” when we're worried we can’t master “soft skills.” We lean on quantitative data when we’re afraid to talk to people and gather qualitative data.
It’s because we’re so concerned about group cohesion and status that it short-circuits everything else. Our own rationality is another big thing we incorrectly take as fact—along with the idea that people who do more “hard skills” work are more rational. That technologists are more rational than designers.
There's also a huge myth about what creativity is. We have this sense of “blue sky creativity.” I’ve actually had clients withhold information from me saying, "We want you to be creative and think of new things."
But creativity is finding novel connections among things that exist in the world. You're not creating new matter, because that's against the laws of physics. The more that you know about what exists around you, the more capacity for creativity you have. I think today we still have this belief that innovation comes from a place of ignorance, or a place of internal gut genius, or from intuition.
And I hate the word intuition, because it's one of two things: it’s experience—which is something we gather like research—or it’s confirmation bias.
When it’s the former, it’s less “intuition” and more unconscious reasoning. It’s that background processing that happens to me all the time if I'm stuck in a piece of writing, or stuck on a problem—and I go for a run, and all of a sudden ideas start assembling themselves in my mind.
There are those great moments of: "Wow, I figured something out in this way that wasn't super explicit."
So if this force is unconscious, can you train yourself to be more creative? Does it just require getting more input?
Yes, and you have to have those pauses. Creativity doesn't happen when you're laboring in front of a monitor. You need space. You need time together and time apart.
Everybody is looking for the one thing that will lead to more creative work. And the real truth is, you need to do the right thing at the right time—but that's hard and feels effortful. Everybody wants to be able to put processes on rails and say, "We just do remote usability testing all the time. That's all we do."
Our brains desperately want to take a routine, take a set of tasks—whether it's decision making or making coffee—and turn it into a habit. To store it in the back of the brain and save our processing power for identifying predators.
So every time you have to intentionally engage with a situation, your brain is saying, "I thought we saw this already. Why are we talking about this?" That's what we're up against as human beings who are also designers and thinkers and problem solvers—fighting the urge to just put things on rails.
That feels like it might be hard to reconcile with traditional work culture.
Business culture is always saying, "Make it a process, make it a routine, and scale it."
So you always have to be asking yourself, "Is this something that really can just be an unconscious routine? Or is this something that we have to fight through the pain and intentionally confront?"
It sounds like design work and research require you really fight your brain's default settings—which has to be a pretty brave process. Your brain is scared, and telling you not to engage, and you have to engage anyway...
That doesn’t sound like such a “soft skill.”
The easy thing is to sit at your computer and move pixels around. That's safe, and you feel like you're in control.
Admitting, “Wow, we’re not in control—but we’re doing the best we can,” is hard. People try to make research controlled and rigorous. That’s why they sometimes lean too heavily on doing it in a lab.
But if you're trying to understand things in a qualitative, descriptive, associated way—you shouldn't want to control it. When you do, you're not learning about their real context out in the world.
The world is messy. You want to expose yourself to that messiness, because whatever you're designing has to work in real life.
Here’s my favorite setup for people who are really concerned about innovation and don’t want to do research. I say, “Go on Google. Go on Street View. Look at the street you grew up on. How different does it look from when you were a child?”
There will be some changes. But if you go back to my neighborhood, there's still a hot dog stand that has been there since the 40s.
Streets look the same. Houses get remodeled here and there. That restaurant you loved gets turned into something stupid. But people are still driving around in cars. People still live in houses that look like houses from decades ago.
So if you're doing anything that's supposed to be innovative, you need to fit into that picture. Because ninety-nine percent of the world is unchanged. Experience design is such a misnomer, because what you're actually designing is one tiny part of somebody's whole experience.
That's crazy to think about, especially in a tech space. There’s so much talk about how fast everything’s changing—about how it’s hard to keep up. But it’s nice that when you zoom out, you realize, it’s not everything.
Yeah, the fact that Google Earth exists is mind blowing. But how did they do it? They put a camera on a car and drove on every street in the world. It wasn't magic.
They needed data storage and processing power to make that work. But the way they collected that data was really simple.
People develop this blind spot about how to solve a problem because they think, "It's got to use a lot of technology." Or that the most effective thing is going to look future-y. It prevents people from seeing a lot of really cool ways to solve problems.
So it’s our job to really think about the context. Let’s really ask: how is this actually going to fit in somebody's life?
Mac Hasley is a writer and content strategist at dscout. She likes writing words about words, making marketing less like “marketing,” and unashamedly monopolizing the office’s Clif Bar supply.