
From Guessing to Informing: How Qualitative UX Can Shape the Future of AI

Helen Kim @ Capital One

Machine learning and artificial intelligence models are finding purchase in more and more experiences. Although they offer unique opportunities, it's critical to ensure that the humans these models are intended to serve are considered during their creation; Helen Kim makes a case for qualitative and open-ended data as a way to combat unethical outcomes and unintended harms.

Transcript

Helen Kim:

Hi, everyone. My name is Helen Kim, director and head of Experience Insights for Capital One Software, and I'm here to talk to you about how design and research can play a crucial role in leveraging AI for good.

I'm going to start with a personal story. Once upon a time, not too far away, at a large company with many, many customers, there was a young researcher, anxious to prove herself, named Helen, yes, me, decades ago, who was working with a team of designers, product, and tech people to redesign a new user setup workflow for this site. A number of really important large corporate customers were complaining about how hard it was to set up new users using the clunky interface that we provided. They were complaining so much about this painful task that they often left it to their relationship managers or sales support people, who hated it too. Many of these customers were just asking us to build in a copy function already and make it easier.

On the other side was our tech VP. Let's call him John. He was way higher than I was in job title, with many, many more years at the company under his belt, and he was also much tougher than I was. He wore cowboy boots and black leather jackets, rode a motorcycle, and was famous for shooting down design suggestions by saying that the things we were proposing just weren't technically feasible. As a fun aside, these photos are not of me, she's much more glam than I am, or of that other guy at all. They were actually created by an AI image generator called DALL-E 2. Check it out if you ever have the chance.

Back to the story. Now, John, the tech VP, wanted to keep things simple. "Just give the customers what they're asking for," that basic fixed functionality that copies exactly the same access profiles from previous users. He called this functionality "roles." This was similar to the template functionality already provided. He claimed that this would minimize performance issues and would be better in the long run.

On our side, the design side, we had hypothesized that maybe what the users really wanted was a more flexible way to set people up, letting people join multiple groups of access instead, letting them customize for new users as needed, because sometimes that cookie-cutter approach just might not work. So we decided to do fieldwork with customers to observe them using our product for this workflow. These were the very same customers who had been clamoring for that copy functionality. What we found when we observed them was that the flexible groups model matched the way they did their work better. We also found that the proposed roles model might actually lead to performance issues, creating lots and lots of unused copies of slightly different roles each time they set up someone a little bit differently.

Because the fieldwork was done with only six companies, we had to see how prevalent this observation was. We worked with the analytics team and found that at the majority of the companies, there was actually a large number of unused templates cluttering up their systems, because there just wasn't an easy way to customize and reuse those templates. We were able to present this quantitative and qualitative data together with the analytics team at a VP-plus meeting. This evidence helped convince the entire team that the groups proposal would meet users' real-life use cases better than the original roles proposal. This was how my love of mixed-methods triangulation was born.

What I learned from this magical marriage of qualitative and quantitative data was just how both methods were needed to help cross-functional teams align on the right action to take. Using both qualitative and quantitative data helps paint the entire picture of what humans do, how often, and why. The unpredictability of what humans do, their often seemingly contradictory behavior, these are the kinds of things we often see and analyze in qualitative research, and it helps bring a deeper understanding of what the quantitative data actually means.

Now, in the past few decades, we've seen quantitative data take over in a really big way, whether it's the hype around big data, machine learning, or artificial intelligence. Big data, to me, has always been in many ways quantitative research writ large. These new data technologies are a new opportunity for us for even greater mixed-methods triangulation at an unimaginable scale.

Before I move on, I want to do a quick vocabulary check to clarify what I mean by big data, artificial intelligence, and machine learning. I'm by no means claiming to be an expert in AI or ML. I know that there are probably many of you out there who know more about these topics than I do, but generally speaking, AI is a branch of computer science concerned with building systems that mimic human intelligence in order to carry out tasks, whether it be loan evaluation or finding the best route somewhere. Machine learning is a subset of AI concerned with giving computers the ability to learn without explicit programming. That is, without being told exactly what to do. Big data, in turn, is what AI and ML use to drive their analyses and algorithms. It is what powers AI and ML. In this presentation, I'll mostly be alluding to AI, but please note that these terms are sometimes used interchangeably, not just here, but in discussions elsewhere.
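
To make that "without explicit programming" distinction concrete, here is a minimal sketch, not from the talk itself, using a toy spam-filter task and scikit-learn. It contrasts an explicitly programmed rule with a model that learns its own rule from labeled examples; the data and labels are made up for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def rule_based_is_spam(message: str) -> bool:
    """Classic explicit programming: a human wrote the condition by hand."""
    return "free money" in message.lower()

# Machine learning: the model infers its own decision rule from examples.
messages = ["free money now", "team meeting at 3",
            "claim your free money", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (the data powering the learning)

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Nobody told the model what to look for; it learned from the labeled examples.
print(rule_based_is_spam("free money waiting for you"))  # True, hand-coded
print(model.predict(["free money waiting for you"]))     # learned, likely [1]
```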

In contrast to big data, qualitative data, or as Tricia Wang, tech ethnographer extraordinaire calls it, "thick data," is the dense, closely examined data that comes from analyzing and understanding smaller groups of individuals in greater depth. These are things like behavioral observations, feelings, reactions, human context. All these things point to the why of behaviors and the motivations behind the behaviors. This data cannot easily be reduced to numbers, but requires words, or sometimes even pictures to describe and fully understand. Just as Tricia Wang says in her TED Talk, "Thick data grounds our business questions in human questions and that's why integrating big data and thick data forms a more complete picture. When you actually integrate the two, you get to ask questions about why this is happening."

These days, the hype around big data has morphed into the pervasiveness of real AI and ML innovations in our everyday lives. We see it in our phones with Siri, in self-driving, self-parking cars, in social media algorithms, in loan origination algorithms, in smart homes, and so on. With all this happening around us, you might be wondering, "Is this okay? What about our privacy? These services are helping us in many ways, but what is the price we might be paying here? What if these devices get too smart? Are our lives being taken over slowly and gradually by these ever-pervasive technologies?" Then there's also that fear that some of us like to joke about a little anxiously: "Will AI-powered robots someday be smart enough to take us over and enslave us for nefarious purposes, like in dystopian movies such as The Matrix or The Terminator? Is AI something to be feared?"

More seriously, you may have heard news stories where AI has harmed people, or have seen examples that hint that maybe AI is something that will make us obsolete someday. The top three images there allude to examples where AI has had unintended negative human consequences. On the left, we have biased algorithms, one example being a facial recognition system that identifies white faces more easily than non-white faces because of the dataset it was trained on. That end picture on the right is about social media recommendation algorithms that are purported to spread hate and radicalism in the pursuit of more clicks. Then that central picture at the top is of a chess-playing robot that broke a little boy's finger during a match in Moscow, simply because he had moved too quickly, or perhaps had broken a chess rule.

The bottom two images are examples of things that might make people worry that we might be replaced by AI someday. That bottom picture is an AI-generated image that won an art prize against human artists. You can imagine the uproar that caused. The bottom right image I put in there because it's of A/B testing. When I first heard about this decades ago, it actually made me fear for my job, since I was doing a lot of usability testing back then, and it made me wonder whether AI would be replacing people like me. All these examples might make some of us worry that AI is a potentially dangerous tool, that it might be something that replaces us someday.

But that's not what I believe. I believe that AI needs to work with humans in order to be successful, trusted, and adopted by humans. We've seen how things go wrong for people when AI narrowly focuses on business goals, or goals that don't account for human context, such as with social media recommendation algorithms, or the finger-breaking, chess-playing robot. This is why we need to be in there, ensuring that AI is continuously tweaked to account for potentially negative human impacts and human unpredictability. That can only be done if humans are constantly in there, testing and tweaking the AI until it works for us. Working together, we can make quicker data-driven decisions and design services that will have positive impacts for humans as well as business.

How do we do this? How do we humanize AI? We humanize AI by putting people first instead of technology. This is exactly what we designers and researchers excel at. This is a quote from Sylvain Duranton. He leads AI consulting services for BCG Global. "We have a choice here, carry on with the algocracy," that means being driven by algorithms, "or decide to go to human plus AI. We need to stop thinking tech first and we need to start putting humans first." Here he is saying that we need to stop taking humans out of AI, referring to the practice where some companies are trying to remove humans altogether from these AI systems because it seems cheaper and easier to do. He has observed, however, that most successful companies using AI have put humans first because in the end, only humans know what will succeed with other humans. AI needs to integrate with human thinking and processes to be fully adopted by humans.

What can you do as an individual designer or researcher? There are three things we can do to leverage AI for good. Number one, get curious about AI. Learn as much as you can about it. Learn the language of AI so that you can be part of that community. Number two, engage in dialogue with the AI community. Collaborate with them, do that triangulation that we're so good at. Then number three, integrate the different ways we think and work together. Inject our human point of view into the way AI is built and vice versa.

Back to number one, get curious. Leverage that natural human curiosity that we have as designers and researchers. Learn and explore everything you can about AI, ML, and big data: read everything you can about these topics, take classes, ask questions. Here are just a few resources that I've used to learn more about AI, but I'd love to hear your recommendations as well to add to this growing list. This industry is innovating and evolving so much every day, much faster than we could ever have imagined, and it is on us to become literate in the language and constraints of this brave new world so that we can become part of it and influence its future.

Number two, one of my favorites, engage in dialogue with the AI community. Befriend your local data scientist or quant person. Share hypotheses, insights, and stories to help humanize the data they're looking at and to understand the whole big picture of what's actually going on with it. Keep asking those hard, challenging questions to discuss how we might bring real, messy human constraints and considerations into AI to make it work better for humans. Our human-centered point of view needs to be considered when defining the problems that AI is set to solve, so that real human intent is part of it. Talk about the hard stuff: ethics, human values, what is right for people, not just profit. Learn about how they're defining business intent. What are they defining as their desired outcomes? What constraints are they considering? Have they put in any constraints or controls on potential negative outcomes to people?

Ask questions like, "Where are they getting their data from? How representative is that data? Is the data replicating bias?" For example, if the data they're using to train their model is based on historical data where, say, women weren't represented fairly due to past biased recruiting strategies, is it wise to use that model to predict whether someone should be considered for future interviews, if it's just going to replicate those past problems?
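
As a hedged sketch of the kind of check those questions point to, not anyone's production code, here is how a quick representation-and-selection-rate audit of such historical data might look. The dataset and column names ("gender", "interviewed") are purely illustrative.

```python
import pandas as pd

# Hypothetical historical hiring data; values are made up for illustration.
history = pd.DataFrame({
    "gender":      ["F", "M", "M", "F", "M", "M", "M", "F"],
    "interviewed": [0,   1,   1,   0,   1,   0,   1,   0],
})

# Representation: how much of the training data does each group make up?
print(history["gender"].value_counts(normalize=True))

# Selection rate per group; a large gap hints the data encodes past bias.
rates = history.groupby("gender")["interviewed"].mean()
print(rates)

# A disparate-impact-style ratio: values far below 1.0 warn that a model
# trained on this data would likely replicate the historical skew.
print(rates.min() / rates.max())
```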

Now, you might be wondering, "What does this triangulation or collaboration with qualitative research and big data sound like?" Well, it sounds like a conversation. It's a dialogue similar to the one we have here between our fictional quant person, Tony, and our qualitative researcher, Johnetta.

"Hey, Johnetta, how are things?"

"Fantastic, Tony. Oh. Listen, I noticed a trend among a few of the younger participants in my study, and they're using their personal phones to make deposits for businesses, but the older ones don't. I want to see if this is an edge case or a potential new trend that we're starting to see. Could you do some of your big data magic to look into its prevalence?"

"Of course. It actually sounds related to something that I found the other day that seemed like an anomaly in the data. Awesome. I love working with you, Johnetta. This is so fun."

"Same. It's so much easier to see the bigger human picture with both quan and qual."

This little conversation is actually very similar to ones I had working in places where I was fortunate enough to work with big data analysts and data scientists. The quant researchers and data scientists I knew often came to me and to other qualitative researchers to discuss potential hypotheses or insights that we had found in our work, because this helped them find a place to start digging into their massive datasets. Some of the most groundbreaking insights came when we worked together to understand something deeply. Indeed, my dream is to someday build a multidisciplinary insights team that has mixed-methods user researchers working side by side with data scientists and analysts. Imagine the possibilities, the amazing human insights, and the amazing things we could build together.

Finally, what we need to do is integrate the different ways we think and work together to humanize these algorithms. Think about how we might do design and research better by bringing in machine learning or AI into our projects. Could we use some sort of AI to help us collect and process qualitative data more quickly? How might we triangulate our insights by using both quantitative and qualitative research to understand the bigger picture and move our stakeholders to act maybe even faster?
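
To give one concrete flavor of that first question, here is a minimal sketch of my own, using illustrative study responses and off-the-shelf scikit-learn tooling rather than any specific product. It pre-groups open-ended responses into rough themes that a researcher would then review and name; the clustering is a starting point for human analysis, not a replacement for it.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative open-ended responses; in practice these come from a study.
responses = [
    "I deposit checks from my phone because the branch is far away",
    "Mobile deposit saves me a trip to the bank",
    "I only trust the teller window for business deposits",
    "The app crashes every time I try to photograph a check",
]

# Turn free text into vectors, then group similar responses together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# The machine proposes rough groupings; the researcher names the themes.
for label, text in sorted(zip(clusters, responses)):
    print(label, "|", text)
```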

Conversely, how might we improve AI outcomes by making sure we insert real human context and behavioral insights into those systems? How might we make sure that we are putting in controls to avoid those negative human outcomes that we've seen? Can we improve the quality of the data that is being used for these models so that we're working towards a better future and not just replicating past biases?

Remember that the way AI is now, it tends to be narrowly focused on specific goals. We need to make sure that we broaden that a little bit and help define real human intent as part of those AI goals, so that we are there to teach AI what is right, what actually works for us. I've collected some examples here of how qualitative research and AI can work together to make algorithms that work for humans. These examples show not only how qualitative research scales when AI is incorporated, but also how AI gets better as a result. When qualitative research helps identify errors, that increases the quality of the datasets being used to train these models. Please reach out to me. If you have anything to add to this list, I would love to learn more and share it with others as well. I'm going to leave you with this. My challenge to you is: when will your project be added to this list?

Remember, to leverage AI for good, you need to get curious about AI, learn the language of it, ask those questions, and learn as much as you can about what is happening in the AI world. Then, number two, engage in dialogue with that AI community. Collaborate, collaborate, collaborate. Then, number three, integrate our human-centered point of view into AI systems as much as possible, and vice versa. By doing these three things, we can all make sure we're working towards positive human outcomes with this powerful new technology, and then maybe, just maybe, we can prevent the machines from taking over. Thank you.
