Research For the User, Created With the User
Research Specialist Bryn Pernot explores the value of participatory research in experience design and what UXRs can learn from museum professionals.
The key to ensuring your research lives beyond the project? Understanding that research isn’t a singular objective with results—it’s an ongoing, evolving conversation.
Bryn Pernot of dscout’s Special Services team emphasizes how important it is that research is iterative. Just as people grow and change, so should your user research.
Learn how Bryn's path through anthropology and Public Humanities has shaped her perspective on user research, her advice for conducting global research, and how to design missions that set up successful analysis.
dscout: You're an anthropologist trained in Public Humanities. Would you unpack what that means?
Bryn: There are lots of different ways to define Public Humanities, but the description that resonates with me is taken from the University of British Columbia’s website:
A reciprocal way of studying and practicing humanities disciplines, like literature, classics, history, and creative and performing arts—fields that investigate different aspects of society and culture.
By “reciprocal,” we mean that Public Humanities is about sharing research with the public through formats like blogs, exhibitions, lectures, and websites AND learning from and alongside the public.
It’s about enhancing the visibility of work that is typically kept within the academy, while also expanding the idea of who is capable and “worthy” of producing knowledge.
As an anthropologist, I utilize tools like ethnography to connect with and learn from the public and see a lot of overlap between Public Humanities and approaches like participatory action research.
Because anthropology is the study of what makes us human, I see it as a humanities discipline and a field that intersects with and can learn from the rest of the humanities.
For example, one of my favorite classes in graduate school was Critical Ethnography, taught by a professor in the Theatre Arts & Performance Studies department. The course looked at ethnography as a performative practice and interrogated how experiences get translated into text, and what the ethical stakes of that translation are.
What role does research or data-collection play in that training?
My work in Public Humanities focused primarily on museums, using anthropology to study how these institutions actively work to shape public perceptions of science, history, culture, and art, and how they define who "counts" as part of the public.
For my Critical Ethnography course, I conducted a self-reflexive ethnography of "Look at Art. Get Paid" (LAAGP), a blended artist-research program developed with colleagues at Brown and RISD.
LAAGP paid people who rarely, if ever, go to art museums to visit the RISD Museum, look at art, and serve as guest critics, sharing their thoughts on the Museum, its collections, and its visitors.
LAAGP combined semi-structured interviews, focus groups, and art education techniques to understand art and museums from the perspectives of those who choose, for one reason or another, not to visit.
For the Critical Ethnography course, I used videos, interviews with my colleagues and participants, and critical ethnography theories to understand the different contexts that all of the stakeholders brought to LAAGP.
This was the first time I had done an ethnography of a project I was directly involved in, and it pushed me to interrogate how my value system, as someone familiar and comfortable with art museums, compared to participants' values.
What has stood out as you've transitioned to the user experience space?
There were two reasons why I decided to transition to the UX/product/design research world. First, I wanted to move outside of museums and cultural institutions to study a broader range of products.
Second, I wanted to do research on a faster timeline so that findings could more immediately inform the design of products and services that consumers use every day.
One of dscout’s superpowers is the ability to gather a lot of information quickly and help teams infuse qualitative insights into their day-to-day work.
I think Public Humanities practitioners are skilled at showcasing and celebrating the human in their work. But because of the long histories involved (of the cultures, peoples, and works of the folks comprising an exhibit), they're careful about the time it takes to craft an exhibit or program, which can lengthen the gap between research and implementation.
On Special Services, we live in both speed zones. We do conduct quicker evaluative work (we like Express Missions for these), but we also support folks who are fundamentally rethinking their product or offering, or who are looking to understand non-US markets.
We like to take more time with those projects because the depth and scale of the questions require it. All of that said, there's still a desire for a fast pace, given the touchpoints that are already happening.
What elements of Public Humanities might UXRs adopt in their work?
Exploring how research findings could be shared back with participants is something UXRs might benefit from doing. Outlining the analysis and synthesis process isn't as important as sharing how the data were used to inform strategy or product decisions.
That engagement with the public is CRITICAL for Public Humanities; that conversation ensures the exhibit or program actually represents what it strives to: a culture, a group, a motif, and so on.
We've been having more and more conversations with clients about moving from extractive data practices to relationship-based ones—keeping folks in the loop is so important, and our scouts regularly tell us that they want to know how their data and time are impacting the work.
So I'd say open up a conversation with your users, your customers. Don't think of research as a one-way verb (we are "doing" research "on" a group).
Think of your work as an ongoing and evolving conversation—that way, you'll be primed to weave participants in, which can make your business practices more user-centered, to everyone's benefit.
You've been helping clients think globally about their work. What have you noticed?
Partnership is key when going international. As a US-based researcher, I have a lot of context about my own culture and about folks within the US, which lets me consult for teams interested IN the US.
But when a client comes to me with a new market question, I'm less plugged in; foundational things like education, work life, etc. are less familiar to me. I work with partners in different countries to tease out these elements, and THOSE learnings inform the research findings.
I think there are a lot of opportunities for researchers to confront their own biases and previous study findings as they look beyond the US. Just because a product works well in one market doesn't mean it will be successful in another.
For example, we've seen that participants in Brazil are excited about sharing content on social media while participants in Japan are more hesitant to do so.
Understanding differences like this informs everything from recruitment to question framing to how you interpret findings, especially when you consider how behavior will (or might) shift in a new market.
The ideas driving folks’ research can and do shift when a new market or culture is considered. International research gives our customers an opportunity to ask “What would it look like to do X in market Y?”
It helps them get a pulse check before making a much larger investment to roll something out. I'm helping more companies conduct smaller-scale, lower-cost exploratory studies to learn what their product or service might (or should) look like in a new market.
With dscout, researchers have been able to run smaller studies that help inform the broader why behind their work and understand what steps they should take next.
Checklist: Design missions with analysis in mind
Great analysis starts with great research design. Before you launch your study, you should have a good sense of how you plan, tactically, to analyze each question, whether that's creating a list of top themes, tagging responses, gathering key quotes, or pulling together a video reel.
Use your ideal analysis approach to guide how you write your mission questions. For example, if you know you're interested in tagging the data based on an existing coding scheme, is there a way to turn that list of tags into a closed-ended question that scouts can respond to?
Take a look at past studies you've run on dscout or elsewhere—what kinds of questions gave you the best and easiest-to-analyze data? Is there a way to replicate that approach in your new study?
Finally, I would say “simplify, simplify, simplify.” I think researchers are often stunned and somewhat overwhelmed by the amount of data they get back. Are there ways to limit the amount of data you have to sort through while still getting rich insights?
I often suggest cutting things back so that you're only asking one prompt per question, avoiding repetitive questions to limit scout fatigue, and taking a critical eye to what actually needs to be an open-ended or video question versus what can be turned into a closed-ended or short-response question.
Turn your research questions into scout questions:
☐ Put yourself in scouts' shoes and review the mission overview, instructions, and questions from their perspective. Is there any room for interpretation or misunderstanding? If you asked a parent or older family member to complete this mission, would they understand how? Does the language make sense to someone who doesn't work at your company? These kinds of wording tweaks help scouts understand what they're being asked to do and give clear responses to your questions.
☐ Set guidelines for length expectations where it makes sense. Not every question needs to be answered in "2-3 sentences," but if there's an instance where a scout could provide just a one-word response, it's helpful to add that language to ensure you're getting quality responses.
Limit the number of open-ended questions you're asking:
☐ Having too many open ends leads to scout fatigue and reduces the quality of responses, which impacts your ability to analyze the data.
☐ The general rule of thumb we use on Special Services is about six open ends per Part. It's okay to be flexible with this guideline, but think about your timing needs and priorities: will you be able to analyze all of these questions in the time you have to put together your report?
Convert open-ended questions into closed-ended questions:
☐ Open ends inherently take longer to analyze (and are more work for scouts), so whenever possible, it helps to ask closed-ended questions instead.
☐ A common suggestion we offer customers is to change an open-ended question into a closed-ended question followed by an open-ended question for context. Asking the closed-ended question first lets you cut the open-ended data into more meaningful and manageable segments.
☐ This is especially important for questions you might need for filtering or segmentation. Add closed-ended questions before open ends if you feel the data can be cut in a meaningful way (positive vs. negative, often vs. rarely, etc.).
Split up questions to avoid double-, triple-, or quadruple-barreled questions:
☐ As much as possible, scouts should answer ONE prompt per question. It's okay if a question includes additional prompts to help guide scouts in their answers, but if a prompt asks two or more distinct questions, we recommend breaking it into separate prompts. This helps ensure that scouts respond to everything you're asking.
☐ When splitting questions into separate prompts, consider whether each one needs to be open-ended or whether a closed-ended question would make it easier for scouts to respond and increase the quality of their answers.
Add “sum it up” open ends after video questions and screen recordings:
☐ While videos and screen recordings are great for getting detailed responses from scouts, seeing their emotions, and gaining a better understanding of how they use products and services, it can be tricky to quickly generate a list of themes or patterns from a video question.
☐ We often suggest adding an open end after video prompts (especially for high-priority questions) where scouts can provide a written summary of what they shared. This is especially useful in moments-based Parts, where each scout shares multiple entries, since it's much faster to skim open-ended responses than to read longer video transcripts.
☐ When analyzing the data, you can review the open ends first to generate your list of themes and then grab video clips that speak directly to those themes.