Scrappy is Subjective: Quality, Speed, and Impactful UX Research

Leading UX researchers share how to establish research best practices that optimize for speed and quality, while setting expectations and aligning teams.

Featuring Michele Ronsen, Noam Segal, Cristen Torrey, Janice Wong, Ben Wiedmaier

As organizations mature in their human-centered insights practice, so too do demand for and awareness of user research.

Front-line practitioners, as well as team leaders, are regularly faced with suggestions to conduct "quick-and-dirty," "scrappy," or "lean" research to keep up with demand.

But what does this mean for our methods, our rigor, or the confidence of our recommendations?

People Nerds brought together a panel of UX leaders to discuss the pressures of time, the false dichotomies those pressures can create, and ways researchers can expectation-set with stakeholders and collaborators.

The panelists explore...

  • The problems of juxtaposing "speed" with "quality" in UX research
  • A few variables and conditions contributing to this 2 x 2
  • Ways to align research goals to timelines, staying flexible and invitational
  • Saying "no" and the power of self-advocacy in the face of demand

Transcript:

Ben:
The title of the discussion is "Scrappy is Subjective," which is something that, during our first walkthrough meeting, many of us were bumping up against while trying to come up with a definition of what it means to be scrappy, or nimble, or agile, or faster. Both Michele and Janice pointed out that that's subjective. Not only because it's a rather ambiguous concept, but because it has cultural ties to it as well. And I would argue even more so in this sort of innovation, human-centered design research world. So, a long way of saying: how does each of you think about scrappy research? When you hear that, what do you think of? Michele, do you want to kick us off?

Michele:
Sure. Well, following up on that earlier discussion, to me, scrappy is definitely cultural, it's environmental, and it's exponential. It's really based on the collective experiences, and the interpretation of those experiences, of the people who are referring to whether or not it's scrappy. In my experience as a researcher, conducting research for a number of years, scrappy would mean, to me, something that is extremely lightweight and focused, and has a lot of collaboration, agreement, and participation from my stakeholders in order to be successful. I totally love doing work like that. I think it's very exciting, but I think a lot of ducks need to be in a row, and there needs to be a lot of agreement in terms of roles and responsibilities, and alignment in terms of what we're really, really going to focus on, in order for it to be successful.

Michele:
That being said, I don't think it's appropriate in all instances. For all questions, for all people, for all products, for all stages of the cycle. I think that there's a time and a place, and I'm sure we'll talk much more about those times and those places. That's the first thing that comes to mind.

Janice:
Yeah. I think I could go next. I think what you touched on is, it's interesting to think about how the word scrappy might mean different things as our practice of design research and UX research has matured over time. So I think, for me, it's a loaded word because it's so vivid. It definitely makes you think of specific things. It makes me think of quick and dirty, it makes me think of something that was left off the table, or a trade-off that was made, but we're moving forward anyway. It's action-paced. But then it also makes me think of other things that I have extra concern for. So if we're saying scrappy, then I immediately think integrity, methods choices, what am I going to do to accommodate for the fact that we're venturing into a space that's scrappy? That's kind of what comes to mind for me.

Ben:
Noam, this has gotten to something that you and I have talked about before: does scrappy even fit within a human-centered or a user experience design research practice? I know that for you, scrappy can be tied up in, just as Janice was saying, this notion of quality and what good research means.

Noam:
I have so much baggage with the term scrappy, I think. I recall coming out of academia. For those watching who were in graduate programs, the main concern about academics is that they do not know how to be scrappy, or nimble. Can you be scrappy? Can you be nimble? Can you move fast? And interestingly, with the perspective I now have, I find that in many ways, academia seems way scrappier to me than industry, in the sense that ... For example, I've had the fortune of working at companies where we had essentially endless resources to conduct research. Incredible research ops teams, and incredible tools, and access to endless participants whom we could pay meaningful amounts of money, unlike what I used to do in my PhD days.

Noam:
So it's interesting how academia and industry view scrappy differently. And I think they're both getting it wrong to a certain extent. One of my main concerns around this is that I'm not actually sure that we do a great job measuring or understanding what is scrappy at a more tactical level. And so I'm never really too sure when I'm conducting a project whether it's closer to scrappy or closer to crappy.

Michele:
I think that's a good point too. And where does the line between scrappy and iterative take place? If someone has a big question or a big ask, we can't really do that in the timeline allotted, but we could take a portion of that and just really, really laser in and really focus, is that scrappy or is that iterative? And then we'll come back and take a deeper dive into the following questions in maybe subsequent phases. That's an interesting question there.

Cristen:
So I'll be the one who takes a sort of competing perspective that I don't mind the word scrappy at all. I think to me scrappy means flexible. So the connotation that I have with the word is that you've taken ... If you lined up every step in what we would consider to be the complete research process and you took the problem or the question you're trying to ask, and you looked at that whole process and said, "Do I really need all of these steps to answer this question?" And you're actually being critical about the ones that you need and the ones that you actually don't. To me, it feels like discernment more than skipping steps. It's like the people who just do things because that's just the way it's done as opposed to looking at it and saying, "This actually meaningfully improves the quality of my recommendation, and it's super critical to include in this project." To me, it feels like thoughtfulness as opposed to just trying to be reactive to stakeholders without any purpose.

Ben:
It's very useful. Some of the folks that I talk to, Cristen, I think they are used to ... many of them are qualitatively minded or mixed methodologists. They may be numbers aware, but they may not be very number centric. I heard the same thing. Even though my training in graduate school was on statistics and experiments, I've had so many recruiters say, "But can you drink from a firehose? Are you going to be fast enough? This isn't a dissertation." I'm like, "I know." But I also got the dissertation done. Keep in mind, I was able to focus on something with ... talk about stakeholders, your committee, the egoism. This crew knows that. But I've always been struck by how human-centered thinkers broadly are drawn to mixed and open qualitative methods, which can be intensive in terms of some of the operations: transcription, analysis, coding, grounded theory. And for stakeholders, these theories or methods or approaches are not quite as clear as "do a survey." I think scrappy could be a way for them to say, "Yeah, but you really can't do qual for what we need." So I wonder how many folks are girding themselves for scrappy as, "But we can actually do it." But Cristen, your point is really a strong one. I think that flexibility, and being nuanced in one's research approaches, is a great way. Michele, please.

Michele:
I embrace scrappy. I don't think of it as a negative term at all. To me, it invites more creativity and more agility into the process: to think about what portions of this can be explored quickly or in the time allotted, and how can I best do that? It's a great puzzle, or one of the many exciting puzzle pieces; it's how I like it. So I definitely don't see it as negative per se.

Janice:
Yeah. Scrappy could be either a challenge and an opportunity to get inventive with your methods, or it could be an excuse. And then it kind of veers into the stuff where maybe we're a little bit more worried about how it's being used.

Michele:
And maybe it's also important to know the difference. And maybe a lot of people who are doing research on the side or checking a box don't necessarily know the difference between scrappy and more thoughtful work and how to do that more thoughtful work which is probably another interesting perspective here. We're all aware of that difference, but I wonder how many people are?

Ben:
Thank you. Those are all helpful definitions, and thank you for bringing your subjectivity into the discussion. For the audience members watching, the first time that this crew and I discussed this topic, we had landed on a two by two with quality, I guess, high and low, and time, slower and faster. And Janice in particular was thinking about whether there might be projects where a quick-and-dirty, or scrappier, approach might be better suited, depending on the UX maturity of the org, as Michele mentioned. And both Cristen and Janice started to think about whether or not the two by two was a false ... not dichotomy, but a quadchotomy, a straw person essentially, that it might not be as easy.

Ben:
Again, if you're a sharp thinker and a design researcher, then you might not be a person who can say, "We're going to do shitty quality here. Let's turn the quality down. We really want to be fast." I thought that was a really interesting thing that you and Cristen brought up, Janice. That is glossing over some of the real nuance and complexity in both advocating for a method and maintaining timelines, because that's really what drew Noam and me to this idea: that we hear colloquially so much about being able to do things fast, and quick, and dirty. So I'm wondering if we can turn to this two by two, although it is potentially a straw one, of quality and time. Cristen or Janice, could you talk through some of the questions that you have about framing and understanding research in that two by two?

Janice:
Yeah. I'll dive in. I think that it's catchy, and it's simple for us to think about things in terms of quality and speed. And so it makes sense that it's kind of the thing we start with. So, on the side in my education, in addition to the design and business stuff that I've learned, I've also taken a couple of classes on negotiation. And that kind of feeds into the way I navigate with my stakeholders, I guess. So for me, it's more about growing the pie instead of fighting over the small pie where the only pieces that you have to play with are quality and speed. So I think maybe it's a useful tool to start with, but the more productive thing to do would be to create options, grow the pie, and think through some of the stuff that we were chatting about earlier.

Janice:
For me, I think a big starting point is kind of separating the goals of the research, and the main questions that a stakeholder might be trying to answer in a fast way, from the methodology that you use to get there. Often I think that partners or stakeholders can feel like they need to ask for both at the same time. And for us as researchers, showing that those are separate things creates an opportunity for the researcher to suggest different methods that perhaps the partner wasn't aware of, that could get to those answers and questions in a way that has the authenticity and integrity that we want, but also meets the constraints that the partner has.

Cristen:
Yeah. I love the background in negotiation. It feels broadly applicable. The thing that this reminds me of is that sometimes I'll be talking to researchers, and they'll be struggling to figure out how to advocate for themselves, for the space and time to do what they call foundational or strategic research. And there's this push and pull: I can't get the time to do this. And one of the things that I always encourage people to do is, rather than focus all of your energy on advocating for the time you think you need for this work, just start adding 15 minutes of your foundational questions to every study that you do. I've done this in the past with great success. Over time, in a matter of months, I ended up having 80 participants. And I got in a room one week and I was like, "I have 80 people that I've asked about this, and here's the workflow, and I'm really confident in it." I have a lot of rich examples of this.

Cristen:
With that artifact, it's actually super easy to talk about the value of doing that kind of work because you have it. You don't have to argue for something that doesn't yet exist. If you can just mash those ideas up together and figure out how to get them integrated I think it's easier to make progress.

Noam:
I'll say that I love two by twos. I imagine we all do. I love a good two by two. And I think the two by two we discussed becomes much less of a straw person if we replace quality with something else. So just to explain: I don't really know how to dial up or down the quality of my research. I'm not sure how I would actually do that. But I do know how to dial up or down other things, like the level of confidence I have that I've answered the question at hand, or the level of completeness that I feel I have in terms of covering the entire universe, the scope of the question.

Noam:
I think we can definitely make decisions about, while putting out reasonably good quality, ethical research, we can talk about how ... If we have this level of knowledge on the question we presented, this level of confidence, does this business decision, product decision, design decision require an extremely high level of confidence based on dozens of discrete research projects, or is it enough to conduct a couple of relatively quick projects and move on, and maybe do some more work later? That I can deal with as a research leader and researcher. So maybe that's one way to look at it: just replacing that quality metric with something that is easier to comprehend and work with. I'm curious to hear your thoughts.

Janice:
I love that reframe. I wholeheartedly agree with it. I think that if you think about it with confidence, being the other [inaudible 00:16:19], who would argue that something where we require high confidence like statistically significant confidence, who would say, "But I want that in a day." It just totally shifts the focus in a really productive way I think.

Michele:
I love that too. In my teaching, when I'm teaching new practitioners or even working with a new team, one of the first questions I'm asking is, "Are you looking for a smoke signal, or are you looking to make a hard decision?" Because that will often help inform how I approach it, what plan is built, how much time we need, what resources to consider. And I use that term smoke signals a lot, and I try to get those smoke signals similarly. Cristen, I love what you mentioned earlier about adding 15 minutes onto your foundational research. I like to add what I refer to as smoke signal questions in my survey screeners and in my participant pool surveys, to kind of get at assumptions, or something that I think is coming a couple of sprints down the road, in advance, to give me those smoke signals. So I think it's a great reframe, Noam, on that axis. I totally identify with that as well.

Cristen:
I wonder if we need more, acknowledging the beauty of the two by two. It does feel like there's the level of risk involved in the decision, there's the ease with which you can revisit it; there are actually, I think, more ways to think about it. I think they're all potentially subsumed by this idea of confidence, like how much confidence you need. But it might be worth unpacking, for people who are trying to practice this in their day-to-day, what are the things that go into your assessment of what kind of confidence is needed? Because it's actually kind of a complicated assessment, I think.

Ben:
Absolutely. I would love to hear from you all if there are anecdotes you have. I'm furiously writing questions that some of the People Nerds folks have raised in the past, and many of their questions are around: I'm working with this sort of stakeholder, be it a more typical stakeholder like an engineering team, or a more artistic stakeholder like a designer. And again, they're saying, "Can we do this a little faster?" Are there instances, to Cristen's point, where you're saying, "Well, let me educate you about what's going through my mind," wherein I can say: in order to be more confident, so that you can then make a decision that leads to less rework or doesn't have to be redone, here's what we need to do? We can start with working with a more technical team, since I think a lot of us often are working with stakeholders who might not have that experience, hence our being positioned as such.

Noam:
Maybe just a comment on Cristen's point first, kind of to start us off here. I think on average, people do not assess risk well, and on average, people tend to be overconfident. So first of all, let's start with helping train researchers and working with our teams, not on drinking from a firehose, which by the way, no one can do, especially not literally, but rather on helping them, and helping them help their stakeholders, better assess the risk that is inherent to certain decisions, or what confidence level would be helpful. And there's definitely a need for a different style or a different language, so to speak, depending on which team you're speaking to, right? To your point about a more technical group, or a design group, or a marketing group, or any other group. But I think it's important to keep the focus on these aspects and not on the scrappiness aspects, if that makes sense.

Ben:
What do those conversations look like? What are your decision trees? Janice, you had mentioned during our kickoff that this is often the kind of approach you might be taking: that there are some projects where confidence is maybe more baked in. Something that is longitudinal, or a persona, or maybe product focused, where there's a process or something to track. Are there kinds of projects where you find yourself advocating for more confidence compared to others?

Janice:
I might not give the best example out of this crew to that question, but I can get the ball rolling. I think that the work that I tend to do is focus more upstream. So usually, the scenario I find myself in is engaging with a partner where the concept is still pretty early stage and so there's lots of really juicy exploratory questions that could be asked, but being technical, they're kind of looking at it more specifically. So that's kind of where the conversation I get into starts. And so for me, I find myself talking a lot about the level of zoom that we're looking at the problem at, and kind of talking about how ... First of all that there are different levels. So not trying to conflate everything and act like you can do it all at the same time. Just kind of making a spot for each part of the space that you're trying to explore and then talking about how those different levels kind of feed into each other is one thing I could do.

Janice:
So maybe if a technical person has a specific question about a feature, I could talk about how that bubbles up into the mental model that the user needs to have in order to be able to work in that moment and do that specific thing. And then it very quickly feeds into the conversation around methods because I think, going back to the pie thing ... You know what you know, and you don't know what you don't know. And so I think it's just really great when there's an opportunity for us as researchers to show stakeholders all the other options that we have at our disposal that they maybe didn't know to ask for. And then that also feeds into linking the thing you're focusing on, the questions that you're trying to answer, with different types of methods, and then how timing feeds into that.

Noam:
One thing about working with technical teams: most often, I would suggest, engineering teams work in sprints, or product teams in general work in some form of sprints, especially if they're agile method, lean methods sorts of teams. And within each sprint, technical teams typically have some measure of how much work they are attempting to accomplish: how many points, discrete points of work, they're trying to accomplish. So one way to think about these things is, if we know that there are three potential solutions to a problem, and those solutions have very different numbers of points, amounts of work for technical teams, we need to understand that information and talk about that information with engineering teams. And when those differences are large, that's where a lot of potential risk lies. If we choose the wrong technical solution of the three, and it happens to be the one that costs the most, that takes six additional sprints of work, and maybe we're taking some back end engineers off of another project just to be able to deliver this project in time, that's a huge risk for the business.

Noam:
So being able to speak that language, understand how the sprints will be constructed, understand the amount of work, understand what type of engineering work would be required, and how novel that direction is for the engineers in that team and the company as a whole. All of these things are really crucial in deciding how deeply to dive into the research when working with technical team specifically.

Michelle:
Yes. I've also found it helpful, particularly when working with technical teams, if I'm up front and say: as the researcher, if this were my product, I wouldn't have the confidence that I would want at this point to make that type of decision. I think that we would want to triangulate it; here are some ways that we might get to a greater level of confidence, and provide some options. But when you say, as the researcher, as someone who is deeply involved and invested in a positive outcome, "I don't have the information that I feel I would want at this point to make XYZ decision. It's obviously up to you how to proceed," a statement like that tends to raise some eyebrows. I don't do it lightly, of course, but I certainly would insert myself if there was a risky decision being made that I didn't feel we had enough confidence to make. That kind of gives people pause.

Ben:
Baked into the inception of this panel is my assumption that you all are getting pressure from stakeholders, or collaborators, or leadership to go faster, be scrappier. Let me check that here. For each of you, what is your perception of the expectations around the kinds of research that you're engaging in? I think I mentioned a friend of mine who works at a large tech company in Silicon Valley. She did a project that was supposed to take four weeks. She completed it in two, so half the time. And she does this for a technical stakeholder team working on a new feature set. She went to them very excitedly and said, "I think I have some recommendations that I can make based on what's in it." I said, "They must have been thrilled." She said, "No. They wanted it a week earlier. It wasn't enough."

Ben:
So I'm curious, what sorts of ethos related to temporality, broadly, does each of you perceive? You're all in different parts of the innovation and human-centered design thinking world. Do you still get the sense that speed kills, in the best sense of that phrase? Do you still feel that there's a pressure on more mixed methodologists, or folks who might otherwise be in the field or conducting lab work, to move fast and break things?

Cristen:
I'd say I feel more pressure around time for more exploratory types of work than I do for more evaluative types. As a discipline, I think our ability to deliver more evaluative insights has sped up quite a bit. I don't know about other people, but I get fewer and fewer questions about how long that will take, or about wanting to speed it up more than it possibly can be. That stuff feels pretty streamlined to me right now. The places where I usually encounter timeline issues are when people want to do something quite big or ambitious and they want to start next week. They have engineering ready to go or something like that, and it's net new. And you're like, "Well, I would love a little more time to try something weird or do something a little out of the box," instead of thinking, "Okay. What's the easiest possible answer I can get you on this?"

Cristen:
My approach to this is typically: let's collect what we know today. We have existing work, hopefully lots of it, that we can synthesize and have a place to start. But that's usually where I feel the most pressure, where I'm like, "I wish I could go and do this analogous industry research or things which are really creative." I feel time pressure against having more fun. I guess being a little bit more, a little more [inaudible 00:30:02].

Janice:
Because that stuff is less known, maybe it also feels riskier to allocate the time to it, even though the risk-reward ratio is also super different. If you did get time to do that deep, rich exploratory work, the type of findings you're going to get out of it should be more durable in a different way, and will be more novel, because the methods that you were using to look around were different. And so it's tricky, I think, building the case for it when you don't have such a concrete thing to point to that you're going to have at the end of it. I think you kind of touched on how I interact with the time part of my work as well. It's less about pressures on deadlines because, again ... On my team, I'm not dedicated to a specific product. I'm more of an internal consultant helping different partners develop new stuff.

Janice:
I don't have the stereotypical pressures on turnaround. It's more around the opportunity cost of how long I'm going to spend on this thing, which means I'm not helping out with this other partner who's also got an interesting question that we could look at. That's kind of the weight that sits in my work.

Michele:
Janice, that resonates with me too. I think I feel more time pressured around doing four things at once as opposed to trying to make each of my projects faster.

Janice:
Yeah. I could survive and get all four done, but I could also really deliver something smashing if you let me put all of my brain on two or something.

Ben:
Michele, I'm wondering if this informs your pedagogy. As you said, when you're training new practitioners and new frontline thinkers, what sorts of conversations do you have with them about expectations around time and deadlines? Is there something that you bake into your course and your pedagogy to understand it better?

Michele:
Yeah. I think there are two things that I try to ... I shouldn't say educate, but share. One is that not every project is going to be a fit for you. And that's okay. If your primary stakeholder is asking for a diary study to be turned around in three days, at some point you're going to need to feel confident enough to politely educate the stakeholder and say, "I understand your question and your request. Here's a way that we can get at that with an iterative approach, but a diary study isn't feasible for these reasons." And to consistently ask yourself whether the UX maturity of the stakeholders you're working with and the team that you're on is aligned with your goals. Because if you continue to butt heads, and there continues to be this, "We want it tomorrow, and we want it super ... at a level that we can make super confident decisions," at some point it's a question for you whether this is the best culture for you to operate in. And there's no right or wrong answer, but being introspective about some of these things as well, while you're trying to find your best fit, where you can do your best work and continue to contribute and learn, is something we do spend a good deal of time talking about.

Michele:
And I would say also that many stakeholders who are asking for things that aren't possible, if you will, have not, I have found, seen behind the curtains of the process. So really demystifying what actually takes place, and what their involvement, and their agreement, and their investment of time mean to the process, and how that betters the outcome for everyone, really goes a long way. So baking that collaboration in from the start, when we're identifying assumptions, when we're building that plan, generating that interest and buy-in, really demystifying it. Once they've been through the process once or twice, it's like they've had a religious experience, and nothing can really replace that. But it's getting them through that first cycle, if you will. So demystifying it, and understanding that not everything, or everyone, or every team is going to be a good fit for you, and what you want to do, and the quality of work you're interested in. And that's okay. I'm curious if anyone has thoughts on either of those.

Janice:
Yeah. Actually, you made me think of an old project that I did when I was a consultant at Darwin. We put on a human centered design training for ... It was 150 bank employees. It was a huge project basically. Every Friday all day, we would have a module of training to give them. And I think going into that project, one thing that I was worried about was kind of equipping people to know enough to be dangerous and worrying how do we teach them something substantial, but over the short amount of time that we've got with them? And then coming out of it, it was actually great because we help demystify stuff, to use your language. So one of the modules was based on interviewing users, and so we actually had participants come in and the employees we were training had to do their own interviews and stuff. And they experienced the fear, and the nerves, and the adrenaline of being ready to actually do an interview. And I think that part resonated with them.

Janice:
Then the next week, we had a module on analysis where they had to look at their notes and try to do that. And we were worried, "Are people going to think they're analysis experts after this one week of doing it?" But digging into it at that level actually helped the employees better understand what they could do and what they could not do, and almost ... hopefully made them better clients of the services that we provide as the experts doing that. So I thought that was spot on, the demystifying part.

Michele:
Yeah. I really think it makes them appreciate what we do so much more when they actually go through it and do a portion themselves. And it also makes the research that much better and more impactful. It has the whole IKEA effect, right? The whole, "I was a part of it, I believe in it, I was there, I was ... we're on the team, we're going the same direction." I couldn't agree more.

Janice:
I have heard from some folks ... I'm sorry. No, go ahead.

Noam:
One thing to add. And again, it's kind of funny given the topic and the notion of scrappiness. I think Christine touched upon this: the research field has matured, and many new tools and processes have been introduced. Especially on the evaluative side, we can move a lot quicker because we have access to some incredible tools. For example, if we have the right tools, we can very simply run in-product surveys and catch people right at the moment when they're performing some sort of action we care about, and immediately get feedback about how they feel about that experience. But with that maturity, the expectations change. The expectations become higher and more intense. And the expectation is that you continue to adopt these sorts of tools and processes and go with that. And in some cases, you can't.

Noam:
For example, I work in a heavily regulated industry, and it's much more challenging and difficult to adopt certain tools, if you even can at all. We're talking on Zoom right now; I can't use Zoom in my particular workplace, as an example. And so, interestingly, we can only be scrappy, so to speak, because as a field we are less scrappy: we have way more tools and way more resources than ever before. And that's what's enabled us to be scrappy and move faster. So it's just interesting that it's kind of the opposite direction, I guess, if that makes sense.

Christine:
Nom, do you feel like adopting some of those more rapid testing approaches has affected your ability to do longer work?

Noam:
Sorry, how do you define longer work, in that sense?

Christine:
It sounded like you were saying that by getting very quick responses through an in-product survey tool or something like that, that trend was creating tension with investing in work that just naturally takes longer. Was that what you were saying?

Noam:
No. I think having those tools has shifted expectations in terms of timing, in terms of when people expect work to be done and on what timeframes. I don't think it's shifted expectations in terms of, again, to your point, the fuzzy or early research where it feels like there's more pressure; it hasn't shifted the balance of the types of projects we conduct, or the expectation around the balance of all those projects. I do think there is a higher expectation than, say, five or 10 years ago to complete an evaluative project with more speed and more efficiency. I also think, again, to your point, Christine, that it's often those fuzzier, early projects that people are the most excited about. The executives come up with a new hypothesis around maybe a potential new market, a potential new vertical, a potential new area, and want to dive deep into it to figure out if there's an incredible business opportunity there.

Noam:
So a lot of the pressure does lie in those sorts of projects. And they are very challenging, because you cannot typically complete them by asking people three questions in an in-product survey.

Ben:
I want to do two more things with our time together. One of them is to probe whether any of you have found challenges with doing mostly remote research now. Has the pivot to platforms and services that can help you run diary studies, recruit, and do exit-intent surveys changed the role of time and timelines in your work, if at all?

Janice:
Yeah, I can go first. The reason I was a minute late was that I was in a scoping brief meeting, trying to figure out how we're going to do a new research effort. I think that the new constraints we're working within have made us need to be more intentional about things that I would have always wanted us to be thoughtful about anyway. And so now it's almost like we have this new bucket of COVID complexities that serves as a further reason why we should be specific about the scope of focus for a project, or how we're going to do something. And now it's almost like everybody understands that things will take longer, and so it becomes easier for me to make the argument. That's why I have this long list of guardrails for things that I'm not going to be able to touch in this research study, and why it's even more important for us to take an iterative approach and focus on this to start.

Janice:
I know that these other things are great questions, but we can't do them all at once, because we can only handle so much complexity, and the complexity bucket is already full with the COVID complexity. So let's try and be really focused on the research goals.

Noam:
I think one thing that people trying to break into UX research, or early-career researchers, maybe don't fully realize is how pivotal collaboration, and relationships, and partnerships are to the success of research. I get asked much more often about tools and methods than about how to build better relationships, or communicate better, et cetera. I do personally feel that COVID has introduced some real challenges when it comes to collaboration. And the research challenges really depend on how research was conducted beforehand, because at Wealthfront, where I work these days, we were mostly conducting remote research anyway. So that hasn't changed. What has changed is our ability to collaborate, and partner, and build relationships. And we're working very hard to keep up the incredible level of collaboration and partnership that we had before COVID. It's a work in progress. We've made a lot of advancements and good steps in the right direction, but it's tough, in my opinion. It's tough.

Michelle:
Since COVID, just by happenstance, I launched the first Curiosity Tank classes a week after shelter-in-place. So I've spent much more time in the education space in the last six months. And I would say the number of people interested in increasing their research skills, learning how to ask better questions, and wanting to become more confident in the space, whether as a better stakeholder, or a better collaborator, or a better researcher, or a planner, or [inaudible 00:45:10], or a recruiter, has been extremely eye-opening to me. Second, on what you said, Noam, about the collaboration and the partnership.

Michelle:
I think many people think about doing research as a series of steps to be followed, with some creativity in there, but there is a huge opportunity to educate the newer generation about how important it is to be likable, available, flexible, agile, friendly, and educational, the whole collaborative aspect, for the sheer reason that most people initially think of research as a roadblock. It's either going to cost too much or it's going to slow us down. So you're coming into it sort of from the back end. And I like to joke, "Wonder Twin powers, activate: form of the most helpful researcher in the world," so we can enable people and remove those roadblocks. That collaboration and partnership, if I could take out a billboard for it, I would. It's super helpful, and it's much harder to do in a remote setting without the water-cooler kind of talk.

Christine:
Yeah. We were also doing mostly remote work, I think, before shelter-in-place, so I don't know that it's changed a whole lot. I think the only thing that feels a little bit different is we've made more investments in asking people to do pre-work, making the types of activities where we have-

Michelle:
Are these your stakeholders or participants?

Christine:
Participants. Because a lot of times remote conversations tend to take a little bit more time, or you're just trying to get into the meat of it a little bit quicker. It's a little less awkward to collaborate on something where you're both manipulating it, or kind of communicating visually together. We have done a lot of fun things. It tends to be a little easier because we do a lot of research with Figma users, so they're extremely comfortable in visual creative tools, and we don't have to do any kind of here's-how-the-tool-works upfront training. But I've always enjoyed those activities a ton. So to see them accelerating, I'm like, "Yes. More and more."

Michelle:
That's so fun. If you have any interest, I'm sure there's a whole episode in sharing what these activities are and how you do them. I would love to learn from you, and I'm sure many other people would too, Christine.

Christine:
We can share them publicly as public Figma files, where you could just grab them, copy them, change them, and do whatever you want with them. We're getting quite a collection now. I've enjoyed it because, speaking of stakeholders, stakeholders really get into it too. I think it feels like a place where they can really actively shape what the participant will talk about and what they will make together.

Ben:
That's great, Christine. Yeah, if you wouldn't mind passing those along, we'd love to share what we're able to with folks who are interested. And I think that's where I want to end. Thank you, for those of you who are still watching. I think we've interrogated and unpacked some ambiguous and complex ideas. I'm wondering if each of you might have one last piece of advice about a tactic or a strategy that someone can employ or think about using. You have implicitly mentioned many of them: relationships, reframing time, confidence. Are there other things you've used in the past to either buy yourself some more time, or reset the conversation with a stakeholder or collaborators so that a little more time can be used? Anything for someone out there watching who's thinking, "Yeah, okay. But tomorrow I have to advocate for a method, and I'm still kind of bumping up against time." Things that they might be able to do.

Christine:
I think the easiest way to get out of the time trap is just to be proactive about what research you think the organization should do, as opposed to getting requests from people, because by the time you get requests, chances are people already need an answer. And so the more you can say, "Oh, I think there's actually a risk coming up on the horizon. I want to start looking into that in the way that makes sense to me," the more you just bypass that whole negotiation.

Michelle:
I can. At a more tactical level, there are things we can do to just shore up the research process: building participant pools and having a pool of people who already meet our criteria. I like to joke, we have warm bodies waiting. That in and of itself can really save a lot of time. It doesn't take that much more of an investment to build and manage a pool, and it pays off in spades. And shoring up your analysis and synthesis process with some debriefing tools and agendas for collaborative debriefing: what the expectations are before you show up, when you show up, and what we're going to leave with. The beginning and end of the process tend to have the most opportunities for improvement.

Janice:
For me, I think one thing I would share is kind of a personal mantra that I have when I'm working through stuff. It's not very catchy at all, so I mostly just say it in my head: iteration takes the pressure off. And so when you're under a time constraint, I try to remind folks that we have the opportunity to do more than one round of research, talking about the way sequencing can help. And actually, the gap between one study ending and another one starting is an opportunity to pivot, and reflect, and think anew about what it is we want to do next, versus looking for silver bullets or trying to do everything all at once.

Noam:
That's a fairly tactical one. I think that, similarly to how Google Slides or other such tools are solid for presentation but, in my opinion, not ideal for ideation and collecting the ideas you want to present, I don't think exploring questions is something Google Docs or other document-based tools are great for. We're all fortunate to work with really talented designers, and I'm very thankful for tools like Figma, by the way ... thank you, Christine ... for enabling me to think more visually when it comes to questions, and unpacking questions, and the universe of uncertainty in front of me. By doing so in a more visual sense, it's easier to calibrate what sort of timing and what sort of effort you'd need to, again, reach a state where you can confidently answer questions. So I would encourage people to explore questions more visually and, talking about tools, which we could discuss quite a bit, find the right tool to explore questions, and then you can calibrate your timing accordingly.

Janice:
I love that, Noam. I'm just going to piggyback real quick. There's a book I've been reading which I love, and I think it kind of fits in with what you just shared. It's called Communicating the New by Kim Erwin. In it, she talks about communication myths, and one of them is that, when we're not more intentional about it, we assume that communicating is for persuading. But what if you're communicating to explore, or to think through something together? And so I think that the difference between a collaboration space and a slide deck totally fits into that mentality that Kim Erwin brings up.

Ben:
And I have heard, Noam, from lots of product designers and managers on smaller teams who don't have dedicated researchers, who are themselves doing that exploratory and evaluative research, that they apply the visual tool of a roadmap to the research they're doing so they can keep the engineers, the marketers, and the other folks involved. So even if it's just several cells that say "fielding here, design here," it helps, again, fit the research into a sprint cycle, if they're in one. It says, "Here's when your sprint begins and ends, and here's when I'm planning to do the research to inform the work that I have to give you." So I plus-one that. I've heard it from a lot of folks. That's great. For those of you watching at home, I could talk with these people all the time, and behind the scenes I've bothered them quite a bit. So I cannot thank you enough, Christine, Michelle, Janice, and Noam. Find these folks on LinkedIn. They write and speak often about smart things. Find the teams, and the tools, and the things that they work with, because they are informed by the sharp thinking that you've heard today. So thank you all for being here. It's been a great pleasure, and I hope we get to cross paths physically in the not-too-distant future.

Christine:
Thank you.

Janice:
Thanks for having us.

Ben:
You got it, friends. Take care, okay? Be well, and we'll talk soon.

Noam:
Thank you.

Ben:
Bye, everyone.

Janice:
Bye.
