
How Teams at Facebook and GitHub Scale Insights with Rolling Research

Join research pros at Facebook and GitHub as they share how you can start, build, or better leverage iterative user research at your org.

Featuring Akilah Bledsoe, Zara Logue

When "doing research" is precious, qual is low-frequency and high-stakes.

But are you regularly connecting with your users? Are you regularly getting enough feedback, and asking the right questions?

Rolling (or iterative) research is easy to implement, closely connects orgs with customers, and shapes essentials like product development & usability.

Join user-research pros at Facebook and GitHub as they share how you can start, build, or better leverage rolling research at your org.


Transcript:

Ben:
Once again I am Ben from People Nerds. Thank you so very much for spending some of your Tuesday with us. This is Scaling Insight with Rolling Research. We are so very excited to have the guests we do today. With that I'm going to introduce our guests for the day. I have Akilah Bledsoe who is a research program manager at Facebook and Zara Logue, a senior manager for user research at GitHub. Welcome to you both and thanks so much for spending some time.

Zara Logue:
Thank you.

Akilah Bledsoe:
Thanks.

Ben:
I was hoping we could start with brief introductions about maybe how you first heard about or thought about rolling or iterative research. We're going to get into much more detail, but I was hoping you might introduce yourself for the attendees.

Akilah Bledsoe:
Yeah, so I'm Akilah. I'm a research program manager. I've been at Facebook about four years now, and we have rolling research programs across our 600-person org. Rolling research is really integral to how we do research and scale, making sure we get insights coming in as quickly as we need them.

Zara Logue:
I'm Zara. I'm at GitHub right now. I used to be at Simple, and I actually worked with Akilah a long, long time ago at a place called AnswerLab. AnswerLab was my intro to rolling research, because a couple of different companies we were working with were running weekly and bi-weekly programs to have regular research insights coming in and to be able to tell their product teams, "Hey, this is how to get in front of customers." I've taken that model and tried to adapt it to the companies I've been at since then.

Ben:
That's great, and these are two folks I've had the pleasure of chatting with for the last few weeks. They have been infinitely patient with my, "Do you want to sync one more time," and I can tell you they have so much great stuff to talk about, so let's get right into it. I'm going to stop sharing this, we'll move into the full view, and we'll start our conversation. Let's begin, if we can, with defining what this is. The three of us went back and forth a few times on whether we should lead with iterative research, just rolling research, or call it rolling research that iterates. Could we talk first about what each of you means when you use these terms, iterative or rolling?

Akilah Bledsoe:
Yeah, I can start. I prefer the term rolling research because I think it captures the various ways a program can be set up. It doesn't narrow it; it keeps it nice and broad so people can pretty much make it what they need it to be.

Zara Logue:
Yeah, and I think when we were chatting about this, it's more about structurally setting something in place and using it to the best benefit of your particular organization. Whether that's with a single team that wants to be doing things on a regular basis, and therefore maybe it's more iterative because that same team is getting insights about a particular topic over and over, or whether it's more of a program that's open to the entire organization, saying, "What do you have to put in front of customers? Let's do it." I think we came to the conclusion that it's more about establishing a structure, trying to fit different pieces into it, and seeing which is going to work best for your particular situation.

Ben:
From our last conversation, each of you mentioned that iterative could be a potential outcome of a good rolling research program. That rolling is another way of saying it's always on; it's about awareness within the org that if folks have a question, they know there's an outlet for it that they can either take part in directly or, again, lean on researchers and thinkers like yourselves. I thought that was an important distinction, or rather an important part, of rolling research: if it's doing its job, it is iterative, because the insights and the data keep coming in. I think, Akilah, you mentioned this: it really should be feeding back into the program, sharpening the questions, looping in different stakeholders, and making the work more productive, if we can call it that.

Akilah Bledsoe:
Yeah, I totally agree. Honestly, it's research on a schedule. It's at a set cadence, you can plan for it, and researchers can plan their research around it and through it. It's something that's stable; folks in the org or on the research team know it's always happening and they can always plug in. Everyone knows we've got users here we can ask questions to, and we can make our products better. I think it's really good.

Ben:
You mentioned... Sorry, sorry my apologies Akilah.

Akilah Bledsoe:
Okay.

Ben:
You had mentioned a few times the importance of deadlines, and I don't need to impress upon any of our attendees that user research is only as good as the eyeballs that see it, or some version of that adage.

Akilah Bledsoe:
Totally.

Ben:
I'm curious if you could talk a little bit more about your transition from frontline, do-it-all user researcher, designer, and thinker to where you are now on the operations side, trying to scale and democratize that, and how rolling research has made that possible. I'm very interested in [crosstalk 00:05:23].

Akilah Bledsoe:
Absolutely. As a program manager I'm really, really big on schedules and timelines; they're a huge thing for me. I'd say one of the best ways to tie everybody in is by being really transparent: we're doing research every Thursday, or research the first Monday of every month, or whatever that might be, and then working back from there to say, "Okay, if you've got a question and you want to get in this new round, you need to put it in a week before." It also keeps your designers and your engineers "on the hook": if you want to be in this, we need the mocks, we need the prototypes by X date, because we have to formulate the questions and test everything to make sure it's working. It really brings everyone together when you have a set schedule for making this efficient and functional.

Akilah Bledsoe:
That way it's literally something you can set your clock by. Everyone knows it's happening; everyone knows if you want to be a part of it, or even just want to know the insights that have been delivered, they're coming two days after the sessions. It's something that can pull everybody together and also make people excited. We know insights are coming the Tuesday of the month, whatever it might be. We know insights are coming, and we know when users are coming in.

Zara Logue:
Yeah, and I think if you can have a schedule for it, it also helps build some visibility around it, depending on whether you're a co-located team or a remote team and how you're going to handle people participating in and observing the research. Simple in particular was mostly a co-located team with some remote folks, so this was about establishing a space for the research to happen in a visible area. It was right as you walked into the big open office space shared by our product people and engineering. Being able to have a physical space that people can walk by, stick their head in, and go, "Oh, what is this person talking about?", or bring their lunch into, can be a really nice thing, because people are like, "Oh yeah, every other week I expect this thing to happen."

Zara Logue:
Same for remote folks too. Being able to do sessions and broadcast them via Zoom so that people can drop in and out, having that ease of access but being able to plan for it, knowing there's going to be a full day of sessions on a particular day, has been really helpful.

Ben:
I'm furiously scribbling here because you're touching on some deeper foundational questions that I'm hearing from both research and design teams, namely: how do we get other folks to care about the work we do, even though it directly impacts the work they have to do? Meaning product teams, engineering teams, certainly more technically minded teams who might not be user-research-first orgs. They might need those data and those insights, but they're not necessarily going to say, "Well, you know what we need to do? Let's do some in-homes." They might say, "Well, I need to know if," or, "I need to know how," or, "I need to know when." That's where, of course, user researchers like the two of you come in.

Ben:
It gets more of those folks involved, and it also feels like scheduling it puts it on the same level as other technical teams in terms of both importance and building that visibility. We do agile sprints because there's so much that needs to be built, pressure tested, shipped, and learned from, so why don't we have research follow that mold? Let's not make research a completely different animal. It doesn't always fit into agile, lean, or sprint timelines, but it can, and rolling research is a way to get yourself into the habit of setting schedules and socializing those schedules throughout the org. Then, finally, it feels like if you do this enough, as I hope the two of you will talk about, you can get to the level where you're asking, or being asked, some of those strategy questions.

Ben:
I hear from frontline researchers all the time, "I get that my work is important, that A/B testing, or prototyping, or usability lab testing is important, and I really want to get to more strategy work." They don't always know exactly what they mean by strategy, but I think it's that itch: "I was a trained sociologist," or, "I've got a background in anthropology," and so my expertise is digging into rich questions like, what does privacy really mean to people? Why might you choose this service over another? Those are questions that can really impact product and design and a whole bunch of things, and it feels like rolling research can elevate the game of researchers such that they might be offered, or get asked, or have the space to say, "Let's do some strategy work." I'm curious if you've had any experience in getting those heavier-lift questions from rolling research programs?

Zara Logue:
Yeah, I think this is a good one, because Akilah and I diverge a little bit. Akilah, do you want to go first, speaking from the big-scale side?

Akilah Bledsoe:
Yeah, so I think for us rolling research is a way to scale up our researchers. There are a lot of us over here, you've got 600 plus, but there are also a lot of things to cover. I think one of the biggest things rolling research has enabled our researchers to do is focus on those bigger questions, because they know the very important intro questions, or the more usability-focused questions, can be covered in rolling research. They know that's covered, so they actually have the space and the mental wherewithal to think about the bigger picture. That's why we have so many programs, I'd say, and why they've taken so many different forms: it differs based on the team, the product you're working on, and where it is in the product lifecycle. That's definitely one of the bigger things for scaling up our research team.

Ben:
Zara how does that flex or change from where you are sitting now at GitHub?

Zara Logue:
Yeah, well, there are a couple of different examples I could give. Coming from Simple, Simple was a really small company, 300 employees or something like that, so this was about how you show the value of research and then get to answer those bigger questions. At first we instituted this regular research program as a way to say, "Okay, executive team, we will do usability, we will test everything before it goes out the door. Maybe not everything, but we'll test things before they go out the door and get that more tactical feedback for the teams: this needs to move around, this doesn't make sense to people, etc." I think it was a way to, over time, say this is interesting, but what are those bigger themes that come out of it, that we keep hearing over and over, about people's mental models around money in particular? That kind of thing. I think the program we had, and having that regular cadence, was helping us build some trust, I guess, with the organization, and think about other questions we could ask.

Zara Logue:
With GitHub it's a totally different animal, so it's nice to have seen a couple of different places. At GitHub we're doing more programs that are for a team specifically but rolling, so it's more specific to a particular product area: who is the customer they need to be getting feedback from, and how do we make sure they're going back to them on a regular cadence? It's a little bit different in the sense that there's already a lot of trust in research at GitHub, so it's fine if you're going after those big questions. It's more about how you help teams do that regularly.

Ben:
Those are both great insights. Both of those comments are important because we've got questions here from folks about the maturity of the org, how that affects the extent to which a rolling research plan can be initially created, and its success, and then there's a question about small teams, teams of one. Bronwyn and Jennifer, we will get to those questions and use them as a nice segue to our next section, which will be... I wish we had some slides, but we don't, so sorry, you all. You've got to deal with all this. Now that we have an idea of what iterative and rolling research might be able to do: fix and get timelines in there, socialize and elevate the research practice, get stakeholders weaved in even at the executive level. Let's talk about how you might do this.

Ben:
I want to give a big caveat: these are two very smart folks who have done this at a couple of different orgs, and my guess is that to a lot of your questions they might answer, "It depends." Like with anything in design thinking or user research more broadly, you've got to try it and see how it works for your stakeholders and your teammates. The beauty of these folks is that they've had experience across a couple of different mature organizations working on different questions. I am sorry if we don't get to the "well, here's the exact way that you can." We like to distill those sometimes into chunkable, listicle-style answers. But let's start with the structure you each talked about. It sounds like to get started with a program like rolling research you need structure, you need timelines. How do you go about doing that? What are some of the first foundational steps that a person out there needs to be thinking about if they're on a small team, or even a big team, and they want to get going with a rolling research program? What's step one?

Akilah Bledsoe:
Zara do you want to take this or you want me to take it?

Zara Logue:
Go ahead.

Akilah Bledsoe:
Okay, so I think one of the first things you have to think about is timing, because the first way to get burned out is doing it so quickly that it's not sustainable. One of the first things I always think about is how frequently this needs to happen for it to actually be actionable, and, for whatever input you're getting from your designers and your engineers, making sure you're not going to burn them out as well.

Akilah Bledsoe:
Weekly rolling research definitely happens, we have some teams in my org that do it, but it is very difficult, I would say, because it is nonstop. Bi-weekly, I'd say, is a little more common. But it also comes down to how much work you need to do, how much head space you have as a researcher, and how much you can lean on an embedded ops team, if you have one, to help support it. Looking at your resources is the next thing. Do I need to go to a vendor to recruit participants, or do I have an embedded operations team that can help me do that? Looking at who you've got to help you get it off the ground also helps you keep it going down the road. Zara, have you got anything else to feed in?

Zara Logue:
I think the only thing I would add is related to the resources part: the more people you have that you can lean on, the more you can minimize the work you're doing around either the setup or the synthesis. I think the danger in these programs is that you're doing these things really quickly and then no one is absorbing what actually comes out of them. For us it's been really important to have the product manager, maybe the engineering lead, and definitely the designer involved in the sessions, in terms of what questions we're going to ask and what's actually getting recorded during the sessions. Having people take notes, and having those debriefs after the interviews, means you can walk out of the sessions, I know we'll get to deliverables specifically, maybe without even having a formal written deliverable, because the people who needed to be there were in the room, and therefore changes were being made, or the conversations were happening, during the session.

Zara Logue:
I think that enables you to do more frequency if you need to, and to really have the right people to move on the insights right away, as opposed to the researcher now having to go create a deck or write up all the things they heard. Having those involved, engaged teams has been very helpful to us.

Ben:
Yeah, both of you mentioned, in our last conversation, reducing the distance: that one of the benefits of these programs can be reducing the distance between the decision maker, stakeholder, whatever noun you want to use, and the researchers creating the potential insights. If they are in the room, as you said, Zara, or if you're using a remote platform and they're added as a viewer, or they have account access and can watch your session on Zoom or dscout or whatever you're using, they can be there and go, "Oh, that's what they're saying." Then they don't have to read the deck where you're saying, this is what they're saying.

Ben:
Zara, you mentioned something the last time we spoke that I wanted to talk about. It's another one of these cornerstone questions I'm getting a lot, and Akilah, you're in it now given your role: this idea of democratizing who we can loop into the research process, who gets to be called the researcher. Zara, you had said that when you're creating these rolling research programs it's a good time to think, "Okay, if I'm a generalist researcher, who do I need in the room with me so I can, not stay in my lane, but flex the muscles that I have built and use the skills that I've crafted and honed, while making sure that the person who has the product questions is there and the person who has the design questions is also there?" I'm hoping you can talk a little bit more about the roles that make for a rolling research program. You touched on them briefly. What are the things you should be thinking about?

Zara Logue:
I think, I don't know, in the past it's been more about having the product manager there, having the tech lead or engineering manager there, or some of the engineers on the team, and having the designer there: that typical EPD (engineering, product, design) squad being aware of the research and participating in it. At GitHub the focus is really on democratizing the skills of research across the organization, so our jobs are as much about training up RPMs and designers to be the askers of questions and the facilitators of interviews as they are about us doing that work ourselves. Rolling research at GitHub has taken on... I've only been there for a couple of months, also a caveat, so this is all evolving. But one program I did was with a product marketing director. He's really interested in learning better ways to talk to customers.

Zara Logue:
What we did was set up a couple of weeks of sessions where we had six people each week. I facilitated the first week. We talked about what came out of it, we did a little findings doc, and then the next week I was going to be on vacation. His job during that first week was to watch me moderate a couple of sessions, and he did one where I was giving him feedback via Slack as he was asking the questions. So it's very much about whether we can train people, whether we can find people who are really adaptable to this type of role, where we're not so much giving up our expertise as applying it in situations where people really can take part in the questions. It's stuff around messaging, or brand, or more usability-type things: things where we do feel other people can learn the skills sufficiently to run those sessions.

Akilah Bledsoe:
I would add one more thing in terms of what I think rolling research can help with. I think it's another way to bring your [inaudible 00:21:32] partners along on the research journey. It's another vehicle to get everybody in the car to see what it takes to get research off the ground, what it takes to get insights into product, and things like that. I think that's a really good use of it, because everybody's looking at that schedule and seeing how tight it is to get the guide written, then get the prototypes in, then do the testing, then write up the sessions, then get the insights, and then get the deck out, or whatever type of deliverable you might need.

Akilah Bledsoe:
It's really a nice, eye-opening opportunity, I think, for people to see all the different things it takes to get research off the ground.

Ben:
We're getting a lot of questions from folks about the specific or general tools that you all have used in the past. Akilah, you mentioned a few times that folks can publicly see the timelines or have them on a calendar. Is this publicly available? Are you adding particular people? Is there a repository they go to and say, "Oh, here are the programs being run"? What are some of the ways that you socialized the structure so that other folks can get involved?

Akilah Bledsoe:
Yeah, so we use Workplace groups. For the different teams that have rolling research programs, either for a specific team or across a product space, we have groups. The different researchers will add their [inaudible 00:22:57] partners, so it grows and grows and grows. It grows organically, and it's really great to see people start tagging each other in the different decks that come out and the different questions that are getting asked: "Oh, this would be helpful for our team." "These insights would be really helpful for us even though it's a completely different area." People keep getting added, and it pretty much grows from there.

Akilah Bledsoe:
In that group we have the schedule posted, and we're saying, "Okay, the next round is coming up. This is what we're planning on putting in there. Does anyone else have anything?" It's also great because we chunk it out, so different parts may not be useful to everybody, but they'll see, "Okay, great, the last 30 minutes of sessions, that's our time to shine and we should jump in." That way we can keep people engaged: if they only have a small amount of time, they know when something applicable to them is happening.

Zara Logue:
Yeah, and I feel like I'm still learning on this one at GitHub because we're a bigger organization. At Simple it was: tell a couple of Slack channels that the research is happening, and then everybody knows. We would publicize it in advance via Slack: here are the upcoming dates, product managers, who wants to take these different dates, and what's going to be your subject matter? Then we had a customer research Slack channel that had the Zoom links and the chitter-chatter from all the people watching the sessions, so you could go in there and follow what was happening during them.

Zara Logue:
At GitHub we definitely track everything in GitHub; nobody uses GitHub like GitHub uses GitHub. Everything is tracked in projects and issues and whatnot, so it's definitely findable by anybody, and most people are getting used to the fact that there's a customer research team, they have a repo, and that's where you go to find this information. That's down a couple of layers, though, so we're experimenting with different ways of letting people know what we're doing. I feel like it's still evolving. It's a little bit of Slack, a little bit of internal team posts, and telling people three times what you're doing so that they remember.

Ben:
We've got some questions from folks about resourcing, which, Zara, you and I... The three of us were talking about wanting to mention on this discussion that for some folks out there this might be the first time they've thought about or heard of rolling research, or it's something they've really wanted to do and they're the only one in their org who, to your point Zara, is the user research team at their place. I'm hoping we can talk about the scale of "I'm a scrappy team of one, here's what you can reasonably expect to do in the next three, six, nine months," and then what it looks like if you're at a place like Facebook or Google or Twitter, where there's buy-in and a maturity around user research, or empathy broadly, wherein we know human insights need to be seen, and yet we still want to do some more rolling research.

Ben:
I'm hoping we can talk about each of those cases, knowing full well that we're making broad generalizations about those kinds of orgs and the kinds of roles at them. Let's start on the scrappy side, since the two of you certainly have... Those were your beginnings, doing this as and how you could. What does rolling research look like when your resourcing is either unknown or low?

Akilah Bledsoe:
I'd say, honestly, at the core, it's figuring out what you can do on your own that won't drive you crazy. The sessions might be once every two months because you have so many other things going on that you can't fit a bi-weekly cadence. Honestly, it's going to be a bit of a guess, and the best way to back yourself up when diving into the pool is setting the expectation that you're still working out the timing and who the audience might be for each round, whether it's going to stay the same or might change each month. You're setting those ground rules so that you have the flexibility to make it actually functional for you.

Akilah Bledsoe:
When you're a team of one everything's on you, so you want to make it the easiest lift you can and build yourself some space. If you're doing it once a month, make sure you're looking at everything else you've got going on and put it on your easy week; if you have one of those, you definitely want to try for that. Be really clear about the expectations for the output ahead of time. If it's just you, don't tell folks you're going to give them a big deck. Don't do that. Tell them you're going to send a key-findings email or your top 10 findings from the sessions. Keep it easy on yourself, and the best way to do that is to set expectations about what you can do, what your timing is, and how you're going to proceed with making it better from there; as you get more resources you can do more things. That's my take on it.

Zara Logue:
Yeah, I think treat it as an MVP, as we would with many other things in a product organization: what's the minimum viable thing we can do around research where we feel it's valuable and we're not burning ourselves out as the sole researcher? I feel like so much of it comes down to recruiting. How difficult is it for you to recruit relevant customers in your organization? I guess I've worked for a couple of companies where I've been very lucky in this arena. At Simple it was very easy for us to recruit customers who would definitely show up, give us helpful feedback, and were articulate and awesome and all this stuff. Pretty much the same thing happens at GitHub, but it's a lot more difficult to reach out to them; we have more policies in place around how we actually touch customers.

Zara Logue:
That is a huge part that I think you have to take into consideration. If you don't have any resources to help you with recruiting, then think about how long that's taken you in the past and what you can front-load. Could you make an effort to do two sessions in a month and have those people already lined up, so that you're not doing the last-minute juggling of "this person canceled, I have to replace them"? Also automate as much as you can for yourself using tools: Calendly, appointment-scheduler plugins, that kind of stuff. They automatically send a Zoom link, they automatically send your NDA for you. Those are tools that have been really helpful to me personally, just doing those sessions at GitHub.

Ben:
Akilah, you mentioned the role of ops in being able to scale something like this. I know that for our customers at dscout, two things. One, we're a research tool for researchers, and as a researcher in that place the levels of meta are quite heavy. One of the things that I and the other researchers here have found successful when we're trying to get a rolling program going... Again, doing research on researchers, is tying it to OKRs and KPIs. I think the two of you mentioned that before. If you're someone out there who's like, "Okay, great, I need to loop in stakeholders, but what the heck should I ask?", and your executive and leadership team is motivated by OKRs and KPIs, start there. What questions can you ask? That opens up a whole bunch. It might lead you to usability questions.

Ben:
I know we have some folks asking, "Is it always usability?" It might be exploratory and discovery questions if you've got an OKR that touches on or needs that kind of data. Then, Akilah, you mentioned this a few times: getting stakeholders to the table or getting visibility with them. If you can say, "Hey, I've got some information that might turn into something that can help you meet this KPI or get further along in this OKR," you've done it for yourself. If you're someone out there who's still churning, and you have access to or are in the stage of creating OKRs and KPIs, start there. Those can be really, really great ways to slingshot you into... Again, as these folks are saying, take a swing at something, do a few, invite some folks, and then see how it goes.

Zara Logue:
Yeah, and I think one place that's been a consistent theme across organizations is the new-user experience. There's always a growth team, always some team that's trying to figure out how to juice that metric, whatever your metric is: monthly active users, engaged users, people who deposit money, that kind of thing. That is usually a huge focus of your executive team and of a particular product team, so that's one way into it too. If there's a particular product team at your company that you think would be an excellent partner for this MVP, and you know and are aware of what that team is trying to achieve and can tie it to that, that's definitely a way to get support later, because they'll say, "Hey, Akilah, you're so awesome, you helped us do all this stuff," in a public Slack channel, and you're getting these kudos. Then other teams are like, "Wait, what? What's happening? How do I get in on this?"

Zara Logue:
Tying it back to the OKRs is definitely a place to show value. It's something that you can tie the success of your program to, and growth teams and new user experiences are always a great place to start.

Akilah Bledsoe:
Totally, plus [crosstalk 00:33:12].

Ben:
Well we've got... Sorry Akilah.

Akilah Bledsoe:
No.

Ben:
I'm curious if we could move to something that some of our question askers are lingering around, and that's... Oh gosh the TV's turned on. I think someone's trying to share in our room. Another one of the fun parts of a startup when you don't have many conference rooms. I'm curious what the outputs look like. We've talked about Slack channels, copying and pasting quotes, I know for dscout a lot of our folks create highlight reels and they drop them in Slack channels, they'll drop a few of the charts in. Again, they're not delivering full reports because they know their stakeholders aren't likely to look at them, but they know their stakeholders have time to go, "Okay, these are the top three features, here's the percentages of each of them, and here's a video of someone trying to navigate it."

Ben:
Like you said, Zara, the onboarding experience really sucks at this one part, here's an example of it. What are some of the deliverables that you have found to be successful, both in looping people in and then starting, Akilah, as you said, that feedback loop, wherein they get questions and, knowing the schedule, can plug into rolling research and make it really cook?

Akilah Bledsoe:
Yeah, again I want to ground it in the amount of lift it could take for the researcher or the group of researchers who are doing this. Sometimes it could be a slightly beefed-up key takeaways document; other times it also depends on what your organization consumes most. We like video over here, we're real big on video, so sometimes it's quick snippets, as you were saying, Ben, of themes that have been pulled out. Most of the time we're pretty much directing people to that: the folks who asked the question, and also other folks we believe could benefit from it who might not have been thinking about it yet.

Akilah Bledsoe:
That's one way we definitely share video is really big, but also that takes time. If you're feeling like that's not something you might be able to do, again that key findings, that quick key findings doc is still very usable and also it's easy to consume. It's very quick so it's not going to take a lot of time to take it and say, "Okay great, so these are the things we need to keep working on." It's easy for everybody.

Zara Logue:
Yeah, and I think not being afraid to repeat yourself in multiple places. Definitely go to the place, like Akilah's speaking to, where your organization consumes the most content. For GitHub in particular it's within GitHub, so we use... If you're familiar with GitHub, we have these things called issues that we use to track the entire project. Here's the discussion about the questions we're going to ask, here's who we're recruiting in a nice little table, we put their recordings in there if we recorded the interviews, and then that's where the top-level findings are recorded. But then you have to be able to drive people to that too, so dropping little snippets in Slack in the team's channel, going into that particular team, even going so far as to comment on their user stories.

Zara Logue:
If they already have a user story around this thing, adding this supporting quote and finding from research. That can take a lot of work to get into a team at that level where you understand where they are in their process, so I think as much as you can keeping it very digestible, very short. People's attention spans are short, it's annoying but it's true, and putting it where they normally would consume it is great advice.

Ben:
Have either of you seen that the bite-size, more accessible styles of deliverables encourage them, them meaning the stakeholders, whoever they may be, to get more involved in the process? I've heard from some of our customers that they like the excitement their stakeholders experience when they see a video reel from dscout and a couple of quotes, and then those stakeholders say, "Well, I could do that. I could write one of those questions," which for a user researcher might be a welcome help. But I'm getting some questions here about, yeah, but how do you both train some of your stakeholders to get involved in the rolling research and then somehow create, not a bright line, but some demarcation between researcher space and teammate, additional-insights-question space? I'm curious how that is happening where you both are.

Akilah Bledsoe:
Yeah, so I think one of the really important things when you're setting up these rolling research programs is being really clear about what should be included and what should not be included. Because if you have a rolling research program that's pretty much for usability, it's iterative, that's it, you should not start putting those foundational questions in there. That's going to break the whole thing. When you're setting it up, that's when you can also start educating your stakeholders that are participating. This is what we're using this for, and when they start trying to put in questions that don't really fit, that's when you can say, "This is a great question, I'm really glad you brought it up, but I think we should think about that in a discrete project. Let's talk about that tomorrow," or whatever it might be. That's where you can really keep those boundaries, to make sure the things that you're trying to tackle make sense in that form, in that venue. And with the things that shouldn't be in there, you're not starting to try to boil the ocean, as one of my old managers used to say.

Zara Logue:
The best term. Yeah, I think that's where your guidance and expertise come into play. We want to encourage people to participate as much as possible. If a product manager comes into a discussion guide of mine and drops in 21 questions, I'm like, that's amazing. It's something we want to encourage, so figure out some diplomatic way to explain, "Hey, this is the scope of things we can address in this 30-minute session." Or if you're doing multiple topics with a single participant, we have to limit it to this discrete type of question. That's where you can help people understand what's appropriate and what might be more appropriate for a different type of research, a different type of methodology, that kind of thing. That's when people, I think, start to... It's, again, building that relationship and trust with people. No, really, I have done this and I know what will work, but we want to channel your enthusiasm into this next project.

Ben:
I've found that when I've talked to folks who are democratizing, whether working on explicit scaling and democratizing projects or doing it as you're talking about it, friend to friend, colloquially, trying to get other folks involved, one of the byproducts is not only understanding what user research or experience research can do, but also what you, the stakeholder, do need a researcher for. Once they get comfortable with the usability questions, sitting in on interviews, adding a few questions to an interview guide, doing some observations with you, or again jumping in on a platform, then it feels like that space, that strategy space, opens. Oh, well, we know a lot about this, but I'm really curious about what led them to it or what happens after, and so that's when you as the brilliant researcher can go, "Ah-ha, well, let's leverage some of our design thinking principles, or why don't I pull out my anthropology textbook and we'll dig into people's minds."

Ben:
Because then your stakeholders have identified naturally the need is there, and again if you can say, "Yeah and actually it fits with OKR number 24." Hopefully you don't have 24 OKRs my goodness, but you might. I've found that that's one of the byproducts is that as you educate your stakeholders on what you can do, the line for what they might not be so comfortable doing becomes clear. I'm hoping we have about 15 minutes left and I want to make sure we get to some of the questions we haven't yet. The last topic area is one that's another... dscout has this new research report called Moves for Modern Research, I'll have Matt drop it in the chat if you haven't checked it out.

Ben:
We asked hundreds and hundreds of researchers like Akilah and Zara: what are the things you're thinking about and wondering about for the future of your org and for the research team in your org? We talked about a few of them. The one that we haven't touched on, and that is, I think, on the minds of some of our guests, is repositories. It naturally strikes me that, okay, if you're doing rolling, iterative research, that means the data and insights pile up. Where do they live? How do folks access them? Again, folks, we're not going to have the answer; your mileage may vary here. I'm hoping we can close with that discussion: what a repository for a rolling research program might look like, what it can't do, success stories, anything you might be able to contribute to that.

Akilah Bledsoe:
Yeah, off the bat I can say where you don't want it to stay is with that individual researcher or that individual product team. You don't want it to stay just with those folks. What we've done is we pretty much take it in chunks, six-month chunks, go back and pull out the key themes that we've seen across those six months. I want to be really clear there, that's not pulling out every single theme, because that is too much. It's really the key ones that are still present after six months of research.

Akilah Bledsoe:
It'll also differ based on how you set up your rolling research project, whether it's for one product team or several product teams. The way the themes evolve will differ, but that's one thing we've done: pull those themes out and try to think of creative ways of sharing them. For one, we had a deck that we shared at the end of each half or each year, whatever it might be. We've also done one with video clips that we shared in different places to get people jazzed up to start the next round, to keep it where we're still doing this, we've got more things, we're working on these things. That's another way to do it.

Akilah Bledsoe:
You want to make sure things don't end up sitting with that one researcher, that one product team. You want to make sure it's being shared more broadly.

Zara Logue:
Yeah, I think again this is an evolving thing at GitHub in terms of how we handle it. At Simple we had a lot of different sources of customer feedback that we were combining into this quarterly voice of the customer report, and so it was a natural place to be like, "This is stuff that comes from qual, this is stuff that comes from analytics." Here are some top 10 customer pain points, top 10 customer loves, that kind of thing. At GitHub we do have a centralized customer research repo. Everything from a formal project is in there, but, sort of like what Akilah's speaking about, we haven't yet gone through and tried to pull out what the common themes are across projects. Yeah, I'm about four months in, so I feel like in a couple of months I'll have done enough research that it would be possible to do that kind of thing and say, "What are these major themes?"

Zara Logue:
As far as making it public we're trying to figure out, I think, the best ways to communicate with people so we can remind hey, here's all the things we've done and here's what we've found. Still trying to figure out what the best venue is for that.

Ben:
That's great. Okay, I'm going to share these folks' emails so that if there's a question that we don't get to or you want to holler at them directly, here are their emails. With that I want to get to a few questions that we haven't yet. There are a lot of questions here about doing qual in quant orgs. I think both of you are at organizations that are human centered, to be sure, and also use quantitative metrics to drive outcomes and resources. Are there any suggestions that you can give to someone, maybe even not just those doing rolling research programs, but those trying to do qual in a more quant or metric-focused place, to get them started doing this kind of work?

Zara Logue:
I can start this one if you want, Akilah. GitHub is majorly, majorly, the pendulum swings towards data. There was a previous iteration of a customer research team that fizzled out, and then I was hired a couple of months ago to restart it. We have some other people who have moved over from Microsoft, which is really cool. Now we have an actual team, which is really exciting. A, it's been about making friends with the decision science team. I feel like research did this really well at Simple; we were a pairing where the data science team would come to us and say, "Hey, we're seeing these particular trends and we don't have answers for them. We have no way of knowing why people are doing this. It doesn't make any sense. What does your team already know that might help us answer this, or how can we partner to figure that out?"

Zara Logue:
It really comes from a place of having that relationship with that data science or decision science team, at least in the size of org that I am in; this might be very different at Facebook, so we'll hear Akilah's answer. It's really about making things clear. What we've done at GitHub very recently is decision science and research are partnering on this decision tree thing: here's the type of question I want to answer, how does that flow through? What can decision science do really well? What does qual do really well? Where are the places where it makes sense to answer it from both sides? I think that's step A: having a really good relationship with whoever your data people are, so that you see each other as partners instead of something adversarial like, "Well, data can answer that better because it has statistical significance behind it."

Zara Logue:
Then the other part, I think, is talking to product managers about the different ways that they are measuring customer success. Whether it's through an experience metric, which honestly is fairly rare at GitHub, or whether it's through more of a typical success metric or a revenue metric, whatever it is. What are the different ways that qual can help inform increasing whatever it is that they're trying to do? And explain some ways, maybe historical examples you might have: here's how we did this at a previous company and it was really successful in this way. I think PMs are biased against qual sometimes because it takes time, you have to talk to people, that kind of thing, but I think once people have that good transformational experience they become advocates for it. It's about finding those first partners who will be really great advocates for talking to customers, for doing qual and using it to balance against whatever metric it is they're trying to drive.

Akilah Bledsoe:
I'd say definitely plus one to all that. In terms of thinking about how you make those relationships, onboarding is really huge. If you know you have a new data scientist joining your team, meeting them to explain what you do, especially if they're not familiar with research, can be really helpful, and you start making a friend from the beginning. It's also a really good training mechanism, so they understand these are the types of questions that I can answer, and this is how I can support you.

Akilah Bledsoe:
Making those swim lanes, as I know folks like to call it, at the front end can be really helpful so you're not trying to forge those when you have to. Making the friends when you don't necessarily need them is the best way because then it's not a transactional relationship.

Ben:
That's awesome: getting involved with their onboarding, and, Zara, what you said, making the decision tree. We do have some outstanding questions from folks curious about how you work with technical teams. Each of you has noted several times the importance of having QA folks, engineering folks, and design PMs in these sessions, whether remotely or in person. I think this is one of those where your mileage is going to vary, because we don't know who those folks are at your org, but they have goals that your research could help support or help them reach. So understand what those goals are, and educate them a bit about the work that you do and how it can play nicely with the multiple regression or structural equation modeling your data science team might be doing, or the types of things your design team is building. That engenders empathy.

Ben:
Remember, if you're a user researcher, one of your core foundations and tenets is empathy; you're really good at empathizing with your user and bringing that story, hopefully, to the decision-making table. I would encourage you to do the same with your colleagues. They are, hopefully, working on a thing that you are working on together, some sort of larger goal. Flex that and say, "Well, you know, this work that I'm proposing would help you in this way, and I know that you need to learn these things and we'll get to them." And, the yes-and. There were a lot of questions here about working with technical folks, and I think the two of you have done a really nice job with that. We've only got a little more time for questions, but I'm curious, for each of you, is there anything we didn't get to that you think someone should know if they're looking to begin or join a rolling research team? Any lasting comments? I mean, we've got some themes, but I'm curious if there's anything we didn't get to.

Akilah Bledsoe:
I wouldn't say this is something we didn't get to, but I want to make sure I say it again. This is definitely not something that is impossible to set up. It's definitely something that you can do; you want to take some time to sit back and think, "How can I make this work for me, and who could I pull along on this journey to make this something that's sustainable?" It's totally doable.

Zara Logue:
Yeah, and to add to that, it can be super scrappy, and scrappy is fine. You don't need a fancy lab. GitHub has no lab. It's all via Zoom, it's all very low cost. We can't give incentives because there's some rule against it. Figure out some other way that you can encourage customers to participate. Maybe you can give donations to a non-profit or something; maybe that's easier for your company to do than it is to give people cash incentives. If you have swag to give away that people desire, that's another way to get them to show up. I think putting all perfectionism aside and being willing to put something together and see how it goes, and being really clear about: if this doesn't work out, we'll try something else. We'll give this two tries, and then if it's not valuable to you, if it's not valuable to me, then maybe we don't continue doing it. Being really clear about expectations and being willing to be scrappy on the setup.

Ben:
That's great, and one of the things that I noted as we were talking, both this time and last, is that I think qualitative researchers in particular, and I'm in this camp, can sometimes be too precious with their research. Oh, but we really need to get it right. And A, there are times when you really do want to make sure that interview guide or that onsite visit outline or that survey, whatever tool or instrument you're using, is right; precision and accuracy are important, especially when you're dealing with themes and exploring notions of culture and perception. And B, your stakeholders may over-index on how precious qualitative is, such that we can get it wrong. I think rolling research is a great way to show them qual is important, it can be speedy and meet timelines, it can index to the metrics and the OKRs that we're using to make business decisions.

Ben:
I think the other thing that I'm hearing from the two of you is that we as researchers need to be okay with running scrappy work, or not doing the perfect interview, because that probably doesn't exist. Did you learn some things that might help another team? Cool. Are you answering questions? Erika Hall's idea: are you answering questions? If you're answering questions, then you're doing it. That's another thing I continue to be struck by: this notion of precious research, and how there are times when we need to be careful with it because the topic might be sensitive or the impact on the business might be strong, but there are more often times when we need to meet deadlines, and if we can inject some empathy into the decisions, cool, we've won.

Zara Logue:
Yeah and I think this is like... Oh go ahead. You're going to wrap up. You're doing the thing.

Ben:
No, no I want you to stay here forever but you all have more important things to get to. Yeah no please, continue Zara.

Zara Logue:
I was going to say, one big hurdle for me personally with this stuff is getting over perfectionism in the discussion guides, formatting, and stuff like that. Also the prototypes; you may not get a lot of time with the prototype. Even if you tell people the prototype has to be ready on Wednesday because the sessions are on Thursday, you may still get the prototype Thursday morning. Just get over the fear; it's okay if you don't know everything. Have a partner on Slack who can be like, "Oh, that's there on purpose. I did it on purpose; here's what they should be seeing." That is a hurdle to get over, and it's real as a researcher who's used to being buttoned up and ready to go. This is flying by the seat of your pants sometimes.

Akilah Bledsoe:
Yep, being comfortable with things being buggy and maybe stopping working. It happens.

Zara Logue:
Yeah, [crosstalk 00:56:02].

Akilah Bledsoe:
Yup.

Ben:
I love it. Well, Zara, Akilah, thank you so very much for being our guests. Thank you again. You have emails for these folks. You can holler at me, ben@dscout, if there's anything we didn't talk about. Thank you so very much for your time, and we hope to see you at the next one very soon. Thank you both.

Zara Logue:
Thank you.

Akilah Bledsoe:
Thank you.

Ben:
Bye you all, take care.
