Beta Testing: A Mixed-Methods Playground

Beta testing offers ample mixed-methods opportunities, providing rich insights on lean timelines.

Words by Tony Ho Tran, Visuals by Thumy Phan

Beta testing is an often underutilized—but impactful—tool for research.

That’s because it goes beyond usability testing, letting you see how your users actually interact with your product. And, perhaps more importantly, it shows you how your users don’t interact with it. These insights can help shine a light on those persistent yet aggravating blind spots that often arise over the course of a project.

And when beta testing is led with research in mind, it provides rich mixed-methods opportunities so you can glean qual without sacrificing the quant (and vice versa).

That’s the heart of the talk Abby Hoy Baylen, UX Research and Copy Manager at Later, gave at the 2021 UXR Conference. She helped establish the UX research and copy practice at Later and build better tools for their social media management platform using beta testing.

Now she wants to show you how, too. That’s why we recently caught up with Abby to discuss her career, what UX research looks like at Later, and the opportunities that beta testing can help create.

dscout: How did you end up at Later?

Abby Baylen: I was doing global research at HSBC for seven years before I joined another enterprise company called Sage. I did a lot of good global research there as well, but unfortunately, they had to go through a reorg. As part of the reorg, the whole research team was let go.

That was a real reckoning for me. It burst my bubble in many ways because not only was I unemployed, but I was a part of this corporate enterprise world for over a decade. I had never bothered to look to see what was outside of it. When I landed at Later, it was a real pleasant surprise what the startup world can be like—how refreshing, liberating, and freeing it was.

What were some of the differences between your previous enterprise world and the startup world?

Something that I really loved was the sense of autonomy that I got within a smaller-sized organization. I was able to run things the way that I wanted to and have access to incredibly smart people.

I think it's changing now with the whole climate around research and how there's so much cross-pollination and cross-functional collaboration, but I think people used to really hold on to access to certain things. There's obviously a certain level of power that one would have within those enterprise organizations with not giving access to key information.

Often, I would come to dead ends if I wanted to find out certain levels of data. I'd have to jump through hoops and ask some source manager—and I don't even know if I'm heading in the right direction. But with a small organization, I could just beeline it to so-and-so's desk who was the expert, and I could find out the information immediately.

Along the same lines in enterprise organizations, you just don't have the same level of access to tooling, so you have to jump through hoops and get approvals. Then it has to be approved from a technical standpoint, "Will I be violating some internal rules by using this particular tool?" A lot of organizations, especially financial ones, don't let you use certain websites.

Right now, everything is SaaS, so everything is online and the amount of flexibility in tooling is huge.

Was there much of a research practice when you showed up to Later or were you patient zero?

When I joined, the person who was running research before me was our amazing CEO. He has a real passion for UX and making sure our product is easy to use. I couldn't believe that he was doing research, writing copy, head of product, plus running the company. That’s typical of the startups though!

I took over the UX writing and the research when I came on. I don't have a background in copywriting, but because as researchers we tend to communicate a lot through written means, it was a natural fit in some respects. I think good UX writers tend to have an innate level of empathy. They think of their audience, and researchers do the same. I'm very fortunate right now because I have a UX writer on my team and she's fantastic.

It’s often the case for startups that a lot of people wear different hats—especially in those beginning stages. What does the practice look like now?

We have two wonderful user researchers and another qualitative researcher who is shared with another team. By the end of the year, I hope to tack on a few more user researchers. We reside under product right now and work closely with the product owners, UX designers, and visual designers, but I imagine we’ll evolve into more of a service model soon.

You're giving a talk at the UXR conference called, “The Research-Led Beta is a Mixed-Methods Paradise” that goes into the role beta testing plays in your research practice. Why did you choose that topic?

I had never worked with betas until I joined Later. I never knew about them and never had the opportunity to participate in one. I always thought it was a development thing. After joining Later, it’s now something I’ve had a lot of experience with.

Betas are really essential for the organization because we basically put out release-ready versions of features to specific segments of our users and have them test it out. It's fantastic because it helps us test things at a larger scale. Not only do we get to see it in action in the wild so we can find and fix bugs, but it helps us to understand usage patterns. How are people using it? How does it impact other areas of the product?

And there are a number of other benefits to it. For example, as part of participating in the beta, we get people to complete a survey that allows them to opt into the beta and get access to it. But as part of the survey, we gather requirements for future features. That helps us to understand existing practices. How are they doing things today and how could a future feature help them?

It's interesting because when I first started, I took over running the betas from the CEO and from the UX designer at the time. It was very quantitatively focused. We would get the data from the surveys and then we would look at usage data so we could see where people were tripping up.

I ran a few and I thought, “Something's missing.” It was just very automatic to me. We're not hearing anything from the users. So, what I did was that I added a component to capture all of the qualitative feedback.

This is where the mixed-methods piece comes in. That’s when we would run the user interviews. From those interviews, it would be a mix of some usability testing the beta features, but also using that as an opportunity to test any other future features that we could be working on using the low fidelity designs that we had.

That's why I call it a “mixed-methods paradise.” There's so much that we could fit in and experiment with and try. It's a really exciting fertile ground for us to play with from a research perspective.

Not everything has to be aesthetically pleasing, because aesthetically pleasing doesn't mean easy to use.

Abby Baylen

I’d imagine there was a big in-home element to it.

It is a lot of in-home remote research. Once they've opted in, as part of that survey we ask, "Hey, do you want to participate in future research sessions for this beta?" If they say, "Yes," then I contact them and schedule something through Zoom, which they will usually take from home due to the pandemic.

If they haven’t used the feature, then I get to watch them do it for the first time. If they have used it, then I get to hear feedback from them and see how they’re using it. And from those pieces, you get to learn so much more than just what you get from the quantitative portion of it.

Once we ran a beta around our Hashtag Analytics tool—an area within our analytics tool that's focused on hashtags. The data in the tool itself had different levels of highlights. You had a darker color highlight if your hashtag performed well, and then a lighter color if the number was lower. I asked them, "Which are your highest performing hashtags?" and they would scroll up and down and scan and say, "This one is high. And then down here, this one is high," etc.

But at the top of the table, there were sort buttons, and no one was picking them up. The interesting thing is that because no one noticed those sort buttons, Hashtag Analytics just made for a very frustrating experience.

But the crazy thing is that this same design was already applied to the other analytics tables that we had. So, we had multiple pages around analytics where you could sort, and it was all using the same design—but we had no idea people weren't picking up on how you can sort until we ran this beta and talked to people personally about it. It indicated a problem, not only with this set of analytics, but it indicated a problem with the entire analytics design.

It was the tip of the iceberg.

It really was. Internally, there are sometimes heated discussions between design and UX about how things should be displayed. And I think this was another key reason why we should be talking to people, because it informs everything from a design perspective.

Not everything has to be aesthetically pleasing, because aesthetically pleasing doesn't mean easy to use.

What advice do you have for researchers or designers who might be using or thinking of using betas for the first time as a part of their process?

If this is something that you would want research to make an impact on, you have to find the right product stakeholders and understand what the beta process is. From there think big and pitch big, but be okay with starting small. Often for other organizations, if they haven't had research involved with betas, it might be something that they're a bit more cautious about because it's traditionally a product-run initiative.

So, I would advise to plan big, but just remember to focus on the key areas that you can make the most impact on. In this case, I think a lot of the impact that researchers can have would be around the qualitative aspect and doing the usability testing to validate the success from an experience perspective. From there, start to go bigger, and try to understand processes of users. Try to also see if you can get involved from a quantitative perspective.

Think big and pitch big, but be okay with starting small.

Abby Baylen

What’s a question that you wish people would ask you more?

I'd say it’s, “What advice would you give to new mothers?” I know it's not work-related, but we're in a time, with COVID and everything, where there's all this focus on self-care—and there should be, because I think a lot of people are suffering mentally from this situation that we're in.

And I'd say, especially for new mothers, do the one thing that gives you peace and sanity every day. That’s a form of self-care. For me it was taking a shower on a daily basis. That was my time and my peace; it renewed me.

Self-care is something I try to impart onto my team too. I think as researchers, we tend to really thirst for a lot of information. Sometimes this means overexerting ourselves, and really, it's a fast track to more mental exhaustion. That's more than we should be handling during these crazy times anyway.

And also see a chiropractor.

A chiropractor?

Yeah. Our spine is core to our central nervous system. If we're sitting at home a lot, we're probably not set up in the best way possible. And I think our bodies suffer for it.

Take care of your back and your mind. Great advice.

Do it all for your health. The healthier we are, the better research we can conduct. And if we are the voice of our users, then the healthier we are, the stronger the voice our users will have.

Tony Ho Tran is a freelance journalist based in Chicago. His articles have appeared in Huff Post, Business Insider, Growthlab, and wherever else fine writing is published.
