A/B Testing for User Researchers: A Low-Lift Way to Answer Quick Product Questions

Add A/B testing to your "save time, solve the problem" arsenal. Here's what you need to know to conduct your first test.

Words by Nikki Anderson, Visuals by Allison Corr

One thing I have learned over the years is that you don't have to research everything.

And when stakeholder questions can best be answered in other ways, like surveys, unmoderated testing, and heuristic evaluations, we free up time to do good research on their thornier questions.

One way to get answers quickly, and support teams effectively, is through A/B testing.

What is A/B testing?

A/B testing is a way of confirming which version of something works better for users and the business. You take two versions of one asset, pit them against each other, and see which comes out on top.

For example, you might send two versions of a newsletter, each with a different subject line, to two different sets of subscribers. Then you can analyze which one performed better.

When deciding between two concepts or ideas, you might think of usability testing. In usability testing, you can put two different prototypes in front of a user to see which prototype performs better from a usability standpoint. While the concept is similar, A/B testing serves a different purpose.

A/B testing compares incremental, straightforward variations. Unlike usability testing, you don't compare two completely different versions of a website or prototype. A/B testing is data gathering: the metrics show you the impact of the variations, but they can't tell you why. And if you change too many things at once, you won't know which change contributed to the winning version.

The best way to understand A/B testing is through an example, and I am sure you have seen many (maybe without even noticing)! One of the most significant examples comes from Humana, a healthcare insurance provider. They wanted to test the banner image on their homepage, as it is the first thing a user sees when they arrive on the website. This simple A/B test led to a 433% increase in clickthrough for the company.

For Version B (the challenger), they made incremental changes to the banner:

  • Reduced copy
  • Created a more persuasive CTA (call to action)
  • Increased clarity of messaging

All of these are small changes, but they made a huge difference.

You can also test more than just two variants; this is called a multivariate test. A simple way to understand the difference between these types of testing:

  • Single variant test or A/B test: Does this CTA perform better with a green button or a text link?
  • Multivariate test: Does this CTA perform better with a green button, red button, or a text link?

When to use A/B testing

A/B testing may feel like a simple tool, but there are times when it's the right fit and times to avoid it.

Use A/B testing when you are:

  • Experimenting with aspects of an interface, and seeing which simple questions you can answer before putting effort into more complicated research
  • Questioning which drives more sales for a specific type of product: a discount or a promotional gift
  • Testing small changes to content copy
  • Debating whether to add small features to your website or not

Don't use A/B testing if you are trying to:

  • Compare two completely different versions of a website, app, or prototype
  • Look for bugs or usability issues
  • Evaluate a site with fewer than 20,000 unique monthly visitors (you won't hit statistical significance)
  • Validate your assumptions. You need an informed hypothesis.

Some examples of A/B testing include:

  • Testing two versions of push notification copy to determine which brings more engagement to your product
  • Experimenting with campaigns (similar to Humana) by testing if photo + text or just text drives more clickthrough
  • Deciding between two (or more) different photos for a campaign, promotion, or banner
  • Trying to understand the best timing (day of the week, time of day) for a promotion or a campaign
  • Choosing whether or not to include a new, small feature, such as a quick view of products
  • Seeing which CTA button colors draw the most attention and clickthrough

How to conduct an A/B test

A/B testing is an excellent tool for user researchers. As researchers, we are familiar with building hypotheses and choosing success metrics. A/B tests are also a time to flex the statistical significance muscle. There are several steps to running an A/B test. You can either do these on your own or work with a product manager.

Steps to run an A/B test:

  1. Identify a problem and hypothesis. A/B testing follows a basic recipe. Begin with a question you'd like to answer from data or user research. Develop a hypothesis identifying what appears to be the best solution to your problem. You can write the hypothesis in an if/then format, for example: "If I [change X], then [Y improvement] will happen."
  2. Pick a variable. Based on the problem, look at the various elements of your product and brainstorm design, copy, or layout changes that may lead to a positive result. Other things you might test include email subject lines and images.
  3. Pick a metric. The metric shows you whether one version is better than the other, so choose the measure that reflects both what isn't performing well today and the improvement you want to see. For example, if you want more users to sign up for your campaign, you would track the clickthrough rates of the two variations. Set a goal for this metric, such as increasing clickthrough rate by 5%.
  4. Create a 'control' and a 'challenger.' Set up the unaltered version of whatever you're testing as your 'control.' The control is the unchanged product as it currently exists. From there, build a variation or a 'challenger.' The challenger is the version with the modification you determined earlier.
  5. Determine sample size. What number of users will offer statistical validity? What percentage of your daily/weekly/monthly user base/subscriber list/active members will make these results conclusive? You should also make sure to run the test for as long as you can and not jump to any conclusions ahead of time.
  6. Split the control and challenger groups equally and randomly. Dividing the groups evenly keeps the comparison fair: if more people saw version A than version B, your metric would reflect a skewed measurement. Splitting randomly matters just as much, because if only specific demographics see version A versus version B, you won't be able to draw conclusive results (see the assignment sketch after these steps).
  7. Define the confidence/significance level. Start with the goal you set earlier, such as the 5% increase in clickthrough rate; if the change improves clickthrough by only 2%, do you care? Then aim for a confidence level of at least 95% to 97%. A/B testing case studies show this is the best way to get accurate results and lower the chance of random occurrences influencing the test.
  8. Measure the significance after the test. Once you've seen which variation performs best, run a test of statistical significance to check whether the difference is real; the result helps you justify whether to make the change. You can calculate this by hand, but there are plenty of free online calculators where you can plug in your numbers (a minimal worked example follows these steps).
  9. Take action! If your B version meets your threshold for effectiveness, and if it supports your hypothesis, go with it! If the answer is a no, or a maybe, stay with version A, or create and test a new hypothesis.
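To make the even, random split in step 6 concrete, here is a minimal Python sketch of one common approach: deterministic hash-based bucketing. The function name, experiment label, and user IDs are illustrative assumptions; most A/B testing tools handle this assignment for you.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-banner") -> str:
    """Deterministically assign a user to 'A' (control) or 'B' (challenger).

    Hashing the user ID together with the experiment name gives a stable,
    roughly 50/50 split without storing any assignment state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same user always lands in the same group for a given experiment.
print(assign_variant("user-1042"))  # returns the same answer for this ID every time
```

Because the bucket depends only on the user ID and the experiment name, a returning visitor keeps seeing the same version, which keeps the measurement clean.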
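For steps 7 and 8, the free online calculators mentioned above typically run a two-proportion z-test. The sketch below shows that same calculation in Python as an illustration, using made-up visitor and click counts rather than real data.

```python
from math import sqrt
from statistics import NormalDist

def ab_significance(clicks_a, visitors_a, clicks_b, visitors_b):
    """Two-proportion z-test: is the challenger's clickthrough rate
    significantly different from the control's?"""
    p_a = clicks_a / visitors_a
    p_b = clicks_b / visitors_b
    p_pool = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p_value

# Hypothetical results: 10,000 visitors saw each version.
z, p = ab_significance(clicks_a=480, visitors_a=10_000,
                       clicks_b=560, visitors_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 clears a 95% confidence bar
```

If the p-value comes in under 0.05, the result meets the 95% confidence level from step 7; otherwise, treat the difference as noise.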

It's also essential to test all the variations of your site simultaneously and give yourself enough time to gather a large sample base. Depending on the business size and amount of traffic, this could range from hours to a few weeks. If your business doesn't get a lot of traffic, it'll take much longer to run an A/B test.
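If you want a rough sense of what "hours to a few weeks" means for your own traffic, the standard two-proportion sample-size formula gives a ballpark. The sketch below is a simplified estimate, and the baseline rate, target rate, and daily traffic figures are invented for illustration.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, target_rate, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a lift from
    baseline_rate to target_rate at the given significance and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
    p_bar = (baseline_rate + target_rate) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(baseline_rate * (1 - baseline_rate)
                                 + target_rate * (1 - target_rate))) ** 2
    return ceil(numerator / (target_rate - baseline_rate) ** 2)

# Hypothetical: 5% baseline clickthrough, hoping for 6%, 2,000 visitors a day.
n = sample_size_per_variant(0.05, 0.06)
days = ceil(2 * n / 2_000)
print(f"~{n} visitors per variant, roughly {days} days of traffic")
```

Smaller expected lifts or lower traffic push the required run time up quickly, which is why low-traffic sites struggle to reach significance.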

It is also crucial to run only one A/B test at a time on a given campaign. For example, if you A/B test an email campaign that directs to a landing page at the same time that you're A/B testing that landing page, how can you know which change caused your metric to move?

By working with others (such as product managers) and using dedicated tools, you can streamline this process. If it's your first time running A/B tests, watch someone else run one first. You may not need to be part of the entire process, but you can help teams develop hypotheses and variations based on user research findings.

Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs. 


To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.
