Have you ever been in that weird middle ground between discovery research and usability testing? Your team has a few concepts in mind, but they aren't fully formed ideas. They want to test to see if they are going in the right direction, but the concepts aren't usable.
For a while, I figured usability testing would be a suitable approach to these problems. I made the designer layer clickable areas onto these prototypes, even when it didn't make sense. I put the low-fidelity ideas in front of users and tried to understand if they thought they could use what we created.
However, we still lacked some information by the end of the usability tests. We had an idea of how users might (or might not) be able to use a particular flow, but still not much insight into the concepts' validity. After some time, I learned about the nuances of concept testing.
What is concept testing?
I love it when my teams come to me with about a million different ideas they want to test, particularly after a brainstorming or ideation session. Once you gather the most impactful and feasible ideas to test, how do you know which one is best for your users?
This scenario is where concept testing shines. Concept testing is when you put early-stage ideas in front of participants to determine which direction the team should take, or whether it is headed in the right direction at all. These tests will help you understand how people feel about your concepts and ideas. The most common questions concept testing can help you answer are:
- What makes this concept appealing or unappealing?
- What, if any, problems does this concept solve for our users?
- What are the gaps in the concept/experience?
- Which concept should the team focus on?
- How does this concept compare to competitors?
- How well (or poorly) are the concept and value proposition communicated to users?
Overall, there are two scenarios in which you can use concept testing: during discovery, and after you have chosen an initial direction.
Concept testing is beneficial in the discovery phase because it helps teams narrow down the scope of their ideas. Say that you are working at a company that provides retirement plans. During generative research, you found that younger people who should be thinking about a retirement plan don't understand the value or how it works.
During an ideation session, your team comes up with several ideas to help communicate the value of retirement plans and how to get started with one right away. Your team has a total of three ideas and isn't sure which one is the right path. Concept testing is a fantastic way to figure out what direction the team should take.
After you and your team choose an initial direction (or two) and finish some low-fidelity prototypes, you can do more concept testing to continue to validate your design decisions. Concept testing can be a continuous process from low-fidelity to high-fidelity concepts. I always recommend starting with lower fidelity prototypes and working up to more polished versions.
Once you finish concept testing, you can try alpha and beta testing if you want to continue testing your product.
Concept testing is not usability testing!
Please keep in mind that concept testing and usability testing are very different:
- You use concept testing to see participants' reactions to general ideas or concepts. You are getting an idea of what people think of concepts.
- You use usability testing to assess how effective, efficient, and satisfying a prototype or product is. You can understand how participants use a given product or flow.
Always think of your goals before deciding on your methodology. If your team has a few different ideas they want feedback on, concept testing is a great choice!
How to run a concept test
Once you have decided on concept testing, there are a few best practices to follow:
1. Set your objectives.
Knowing what you want to learn not only ensures you are choosing the correct method but helps you outline the rest of your test. With clear goals, your tests will have focus, and you will get the information your team needs by the end of the concept test. Some sample objectives for the retirement plan example might be:
- Compare the understandability of the three new retirement plan concepts
- Examine how participants currently feel about signing up for a retirement plan and how they respond to the new concepts
- Identify pain points and confusion across the three new concepts
2. Recruit participants
Once you set your objectives, you have to figure out who you want to recruit. Ideally, you are targeting people who would be using the product or feature in the future. They can be users of your current product/service, or of a competitor's product with concepts similar to what you are testing. Creating a screener survey is the best way to ensure you get the right people. Ideally, you will speak with about ten people per segment to get the most robust results from a concept test.
3. Review the concepts
I tend to ask for the concepts to be completed three to five days before the first test. By having everything ready ahead of time, we have time to review the ideas we will be testing. Examining the concept with the designer gets me familiar with the flow and the information they want from the test.
4. Outline the flow of the test
Once I review the concepts with the team, I create a flow of the test. If you are testing more than one concept, make sure you vary the order. My concept tests follow similar structures and sections. Here is a breakdown for a 60-minute concept test:
- Introduction: 3 minutes about the test, who I am, signing an NDA/consent form, instructions
- Warm-up: 5 minutes for asking general questions to get the participant in the mindset of conversation. My favorite warm-up questions are, "What hobbies do you love?", "What do you do in your free time?", or "What have you watched recently that you loved?"
- General questions: 10 minutes to focus on the problem-space of the concepts. For example, if we were testing different meal kit plans, we would use this section to ask about meal kits or cooking habits, in general.
- Concept A: 20 minutes focusing on concept A. I start with general impressions and have participants explain the concept to me, as they would friends or family members. Then, I dive deeper into the different details of the concept.
- Concept B: 20 minutes focusing on concept B in the same way as above.
- Follow-up (optional): A 5-minute buffer to follow up on the concepts and check if the participant has anything else to talk about or add.
- Outro: 2 minutes of thanking the participant, answering any of their questions, and explaining any next steps, such as when they can expect the incentive.
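The time budget above can be sanity-checked with a few lines of arithmetic. This is just a sketch of the plan as written: the required sections fill the 60-minute slot exactly, with the optional follow-up acting as a 5-minute buffer.

```python
# Session plan from the breakdown above; the follow-up is an optional buffer.
sections = {
    "Introduction": 3,
    "Warm-up": 5,
    "General questions": 10,
    "Concept A": 20,
    "Concept B": 20,
    "Outro": 2,
}

required = sum(sections.values())
print(f"Required sections: {required} minutes")            # 60
print(f"With optional follow-up: {required + 5} minutes")  # 65
```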
5. Create questions
Once I have an idea of the flow, I will create the questions for the general section and concepts A and B. Here are some common questions I ask during concept tests:
- Walk me through your overall impression.
- How would you describe this to a friend or family member?
- Have you seen something similar (ex: competitor)? What do you think of the competitor? What do you think of this?
- Have you used something similar to this in the past? Why or why not?
- Walk me through how you have used something similar to this in the past. What was great? What was terrible?
- What is confusing?
- What is missing?
- If you could change anything, what would you change?
6. Think about quantitative data
Even though we are talking about qualitative concept testing, it is worth considering adding quantitative data into the mix. For example, you can ask participants for a satisfaction score at the end of each concept ("On a scale of 1-7, how satisfied/dissatisfied are you with this concept?"). Additionally, you can follow up moderated testing with unmoderated testing to bolster your numbers.
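Once you have those 1-7 ratings, summarizing them per concept is straightforward. Here is a minimal sketch using Python's standard library; the scores below are invented placeholder data, not real results.

```python
# Summarize hypothetical 1-7 satisfaction ratings collected per concept.
from statistics import mean, median

ratings = {
    "Concept A": [6, 5, 7, 4, 6, 5, 6, 7, 5, 6],  # placeholder scores
    "Concept B": [3, 4, 5, 3, 4, 2, 5, 4, 3, 4],  # placeholder scores
}

for concept, scores in ratings.items():
    print(f"{concept}: mean={mean(scores):.1f}, "
          f"median={median(scores)}, n={len(scores)}")
```

With small qualitative samples, report the spread and sample size alongside the mean; a single average from ten participants can be misleading on its own.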
7. Do a dry run
Once everything is together, conduct a dry run to ensure the flow makes sense for the team and participant. Sometimes dry runs can uncover gaps or questions that we didn't think to ask. No matter what test I am conducting, I always do a dry run before the first participant!
8. Synthesize the data
Once the concept tests are over, I go back to each question I asked and write down each participant's responses. For example, if I asked, "Have you used something similar in the past?" I will tally up how many people did or did not. Then, for the "why or why not" follow-up, I write down the answers and cluster anything similar between participants. This method gives me an idea of the overall responses and feelings toward each concept and which one resonates better with participants.
Moderated versus unmoderated testing
As I mentioned above, unmoderated testing can be a great way to get a larger sample size to answer your questions. Whenever possible, I always recommend starting with moderated testing, as it will give you deeper insights than you can glean from unmoderated testing. By proceeding in this order, you can also clean up the script to make it as efficient for unmoderated testing as possible.
Keep in mind that concept tests are not about preference testing or seeing which concept users "prefer." I have seen many concept tests go awry when people ask participants, "Which concept do you like more?" If you are looking for preference, check out comparative usability testing instead. However, if you are curious about how users respond to and talk about your initial ideas, concept testing will help you and your team move forward with the best decision.
Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs.
To get even more UXR nuggets, follow her on LinkedIn, join her bi-weekly newsletter, or read more of her work on Medium.