Getting on-the-ground and closer to your customers as they interact with your digital experience or product is the goal of this approach. The context around a decision or action—the why—can lead to more impactful, longer-lasting improvements—be they to the UI or broader offering. The applications and use cases of digital product testing are varied, so starting with an outcome or problem statement will go a long way toward illuminating how to construct your design.
- Will they like—or even use—this new feature?
- If we want users to start [action], how should the product coach them?
- What is missing from our experience?
Each might require a slightly different set of questions, methods, and approaches. Considerations include:
- Modality: Is this a mobile, desktop, or multimodal experience?
- User base: Does your question implicate current, new, or both users?
- Temporality: Can we "see" the moments of interest in one sitting or do they unfold over time?
- Product proximity: Are we curious about something that happens inside the digital experience or something that might happen before or after someone "lands" there?
Again, before a question is even programmed, ruthlessly prioritize what you need and want to learn from users, why that data is important, and how it helps answer the question. Scope creep is ever-present, and especially so when digital products (and their limitless potentialities) are involved.
Sample study design: Understanding new features in context
Testing a new feature lets you see how a project in beta will actually be received before it’s out in the wild.
Traditionally, research on new features involved sitting in a room with someone while they got a sense of the product. But that only captures first impressions—and first impressions are just the start of a user’s story with a new feature or product.
Using a longitudinal methodology, or a tool like dscout, you can get a variety of insights into a new feature’s reception that more traditional methods can’t. You get a sense not only of what people think right off the bat, but also of their natural usage. When are people turning to your new feature? Does it naturally fill a need? How does their usage style change as the new feature or product is introduced?
In-the-moment data can also give you insight into use cases you hadn’t considered before, and how a new feature slots into someone’s actual, concrete life.
You can also collect feedback after a period of acclimation, rather than only at first exposure, so you can see how participants’ opinions shift once they’ve gotten used to the product.
Below is a one-size-fits-most longitudinal, unmoderated design. As mentioned previously, product testing encapsulates a host of use cases, so modify the parts of this design to fit your specific need, your toolkit, or your most accessible methodologies.
Part 1: Getting to know you
It’s always nice to start with a baseline! This gets you more familiar with what users think of your brand or company before a new product gets thrown their way. Ask them how they use the product, what it means to them, and what other products they use in conjunction with it. This also acclimates new participants to the study before they move on to more in-depth activities.
|How often do you use this product?|
|How long have you been using this product?|
|What role does this product play in your life?|
|What is this product best for, in your opinion? What makes it great?|
|What would you change about it?|
|What other products do you use to accomplish similar tasks?|
Part 2: Meet your new product
This is a technical stop-gap measure to make sure that your product is getting rolled out the way you intend. In dscout, design this as a single-entry part. Here, you want to require participants to complete any technical prerequisites (e.g., restarting their computer, updating their app) and submit a picture or screenshot of your new feature. This lets you make sure everyone’s on the same page before you collect feedback.
|Show us a screenshot of your updated app/new product/new feature.|
|Do you have any questions before we proceed?|
Part 3: Highlights and lowlights
This is a fun inventory-style activity that lets users highlight the best and worst of your new feature or product. It’s better suited to a big change, like a UI overhaul or a new product, and less suited to a small change like a button moving locations. Ask users to submit entries for “highlights” (what stands out as great) and entries for “lowlights” (what stands out immediately as a concern).
If you’d prefer to get this feedback after scouts have had a chance to acclimate to the product and have a better sense of what it can do, feel free to switch this part with part four.
|What feature or element of this new product are you showcasing?|
|Is it a highlight or a lowlight?|
|Take a screenshot of what you’re showcasing.|
|In a 60-second video, explain to us what makes this stand out (either as a victory or a concern).|
|If it’s a lowlight, what would you change to fix it?|
|If it’s a highlight, what about your experience would it improve?|
|Rate the impact on a scale of 1-10.|
Part 4: In-context use
Ask users to capture moments where they use this new feature or product throughout their days. Users describe what they’re trying to do and how the product is helping or hurting their attempts to accomplish their goals.
If you’re interested in broad usage information, you can simply have participants report it via open-ends. For more tactical information about how they’re navigating your product, ask for screen recordings and videos of them accomplishing the task in real time.
|What are you trying to accomplish right now?|
|Where are you? Who are you with?|
|How did you make use of the new feature / product in trying to accomplish this goal?|
|Take a screenshot / screen recording / video of you accomplishing this goal using the product, narrating what you do as you go along.|
|On a scale of 1-10, how do you feel about this product at this moment?|
|What would improve your experience?|
Part 5: Reflection
In the final part, ask participants to reflect on their period of using this new product.
|In hindsight, what stands out to you about this new product/feature?|
|What works particularly well?|
|What would you change?|
|If given a chance, would you keep using this product/feature?|
|How would you rate this product/feature?|
Alternate design: Naturalistic data (the “surprise feature” method)
This is a slightly different version of a similar flow. Instead of telling users they are getting a new update, you simply update their product mid-field and see what happens. Use this if you’re interested in non-primed, naturalistic data from your participants, and in seeing how intuitively your update integrates with existing usage patterns.
Note that you do run the risk of getting little to no data on your new feature if, for some reason, it doesn’t stand out to users in the short period of time it’s available to them. You may need to pivot and direct users to your update if that’s the case.
Note that troubleshooting will also be trickier in this version. You’ll have to monitor screenshots extra closely to make sure the update made it to the participant, without actually asking them about it.
To run this variation, remove part 2 from the above flow and make the moments part a little longer. Collect baseline moments for a few days to get a sense of normal use, then introduce your new update. Use tags to delineate pre-update entries, post-update entries that use the update, and post-update entries that don’t.
You can still use reflection parts at the end of your study.
Use dscout Live (or another research tool that allows for desktop and mobile screen-sharing) to run a more traditional user feedback session. If you want highly tactical feedback on specific actions, invite a few participants to sessions and have them share their screen, walk through the tasks you have in mind, and narrate their processes. This gives you a high level of detail on specifics, which is useful to triangulate with the more natural data you’d get in an unmoderated study.