
5 Tips to Increase the Impact of Your UX Surveys from Lauren Isaacson

UXRs are responding to a rapidly changing experience world, and with that comes mixing methodologies to capture as much of the picture as possible. Go from stale to stellar with these tips.

Recapped by Ben Wiedmaier, Visuals by Emi Tolibas

This is a recap of a more comprehensive workshop on impactful UX surveys. You can stream that workshop on demand here.


First, why surveys for UX?

In this time of ever-shrinking deadlines fueled by design sprints and innovation cycles, folks are awash in data and sometimes limited in insight. Surveys—and the quantitative data that come with them—can be a way to advocate for a user in a succinct, on-time way.

Yes, there is an overreliance on surveys and quantitative approaches for questions that involve context, ambiguity, and perspective, but part of being a UXR is leading with empathy, and that goes for your stakeholders as well as your customers. Finally, pairing surveys with your interview, ethnographic, or textual analysis data only bolsters your recommendation rigor. Win-win.

Action item: Consider your stakeholder audience and their level of statistical acumen and expectations. If they're a product, design, or marketing team, descriptive statistics like frequencies paired with user stories might be enough. If, on the other hand, you're working with an eng or back-end team, they may not need more survey data, freeing you up to explore the edges with contextual, in-situ, and qualitative methods.
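If it helps to picture what those descriptive statistics might look like, here's a minimal sketch in Python with pandas; the question column and responses are hypothetical stand-ins for a real survey export.

```python
import pandas as pd

# Hypothetical survey export: one row per respondent (names are made up).
responses = pd.DataFrame({
    "q1_satisfaction": [
        "Satisfied", "Very satisfied", "Neutral",
        "Satisfied", "Dissatisfied", "Satisfied",
    ],
})

# Frequency table for a single closed-ended question: counts plus percentages,
# the kind of at-a-glance summary that pairs well with user stories.
counts = responses["q1_satisfaction"].value_counts()
percents = responses["q1_satisfaction"].value_counts(normalize=True).mul(100).round(1)
print(pd.DataFrame({"count": counts, "percent": percents}))
```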

Narrow, focus, and sharpen (AKA, go short)

This is more of a macro point, in that it encompasses and informs all the rest. Given the omnichannel milieu of your customers and users, surveys programmed for 8.5" x 11" pieces of paper often don't cut it on the tablets, smartphones, ultrabooks, and phablets on which your users are engaging with your experiences.

As such, filter each question, whether closed- or open-ended, through the lens of a mobile device. Does that four-line question stand out? Will the participant even read it completely before answering? Do yourself and your users a favor and strive for pith, succinctness, and mobile clarity.

Program your response options to run vertically, read from top to bottom (instead of the classic professorial four-option grid). This doesn't mean diluting your questions; it means considering the medium.

Surveys have a tendency to be catch-alls for your stakeholders' questions. They want this, that, and the other thing "answered" with your survey. You know, of course, that a circuitous and expansive survey only hurts data quality, so insist on a small set of objectives and strive to keep the survey at the right focus altitude throughout (e.g., "In this survey, you'll share your perceptions of a new shopping experience."). Surveys can be spun up quickly, so advocate to keep it focused.

Similarly, as attention spans narrow, keeping to a general rule of 10 minutes, start-to-finish, is a sound practice. Sure, partial completes are a kind of complete, but full pictures require full answers. A mobile-ready survey that takes less than 10 minutes to complete will get you further than a longer one.

Action item: Pilot your study! It's shocking how few folks (not just UXRs) take their surveys from the participant's POV. Try it on all the modalities possible: mobile and desktop. How do the questions read? Are the response options cut off or misaligned? Most survey software platforms offer free testing, so use it!

Map questions to an analysis plan

This is prudent advice for any research method, but especially surveys (see above on the "include everything" risk). As each survey question is programmed, ask: "If I knew this answer, what would I do with it?" This not only grounds your survey questions in the research objective(s), but it can serve as a handy way to inventory the analyses you'll need to conduct to examine hypotheses and answer research goals.

Exploratory and discovery questions are important to uncovering new innovation avenues, but restrict how many are included in a survey; consider the time of the participant answering as well as your own when it comes time to clean, analyze, interpret, and report the findings.

Action item: Create a separate codebook, or annotate the draft survey with symbols, noting each and every analysis you'll undertake for a question or question set. For example, "Questions 1-5 will together create the persona measure that I'll correlate to question 10, intention to purchase." It might feel tedious, but the up-front work now will save time later when stakeholder asks (or timelines) inevitably change.
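As a rough illustration of that codebook entry, here's a minimal sketch in Python with pandas; the column names and values are hypothetical, and a real export would come from your survey platform of choice.

```python
import pandas as pd

# Hypothetical export: questions 1-5 are 5-point scale items; question 10
# is intention to purchase. All names and values are made up for illustration.
df = pd.DataFrame({
    "q1": [4, 5, 3, 2, 4],
    "q2": [5, 4, 3, 1, 5],
    "q3": [4, 4, 2, 2, 4],
    "q4": [5, 5, 3, 1, 4],
    "q5": [4, 5, 3, 2, 5],
    "q10_purchase_intent": [5, 5, 2, 1, 4],
})

# "Questions 1-5 will together create the persona measure..."
df["persona_measure"] = df[["q1", "q2", "q3", "q4", "q5"]].mean(axis=1)

# "...that I'll correlate to question 10, intention to purchase."
r = df["persona_measure"].corr(df["q10_purchase_intent"])
print(f"Persona measure vs. purchase intent: r = {r:.2f}")
```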

Smooth the path to accuracy

A poorly constructed survey is a rigid, blunt instrument for experience questions, whose data needs are nuanced and complicated. Here is some classic survey advice aimed at injecting flexibility into your questions and making it easier for participants to answer as close to the "truth" as possible:

  • Offer "Not Applicable" or "N/A" whenever possible, and strive to make it the final response option among your list. This coaches participants to expect it and not feel forced to guess or select something that doesn't represent them.
  • All response options need to be both mutually exclusive (there is little to no overlap between any individual response options) and exhaustive (you've covered every single potential response with your options). If your single-select question looks like it's heading beyond 20 responses, it might be best to make it open-ended (remember, mobile-ready).
  • Include an "Other" response, which offers a window into unexpected or edge cases and ensures completeness in response selections. Furthermore, neutral IS an appropriate response, especially if the question is confusing or opaque.
  • Consider your scale response ranges very carefully. Are you using potentially value-laden options like "Irregularly" or "Below Average," and how might that skew your data? Strive to match response ranges to the spirit of the variable under study in the question: attitudes, behavior, perception, value, etc. You may need to create specific numerical ranges to collect valid data.
  • Be specific and reasonable in the question's prompt. Avoid jargon exclusive to your company, product, or experience; remember, you are one of many with whom a user likely engages. Set clear timeframes (e.g., "In a typical week..." or "In an average session...") and be specific about areas of the experience (e.g., "When creating a profile..." or "Times when the app slows to a halt...").

Action item: Just like piloting a survey can build empathy with the would-be participant, READING the survey aloud also works wonders to spot clunky wording, complex response sets, or incomplete prompts. Read aloud to yourself and, if possible, to one collaborator for another gut-check. If it sounds clear, it'll likely read clearly too.

Ethical, not extractive

As with the "banking model" of education (i.e., the idea that a student's head is to be filled with information), a transactional relationship between you and your user won't benefit either side and may even introduce friction. If participants feel less than human ("You're just surveying me and moving on to the next person..."), your data and experience will suffer.

This is often referred to as "extractive" research, where benefits are borne at the expense of another, less powerful party. There are lots of simple ways to equalize the power between you and your participant:

First, and most importantly, provide some incentive. Not everyone can provide financial compensation directly, but gift cards, new or early product access, or even in-kind donations to charities of choice can go a long way toward making your survey equitable.

Second, be as clear as you can up front with the research goals, and offer a debrief session or the chance to view the insights after analysis is complete. A customer might not immediately "get" the impact of their participation, but by increasing the visibility, you're empowering them with agency to learn how their responses are informing the experience.

Finally, be respectful when recruiting, especially if you're "going into" (either physically or digitally) communities looking for samples. Partnering with community leaders, crafting invitational messages for social media groups, or looking for population-specific providers are all ways to respect the nuanced experiences of different communities.

Action item: Work with research operations or recruiters to gain access to invitation messages, sites of sampling, and other aspects that set the tone for how a person comes to know and engage with your survey. Equitable research begins with the first missive, so make sure you're being gracious, clear, and open from the start.

Ben is the product evangelist at dscout, where he spreads the “good news” of contextual research, helps customers understand how to get the most from dscout, and impersonates everyone in the office. He has a doctorate in communication studies from Arizona State University, studying “nonverbal courtship signals”, a.k.a. flirting. No, he doesn’t have dating advice for you.
