
Survey Design Basics for Qual-First UXRs

Learn how to build surveys that work for respondents, yield good data, and help you analyze responses to surface key insights.

Featuring Lauren Isaacson

Product and service improvement often relies on survey data, but poorly composed surveys yield bad data—and bad data leads to poor decisions.

The good news is that researchers and designers already have the skills to make surveys easier and more enjoyable for the respondents.

Watch to learn:

  • Basic and advanced questionnaire design tips
  • How to reduce respondents' cognitive load (making the task more enjoyable)
  • How to ask better questions in usability testing surveys
  • Ways to analyze and track the data for better insights

Transcript:

Ben:

Let me get to the esteemed bio and introduction of Lauren Isaacson, who I've been so happy to work with. She has been nothing but flexible and brilliant in her time. We have a People Nerds profile of Lauren that I'll be dropping in the chat, and I'll drop that link one more time. She is an accessibility thinker, someone who's leading the conversation around best practices for accessibility, and I think you'll hear some of that in the way she talks about survey design.

Ben:

So, she's based in Vancouver. So, any of our Pacific Northwesterners, you've got a friend here today. She's a market and user experience research consultant. She began in LA doing research and brand strategy for digital ad agencies. The thing that I admire about her is that she's worked across a ton of different capacities. I work with a lot of People Nerds who are very siloed, and that's completely fine. Lauren has really gone with the sampler platter. She has led market research departments. She has been a subcontractor for agencies like Blink and Applause, and she has also been a UXR design team of one.

Ben:

She's the current chapter co-chair of the Qualitative Research Consultants Association, which I colloquially refer to as QRCA. I don't know if that's how anyone ...

Lauren Isaacson:

Nobody calls it that.

Ben:

Damn, well, I call it QRCA. So, the Pacific Northwest Chapter of the Qualitative Research Consultants Association, through which I've had the pleasure of working with really sharp, smart people. She's also the former chair of the British Columbia Chapter of the Marketing Research and Intelligence Association. Lauren, I've done far too much talking. Thank you so very much for joining us, and welcome.

Lauren Isaacson:

Great. Thank you very much, Ben. All right, so like Ben said, I'm Lauren, and I'm an independent market and UX research consultant from Vancouver, British Columbia. Yeah, before we get started, let's see if I can get this going. I think I need to click this. There we go. Okay. Thank you so much, everyone, for being here. I hear that a lot of people signed up. I don't know how many people are on the line, but again, thank you for making the time to be here, because you know what? Surveys are very important, but they are often a neglected part of research and marketing. But as a Canadian, I would be remiss if I didn't say: I am so sorry that you are here, because we're going to be talking about surveys, and nobody likes to talk about surveys.

Lauren Isaacson:

This is why it's often neglected: it's something that no one is really that interested in, except for a few very rare and wonderful people who make it their career. A little bit more about me. I am primarily a qualitative researcher, but I spent a significant portion of my career doing quantitative research. One of my first jobs as an in-house researcher was for a company that needed to completely redo their voice of the customer program, and they had a lot of branch offices. So, I went in and did a deep dive on survey research and learned everything there is to know about it. I took classes, read books, and then I attacked their national survey. That was a really great education in how to design good surveys, how to deploy them well, and how to deal with the data afterwards.

Lauren Isaacson:

Yeah, if you can get something like that going, good luck to you, it's quite a challenge. We have a lot to get through, so let's get started. Surveys, why do we need surveys? We love to send surveys, and we love to send surveys for good reasons. I really love this quote by Lord Kelvin. You don't just want to be able to say something's better, you want to be able to prove it. And if you cannot measure it, you cannot say that you are truly improving something. Benchmarking and tracking surveys are key to showing improvement, but those surveys, the ones that are longitudinal in nature, require that the original survey be built to last. It has to be perfect from day one.

Lauren Isaacson:

You can't adjust later on, because if you adjust the survey later on, you are saying it's okay to throw out all of the data that came before. That's why it has to be perfect and exact from the very beginning. Surveys are also used to measure things like market share, customer satisfaction, and brand awareness. All of those things rely on quantitative data. Also, when we talk to executives and people outside the research or UX function, they crave data. Data gives them a sense of confidence and credibility that helps them make business decisions. Sometimes qualitative evidence, reporting that someone said X and Y, just doesn't cut it. Being able to show that a large portion of their user base has this problem or this issue or this need helps make the decision easier for them.

Lauren Isaacson:

We also like to send them because they're easy. The sheer number of vendors available with decent usability is making it easier to get all the data you want. This used not to be the case. It used to be that you would have to send paper surveys, tabulate each and every question on an individual basis with poor accuracy, and everything had to be done by hand or logged into a computer by hand. Getting fast turnaround on data was just not possible. Now, it is. You can get instantaneous feedback. You can get everything you want in a really easy format, but there's a problem with that.

Lauren Isaacson:

I believe that when something valuable, and survey data is indeed extremely valuable, becomes too easy, something's wrong. It means that we're not really paying attention to it. Doing something important requires effort and training and hard work. That hasn't changed. But as a result of the lack of effort that people are making, we hate to get surveys. We hate to get surveys for lots of reasons. For one thing, we get too many of them. Every day we open our inbox to someone wanting our opinion on their product, their service, the meal we just ate, whether or not the bathroom is clean enough. All kinds of things. We are inundated with invitations to take surveys.

Lauren Isaacson:

Another reason why we don't like taking them is that a lot of the people who design these surveys haven't put the effort in. They didn't really consider who they're sending the survey to, or whether it merits the effort it's asking for, and that's understandable. You know what? We're naturally lazy. That's human nature, and phoning in a survey is often understandable. It's not exciting work a lot of the time. We also don't like taking them because they're too long. A while back, Etsy sent me an invitation asking me to take a short 30-minute survey with no mention of an incentive.

Lauren Isaacson:

Now, the wording was, quote, "We want you to take a short 30-minute survey." Now, I love Etsy. Half of my closet is from Etsy vendors. This dress that I'm wearing right now is from an Etsy vendor. But I was not going to spend the time on a 30-minute survey if they weren't going to incentivize me for the effort, so I deleted that email, practically automatically, because I just wasn't going to take the time. Because clearly, they hadn't taken the time to design a good, short, brief survey that people would be glad to take. Speaking of those incentives, we often don't offer any. I had a professor in college who, back when it was paper surveys, would enclose a $2 bill with every survey he sent to a potential respondent.

Lauren Isaacson:

More often than not, people would send him back a completed survey because they felt guilty for accepting $2 for doing nothing in return. So, he kind of put a little Jewish guilt, courtesy of my mom, right in that envelope, sent it out, and would usually get that survey back. These days we don't offer gift cards, we don't offer a service credit, and we don't even offer a chance to win something. This says that a respondent's time isn't valuable, and it shows a lack of respect. We've talked about what makes a bad survey. Let's talk about what makes a survey good.

Lauren Isaacson:

We should be keeping it short. A survey should have a singular focus. I've been on the other side, I've been the client, I've had clients, and I get it. You're starting a survey and it's about oranges, and then someone else on your team wants to throw in a couple of questions about apples. Then a team that isn't even in your department wants to throw in a few questions about bananas while you're at it. These competing objectives balloon a survey. You want to avoid answering different research objectives for different stakeholders. You need to be very focused, and you can't take any prisoners; you have to leave behind anything that you don't absolutely need.

Lauren Isaacson:

You also want to minimize your use of open-ended questions. Sure, we all want to know why, we all want people to talk about subjects in their own words, but those are taxing on the user. It's much easier for them to click a box than to type out an answer, and open ends also extend the length of the survey. So, you need to be thinking about that as well. A survey should be no longer than 10 minutes. All right? 10, maybe you can go up to 15, but the less time it takes to take a survey, the more completes you get and the less non-response bias. And non-response bias is a real thing. Non-response bias means that a large share of the people you sent the survey to don't respond.

Lauren Isaacson:

That is a biasing thing, because you're only hearing from the people who actually have an issue and want to send your survey back. So, it's only the people who hate your service and want to tell you, or the people who really love your service and want to tell you, and even the people who love your service may not bother. Surveys should also be mobile first. This is old data. I wish I had more recent data, but it's really hard to find. 44% of SurveyMonkey surveys were taken on a mobile device in 2017. Think about that. That's almost half of their surveys, and that's in 2017. When they showed where that data was going over time, it was only going up, so I'm sure by now it's even greater than that, considering that for a lot of people, their only access to the internet is their phone.

Lauren Isaacson:

Now, according to research by GfK, another problem with surveys being taken on mobile is that a lot of people will enter the survey in portrait mode, so up and down, and 90% of them [inaudible 00:11:38], and half of that 90% won't bother to switch to landscape. So, if your survey questions aren't built for portrait mode and are actually designed horizontally for landscape mode, you are losing data from a significant portion of your respondents. You need to think about that as well. And remember about keeping it short: we need to think about which questions we actually need. So, what is a need to know and what's just nice to know?

Lauren Isaacson:

A good way to define what is a need to know versus a nice to know is to ask the question: if I had this data, if I knew the answer to this question, what would we actually do about it? If the answer is, we wouldn't do anything about it, it just makes me feel better, or it makes me more confident, or it helps me prove a point, that's not a question you should be asking. You should only be asking questions that give you actionable data. That means having an action plan for each piece of data that you collect. You should also be thinking about making your questions classic tweet length, hopefully: that means 140 characters or less.

Lauren Isaacson:

You can do 280. I've gotten used to 280 surprisingly fast. But if you can keep it really short, really brief, that's a good question. Now, this is an arbitrary rule, but it helps you be concise, and you need to think about being concise over anything else. We should also think about being friendly, not formal. Don't try to use formal language just because this is a survey; it's not Shakespeare. Formal language makes the survey less enjoyable and less understandable for the people trying to answer it. So, if you can be really casual and fun in your language, that helps people take the survey, and it helps them feel better about what they're reading. It helps make it an enjoyable rather than a taxing experience.

Lauren Isaacson:

You should also be thinking about how we can help our respondents give us correct answers. Now, what do I mean by that? For one thing, we need to make our answer options exhaustive. So, do research before you write your survey. Find all of the possible answer options and put them into your survey. You also need to consider your ranges very carefully. Say you want to do a survey on candy consumption. You can ask, how many pieces of candy did you eat in the last week? One to 10, 11 to 20, 21 to 30. People aren't going to see those ranges as a set of numbers that may or may not fit their consumption habits. They're going to interpret those ranges as below average, average, and above average, and answer according to how they feel they fit.

Lauren Isaacson:

Do they feel like they don't eat a lot of candy, like they eat a normal amount of candy, or like they eat too much candy? That's how they're probably going to interpret it, mostly because people are not that diligent about their consumption habits or their behavior. People don't accurately track their behavior on a regular basis unless they're super into that whole quantified self thing, but very few people are, and very few people will track the kind of data that you're probably interested in. So, when you are writing your survey questions, you need to consider whether this is something that people will actually be consciously aware of and able to accurately report.

Lauren Isaacson:

So, you also want to be specific with your questions. Don't expect people to remember anything. Say that you're at a conference and you were asked to take a speaker satisfaction survey. You should see the speaker names and the subjects of their talks in the question, because you're probably ... I mean, you're probably not going to remember my name, but you will remember what I talked about. That would also help. You can also put a link to the conference program in the survey. That way people have something to refer to and can say, "Oh yeah, that person, I really liked their talk," or, "That person, they were kind of okay."

Lauren Isaacson:

You should also be offering an opt out, like other, none, or not applicable, for all of your questions if you can. Only use blank answer options if you plan to analyze that data. Blank answers are often a burden for the survey taker, and you want to minimize their burden as much as possible. It's just like any other user experience. You want to make sure that it flows really nicely and that they can get through it with ease. If they have to stop and write out an answer, and you're not even planning to analyze that data at the end, don't bother. Why are you making them do that?

Lauren Isaacson:

You should also stop making all of your questions required. When you make questions required, what you're doing is forcing people who feel like a question doesn't apply to them to give you bad data, and bad data is worse than no data at all. So, I kind of laugh when I see a survey where everything is required and then I hit a badly worded question, or a question that doesn't really apply to my situation, and I have to answer it anyway. I'm not helping you, and you're not helping me either. Let's fix a question.

Lauren Isaacson:

So, do you have a non-human companion? Dog or cat? For one thing, this question is not simply stated, so let's fix the wording. What kind of pet do you have? See, it's much more conversational, much friendlier. This is something that people will actually think about answering, and it feels less like a pop quiz. But the answer options aren't very exhaustive. There are lots of pets, not just dogs and cats. Now we have exhaustive answer options, but the radio buttons indicate that only one answer is possible. That's what a circle means. That circle means that you can only select one answer. People are complicated. They often have lots of different kinds of pets. So, we make it a check box. Check boxes indicate that multiple answers are possible.

Lauren Isaacson:

Now we want to randomize all of those answers, and then we also want to anchor "other" and "I don't have a pet" at the bottom. Those should always be at the bottom. That's a setting in your survey program; you should be able to do that at any time. So, why do we want to randomize? Randomizing answers reduces order bias. People are more likely to select certain answers depending on the order they appear in a list. With this, we now have a survey question that is short, written in plain language, has exhaustive answer options, allows for multiple answers, is randomized and anchored, and isn't mandatory. We are champions. All right, so do we have any questions so far?
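
To make the randomize-and-anchor behavior concrete, here's a minimal sketch in Python. The option names are hypothetical, and in practice this is a checkbox setting in your survey platform rather than something you code yourself:

```python
import random

# Hypothetical answer options for the pet question above.
options = ["Dog", "Cat", "Bird", "Fish", "Reptile", "Small mammal"]
anchored = ["Other", "I don't have a pet"]  # opt-outs pinned to the bottom

def render_options(options, anchored):
    """Shuffle the substantive options to reduce order bias,
    then append the anchored opt-outs so they always appear last."""
    shuffled = random.sample(options, k=len(options))  # non-destructive shuffle
    return shuffled + anchored

# Every respondent sees a different order up top, same two anchors at the end.
print(render_options(options, anchored))
```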

Ben:

Lauren, we just got a really good one from Jen who asked, how do you balance being exhaustive with responses versus having too many responses to scan easily? Do you have any suggestions on maybe a number or ways that you can approach a question like that?

Lauren Isaacson:

I don't have a number, but one thing you can do is a little extra research to see if some options are more plausible than others. Some are really rare; you'll notice, like in my pet list, I didn't have anything like an axolotl. I just left that as other. So, you can truncate things into the other selection and just have the most likely options in the list. That's a way you can balance it: you want the common answers, the ones people are most likely to give, in your list.

Ben:

Yeah. That's a really great suggestion, and I was just writing a survey this morning on public changing tables for parents when they're out running errands, the extent to which, or whether at all, they'll jump into a Walgreens or a Target and use a changing table, and I was trying to come up with a survey question for all the characteristics that would make them more or less likely. I think that's a good example of when I can get my cognitive hands around all the factors and variables that are in a public bathroom, versus all the colors that a person might list as their favorite. That might be, Jen, a moment where you turn that from a closed end to an open end. Let's see.

Ben:

We have some questions around bias on the participant side, Lauren. I know you're going to get to other aspects of bias in the researcher, but we have some bias exploration from the participant side. Let's see, there's a question from Corey about accounting for non-response bias to get more representative data. Have you in the past done anything above and beyond to try to account for non-response bias?

Lauren Isaacson:

There's over-recruiting. So, when you are selecting your respondents, when you're talking to, say, your panel recruiter and people like that, you can have quotas. You can say, I want this many people from this profile to answer my survey, and this many people from that profile. I know that a lot of surveys also use weighting. If there is a population that is underrepresented in the survey, they will weight its answers as greater than the answers of the people who are overrepresented. That is something you can also do. Those are two options, but it all depends on being very careful and knowing your plan going in: what you need and how much of it you need.
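
As a rough illustration of the weighting Lauren describes, here's a minimal sketch assuming made-up respondent counts and known population shares; real weighting schemes are more involved than this:

```python
# Hypothetical known population shares and observed respondent counts.
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
respondents = {"18-34": 50, "35-54": 120, "55+": 30}  # 55+ underrepresented

total = sum(respondents.values())
sample_share = {group: n / total for group, n in respondents.items()}

# Post-stratification weight = population share / sample share.
# Underrepresented groups get weights above 1, overrepresented below 1.
weights = {group: population_share[group] / sample_share[group]
           for group in respondents}
for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")  # 18-34: 1.20, 35-54: 0.67, 55+: 2.00
```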

Ben:

Let's see, I don't have a name for this person, but they have an IPSOS poll that shows that up to 73% are mobile first. I don't know what that date is, but thank you to the person who dropped that. So, it's gone up to 73%. I guess one other question, on something you just mentioned, from Randy. Lauren, Randy wants to know if you have any suggestions on advocating for a focused, singular sort of survey. He writes, how do you handle it when other teams try to squeeze in many more oranges with your apples? Do you have any more suggestions?

Lauren Isaacson:

Yeah, you have to have a strong no button. Sometimes you don't have that option, unfortunately. You can push back as hard as you can, but you have to be aware of your political position. It depends on how junior you are. If you're senior, you should be able to push back on things and say, we can't do that; if you want your own survey, we can arrange for you to have your own survey. Hopefully you have an advocate on your team who is senior enough to push back, and if you don't, then talk to them and try to negotiate. Just say, how much of this is just a nice to have? How many questions can you get rid of? But yeah, competing objectives, that's a survey killer. That's just something that you have to negotiate and be politically aware of in your own career.

Ben:

Yeah. It's fraught if you're an external person or a consultant working in, and you might not know what some of those political dynamics are. I'll turn my video on here. I know that when I'm working with [crosstalk 00:23:42].

Lauren Isaacson:

Oh sometimes that's a benefit because you can just go, no.

Ben:

Yes, exactly. Yeah, precisely. Sometimes it's easier when you're an external person because you can have that sort of clinician's, well, I'm not sure the dynamics here ...

Lauren Isaacson:

You can walk away.

Ben:

Yes, precisely. I always try to stress to folks, and especially now more than ever, when time, time, time, and Agile, we're going to be doing a People Nerds panel on research timelines and whether scrappy research is crappy research. I would stress to them, what's your timeline? If you want to make a decision next week, well then, including these 10 other grid questions, which Lauren is going to get to in a minute, might not serve the greater good that the survey and the other research is trying to do. So, I'm always spinning it back onto either my stakeholders or my customers to say, well, let's go back to what you want and the timelines you have. How might these, it might not seem like a lot, but 15 extra or even five extra questions, especially if they're open-ended, how might those shift my analysis timeline, the synthesis time that I've built? All those sorts of things.

Ben:

Okay. There's a couple other questions, Lauren, but they'll be addressed in some of the content that you'll get to later about survey tools or sample size. Power analysis is something I'm happy to talk about with [crosstalk 00:24:53].

Lauren Isaacson:

Well, sample size, we're skipping. So, we won't be able to talk about that, but we can ...

Ben:

Maybe we can drop a bit of ... I know you have a sort of magic number, and maybe there's a website or two you can throw the folks' way if they want to learn more.

Lauren Isaacson:

Absolutely.

Ben:

Great. Thank you.

Lauren Isaacson:

Okay. All right. So let's talk about scale questions. Overall, we want scales to be odd numbered, balanced, and vertical. We want to use Likert scales. Likert scales are always odd numbered, and usually come in fives and sevens. A five-point scale will give you clarity: satisfied or very satisfied. That's what you can get out of that. Then you can also opt for a seven-point scale, which will give you some nuance: somewhat satisfied, satisfied, and very satisfied. But often in the wild, we'll see nine-, 10- or 11-point scales. That's not so awesome, because we want to be able to label the points on any scale that we use, because we want to make sure that our interpretation of a four or a five is the same as their interpretation of a four or a five.

Lauren Isaacson:

And if we label all of our options, that's what we're getting. We're all on the same page that way. So, if we were to do that for a 10-point scale question, we're going to get this: a hot mess that just gives a false sense of accuracy. Nobody, nobody is this specific about their feelings. It's not like somewhat satisfied, satisfied, satisfied-plus, very satisfied. I don't know. I'm okay. I'm satisfied. That's cool. That's what you get there. So, five is clear. Seven has nuance. Nine, 10 or 11, that's not worth it. Why do we want odd numbers in our scales? Because we want a neutral. Neutral is a valid answer.

Lauren Isaacson:

I've had clients tell me, "Well, I want them to make a choice. I want them to tell me whether they had a positive experience or a negative experience." And I'm just like, "No, you don't want that. What you want is to give them the ability to say, I don't care." My husband, back in the good old days, used to run a beer event every summer. The first year, he sent out a survey to all the attendees, and one of the questions asked about the non-alcoholic beverage options, whether or not they liked them. And 90% of the data that he got back came back as neutral.

Lauren Isaacson:

He looked at me and he was just like, "I don't know what to do with this data. It tells me nothing about the non-alcoholic beverage options." I looked at him and I was just like, "What are you talking about? They're telling you they don't care about the non-alcoholic beverage options at a beer event." So, keep that in mind. Let people tell you that they don't care about something, so that you can focus on what they actually do care about. This will set you free. So, to review: good scales are Likert scales. They are in five- or seven-point formats. They are balanced. They have a neutral option in the middle, an equal number of positive options, and an equal number of negative options.

Lauren Isaacson:

They should also be mobile first, hopefully vertical instead of horizontal, because you want them to work in portrait mode, not just landscape, and every option should be labeled. Fully labeled scales are shown in studies to be more effective. Let's fix a scale question. Please rate your level of satisfaction with this conference. Okay, back to question wording. How's the conference going so far? So, now it's written in plain, friendly language, but what about that scale? For one thing, it's in the wrong direction; it kind of defies user expectations. It depends on the culture, in some cultures it's the opposite, but in North American cultures, you usually want excellent, the more positive option, to be on the right-hand side, moving from negative to positive, less to greater.

Lauren Isaacson:

So, you don't want to give people a reason to overanalyze their answers. You want them to get through it, just like water running down a hill. A survey is a user interface just like any other. Okay, let's see. So, now the scale is reversed, but this isn't a Likert scale. This is four points, not five points. So we add a neutral. It's Likert, but it's not balanced. What is decent? Is decent positive or negative? I don't know. It just depends on who you ask. So, this isn't clear. Let's fix it. Okay: it's really bad, it's kind of bad, neutral, it's pretty good, and I'm having a great time. So, not only is it balanced, but it's written in plain, friendly, fun language. But it's still wrong, because it's not mobile first.

Lauren Isaacson:

Going from horizontal to vertical makes it easier to answer in portrait mode on a mobile device. There are some platforms that will automatically reflow your question if someone is taking it in portrait mode versus landscape, but if you want to play it safe, you need to make sure your platform actually does that; some don't. If you just put everything in a regular up-and-down format, you know it's going to work in portrait for sure. Now we have a question that is short, uses plain language, offers a neutral option, and uses a Likert scale. That scale is also balanced, and it's vertical. We get a gold star. Good job. All right, so do we have questions on scales?

Ben:

Let me see here. Email contact required. Okay, we don't have anything about scales. Someone asks about anchoring. Could you go over anchoring just one more time?

Lauren Isaacson:

Okay. Anchoring is a setting in whatever survey platform you're using, the same place where you randomize your answers. I have never really encountered a survey platform that didn't offer randomization. Randomization is good because it reduces order bias, but you want to be able to put other or not applicable at the bottom in all of those questions. If you anchor them, then every time one of your respondents sees that question, they will see everything that is not anchored in a different order, but those last two answers that you anchored will always be at the bottom of the answer options. That's what anchoring does.

Ben:

Perfect. I have a question from someone at team DScout, shout out to team DScout. I always forget that my colleagues are watching as well. They ask about Likert-type scales, where you have a five-point scale from strongly disagree to strongly agree. They ask, thoughts on using an even-point scale to force a non-neutral opinion? Have you ever used this to any success, and if so, could you talk about the situations?

Lauren Isaacson:

No, and I always recommend against because then-

Ben:

Okay, why is that?

Lauren Isaacson:

Yes, because you're telling people that they can't say they don't care, and sometimes they don't care.

Ben:

I guess I'll ask the attendees if there's ever a time when they've had something where they did want someone to be forced to care, or maybe ... I always ...

Lauren Isaacson:

Why are you forcing people to care? What if they just don't care?

Ben:

Yes, and I always think, I know when you're doing statistical analysis, so many folks focus on the finding of an effect, but I think it could be sometimes equally interesting to find a lack of an effect. So, I think to Lauren's point, if you find that a lot of your users or customers are neutral about a thing, maybe you think something is important that your customer base really doesn't. So, you could be spinning your wheels in a direction that your target user base really isn't seeing. I know that we do that a lot with very siloed usability, like click this button. Did you notice this button? What did you like about the button? And they're like, "I didn't even know it was there. I was trying to do this thing over here, but you're forcing me to use this button."

Ben:

It's so illuminating when you do those sort of contextual usability studies and you see how folks actually think about an experience, your product service or whatever it is you're working on.

Lauren Isaacson:

Yeah, you want to focus your time and energy on things your users actually care about.

Ben:

Let me see if there's anything else. Ah, this is a great question from Chris. I have a preference on this, so Lauren, I'm curious of your preference. On portrait scales, do you recommend having the positive on the top, so the first thing that you see, or the positive on the bottom, and do you have a reason?

Lauren Isaacson:

I prefer positive at the top.

Ben:

Okay, why is that?

Lauren Isaacson:

But there are people who disagree with me and want to put positive at the bottom. But honestly, it doesn't matter as long as you're consistent.

Ben:

Sure.

Lauren Isaacson:

If you put positive at the top for all of your questions, that's all that matters. If you always put positive at the bottom, that's all that matters too, because you're training your respondents to know that positive will always be at the bottom, or that positive will always be at the top, and you're not switching it up.

Ben:

Is that informed by your view on accessibility in making the research less extractive and more cooperative?

Lauren Isaacson:

It's more of a user experience thing. You want it to be water running down a hill. You don't want to make people think; you want them to just get through it as easily as possible.

Ben:

Beatrice asks about five- versus seven-point response scales. From my small time studying statistics, I can tell you, Beatrice, that there were no statistically significant differences between five- and seven-point options. Sometimes, if you're dealing with a sensitive topic, maybe a political issue, folks do want to have more gradations. By and large, though, at least in my experience, I haven't found any differences. Sometimes the analysis can be more precise with a seven-point scale, but only minimally. Lauren, do you have any sense on five versus seven in your history and experience?

Lauren Isaacson:

Not really? It's just, what do your clients want? Do they want the nuance? Okay, sure, why not. We'll do seven.

Ben:

Yeah, that's great. I think, let's see, variations. Yeah, we had a lot of folks ask about that positive-on-top versus positive-on-bottom question. I had a faculty member in experimental design who always recommended changing them all the time so that participants ... this was more during the paper-and-pencil days, where you could have a participant just write straight down the page. I do wonder about what in the stats world we call counterbalancing in experimental design, where you are modifying things. That can, to your point, Lauren, especially if you have a longer survey or a larger stem, or pardon me, root, meaning the question is a little longer, make it more challenging on the participant. You want your response rates to be high, so make it easier on them to answer.

Lauren Isaacson:

People don't enjoy answering surveys. It's not fun. So, don't make it harder on them.

Ben:

We have some folks asking questions about free recruitment tools for nonprofits, a few things on just how to get started, and whether you have any favorite tools to clean quantitative data or launch quantitative surveys. I'm not sure if you have any favorite platforms, or places you would recommend folks go to learn about it.

Lauren Isaacson:

I don't recommend specific platforms. As far as the nonprofit space, you have your user base, so you have your subscribers, people subscribed to your newsletter. That's where a lot of nonprofits get their survey answers from. But as far as free sources, no, there aren't that many. What you can do is approach survey panel recruiters, who usually provide panel respondents for a fee, and ask if they're willing to donate some of their service to your nonprofit. That is something that you can do. Best of luck. It's hard.

Ben:

Let me see if there's any other. Ah, okay. [Safina 00:37:36] asks a really great question: if you have a response scale with both a neutral and an N/A, and there's a question that doesn't apply, should an N/A response be provided for all? And I had someone ask about "LIE-kert" versus "LICK-ert." Again, in graduate school, I had some faculty members who claimed to know Rensis Likert, and they said, no, it's actually "LICK-ert." I knew Rens, whatever. I'm not sure which it is, so Lori, I can't actually say. But thoughts on Safina's question about an N/A versus a neutral, Lauren?

Lauren Isaacson:

Sometimes I'll provide both. It depends on the context, whether it's something that may not apply to some people. Either that, or they can just skip the question entirely. That's the nice thing about making your questions not required: if they don't feel something is applicable to them, they can skip it.

Ben:

Sure. Yeah, and I know some survey providers do have the ability to just click next without finishing a question. So, Safina, depending on whichever tool you're using, you might not need an N/A if you put in the instructions, skip any question that's not applicable. Of course, some participants may forget that along the way. Okay, I think we're ready to keep going, Lauren. Thank you so much for these pauses. There's so many great questions.

Lauren Isaacson:

Okay. Well, we still have a lot to get through. All right, so let's talk about grids. You want to avoid using grid questions. Let's say that you have a 10-question survey, but one of your questions looks like this. This is not one question. You may have programmed it as one question, but this is actually 14 questions. You said your survey was 10 questions long, but it's actually 23. That is cheating. That is lying to your participants. Also, this is not mobile friendly, and it's super tiresome. All right? Some platforms will turn a grid into individual questions for mobile, but that still equals more questions. A lot of quant research professionals do not use grids anymore. They flat out refuse to put grids in any of their surveys.

Lauren Isaacson:

Experiments are being done with accordion formats for scaling grid questions on mobile, but that's a complicated solution. Why would you use a complicated solution when a simple one, not using grids and making your questions vertical, works really well? Okay, so bias. Bias is something that you need to watch out for. I've talked about non-response bias, and I've talked about order bias. Let's talk about acquiescence bias and desirability bias. Acquiescence bias means we don't want to disagree with people. We are naturally social creatures, and we like to try to get along with everybody, even people we've never met and don't know. You want to avoid questions that use agree/disagree, yes/no, or true/false.

Lauren Isaacson:

If you can avoid doing that, then you are mitigating your acquiescence bias. Then there's desirability bias. Desirability bias is when we want to appear better than we actually are. We want to view ourselves in the most positive light possible. So, avoid asking people to reflect on their past experience and what they did or did not do, because they will probably give you an answer that isn't a lie, but is what they perceive, and that may not be accurate. They may not think they ate as much candy as they did. They may think they ran more than they did, or they intended to run more than they did.

Lauren Isaacson:

So, that kind of counts, doesn't it? If you can use real consumption or usage data, rather than asking people to self-report, to recall their own experience and behavior, you're much better off. There are ways, either through your analytics or through third-party vendors, to get real behavior data. Then there's sampling and research bias. Sample bias is when the sample doesn't reflect the population. You want to recruit enough people to take the survey and give up some control over who gets it. So, you can use quotas if there are certain populations you specifically want to target, but you can also use things like random number generators or dice.

Lauren Isaacson:

That helps break up, and takes away, your control over who actually receives a survey within a population. Now, research bias is when your research point of view gets in the way of your survey. So, you want to word your questions very carefully. It's really easy to telegraph what you want, and respondents are really good at picking up on it. We use things like data quality questions. We use really neutral language when we are wording our surveys. And we also use things like red herrings. Red herrings are another form of data quality initiative: fake answers that eliminate people who just want the incentive or who aren't really paying attention. They help increase our data quality, and that's really important.

Lauren Isaacson:

Let's try to fix a question that will cause bias, using red herrings and neutral language. So, do you use Gmail? Now, this question is kind of telegraphing what we want. We either want people who do use Gmail, that's the most likely scenario, or we're looking for people who don't use Gmail. That can bias what someone does if they're just looking for the incentive. There's a way to make this far more neutral, and that is by switching it to, which of these email services do you use, with an exhaustive list of email services. This hides what we're after, but there's still a problem. We are using radio buttons, and we know that people are complicated and very likely to use multiple email services. So we have to switch to check boxes instead.

Lauren Isaacson:

Now, about that red herring I talked about. I made up FantasticMail. It is a completely plausible email service. If someone wants to take that and run and create FantasticMail, go for it. But as of right now, it doesn't actually exist. I even checked. So, we use red herrings for data quality. If someone selects this, we probably throw out their responses because they're either not paying attention or they're answering falsely, because they think it'll help them qualify for the incentive. Red herrings can be peppered throughout the survey to test for accuracy. You can also use things called data quality questions, which are very basic questions with very basic answers.
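
Here's a minimal sketch of how a red-herring check might be applied during cleaning, with made-up response records; the survey platform itself just records the selections:

```python
# "FantasticMail" is the red herring: it doesn't exist, so nobody can use it.
RED_HERRING = "FantasticMail"

# Hypothetical responses to "Which of these email services do you use?"
responses = [
    {"id": 1, "email_services": ["Gmail", "Outlook"]},
    {"id": 2, "email_services": ["Gmail", "FantasticMail"]},  # flagged
    {"id": 3, "email_services": ["Yahoo Mail"]},
]

# Drop anyone who claims to use the fake service: they're not paying
# attention, or they're answering falsely to qualify for the incentive.
clean = [r for r in responses if RED_HERRING not in r["email_services"]]
print([r["id"] for r in clean])  # [1, 3]
```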

Lauren Isaacson:

One thing that I like to use: I like to say, on a clear spring day, the sky is blank and the grass is blank. Then I have lists of color sets, and one of those is blue and green, but then we have magenta and purple and so on. If people select anything other than blue and green, we can throw out their answers, because they're clearly not really paying attention, or they're a bot. There are some services that actually have pre-programmed data quality questions that you can just insert into your survey, and those are really great, but I've heard that even those aren't perfect. I remember asking a vendor about data quality, and he said, "Yeah, we have pre-programmed data quality questions."

Lauren Isaacson:

"I even had one person get really pissed because they answered the data quality question wrong. He was mad because he really did genuinely think that the moon was larger than the sun." So I'm just like, maybe it's okay that that person didn't take my survey. Then we want to randomize the top answers, and we want to anchor the bottom. Now we have a question that doesn't lead, uses plain language, offers a data quality red herring, allows for multiple answers, is randomized and anchored, and it's also not mandatory. They can skip it if they want. And Will Smith thinks we are super cool. Okay. Do we have any questions on biases, neutral language, red herrings, or data quality questions?

Ben:

There's a good question from Corey on when you might obfuscate a question versus being direct with it. Do you have suggestions on when you might want to bury it, versus asking directly, like, are you a parent of a two-year-old with this kind of stroller?

Lauren Isaacson:

Well, if you ask very directly that way, then there's a strong possibility that a person who is not a parent or doesn't own that specific stroller will answer in the affirmative, because they know that's what's going to qualify them for the survey. I can't think, off the top of my head, of an instance where you would specifically want to be direct with a survey question. I almost always advise being tricky. You want to be tricky with your questions. So, be clever, try to find ways to not telegraph what you're after, and you're going to get better data.

Ben:

Yes, absolutely. Lydia asks if there's an optimal way to make grid questions vertical, or is it just each line as an individual question? I think that might go back to your-

Lauren Isaacson:

Don't use grids.

Ben:

Yeah, do you need the question? [inaudible 00:47:26] what it is. It feels like, once you smoosh them all together, it's easier to say, well, yeah, we need all these questions, because look, they're all here together. Lydia, to echo Lauren's great piece of advice, I would ask you to go back through and ask yourself if you need all those questions. Some of you have been asking about the requirement or the necessity of a question. Lauren had a lot of great material that I asked her to remove from the presentation, and we're running up on time already, so I'll end with this before we let her finish. She has some great advice around going through the survey with a participant and doing what's called a cognitive interview, where you're asking them, question by question: what do you think I mean with this question? Do these responses make sense to you, given the question I've asked?

Ben:

You might, again, if you have the time to do so, sit down with a potential participant, or maybe a small group of them, and go through question by question. This is something that the studio at DScout does all the time before they launch a survey to a great big mass of folks. They go through and do a cognitive interview with a participant: what do you think I mean with this question, or how might this question relate to this behavior? You get an insider's perspective on how they're perceiving it, from a small sample, and then you can home in, tailor, and refine the questions before you release the survey to potentially hundreds of thousands of people.

Lauren Isaacson:

I've also used unmoderated usability tests to test a survey, and those also worked really well.

Ben:

That's great. That's great. All right. Keep on going, Lauren. Thank you so much.

Lauren Isaacson:

All right. Okay, let's talk about incentives. I want us to start giving incentives again. They are not necessarily expensive or difficult, and there are services out there that will even handle them for you. Incentives can really increase your response rates and reduce your non-response bias. A good sample is meaningless if nobody answers your survey. This is about respect for the people you want information from. They don't owe you anything, so don't act like they do. Do you not have a budget for incentives? Well, then you should maybe question the importance of your study. If stakeholders aren't willing to invest appropriately in the survey, paying for a good sample and an incentive for people to take it, then maybe it's really not that important to their bottom line, and they should be questioning it.

Lauren Isaacson:

I sometimes work with a nonprofit, and every year they survey the people who use one of their services. One year, they offered a chance to win free ice cream for taking the survey. Now, this was fancy gourmet ice cream donated by a local ice cream maker, and because of it, they got 500 responses within a week and had to shut down the survey. If you're going to do this, I would consult with a lawyer before running the prize drawing, because you want to make sure you're in compliance with state, federal, or provincial laws. Think about that as well.

Lauren Isaacson:

Now that we have this great survey, it's well written, it's wonderful, it has a great user experience, we want to put that thing in a gratitude sandwich. You want to thank people for taking part in the survey at the beginning: thank you for taking part in this, we really appreciate it, and this is what we're going to do with it. Then at the end: thank you so much for answering our questions, that really helps us do X, Y, and Z. And if you can afford to be transparent, not everyone can, I have heard of companies that are able to be transparent with the data they get. That is also really helpful, because it helps respondents know that you're doing this for real.

Lauren Isaacson:

It also helps because, sometimes, people are just really curious to see what other people answer, and they'll be willing to take your survey just because they're curious about how other people respond. Then, always, you want to have a final open-ended question to allow people to tell you what they think is important, not just what you thought was important. Sometimes they'll give you feedback on the survey in that open end, and sometimes that feedback is completely valid. Yeah, sure. I've had people tell me that my survey sucks, and I was just like, "It doesn't suck." And then I went back and looked at it, and it's like, okay, well, maybe that question did kind of suck, so why don't I rewrite it?

Lauren Isaacson:

Yeah, sometimes the feedback you get on your survey is totally worth listening to. We have data. How do we analyze it? Remember that data analysis plan I mentioned earlier, where you know exactly what's going to happen with this data once you get it and what it's going to inform? You want to tie that data analysis plan to the project objectives. Write it out, get everyone to agree on it, and be specific. Know what kind of questions you're going to ask, what quantity of data, and what type of software you might need to analyze that data before you get started. So, what kind of variables? Do you want to use cross tabs? Are you going to want to separate things by demographic? If so, then you're going to want to have so many people from this demographic, so many from that demographic, and so many from that demographic to take your survey.
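
For the cross-tab planning step, here's a minimal sketch using pandas with made-up data; the point is that breaking answers out by demographic only works if each segment has enough respondents in it:

```python
import pandas as pd

# Hypothetical survey responses with a demographic variable.
df = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "satisfaction": ["Satisfied", "Neutral", "Satisfied",
                     "Very satisfied", "Neutral", "Satisfied"],
})

# Row-normalized cross tab: what share of each age group gave each answer.
# With only two respondents per segment this is illustrative, not significant.
print(pd.crosstab(df["age_group"], df["satisfaction"], normalize="index"))
```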

Lauren Isaacson:

You want to have a significant dataset once you have all that data back. Are you going to be doing conjoint, regression, or text analytics? You're going to need specific software for that. Make sure you have access to it. Then you're also going to need ... yeah, I think that kind of covers it. Clean your data. Data needs to be cleaned. You need to toss out the low quality responses. Get rid of the extreme outliers that are pulling your averages out of whack. A box plot will show you extreme data points and how much they diverge from the median and mean. You also want to remove people who didn't answer all of the questions, who chose the red herrings, or who straight-lined. Straight-lining is like when you answered C to every question on the test.

Lauren Isaacson:

That probably didn't get you a good grade, and it doesn't get you great data either. So, look at that as well. Do you still have enough data to meet your sample requirements? Yes? Great, you did a wonderful job and you over-recruited. No? Well, why don't you remove the worst 20% of your offenders and then work on the dataset from there? Okay. So, what is standard deviation? We hear about it a lot, that there's something called standard deviation, and apparently it's important. Standard deviation tells you how much the data deviates from the average. It's kind of like clustering. The higher the standard deviation, the more variance in the population.

Lauren Isaacson:

A low number will tell you that you're on to something: there is something that is definitely happening, and it's consistent across demographics. If you have a high standard deviation, you have heterogeneous data. Then you can either say, "This is kind of meaningless data," or you can say, "Hmm, I wonder if certain populations answered this question differently than other populations," and break the data down further. But breaking down data requires that you have a large enough sample to do it.
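
Pulling the cleaning and standard-deviation ideas together, here's a minimal sketch with made-up 1-to-5 answers; it drops straight-liners and then looks at per-question spread:

```python
import statistics

# Hypothetical 1-5 Likert answers, one list of five questions per respondent.
answers = {
    "r1": [4, 5, 4, 4, 5],
    "r2": [3, 3, 3, 3, 3],  # straight-liner: same answer to every question
    "r3": [2, 4, 5, 3, 4],
}

# Cleaning: drop respondents whose answers never vary.
kept = {rid: row for rid, row in answers.items() if len(set(row)) > 1}

# Per-question standard deviation on the cleaned data: a low number means
# respondents agree; a high one hints at subgroups worth breaking out.
for q in range(5):
    column = [row[q] for row in kept.values()]
    print(f"Q{q + 1}: stdev = {statistics.stdev(column):.2f}")
```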

Lauren Isaacson:

Okay, now I've harped on and on about Likert scales, or "LIE-kert" scales, however you want to say it. I don't care. But how do we analyze them? I like to use top two boxes, and this is kind of a standard for quantitative professionals. It's top two for five-point scales, and top three for seven-point scales. When I'm presenting this data to people who aren't really into quantitative analysis, I call it percent positive. That really helps them understand what the data means. If these are the answers to our conference satisfaction survey, we would say 76% of conference attendees are having a positive experience. This is how we would display that data: I use a 100% axis bar chart. Everybody should be using a 100% axis if they're dealing with percentages. Okay? It should always equal 100%.
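
And here's a minimal sketch of the top-two-box ("percent positive") calculation, using made-up counts that happen to reproduce the 76% figure:

```python
# Hypothetical counts from a 5-point conference satisfaction question.
counts = {
    "It's really bad": 4,
    "It's kind of bad": 8,
    "Neutral": 12,
    "It's pretty good": 36,
    "I'm having a great time": 40,
}

total = sum(counts.values())  # 100 respondents
# Top two box on a 5-point scale: sum the two most positive options.
top_two = counts["It's pretty good"] + counts["I'm having a great time"]
print(f"Percent positive: {top_two / total:.0%}")  # 76%
```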

Lauren Isaacson:

If it doesn't equal 100%, what went wrong? Okay, so now, if you have a large portion of negatives or neutrals, then it's worth breaking them out; otherwise, condense them, because the positive story is the story you want to tell. Notice how this is in two dimensions. If you're using a three-dimensional data display, that's amateur hour. It's not necessary. Don't use chart junk. Hey, do we have any questions? There we go.

Ben:

I think we will keep going. We have a few who have to jump. There are tons of questions, but I think it would be great for us also to be able to respect yours and their time and get through the presentation. Thank you, Lauren.

Lauren Isaacson:

All right, here we go. Okay, so key takeaways. This is what I want you to walk away with today. Surveys are a UX problem that you can solve. Keep it short, 10 minutes or less. Make them mobile first, vertical, and concise. Abandon grids and limit your open ends. Make all scales balanced Likert scales, fives or sevens, and analyze the data using top two or top three boxes. Have a plan for the data before you even write the survey. Incentivize, incentivize, incentivize, all right? Then clean your data before analysis, and think about what you're doing to ensure data quality. Now, I've only just scratched the surface. There is so much more to learn about doing good quantitative research.

Lauren Isaacson:

For some books to read, I highly recommend these. The Tailored Design Method by Dillman, Smyth and Christian. It's a big-ass book, but it covers everything. This is an older version; they have come out with a newer version, so I highly recommend getting it. Also, it's a really great prop, because when someone questions your survey design skills, you just drop it on your desk with a really big thud.

Ben:

This is why you hired me, because you don't want to read it.

Lauren Isaacson:

Yeah. You can ask, "Did you read this book?" "I read this book." Okay, and then, if you don't want to read the really big book, there's also The Complete Guide to Writing Questionnaires by David Harris, and then Questionnaire Design by Ian Brace. I would show you that book, except a friend of mine borrowed it and hasn't given it back. Then, I would also recommend People Are Not Robots by Annie Pettit, who is a wonderful person. I love her dearly. She has written a great book on how to humanize survey design, but I would probably read a more basic survey design book first and consider hers more of an augmentation to the knowledge you already have.

Lauren Isaacson:

On Twitter, you can follow #NewMR or the NewMR account. You can also follow Annie Pettit, Jeffrey Henning and Ray Poynter. They're all great resources on advanced thinking on surveys and quantitative research. Also, NewMR, they have a YouTube channel with all of their past webinars. That's a really great resource if you want to know what's the latest and greatest in quantitative research. Yeah, and then we're done. We did it.

Ben:

Amazing, amazing, amazing.

Lauren Isaacson:

Yeah, if you want to get in touch with me, you can reach me through my website, or through Twitter. I'm also on the grams, and yeah.

Ben:

Thank you so very much, Lauren. It was great to see you once again. Thanks to everyone who both signed up and hung with us. For those of you who didn't quite have your questions answered, again, reach out to Lauren or me. I'm just ben@dscout. There's also the People Nerds Slack channel if you want to jump in there and ask questions. There are lots of smart folks, just like Lauren, who are there to help and work through problems collaboratively. Thank you so very much again, Lauren, and thanks to everyone for joining us. We'll see you on the next one. Take care everyone.

Lauren Isaacson:

All right, sounds good. Bye.

Ben:

Bye everybody.
