Understanding How People Approach Car Research and Buying with dscout

How do customers evaluate and compare online product pages? Our diary study with dscout had some illuminating results.

Words by Molly Malsam, Visuals by Thumy Phan

If you’re like many other researchers, you have tools in your kit that straddle the line between market research and usability research.

While many companies have a dedicated market research team, that team usually spends a good chunk of its time on large-scale, expensive studies. These studies are intended to be wide in scope, representing various segments of your product’s market with sample sizes large enough for a fair amount of precision.

Market research typically focuses on people’s attitudes towards a product and estimating the size of the potential market for the product.

User research, on the other hand, more often works with smaller-scale studies. While still aiming to be representative, these studies are less focused on getting a large sample and more focused on understanding the behavior of current or potential customers: their motivations, needs, and pain points. This data gives product and design teams more nuanced, in-depth information to help shape the user experience.

In summary, market research typically deals in broad insights about people’s attitudes, whereas user research mostly focuses on detailed insights about people’s behavior.

Deciding on study type

Within that Venn-diagram overlap of research questions, either your market research team or your UX research team may have the tools and knowledge to conduct a particular study, so it often simply depends on who is available and willing to do the needed research.

In this article I’m going to share one of those types of studies, and how our UX research team conducted it successfully using the dscout platform. In this situation, while the market research team had some of the tools and audiences to do a similar type of study, a diary study within dscout truly was the ideal solution. I’ll explain why below.

The research questions

I had done several studies where potential customers were brought into usability tests to evaluate the enrollment process for a particular product. However, since we don’t currently have the ability to intercept potential customers already “in the funnel”, I could only recruit those who might be interested in or suited for this type of product, based on subjective questions we asked in screening. This always introduces the potential for various forms of bias.

The team wanted to know more about:

  • What those participants were looking for in such a product
  • Whether they got the information they needed on the product marketing pages before starting enrollment
  • What concerns they might have before committing to the product completely

As many of you know, these questions have more of a marketing bent. It would not be valid to assume that the people we recruited for our studies are the right ones to answer them, since we can’t truly treat them as “interested buyers” without knowing their actual intent rather than just their stated interests.

But the one-on-one usability test method lets you sit with people and ask qualitative questions during a more in-depth user experience, all while your co-workers watch along. It’s understandable that teams want to ask such questions in UX research studies.

Answering these kinds of questions might not be a big deal if you’re talking about a low-cost product where little thought goes into a commitment decision. But if your product is something that prospective customers would likely take more time and effort to research, compare competitors, or discuss with others, then your team should—and will—have questions about how that process plays out.

Again, a market researcher could provide some good answers to these questions. However, timeboxing that into one or two focus groups, or even individual interviews, limits participants to immediate responses to questions they would likely answer over time rather than in one sitting.

That’s where the diary study method comes in. Participants can do portions of the process over a longer timeframe, with the benefit of time lapse between portions. That may allow for additional thoughts or activities, similar to real life.

We know from cognitive psychology research that non-trivial decisions can take people more time and can be based on either rational or irrational thinking. Rational thinking takes time and effort, which people may put off until later.

They may also take several steps to get the information they need to rationally assess the decision. For example, the process may start with someone receiving advice or a recommendation; they then do some research on it and talk to trusted friends, family, or advisors before making the decision.

A diary study that breaks up a process over time normally plays out that way, providing the opportunity for a more realistic data set to emerge. Getting back to the market research team, even if they did their focus groups or one-on-one interviews spaced out over time, they would still have to manually compile all of the results and take the time to run each study.

This is why a diary study was the optimal solution for the study approach below. It combines a solid method with reach and scalability: it is well suited to capturing behavior over time, offers access to a broad audience pool, and makes it quick to review and evaluate the results.


How we recruited

I set up a three-part diary study over a period of seven days for this exercise based on the particulars of the study questions and the product type. You may choose to do this over a longer timeframe, depending on what you’re focused on and what the parts involve.

The study’s description explained that we were interested in the shopping process for a particular type of product. For the screener questions, I included several that assessed their potential interest in and fit for the product, being careful not to make the right answer obvious to prevent people from trying to get into the study.

This is a tricky process and isn’t foolproof. But combined with a 30-second video question, it can get you to a place where you feel pretty comfortable that you have suitable participants.

For example, let’s say that your product is automobiles. You could ask a question set like:

Select any of the following you are seriously considering purchasing in the next year. If none, leave this question blank.

  • Home
  • Automobile
  • Camper or mobile home
  • Boat
  • Furniture

(Conditional question if they selected 2+): Of those that you selected, which is your top priority and why?

You could then ask some other questions about the automobiles they have, how many people in their household drive, whether they need a car for a new driver or as a replacement, and what they plan to spend. Then, in the video, you might ask them to explain what they’ve considered for purchase or what they’ve done, if anything, to investigate their purchase.

You can see how paying attention to all these answers, and whether they seem to hang together, can help you find those who are truly interested. Like I said, it’s not perfect, but it gets you a long way toward the ideal study candidates. To officially kick off this project, I used dscout's Recruit feature to find a group of quality, engaged participants.

Using this method, I found plenty of suitable candidates for the study.
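As a side note: if you ever want to systematize that “hang together” review, the logic is simple enough to sketch in code. Below is a minimal, hypothetical Python example; the field names and thresholds are invented for illustration and are not part of dscout’s screener tooling.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreenerResponse:
    considering: list[str]        # selections from the purchase-intent question
    top_priority: Optional[str]   # conditional follow-up (None if fewer than 2 selections)
    drivers_in_household: int
    planned_spend: int            # planned budget in dollars

def answers_hang_together(r: ScreenerResponse) -> bool:
    """Rough consistency check for a car-shopping screener response."""
    if "Automobile" not in r.considering:
        return False
    # The conditional follow-up should name one of their own selections.
    if len(r.considering) >= 2 and r.top_priority not in r.considering:
        return False
    # A planned car purchase with no drivers in the household is suspect.
    if r.drivers_in_household == 0:
        return False
    # A token budget suggests the stated interest isn't serious.
    return r.planned_spend >= 1000

candidate = ScreenerResponse(
    considering=["Automobile", "Furniture"],
    top_priority="Automobile",
    drivers_in_household=2,
    planned_spend=25000,
)
print(answers_hang_together(candidate))  # True
```

A flagged response wouldn’t be an automatic reject; it’s a prompt to look more closely at the video answer before deciding.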


Conducting the study

As I mentioned, I broke this up into three parts. For the first part, scouts were given two days to complete it. I provided a scenario I know to be common for this type of product: a friend or family member suggests they check it out. I explained what they were looking for and what it was generically called, and then instructed them to use “whatever online method they would use” to learn about the product.

I asked them not to dive into specific product brands yet, but just to generally understand the product. I requested that they keep track of:

  • What they did first (e.g. Google search)
  • Websites they visited
  • Questions they still have after doing the research

After they completed that part, I gathered the above answers, including the exact search they entered into a search engine if that was their starting point. I also asked what other non-digital methods they might use to learn more about the product, if anything. Lastly, I asked them to describe the product as they would to a friend, in their own words. This is a good check to see if they are really engaging with the activity.

In part two, which was also two days, I told the scouts to research companies that offer the product and then choose five that they would put on their short list based on the information they reviewed. I had them record the list and prepare to explain why they chose them.

I also recommended that they not make extensive comparisons of the product details—you’ll see why in a bit. At this point, I expected them to focus more on company brand attributes when narrowing down the list, rather than on what unique thing one product version may or may not have.

In other follow-up questions, I asked them to share a bit about the process they went through to narrow down the companies. I also asked them to rate how interested they were in the product now that they had spent more time exploring it, and what made them give that rating. I hoped this would mirror the real-life experience: someone learns a bit about a product and, as they learn more, becomes more or less interested.

The final part (three days) was where I took a welcome assist from our razor-sharp dscout research advisor. I did not want the scouts to know which company I was representing, but I wasn’t sure how to let them choose while also making sure they evaluated our company.

The advisor suggested asking for the top five companies in the prior part, and then asking scouts in the final part to evaluate three specific companies: ours, plus two that were either top competitors or seemed to have a unique spin on the product. The scouts would then choose two others from their list of five to round out the set. That way, even if their top five already included the three I required, they still had two more to add.

I found that quite brilliant, and while I wished I had thought of it myself, alas I did not. She also suggested breaking up the part into five entries, so that scouts could go through the companies one at a time and I wouldn’t have to repeat the same questions. Word to the wise: ask dscout when you’re stuck, especially when you’re designing a study. They have lots of experience and options for you that you may not have uncovered on your own.
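To make the mechanics concrete, here is how that selection scheme could be sketched in Python. The company names are placeholders, and taking the first two remaining picks stands in for the scout choosing which two to add; none of this comes from the actual study or the dscout platform.

```python
# The three companies every scout must evaluate (placeholder names).
REQUIRED = ["OurCompany", "CompetitorA", "CompetitorB"]

def build_evaluation_list(scout_top_five: list[str]) -> list[str]:
    """Required companies first, then two of the scout's own remaining picks."""
    extras = [c for c in scout_top_five if c not in REQUIRED]
    return REQUIRED + extras[:2]

# A scout whose top five already includes all three required companies
# still contributes two picks of their own:
print(build_evaluation_list(
    ["OurCompany", "CompetitorA", "CompetitorB", "BrandX", "BrandY"]
))
# -> ['OurCompany', 'CompetitorA', 'CompetitorB', 'BrandX', 'BrandY']
```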

For each entry, I instructed the scouts to locate and review the online content available for the company’s product and then begin answering the questions. First, they selected or entered the company and uploaded a screenshot of the webpage where they started. They read the content in detail, paying attention to the particulars of each offering. After reading the information, they were asked to evaluate the product in a number of ways and share pros, cons, and questions about each.

To know when they were on the final entry, I asked at the end of the question set, “Is this your fifth entry?” (Yet another great idea from the advisor.) If they selected no, the study logic went back to the first question. Once they selected yes, they moved to the final question: which company’s product they found the most appealing, and why.
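In effect, the final part behaves like a loop with an exit check. Here is a minimal sketch of that flow, again as illustration only rather than dscout’s actual logic:

```python
def run_final_part(companies: list[str]) -> list[dict]:
    """One entry per company, with the closing question after the fifth."""
    entries = []
    for i, company in enumerate(companies, start=1):
        # Stand-in for the real question set: screenshot upload, detailed
        # read-through, ratings, pros, cons, and open questions.
        entries.append({"entry": i, "company": company})
        if i < len(companies):
            continue  # "Is this your fifth entry?" -> No: back to the first question
        # Yes: ask which company's product was most appealing, and why.
        print(f"All {i} entries done; ask the closing question.")
    return entries

run_final_part(["OurCompany", "CompetitorA", "CompetitorB", "BrandX", "BrandY"])
```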


The results

Our hypothesis was that one or more competitors would appeal more to the age segment (plus some additional characteristics) we targeted for the study, and the results supported it. A couple of the many hypotheses about why that was the case were also supported by the data, while others were not; both outcomes were useful to the team.

We also uncovered some questions scouts had throughout the process that were either not clearly answered or not very visible in the product marketing pages. Those questions helped us home in on what to address as customers initiated the enrollment process, and what to remind them of before committing completely.

Overall, I was pleased with the way this study turned out and felt I could confidently provide useful data in the crossover realm between market and user experience research.


Molly is a User Experience Research Manager in the financial services industry. She has a master’s degree in communication and has over 20 years of experience in the UX field. She loves learning more about how people think and behave, and off-work enjoys skiing, reading, and eating almost anything, but first and foremost ice cream.
