Editors' note: For the 2016 UXPA Boston conference, the planning team decided to add to its research skills by using dscout's mobile platform to study the conference experience.
Conference design is service design. As UX designers, we on the UXPA conference planning team continually seek to refine our own processes. In the past, we collected conference feedback after the fact and struggled with typically low response rates.
This year, one of the conference presentations focused on diary studies. When combined with mobile technologies, it’s a method well-suited to gathering data in the moment. So with help from dscout, we invited conference attendees to be our research participants. With very little planning, we gathered actionable feedback—and learned a few things about running mobile diary studies along the way.
As with any new tool, there was a learning curve. In the end, though, dscout made it easy for attendees to give feedback in the moment of their actual experience, which gave us more credible and complete data about how to improve future conferences.
THE DSCOUT PLATFORM
The dscout study design interface should be familiar to anyone who’s used traditional survey tools. The ease of collecting visual feedback (photos and videos) gave us access to richer data than narrative questions alone could have provided. And the skip logic feature was critical to our “in the moment” research, letting respondents focus their feedback on relevant questions with minimal distraction.
Encourage more entries by keeping the question list brief.
Half our participants submitted 2 to 4 snippets over the course of the day. We used skip logic to ask scouts whether they wanted to submit a photo. That felt repetitive to people who submitted multiple entries. Instead, we could have reduced the number of questions by adding skip logic to the first question, gathering photos only for certain types of submissions.
Test how people might answer your questions by answering them yourself!
The “short answer” format is great when you are looking for just a few words or a phrase. In most instances, participants needed more than the short format’s 140 characters to describe their moment. We should have opted for the more standard “open response” format for our open-ended question.
Consider your analysis upfront.
You’ll save time by including closed-ended questions that ask participants to summarize their own feedback. We did ask users to categorize their feedback as “logistics,” “presentations,” or “networking.” But we did not include a closed-ended question asking whether each entry was positive or negative. As a result, we had to tag each entry for analysis ourselves, a time-consuming step that could have been avoided.
Tailor questions to capture experience in the moment.
Don’t expect respondents to make comparisons across a longer timeframe or against a relative reference point. The weakest question in our mission was “How much value have you gotten from the conference so far?” (see the chart below). We anticipated that ratings would begin near 0 early in the day and (hopefully) trend upward.
Instead, most scouts submitted ratings of 3 and above regardless of timing. The most likely explanation is that it was difficult for scouts to answer this question without knowing how the rest of the day would unfold. It’s also possible that scouts were rating the conference aspirationally, estimating the value they expected to get over the course of the whole day rather than focusing on a given moment. A simplified version of this question would have given us better data while still allowing us to look for trends over time.
RECRUITING + ENGAGING PARTICIPANTS
Recruiting and attrition are real issues for longitudinal research, and this study was no different. We waited until the last minute to reach out to attendees, which meant that only 13% of the roughly 1,000 attendees downloaded the app, and only about half of those participated. On the other hand, two-thirds of the participants submitted more than one entry: a good result given that the conference is a jam-packed day full of distractions.
Learn about your respondents before you launch.
Conference attendees were not part of the dscout panel, which meant they had to install the dscout app and sign up before submitting data. To increase adoption, we chose not to require full account set-up, but this meant a lost opportunity to gather demographic information. Starting the dscout project before the conference to ask about demographics, job roles, etc., would have deepened our knowledge of attendees and complemented our live data (and increased participation rates).
Give your respondents time to onboard.
We don’t know how many potential scouts missed the single invitation email. Once at the conference, participants’ attention shifted to enjoying the conference experience. Earlier outreach could not only have raised awareness of the project but also brought attendees to the conference prepared and excited to participate. With a little more prep time, we could have driven up participation rates among our highly engaged and captive audience.
Minimize attrition through engagement.
As the conference kicked into high gear, our study became less of a priority, a common problem in longitudinal research. We did not plan effectively to remind and incentivize scouts to keep submitting data. Any longitudinal study (even a day-long one) needs an engagement strategy.
ANALYSIS + REPORTING
As expected with any qualitative research, data analysis was time-consuming. The photos were extremely valuable for humanizing the participants and adding context, but they required extra effort to manage and report effectively. There was some learning curve to the dscout interface, but the ability to export the data to Excel and to a photo management program gave us the freedom to create our own reporting structure.
Don’t wait until the end of the study to start analysis.
Though we wanted to use dscout’s tagging function, browsing and individually tagging the day’s 200+ entries would have taken more time than we were willing to invest. Instead, we fell back on exporting to Excel rather than using dscout’s filtering and analysis tools. If this study had spanned days or weeks, a continuous tagging effort would have helped distribute the burden.
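For readers curious what this kind of post-export tagging looks like in practice, here is a minimal sketch of a category-and-sentiment rollup. Everything in it is hypothetical: the column names, the sample entries, and the keyword lists are illustrative stand-ins, not dscout's actual export format or our actual coding scheme, and real sentiment tagging is better done by a human (or captured with a closed-ended question in the first place).

```python
from collections import Counter

# Hypothetical rows standing in for a dscout-to-Excel export; the real
# export's column names and contents will differ.
rows = [
    {"category": "logistics", "entry": "Registration line was too long"},
    {"category": "presentations", "entry": "Great talk on diary studies"},
    {"category": "networking", "entry": "Loved meeting new people at lunch"},
]

# Crude keyword lists stand in for the positive/negative tagging we did
# by hand; a closed-ended question would have captured this directly.
POSITIVE = {"great", "loved", "helpful", "excellent"}
NEGATIVE = {"long", "confusing", "crowded", "late"}

def tag_sentiment(text: str) -> str:
    """Label an entry positive/negative/neutral by keyword match."""
    words = set(text.lower().split())
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"

# Roll up entries by (category, sentiment) for reporting.
counts = Counter((row["category"], tag_sentiment(row["entry"])) for row in rows)
for (category, sentiment), n in sorted(counts.items()):
    print(f"{category}: {sentiment} = {n}")
```

Even a rough automated first pass like this can shrink the manual tagging burden to a review-and-correct step, which matters when you are facing 200+ entries after a long conference day.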
Harness the power of visuals.
Photos were a truly useful complement to the survey data, and added a layer of meaning and detail that words alone could not communicate. Being able to collect visual artifacts of the live experience is critical for any study of services, physical products, or whenever context of use comes into play. This is where a tool like dscout can be invaluable.
We used Photoshop to create photo collages, but this was a manual and therefore time-consuming process. Because photos were a source of rich data, our dscout feature wishlist begins with photo management features such as timeline browsing, collaging, and storyboarding. We would have especially loved to create a visual timeline of the day of the conference.
Based on our experience, dscout turned out to be superior to a home-grown diary or longitudinal study solution in several ways. The mobile app was intuitive and made it quick and easy for scouts to submit data, while the dscout platform reduced the burden of data management for researchers. But what truly distinguished dscout from other research tools, and earns it a permanent place in the UX toolbox, is its focus on visuals and its ability to gather feedback in the moment.
Special thanks to:
- UXPA Boston 2016 Diary Study presenters Michael Kennedy, Liz Burton, and Vicky Morville.
- UXPA Conference Team: Dan Berlin, Fiona Tranquada, Dharmesh Mistry, Rob Fitzgibbon, Susie Robson, Susan Mercer, Chris Hass, Matt DiGirolamo, Chris Laroche, and Jonathan Patrowicz.
Interested in what the UXPA Boston team learned about its conference design? Check out Eva's article on her research insights.