Introduce Small Studies to Your Org with the Bento Box Program
If your organization has high demand for research and limited staff, the "bento box" training program might be right up your alley.
After reading this headline, you might be scratching your head and thinking, “What the heck does a Japanese lunch have to do with user experience research?”
As it turns out, a bento box is a good analogy for a lot of things—not the least of which is a UX research program designed to help non-researchers do small, “bite-sized” studies themselves. Just as a bento box has all the essentials for a healthy meal, these programs provide the necessary instructions to complete a study from beginning to end.
Perhaps someone else originally created this idea, but we first heard of it through Julie Norvaisis, who was the VP of UX Research at LinkedIn at the time. Her team started by putting together what she called a “bento kit” for designers when faced with high demand for research and limited staff. That kit grew into a full-blown standardized training program at LinkedIn, a path our team followed as well.
We now have a formalized program for those who want to conduct certain kinds of research studies, complete with coaches and supporting materials to assist amateur researchers with their studies. This program took a lot of time and effort to spin up, and in hindsight we’ve learned some good lessons that we’d like to share with other teams that want to do something similar.
How the need arose
I’ve written before about how our UX research team rapidly grew over the past few years, aligned with our company’s focus on delivering user-centered digital experiences. And just like those things that you never knew you needed but you now can’t live without—the internet, mobile phones, edible burrito tape—the demand for UX research quickly grew as more and more people saw the value of having a researcher on their teams.
Rather than putting a damper on their enthusiasm by turning away research requests left and right, we thought that we should have a plan for democratizing research: empowering those who weren’t full-time researchers but who had some background in UX research and were interested in learning some DIY methods.
At first, we thought this could be boiled down to a “one-pager” or something similarly brief. However, after a group discussion with some researchers who volunteered to help with the request, we quickly determined that even providing the basics for do-it-yourself research would be far more complicated than that.
The researchers formed a working group of five people. We wanted to responsibly teach others to fish without oversimplifying, and a fortuitous panel conversation with Julie Norvaisis, along with conversations with researchers at other companies, led to the decision to launch our UX research bento program. The program was also modeled on similar company initiatives intended to provide people with practical training on topics such as A/B production testing and accessibility.
How we did it
This dedicated group of researchers donated lots of time and energy to create the materials for the program. We started out by dividing up the training into two parts, one for orientation and another for practice.
For the Level 1 training, the purpose was to:
- Introduce and explain the program
- Provide a high-level overview of UX research
- Define the program’s focus on bite-sized research
- Explain where this opportunity might arise in the design process
- Offer a research ethics primer
- Give participants practice defining appropriate study questions
We designed Level 2 to be the hands-on phase, where participants would plan, conduct, and share the results of a study with the assistance of a research coach.
One of the decisions we made early on was to focus on unmoderated usability tests using one of our research tools. We didn’t want to expose our clients to untrained interviewers. While some people have good interview skills, others have a harder time with that role and can come across as stiff, nervous, and inauthentic. Because a key part of conducting quality moderated research is establishing rapport and putting participants at ease, we decided to remove moderated testing from the program.
Another benefit of focusing on unmoderated testing was that it was simpler to control quality. Moderated testing would have required much more time and energy for oversight, whereas with unmoderated studies, coaches could easily review and edit the protocols asynchronously.
Our UX team decided it made sense for all the non-researchers on the team to attend a Level 1 training session. This ensured that everyone understood the program and its parameters and could effectively determine whether they would want to do such studies. After that, those interested could sign up for a Level 2 research study at any time by following a defined online process, after which they would be assigned a study coach to get the process started.
We started out with a pilot group so that we could test and tweak the workshop agenda and materials. Next, we conducted a few workshops to get over 100 participants through Level 1, plus a handful who wanted to move on to Level 2. We then worked with our training team to turn the program into web-based training videos and guides, allowing us to scale the process of getting future amateur researchers started.
How paired studies work
For each study, the non-researcher submits a request to do a UX research bento study. The request form includes:
- The type of question they are trying to answer (e.g., usability, findability, comprehension)
- The actual research question
- Details about the project and product manager involved
The UX research team maintains a list of coaches who have signed up for various quarters, and research operations manages the assignments. Wherever possible, coaches are assigned to projects they are working on or have a history with, to make the relationship as fruitful as possible.
The studies were intended to be planned and completed in four weeks or less, typically taking about a week of the non-researcher’s time and five to fifteen hours of the coach’s. The coach provides a study template and structure for the participant as they go through the phases of a study, including:
- Planning the approach
- Setting up the test
- Launching the study
- Analyzing results
- Sharing key takeaways
The project team set up instructions for the coaches for each of these phases, along with a to-do list and references at every step. For example, in the planning phase, action items include having the participant review relevant training on our research tool, practice setting up a screener, and create the test plan.
Supplemental materials include help on creating actionable tasks, doing various types of tests in our research tool, and guidance on test planning. The UX research team also hosted regularly scheduled coaching office hours to support coaches with any questions or issues that came up during their studies.
Lessons we learned
The definition of bite-sized is challenging for some
We took a good chunk of time in Level 1 to clearly explain the kinds of research questions that are suitable for a bento study and then quizzed participants on whether certain types of research questions were or were not appropriate. However, we still found that both in the quizzes and afterward in Level 2 proposals, participants struggled to define the right kind of questions.
Some of the issues we saw included proposing studies that were:
- Too large for a brief study, like those encompassing a long process flow
- Applicable to the discovery phase of the design cycle rather than evaluative
- More well-suited for marketing research
- Better answered using qualitative research methodologies
Beware of the planning fallacy
Researchers have identified a ubiquitous cognitive bias called the planning fallacy. This was defined by Kahneman and Tversky as, “the tendency to underestimate the time, costs, and risks of future actions and at the same time overestimate the benefits of the same actions.”
Well, I guess we fell prey to this bias, because we did not expect the project would take so much time and effort, both on our side in planning and rolling out the program, and on the participants’ side doing their own studies with coaching.
On the planning and delivery side, it took more time than we expected to determine what should and should not be included in the program, along with managing test runs and multiple revisions. Also, putting over 100 participants through the training took significantly more time than we originally thought, since we had initially anticipated a smaller voluntary audience.
We decided later on in the process to put everything online for future participants, so this also took more time and assistance from others in the company. Finally, we estimated that we could do the Level 2 studies in four weeks. In reality, it has taken about six weeks from beginning to end, for reasons that will be explained in the next lesson learned.
Not a one-and-done approach
Initially, we thought that participants could start doing their own studies after completing a Level 2 study, but we quickly realized some drawbacks to this approach. For one, becoming a proficient user of a complex research tool takes a fair amount of time and practice.
Doing an unmoderated study from beginning to end using the research tool we selected is not something that’s easy to learn, much less remember a few months down the road if it’s not your day job. If participants want to do something slightly different than the last time, it also takes deeper understanding and awareness of the tool’s feature set.
Another consideration for anyone using third-party tools is the licensing agreement, which may cap how many studies you can run. In those instances, having a bunch of non-researchers launching tests can cause traffic jams in the tool for the staff researchers.
We wanted to be sure to prioritize access for the research team. Because the tool we use doesn’t have a built-in traffic management feature, this would be another manual process to add to the research team’s already full plate (or should I say, bento box).
Finally, while some participants might have had more background and experience in UX research and did not need a lot of guidance, others were brand new to running research studies. So, doing one study simply wasn’t enough training to allow them to operate fully independently in an effective manner.
Unfortunately, this limits the ability of the program to scale—at least until the same participants have done enough studies that they feel comfortable working independently. On the other hand, they should require less and less guidance as they complete additional studies, so the time involved should go down. We are working on a way to measure where participants are in their development and a process for letting them work on their own.
This led us to conclude that, at least in its current state, our program is more of an educational program rather than a reliable research delivery mechanism.
Don’t train those who don’t want to, or will never have time to, do research
As mentioned before, we thought it would be best to have everyone on the UX team go through the training. However, if that takes a lot of time, make sure you’re comfortable with the fact that most participants won’t go on to do their own studies. Your mileage may vary, of course, depending on the factors involved.
In our experience, only about one fifth of those who took the Level 1 training went on to complete Level 2. Even among those who did Level 2, the studies often took longer than expected, as mentioned before. One reason is that these people have their own day jobs, and this is side work for them.
Things can come up during the bento process that pull participants away from the study, especially when the study is not something that is required or expected of them. This is an understandable and predictable occurrence given that we all tend to—and should—focus on our primary job duties first.
Change management is key
Our team got a great assist from the opportunity to partner with someone on the company’s change management team throughout the process. We don’t have staff members with “official” time dedicated to maintaining this program, and inevitably the program will and should evolve over time. Doing this with good oversight is very important to the success of the program.
Our change management partners have been critical to our planning and tactics for evolving the program and increasing adoption. They have also helped us more effectively measure our progress and impact.
Should I create a research bento program?
After all these lessons we learned, you might wonder whether it would be worth it to do something similar in your organization. The answer is, of course, it depends. Every organization is unique, and each has different staff compositions of people with various backgrounds.
You might have a company that’s heavy with people eager to do their own research but need some guidance and structure. Or you might have a model where you have a smaller number of researchers that have been hired to guide and assist others doing their own research studies.
Your user population might also be different, enabling you to take a different approach than the one described above. For example, maybe short moderated or guerrilla studies would be a better fit. Your product might have been on the market for a long time and be in fine-tuning mode, which is ideal for bite-sized studies.
All sorts of factors can affect the success of this type of program. The answer is up to you; we just hope to give you a sense of the challenges to anticipate in your journey. However, we can tell you the situations in which this program has been most effective for us.
When the UX bento program worked best
One effective application of the program we’ve observed is for teams that have a lot of smaller but important research questions, such as content comprehension. In parallel, the researcher’s time is prioritized for more complex, multifaceted questions. This is particularly effective when a content strategist is assigned to the project who can quickly run with these types of investigations.
The second situation has been with designers that have both bandwidth and a strong interest in and aptitude for learning and conducting research. They will more easily identify appropriate research questions, prioritize time to do studies, and do enough of them to become qualified and experienced.
Often, UX practitioners have worn multiple hats in their past positions, doing both design and research in their regular duties. Participants with those backgrounds can be ideal candidates for these types of programs.
Another optimal situation is when one researcher is on a team with a few designers and a content strategist, and they all work very closely together. If one of the non-researchers wants to do research as described in the previous example, it can be a lot faster for the researcher to identify and carve off smaller research questions for a bento study. The researcher can then serve as a coach efficiently and effectively, having already established a good working relationship with the amateur researcher.
If you give it a try, I would love to hear how it goes for you. Also, please read some of the other published case studies of democratizing research, where you’ll see that other teams have implemented their processes differently and one approach may be better for you than another.
Molly is a User Experience Research Manager in the financial services industry. She has a master’s degree in communication and has over 20 years of experience in the UX field. She loves learning more about how people think and behave, and off-work enjoys skiing, reading, and eating almost anything, but first and foremost ice cream.