Words by Ben Wiedmaier and Kendra Knight, Visuals by Thumy Phan
Harried by stakeholders, rushed by competing priorities, and limited by resourcing, user research is constantly benchmarked against the ticking clock. But to what extent is this the case in practice? How salient is time for today's user experience researcher? And how are UXRs managing, accounting for, and reacting to expectations around time?
People Nerds wanted to begin putting this puzzle together and launched original research aimed at unpacking some of these questions.
In a series of articles, we'll report on perceptions of time in completing work, expectations from stakeholders, strategies UXRs use to advocate for more time, and the extent to which organizational structure impacts the ability of a UXR to complete rigorous work. In this first report, we outline a benchmark for time-to-complete an average user research project, how that time compares to stakeholder (and UXRs' own) expectations, and what impact project type plays.
We are grateful to the 300 UX practitioners who generously gave their time to complete the survey that informed this work. Without your help these insights would not be possible.
So read on, and try to do so without checking the clock.
- The typical research project—across all project types—takes 42 days (median).
- “Discovery” projects (median = 60 days) take more than twice as long as evaluative projects (median = 28 days).
- When asked about a recent project, 63.1% of researchers said they had "Just Enough Time," 22.4% had "Not Enough," and only 14.6% had "More than Enough."
- When asked which activities researchers wish they had more time for, a majority (51.6%) said analysis. This was followed by recruitment (16.3%), delivery (15.3%), fieldwork (9.5%), and design (7.1%).
- Recruitment, site securing, and operations were the biggest source of project delays (36.3%). Scope creep (19.6%) was the next most common.
- The ramifications of inadequate time were clear to researchers: 76.9% said that, when there’s not enough time, “the full extent of insights are not mined and translated.” Insights get left behind.
How long does it take to finish a research project?
Participants were asked to report the general timeframe it took to complete their most recent project (no matter the size). Most reported their project taking "weeks" to complete (59.3%), with nearly a third reporting months (30.3%). Overall, the median number of days to complete a recent project was 42.
The type of project produced some differences in reported time-to-complete. Specifically, participants selected one of four phases of research to describe their most recent project: discovery, iterative, evaluative, or post-release feedback. Most participants reported on a discovery project (42.1%), followed by evaluative (32.3%), iterative (19.5%), and post-release feedback (6.1%).
When these project types were crossed with median days to complete, differences emerged: discovery projects (median = 60 days) were almost twice as long as iterative projects (median = 35 days) and more than twice as long as both evaluative and post-release feedback projects (both median = 28 days).
How ideal are research timelines?
Do researchers feel they have enough time?
One of the driving forces for this work was to unpack the extent to which user researchers perceive they have "enough" time to complete any one project, and how that perception relates to that of their stakeholders.
Overall, participants mostly reported that they had "Just Enough Time" to complete a recent project (63.1%) followed by "Not Enough Time" (22.4%), and "More than Enough Time" (14.6%).
Do stakeholders feel researchers have enough time?
Participants’ perceptions of stakeholder expectations shifted toward the “more than” end of the scale. Participants reported their stakeholders believed they had “More than Enough Time” (49%) and “Just Enough Time” (48.3%) in roughly equal measure, followed distantly by “Not Enough Time” (2.7%). In other words, while most user researchers (63%) believed they had “just enough” time to complete a recent project and only a small percentage (14.6%) believed they had “more than enough,” nearly half (49%) believed their stakeholders viewed their timeline as “more than enough.”
Importantly, stakeholders themselves were not surveyed—these data report on the perceptions of user researchers about stakeholder expectations. These are still useful to begin benchmarking comparative temporal expectancies.
There were no significant differences across project types for self or stakeholder perception of the time it took to complete. In other words, participants reported roughly the same breakdowns across reported time categories (i.e., “Not Enough,” “Just Enough,” “More than Enough”) for each project type (i.e., Discovery, Iterative, Evaluative, and Post-Release Feedback).
How aligned are we on project timelines?
In all, this suggests that the trend of “I have just enough time, leaning toward not enough” plays out across most types of projects today’s user researcher takes on.
These results track with anecdotal accounts of the stakeholder/user-researcher expectation gap: stakeholders want “answers” ever faster and believe user researchers have ample time to complete requests.
A small percentage of these participants reported their stakeholders believed they had not enough time (2.7%). This might be most indicative of this dialectic tension, especially when compared to the nearly one-quarter of these same researchers who reported not having enough time (22.4%). Expectations can often be honed through rapport-building and ongoing partnership—it would therefore be instructive to learn how long these researchers have worked with and delivered insights to the stakeholders on whom they’re reporting.
To what extent does a relationship mediate or moderate these differences? Are more senior user researchers better able to expectation-set with their stakeholders compared to more junior and new-to-field folks? These are useful questions to explore in future research.
What research activities take the most time?
Participants were similarly asked about the typical phases of a project (i.e., design, recruitment, fieldwork, analysis, and delivery) and which of these phases contributed to the time it took to complete a recent project.
Specifically, participants reported that analysis took the most time on a recent project (32.7%), followed by recruitment (26.6%). Fieldwork (17.5%) and design (16.5%) were next most-reported, with delivery (6.7%) reported as taking the least amount of time.
What activities do researchers need more time for?
Additionally, participants reported which of these same phases they wished they had more time for. This question was also posed for research projects in general (i.e., not just a recent project, but their practice overall).
When asked which phase they would have wanted more time for in a recent project, nearly half (49.7%) reported analysis. Interestingly, delivery was second (16.3%), followed by recruitment (13.6%), fieldwork (10.9%), and finally design (9.5%). For projects overall, the trend was replicated, with analysis at over half (51.5%), followed by recruitment (16.3%), delivery (15.6%), fieldwork (9.5%), and design (7.1%).
This trend—of researchers simultaneously reporting that analysis takes the most time and that they don’t have enough time for it—was largely replicated when examining across project types, save for two exceptions.
Namely, for both iterative and evaluative projects, participants reported a more equal distribution of time across all phases (sans delivery, which was still reported as taking the shortest amount of time). This trend was not the case in discovery or post-release feedback project types.
Overall, analysis—both for a recent project and for one’s practice generally—seems to be the phase that both needs the most time and feels the most rushed, creating quite a quandary for user researchers.
Analysis—rendering meaning and applicable insight from data, however gathered—is arguably the most critical aspect of a UXR’s charter, yet it is still sorely lacking in time. However the field got here, it’s plain that this small sample, at least, is craving more time to focus on the aspect of their practice that most showcases their value and expertise.
One final note: it is striking to see the consistently low percentages (usually no more than 5%) for “delivery and share out.” Translating hard-earned insights for audiences—via empathy-generating stories, workshops, or reports of all kinds—is hinted at as something UXRs would like more time for, but don’t currently prioritize.
Again, many UXRs juggle multiple projects concurrently, and once data are analyzed, it’s possible the delivery phase does not receive its just deserts due to the scoping of new projects, fatigue, or a combination of both. This is speculation, to be sure; however, socializing findings is what helps “activate” data and truly turn a “finding” into an “insight.” UXRs are still searching for the time to devote to this critical aspect of their practice.
What causes project delays?
Time constraints have the potential to weaken user research’s organizational impact. Participants were asked about the reasons for these constraints.
Results spotlighted the importance of research operations professionals: “Recruitment, site securing, operations aspects” was far and away the most-selected response, with 36.3% citing it as the reason for their project’s time-to-completion. The next closest reason was “scope creep” at 19.6%.
Other reported bottlenecks included “Assets for research” (i.e., waiting on a prototype or concept—11.3%) and “Approvals/legal/IP sensitivity” (10.6%). Issues like communication complexities (between clients, stakeholders, or teams), tooling, and budget were each reported at barely above 1%. For this sample of user researchers, the operational aspects of “doing” the work were the bottleneck to on-time completion of projects.
As user experience research matures and expands, these data suggest that leaders—both of research teams and organization-wide executives—should prioritize the operations of insights gathering. Streamlining and organizing repeated aspects of the work—such as recruitment and incentives, site management (where applicable), and variables like software licensing, templates, and repositories—will set companies up to more successfully harness the power of a user research function to meet long-term goals... and to do so at a pace that “keeps up” with competitors. These operational improvements free user researchers to tackle the phase of their work most harmed: analysis, which, as explored above, has real implications for the impact of such work.
What are the consequences of not having enough time?
A multiple-select prompt was created to unpack the negative externalities of reduced project time. Despite offering participants an "Other" option, most chose from the provided responses.
The results show the full cost of harried, rushed, or shortened work: a full three-quarters of the sample (76.9%) selected “The full extent of insights are not mined and translated,” demonstrating in stark terms that less time may also mean insights “left behind” and new opportunities under- or unexplored.
Three options were each selected by nearly 60% of participants: 1) “The rigor of the research design/method,” 2) “The creativity in approach/design/method used,” and 3) “The sample might not be as diverse or representative.” Completeness, rigor, creativity, and diversity are all affected—according to these participants—when time to complete a project is reduced.
Other reported effects include “The impact of the work within/across the organization” (31.8%) and “The visibility with stakeholders or collaborators” (20.7%).
These findings align with other themes surfaced in this report—specifically, that analysis is the area UXRs have little, but want more, time for. Here again, when asked to select from a list of pernicious effects, these participants raised the spectre of insights, recommendations, learnings, and innovation “left on the cutting room floor,” swept aside for—ostensibly—the next “critical” project and its “scrappy” timeline.
Method: Study Design and Sample
This study employed an online survey, fielded via Google Forms. The survey contained three sections:
- The time expectations on a recent project
- The time expectations of projects in general
- Contextual demographics (e.g., role type, industry)
The survey was shared via online communities like newsletters, Slack, and on social media. Data collection took place over roughly two weeks in August 2021. Participants were not compensated for their responses; many indicated an interest in seeing results for their organization.
In all, 300 participants were recruited. They represented a broad swath of industries, with "technology" the largest at nearly one quarter (25.2%). The sample is also dominated by self-identified practitioners (78.9%), although research-consuming collaborators (10.1%) and research team leaders (9.7%) are also present in the data. Breakdowns of industry and role are presented in the appendix below.
Sample Contextual Demographics
Which best captures the industry in which you currently or most-recently worked?
- Technology (25.2%)
- Consumer (9.9%)
- Software (8.2%)
- Consumer Products (7.8%)
- Financial Services (7.5%)
- Education (5.8%)
- Healthcare (4.4%)
Which of the following best describes your current or most-recent role?
- I conduct research as a primary job function (78.3%)
- I engage with research as part of my job function (10.3%)
- I direct a team focused on research initiatives (10%)
Which of the following best describes your current or most-recent team structure?
- Embedded Within a Team: Researchers are part of a single department (32.1%)
- Solo: I'm my company's only UXR (16.2%)
- Embedded across teams: Researchers are housed across different teams (15.9%)
- Hybrid: Some researchers are centralized, some are embedded (15.2%)
- Central agency: Different departments come to us with requests (14.2%)
- Freelancer (6.4%)
Roughly how many projects do you/your team complete in an average quarter?
- x̄ = 15.86, median = 4, mode = 3
Which team or business unit are you delivering research to most often?
- Product, Design, UX/CX (62.3%)
- Executive Leadership (19.3%)
- Customer Success, Support (10.5%)
- Engineering, Data Science, Development (4.3%)
- Sales, Marketing, Account Management (4.3%)
Download the Full Report + Time-Buying Strategy Assessment
Ben Wiedmaier is a content researcher/producer at dscout where he spreads the “good news” of contextual research, helps customers understand how to get the most from dscout, and impersonates everyone in the office. He has a doctorate in communication studies from Arizona State University, studying “nonverbal courtship signals”, a.k.a. flirting. No, he doesn’t have dating advice for you.
Kendra Knight is an associate professor of communication studies. She completed her MA and PhD in human communication at Arizona State University. She specializes in work/life communication, “casual” sexual relationships and experiences (e.g., friends with benefits), and interpersonal conflict and transgressions.