
The Goldilocks of Project Scoping: A 3-Part Framework

Well-scoped research sharpens a team’s ability to address critical questions and drives meaningful progress. Here's how to get it just right.

Words by Connor Joyce, Edited by Kris Kopac, Visuals by Allison Corr

As I reflect on 2024, I find myself revisiting the successes and lessons learned during my time as a research lead. Over the past year, I’ve experienced both exhilarating victories and significant challenges in managing my workload.

Working on a horizontal team, my research doesn’t just answer questions—it serves as a foundation for aligning diverse teams around critical decisions. With far more requests than I can feasibly address, I’ve had to master the art of prioritization. Success isn’t about doing as much research as possible; it’s about identifying what truly needs to be answered and tailoring efforts to fit those needs.

Last year, I learned hard lessons about over- and under-scoping research. At times I over-invested in some areas, getting lost in understanding every detail, while neglecting others, where decisions were made without adequate insight, leading to avoidable issues. These experiences underscored the importance of a structured approach to scoping research.

Looking ahead, I’m taking a more intentional approach to planning and project tracking. This article introduces the three-part framework I’ve developed to evaluate levels of evidence, balance rigor with feasibility, and define the scope of insights. By sharing this framework, I hope to provide practical guidance for researchers facing similar challenges.

“Success isn’t about doing as much research as possible; it’s about identifying what truly needs to be answered and tailoring efforts to fit those needs.”

Connor Joyce
Senior User Researcher, Microsoft Copilot Team


Balancing insight demand with limited resources

In our dynamic and rapidly evolving UX landscape, research teams face a challenging paradox. The demand for actionable insights continues to grow, yet the resources available to deliver them remain limited.

The ability to scope the appropriate level of research for a given need has become not just a skill, but a strategic advantage. Teams must balance depth with breadth while ensuring that insights are timely and directly relevant to critical decisions.

As researchers, we are often tasked with solving a complex equation: delivering impact while navigating significant constraints. The stakes are high.

Poorly scoped research risks:

  • Wasting time and resources
  • Leading to decisions based on incomplete information

On the other hand, well-scoped research:

  • Sharpens a team’s ability to address critical questions
  • Aligns stakeholders around clear outcomes
  • Drives meaningful progress

The Goldilocks Scope Framework

Through my journey to find this balance, I’ve been fortunate to learn from sharp research operators who have shared invaluable knowledge and guidance. Instead of relying solely on predefined frameworks, I’ve developed an instinct shaped by their coaching and management, often embodied in powerful advice like:

“Think through how you can scale the things you are most effective at.”

And: “Keep pushing yourself to find what insight will be just enough for the team to make the decisions they need, for as many teams as possible.”

This guidance, along with my own experiences, has crystallized into a mental framework that, when articulated, became the three-part approach to scoping research effectively:

  1. Evaluating the level of evidence
  2. Balancing methodological rigor and practical feasibility
  3. Defining the scope of an insight

By walking through and mapping out these three steps, researchers can design studies that meet their team’s needs and constraints. The framework helps teams avoid two common pitfalls: over-investing to placate one stakeholder while leaving the rest without answers, and under-investing when a team really needs direction, missing opportunities to make real change.

Taken holistically or for its parts, this framework ensures that every research effort—whether narrowly focused or broadly strategic—is purposeful and impactful.


1. Defining and evaluating the levels of evidence

The first variable to consider when scoping research is the level of evidence required. This refers to how fine-grained or expansive the insight needs to be. Some insights are highly targeted, addressing specific areas within a feature or workflow. Others are broader, spanning patterns across multiple touchpoints or even the entire user experience.

Understanding the level of evidence needed helps researchers tailor their efforts appropriately, ensuring relevance without overextending resources. The concept of Atomic Insights—a widely used structure for breaking down research findings into discrete and reusable building blocks—provides a helpful framework for categorizing these levels.

These atomic insights can then be aggregated into patterns or trends, creating a layered structure for informed decision-making. Whatever framework you use, I believe researchers should start by determining whether they need insights at the Micro, Meso, or Macro level.

✔ Micro-level insights

Micro-level insights are the most granular, focusing on specific components or interactions within a feature. These insights are synonymous with atomic insights, as they are discrete building blocks that can be reused to inform larger patterns. By breaking down a user problem into its smallest actionable components, researchers can uncover precise opportunities for improvement.

Example: A meditation app might discover through usability testing that users struggle with the onboarding flow. A micro-level insight might suggest introducing a new screen that gauges someone’s experience with meditation during onboarding, helping to personalize the user’s journey from the start.

✔ Meso-level patterns

Meso-level patterns connect related micro-insights to reveal actionable themes across a feature set or workflow. These insights provide a more contextualized understanding of user needs and challenges, enabling teams to see connections that might not be obvious from isolated observations.

Example: The same meditation app might recognize through user interviews and behavioral data that understanding an individual’s background is crucial for tailoring the early product experience. This meso-level insight highlights a broader need for personalization that extends beyond onboarding.

✔ Macro-level research patterns

Macro-level insights operate at the strategic level, synthesizing meso-patterns to inform decisions that impact the overall product strategy. These insights identify trends and principles that should consistently guide decision-making across the user journey.

Example: For the meditation app, a macro-level insight could be that the user’s level of experience with meditation is a key factor throughout their journey, influencing not just onboarding but engagement, retention, and long-term satisfaction. This insight might lead to a strategic decision to prioritize personalization as a core value across the product.

Starting a project by defining the required level of evidence is a vital first step, as it clarifies the decisions that need to be informed and provides a clear direction for the study’s objectives. By identifying what insights are necessary to address the team’s questions, researchers can ensure their efforts remain focused and relevant.

This foundation not only aligns the research with the team’s goals, but also helps determine which methodologies are best suited to generate the required evidence effectively.
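The micro-to-macro layering can be sketched as a simple hierarchy, where each higher level aggregates the insights below it. The classes and example insights here are my own illustration, not part of any published Atomic Insights tooling:

```python
from dataclasses import dataclass, field
from enum import Enum

class Level(Enum):
    MICRO = "micro"  # a single component or interaction
    MESO = "meso"    # a theme across a feature set or workflow
    MACRO = "macro"  # a product-wide strategic pattern

@dataclass
class Insight:
    summary: str
    level: Level
    supports: list = field(default_factory=list)  # lower-level insights this aggregates

# A micro (atomic) insight from usability testing
micro = Insight("Users stall when asked about their meditation experience", Level.MICRO)

# A meso pattern connecting related micro insights
meso = Insight("Early personalization depends on knowing the user's background",
               Level.MESO, supports=[micro])

# A macro insight synthesizing meso patterns into product strategy
macro = Insight("Meditation experience shapes the entire user journey",
                Level.MACRO, supports=[meso])
```

Tracing `macro.supports` back down the chain recovers the atomic observations that justify the strategic claim, which is exactly the reusability the Atomic Insights structure is meant to provide.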

“By breaking down a user problem into its smallest actionable components, researchers can uncover precise opportunities for improvement.”

Connor Joyce
Senior User Researcher, Microsoft Copilot Team

2. Balancing methodological rigor and practical feasibility

Regardless of the level of insight being pursued, research always involves a tradeoff in determining the level of investment required to create evidence the team will accept. This balance is critical, as teams must weigh the importance of producing robust and reliable evidence against the feasibility of completing the research within time and resource constraints.

At the same time, it’s essential not to over-invest in highly advanced research methodologies when simpler approaches could provide sufficient answers. Striking this balance ensures that the research is impactful, efficient, and tailored to the decision-making needs of the team.

The fidelity of evidence exists on a spectrum. At one end are methods that generate anecdotal evidence, such as unstructured user interviews or informal observations of behavior. These approaches are highly feasible and quick to implement, but may introduce bias or lack the depth needed for critical decisions.

On the opposite end are methodologies like randomized control trials (RCTs), which provide the highest level of validity and rigor. These methods minimize bias and offer robust, generalizable insights, but they require significant resources, time, and expertise to execute.

Between these extremes lies a range of options, including surveys, moderated and unmoderated usability testing, and heuristic evaluations.

Each method strikes its own balance between validity and feasibility. Making tradeoff decisions between research approaches should focus not solely on resource availability but on aligning the fidelity of evidence with the significance of the decision.

Example: If the meditation app team is evaluating a critical feature like personalizing the onboarding experience, a higher-fidelity method such as moderated usability testing may be necessary to ensure the solution effectively addresses user needs.

On the other hand, for lower-stakes decisions, like refining minor UI adjustments on the guided-meditation playback screen, quicker and more flexible methods such as informal interviews or heuristic evaluations may provide sufficient evidence.

By carefully balancing rigor and feasibility, the meditation app researchers can ensure their efforts are both efficient and impactful, delivering just enough evidence to meet team expectations without overcommitting resources.

This thoughtful approach prevents wasted effort and ensures the team remains aligned with the required degree of fidelity. It also sets the stage for the final step: defining the specific features and elements that will be tested, ensuring the research drives meaningful outcomes.

“At the same time, it’s essential not to over-invest in highly advanced research methodologies when simpler approaches could provide sufficient answers. Striking this balance ensures that the research is impactful, efficient, and tailored to the decision-making needs of the team.”

Connor Joyce
Senior User Researcher, Microsoft Copilot Team

3. Defining the scope of an insight

The third variable to consider when scoping research is the scope of insight. This refers to the level within the product system where the feature or issue resides, which directly impacts the scale and complexity of the research effort.

Scope ranges from focusing on a single feature to a feature set, and finally to the entire product. Understanding the scope is crucial for determining the depth of analysis required, the systemic factors to consider, and the breadth of context needed to deliver actionable insights.

✔ Narrow scope

At the narrowest level, a single-feature scope targets one specific element of the product. This might involve refining a particular button, screen, or workflow. For instance, if the research focuses on improving the search bar functionality, the insights generated will be tightly focused on usability, design, or performance issues related to that feature alone. Single-feature research is often fast and focused.

Best for: Tactical decisions or incremental improvements

✔ Mid-level scope

At the mid-level, a feature set scope examines a group of interrelated features that collectively influence a particular user experience. For example, research might look at how a series of onboarding screens work together to guide new users. This level of scope requires researchers to account for the relationships between features, ensuring that changes made to one part of the set enhance, rather than disrupt, the others. Feature set research balances specificity with a broader context.

Best for: Addressing more complex challenges

✔ Broad scope

At the broadest level, an entire product scope considers the product as a whole, encompassing all features and workflows. This level is appropriate for systemic evaluations, such as assessing the overall accessibility of the product or identifying key user journey pain points. Entire-product research often requires extensive context and significant resources but provides a holistic view.

Best for: Guiding strategic decision-making and long-term planning

What to look at when considering scope

Considering scope is essential for aligning research efforts with project goals and available resources.

The scope directly influences the…

  • Scale of the research
  • Types of methods employed
  • Level of collaboration required across teams

By defining the appropriate scope early, researchers can ensure their work is both manageable and impactful, addressing the right level of insight to drive meaningful outcomes.

Example: To complete the meditation app example, the team (who at this point has decided to work on the onboarding flow) might narrow their scope to a single feature, such as the initial onboarding screen, where they need to address specific usability challenges.

Alternatively, they could expand their scope to a feature set, such as the onboarding flow through the user’s first meditation. Here, they would aim to understand how these elements work together to personalize the user experience.

In some cases, a broader product-level scope might be necessary, such as evaluating personalization across the entire user journey, to inform strategic decisions. By clearly defining the appropriate scope, researchers can ensure their efforts are targeted, efficient, and aligned with the team’s needs, laying the groundwork for meaningful and impactful insights.


Determining the right research for the need

Identifying the variables of a research need (level of evidence, fidelity, and scope) is essential for ensuring that the research is scoped correctly. Without a clear understanding of these variables, teams risk over- or under-investing in research, producing insights that either fail to inform decisions or exceed what is necessary.

To align research efforts with the team’s goals, a structured three-step process is recommended.


Step 1: Establish a clear definition and purpose

A highly effective way to achieve this is by using the User Outcome Connection Framework. This framework matches specific behaviors a feature is designed to change with the desired user outcomes and, ultimately, the business outcomes these changes support.

For example, a feature in a fitness app may aim to…

  • Encourage daily activity (behavior)
  • Improve user health habits (user outcome)
  • Increase long-term retention (business outcome)

By defining these connections, researchers can better understand what the evidence they generate will do for the product team. This ensures that the research directly supports key decisions, making it clear what success looks like and how insights will guide product improvements.
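One lightweight way to keep these connections explicit is to record them as structured data alongside the study plan. The sketch below is my own shorthand, not an artifact of the User Outcome Connection Framework itself:

```python
from dataclasses import dataclass

@dataclass
class OutcomeConnection:
    feature: str
    behavior: str          # the behavior the feature is designed to change
    user_outcome: str      # the benefit the user should experience
    business_outcome: str  # the business result the change supports

# The fitness app example, captured as a single connection
reminders = OutcomeConnection(
    feature="Daily activity reminders",
    behavior="Encourage daily activity",
    user_outcome="Improve user health habits",
    business_outcome="Increase long-term retention",
)
```

Written down this way, the connection doubles as a success definition: the research exists to show whether the behavior change actually occurs and whether it plausibly drives the stated outcomes.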


Step 2: Assess the research need

The second step is to assess the research need by defining the feature in question and evaluating three key categories of research variables:

  1. Level of evidence
  2. Fidelity of evidence
  3. Scope

Continuing with the fitness app example, suppose the team wants to refine a feature that sends daily reminders to encourage activity. The level of evidence determines the granularity of insights needed—whether it’s a micro-level understanding of how users interact with reminder notifications or a macro-level exploration of how reminders impact overall app engagement and retention trends.

The fidelity of evidence balances the rigor of the research methods with feasibility. For example, if the fitness app team is testing a critical change, such as a new algorithm that personalizes reminder times, they may opt for a more rigorous method like A/B testing to ensure the impact is measurable and reliable.

Conversely, for lower-stakes adjustments, such as tweaking the language of reminder messages, quicker methods like informal surveys or unmoderated usability tests may suffice.

Finally, the scope of evidence clarifies whether the research focuses on…

  • The specific feature (reminder notifications)
  • A related set of features (notifications and progress tracking), or
  • The entire product (the app’s overall ability to encourage consistent activity).

If the goal is to refine just the reminder functionality, a single-feature scope is appropriate. However, if the team wants to understand how reminders interact with progress tracking and goal-setting features, a feature-set scope may be more effective.

By carefully evaluating these variables, the fitness app team can align their research approach with the specific needs of the feature and the decisions the insights are meant to inform. This structured assessment ensures that their efforts are both focused and impactful, paving the way for actionable outcomes that directly support their goals.
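The assessment above can be captured per study as a small record, making the scoping decision explicit and easy to review with stakeholders. The enums and names are my own illustration, assuming the fitness-app reminder example:

```python
from dataclasses import dataclass
from enum import Enum

class EvidenceLevel(Enum):
    MICRO = "micro"
    MESO = "meso"
    MACRO = "macro"

class Fidelity(Enum):
    ANECDOTAL = 1  # informal interviews, ad-hoc feedback
    MODERATE = 2   # surveys, unmoderated usability tests
    RIGOROUS = 3   # A/B tests, randomized control trials

class Scope(Enum):
    SINGLE_FEATURE = "single feature"
    FEATURE_SET = "feature set"
    ENTIRE_PRODUCT = "entire product"

@dataclass
class ResearchNeed:
    feature: str
    level: EvidenceLevel
    fidelity: Fidelity
    scope: Scope

# The reminder-algorithm example: a micro-level question about a single
# feature, where the stakes justify a rigorous method such as an A/B test.
reminder_study = ResearchNeed(
    feature="Personalized reminder times",
    level=EvidenceLevel.MICRO,
    fidelity=Fidelity.RIGOROUS,
    scope=Scope.SINGLE_FEATURE,
)
```

A lower-stakes study, like tweaking reminder wording, would simply swap in `Fidelity.ANECDOTAL`; the point is that the tradeoff is named before the study is designed rather than discovered afterward.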


Step 3: Designing the study using minimum viable research

Minimum viable research (MVR) involves scoping the research effort so it does not produce insights that are more extensive or detailed than needed. Instead, the goal is to deliver evidence that is “just right”: precisely enough to answer the critical questions at hand without over-investing time, resources, or effort in excessive detail or data.

For example, if the fitness app team is making a low-risk decision, such as refining the wording of a reminder notification, a quick and simple method like unstructured interviews or informal user feedback is enough to gather sufficient insights.

However, for a higher-stakes decision, such as launching a new workout recommendation feature, more robust methods like usability testing or A/B experiments would be necessary to ensure the solution meets user needs effectively.

By focusing on delivering only the research required to meet the need, MVR ensures that the research team remains efficient while providing the team with the confidence and clarity needed to move forward.


Wrapping it up

Scoping research effectively is both an art and a science, requiring researchers to balance the needs of their teams, the resources available, and the complexity of the problems they aim to solve.

By carefully considering the level of evidence, fidelity, and scope, researchers can tailor their efforts to produce meaningful and actionable insights. However, delivering impactful research does not end with designing the right study. To truly excel, researchers must also track the outcomes of their work and continuously refine their practices.

A key part of this process is monitoring how research outputs align with stakeholder expectations. This involves comparing the initial goals and needs of the team with the actual impact the research has on decisions and product development.

If there are gaps between what stakeholders hoped for and what was delivered, these should be viewed as opportunities for improvement. Over time, this feedback loop helps researchers fine-tune their methods, ensuring that future studies are even better aligned with the team’s needs.

Refinement also involves revisiting the frameworks used to scope research. Teams evolve, as do the challenges they face. What worked for scoping research last year may no longer be sufficient. Regularly reassessing the processes for defining features, determining research needs, and creating studies ensures that practices remain relevant and adaptable.

Finally, researchers should embrace a mindset of learning and iteration. Even the best-scoped study might reveal unexpected challenges or areas for improvement. By viewing each project as a chance to learn—not only about users but also about the research process itself—teams can continuously elevate their ability to deliver value. In doing so, research becomes not just a tool for understanding users, but a strategic driver for product and organizational success.



Connor is a Senior User Researcher on the Microsoft Copilot Team, where he is shaping the systems behind designing and developing AI-enhanced features. Additionally, he is the author of Bridging Intentions to Impact, a book encouraging product teams to create solutions that drive behavior change and positively influence user outcomes.
