The Challenge of Evaluating Research Ethics

Though created to protect participants, company Institutional Review Boards can do more harm than good. But if teams make them less formal, both participants and researchers can benefit.

Words by Alba Villamil, Visuals by Allison Corr

dscout has partnered with HmntyCntrd, an award-winning community that's transforming what it means to be human-centered in our professional and personal lives. We've collaborated on original research and shared insights from HmntyCntrd contributors.


This piece is one of those contributions. Here are the others:

  • The Systemic Under-Rating of Rest, Silence, and Hobbies
  • Turning Devil’s Advocate on Devil’s Advocacy
  • When Blackface Goes Digital
  • Advocating for People in a Profit-Driven World
  • 4 Reasons Why Tech is Political (And What We Can Do)

Content notice: This piece briefly mentions medical racism, sexual assault, immigration deportation, homophobia and transphobia.


How do we ensure our user research is ethical?

Codes of ethics. Checklists. Manifestos. It seems like every month a new set of principles is published to help user researchers navigate ethical challenges.

But the problem with ethical codes is that they don’t actually hold user researchers accountable.

Recognizing this, organizations are starting to implement internal ethics committees for user research. Modeled on Institutional Review Boards (IRBs), which play a critical role in medical and academic human research, these committees review research projects for potential ethical missteps and give participants a way to report researchers for inappropriate or dangerous behavior.

IRBs aren’t ethical silver bullets.

Despite their important role in human research, IRBs aren’t perfect. Not only are these committees time-consuming for researchers, but their decisions can also endanger and exploit the very participants they were meant to protect. If design teams want to adopt the IRB model, they need to understand how its unique features can lead to unethical research.

IRBs are committees that independently review research projects before and during a study.

These committees are made up of people who are unrelated to the research project under review but who have broadly applicable research expertise. Their goal is to evaluate whether a proposed project is “ethically acceptable,” a process that includes:

  • Understanding a study’s potential risks and benefits
  • Determining the need for researcher deception and informed consent
  • Identifying the research team’s potential biases
  • Evaluating a study for its compliance with data privacy laws and other protective regulations
  • Tracking the impact of a study

One of the key elements of this evaluation process is group discretion.

In her historical analysis of IRBs, sociologist Laura Stark describes how the formation of IRBs shifted the responsibility of professional ethics from the individual researcher to a group of their peers. Institutions invite “experts” with “diverse” backgrounds who then deliberate together on whether or not a research proposal fits within the boundaries of what their particular field considers ethical.

Although this group decision-making model has historically helped protect researchers from the whims of funders and other political pressures, it also introduced certain cultural and bureaucratic hurdles that can harm participants as much as help them.

The problem of “credibility”

To justify their discretionary power, IRBs must demonstrate to institutional stakeholders and to the researchers submitting proposals that they are trustworthy. The make-up of an IRB becomes incredibly important in communicating this.

IRBs are made up of “experts,” but only certain types of researchers are invited to join. Committees look for markers that signal a potential member’s intellectual and social credibility. On paper, this means IRBs tend to invite researchers with doctorates and other professional degrees whose resumes reflect the standards of their field.

Members must also be able to “perform trustworthiness,” meaning they conduct themselves in ways that stakeholders believe will preserve the IRB’s reputation. Because of this pressure, IRBs tend to be drawn to researchers whose decision-making doesn’t disrupt the status quo: researchers who display moderate (if any) political leanings and who can amiably disagree (if needed) with their colleagues during deliberations.

This culture of credibility makes IRBs fundamentally conservative in their decision-making.

Researchers who are qualified but self-identify as activists or socially motivated are excluded because they’re not seen as objective and collegial. This exclusion not only drastically reduces the racial and gender diversity of IRBs but also helps preserve the outdated research frameworks, processes, and tools that continue to harm historically marginalized participants. (See the racist problem framing in public health research.)

The problem of “expertise”

This issue of conservatism is exacerbated by the fact that IRB committees only allow members, the research “experts,” to participate in the deliberation process.

Given that these members are often self-taught with little formal training in ethics, many rely on their own personal experiences to think through the potential risks and benefits of a proposed study.

To speed up deliberations, they also tend to use previous IRB decisions as precedents for future cases, especially contentious ones. Because they’re reviewing research applications as a group, members can maintain the impression of fairness even when they don't always look beyond their individual perspectives or the perspectives of researchers who look like them.

The lack of input from a proposal’s participants and researchers often leads to either overly lax or overly restrictive committee decisions.

When IRB members come from more privileged backgrounds than participants, they tend to lack the cultural competency to analyze research plans for their contextual appropriateness and safety precautions.

For example, they may not catch offensive questionnaire wording, which can retraumatize participants. Or they may accept a project that promotes an inclusive recruitment strategy without considering how that procedure could exhaust community resources and contribute to community over-researching and hypervisibility.

Members are also less likely to accept projects with creative research designs like those that use participatory methods or provide immediate tangible benefits to participants (think: civic tech research that combines research with service delivery support) because there’s no precedent for analyzing the risks of such designs.

The problem of “compliance”

Although the main goal of IRBs is to protect participants, they also have a responsibility to protect the institutions funding them. Evaluating ethical research is pitched as a self-evident good, but committees must formalize this process to demonstrate their impact and rigor.

Despite the fact that non-medical research presents its own unique set of challenges and benefits, many IRBs still model their risk management process on medical IRBs, which require compliance with government policy (e.g., automatically designating certain categories of participants as “vulnerable”) and strict documentation procedures (e.g., written informed consent).

However well-intentioned, this blanket strictness can expose participants to new risks and harms.

Compliance with protections like parental consent can expose youth and their communities to unnecessary surveillance (e.g., requiring parental consent from youth living in mixed-status homes might expose an undocumented parent’s immigration status).

Adhering to “best practices” can also reduce participant agency. Anonymization, for example, might be counterproductive for stigmatized groups like sexual assault survivors who want to publicly own their narrative as part of their trauma healing journey.

Other participants, like activists, might not want their names or the names of their advocacy organizations anonymized, precisely so they can bring attention to their cause.

It’s important to note that while IRBs technically have discretion over what type of evidence of risk and consent they ask for, they rarely exercise it. Instead, they tend to suggest changes to the research design that completely alter the nature of a project just so it complies with precedent.

This is particularly common in research involving “taboo” topics and historically “deviant” participants like drug users or sex workers. Because of outdated beliefs about sexuality and gender, IRBs often designate projects involving LGBTQ+ participants as “sensitive” even when that same topic with cisgender or heterosexual participants wouldn’t be designated as such.

This turns into a catch-22 for researchers: to study the understudied and potentially unique experiences of “vulnerable” populations, researchers must comply with strict, protective IRB policies. But the strictness of those policies makes it more difficult to study those experiences, further stigmatizing these groups and thus making them more vulnerable.

Moving forward

Although the IRB model is imperfect, user researchers can use it to guide their work. But first they need to question how such “best practices” can do more harm than good.

The goal of IRB committees is to protect participants. However, as seen in the examples above, the very nature of this model and its misalignment with ethics in practice can have a detrimental impact on participants and even their communities.

This doesn’t mean design teams can’t adapt and improve upon the IRB model. Most user researchers don’t work in academic or medical institutions, so they aren’t beholden to federal IRB regulations and can pick and choose how closely they’d like their ethics committees to resemble IRBs.

Although counterintuitive, less formalization and “rigor” can actually benefit researchers and participants.

Here are a few ways internal user research committees can begin rethinking the IRB model:

  • Hire committee members who have both academic and lived knowledge
  • Involve participants in the design of the study (e.g., questionnaire design, recruitment planning) before a plan is submitted to the ethics committee
  • Collaborate with community or participant-run review boards to give joint ethical approval
  • Reconsider which topics are automatically labeled as sensitive and which participants are automatically labeled as vulnerable
  • Focus on gathering evidence for the benefits of research just as much as the risks of that research and build in moments throughout the research process for participants to weigh those risks and benefits themselves

At the core of these suggestions is the integration of care and flexibility into discussions about research ethics. Regardless of whether they adopt a faithful IRB model, user researchers should weigh whether so-called best practices truly improve the lives of their participants.

Alba N. Villamil is an independent User Researcher who specializes in designing for the social sector. Her work focuses on making products and services more equitable for historically underserved and vulnerable populations like refugees, low-income parents, and domestic violence survivors. She is also a facilitator and partner at HmntyCntrd, where she teaches about research and design ethics to design practitioners.
