IRBs are committees that independently review research projects before and during a study.
These committees are made up of people who are unrelated to the project under review but who have broadly applicable research expertise. Their goal is to evaluate whether a proposed project is “ethically acceptable,” a process that includes:
- Understanding a study’s potential risks and benefits
- Determining whether researcher deception is warranted and how informed consent will be obtained
- Identifying the research team’s potential biases
- Evaluating a study for its compliance with data privacy laws and other protective regulations
- Tracking the impact of a study
One of the key elements of this evaluation process is group discretion.
In her historical analysis of IRBs, sociologist Laura Stark describes how the formation of IRBs shifted the responsibility of professional ethics from the individual researcher to a group of their peers. Institutions invite “experts” with “diverse” backgrounds who then deliberate together on whether or not a research proposal fits within the boundaries of what their particular field considers ethical.
Although this group decision-making model has historically helped protect researchers from the whims of funders and other political pressures, it also introduced certain cultural and bureaucratic hurdles that can harm participants as much as help them.
The problem of “credibility”
To justify their discretionary power, IRBs must demonstrate to institutional stakeholders and to the researchers submitting proposals that committees are trustworthy. The makeup of an IRB becomes incredibly important in communicating this.
IRBs are made up of “experts,” but only certain types of researchers are invited to join. Committees look for markers that signal a potential member’s intellectual and social credibility. On paper, this means IRBs tend to invite researchers with doctorates and other professional degrees whose résumés reflect the standards of their field.
Members must also be able to “perform trustworthiness,” meaning that they conduct themselves in ways that stakeholders believe will preserve the IRB’s reputation. Because of this pressure, IRBs tend to be drawn to researchers whose decision-making doesn’t disrupt the status quo: researchers who display moderate (if any) political leanings and can amiably disagree (if needed) with their colleagues during deliberations.
This culture of credibility makes IRBs fundamentally conservative in their decision-making.
Researchers who are qualified but self-identify as activists or socially motivated are excluded because they’re not seen as objective and collegial. This exclusion not only drastically reduces the racial and gender diversity of IRBs but also helps preserve the outdated research frameworks, processes, and tools that continue to harm historically marginalized participants. (See the racist problem framing in public health research.)
The problem of “expertise”
This conservatism is exacerbated by the fact that IRB committees only allow members, the research “experts,” to participate in the deliberation process.
Given that these members are often self-taught with little formal training in ethics, many rely on their own personal experiences to think through the potential risks and benefits of a proposed study.
To speed up deliberations, they also tend to use previous IRB decisions as precedents for future cases, especially contentious ones. Because they’re reviewing research applications as a group, members can maintain the impression of fairness even when they don't always look beyond their individual perspectives or the perspectives of researchers who look like them.
The lack of input from a proposal’s participants and researchers often leads to either overly lax or overly restrictive committee decisions.
When IRB members come from more privileged backgrounds than participants, they tend to lack the cultural competency to analyze research plans for their contextual appropriateness and safety precautions.
For example, they may not catch offensive questionnaire wording, which can retraumatize participants. Or they may accept a project that promotes an inclusive recruitment strategy without considering how that procedure could exhaust community resources and contribute to community over-researching and hypervisibility.
Members are also less likely to accept projects with creative research designs like those that use participatory methods or provide immediate tangible benefits to participants (think: civic tech research that combines research with service delivery support) because there’s no precedent for analyzing the risks of such designs.
The problem of “compliance”
Although the main goal of IRBs is to protect participants, they also have a responsibility to protect the institutions funding them. Evaluating ethical research is pitched as a self-evident good, but committees must formalize this process to demonstrate their impact and rigor.
Despite the fact that non-medical research presents its own unique set of challenges and benefits, many IRBs still model their risk management process on medical IRBs, which require compliance with government policy (e.g., automatically designating certain categories of participants as “vulnerable”) and following strict documentation procedures (e.g., requiring written informed consent).
Despite good intentions, this blanket strictness can expose participants to new risks and harms.
Compliance with protections like parental consent can expose youth and their communities to unnecessary surveillance (e.g., requiring parental consent from youth living in mixed-status homes might expose an undocumented parent’s immigration status).
Adhering to “best practices” can also reduce participant agency. Anonymization, for example, might be counterproductive for stigmatized groups like sexual assault survivors who want to publicly own their narrative as part of their trauma healing journey.
Other participants, like activists, might not want their names or the name of their advocacy organization to be anonymous so that they can bring attention to their cause.
It’s important to note that while IRBs technically have discretion over what types of evidence of risk and consent they ask for, they rarely exercise it. Instead, they tend to suggest changes to the research design that completely alter the nature of the project just so that it complies with precedent.
This is particularly common in research involving “taboo” topics and historically “deviant” participants like drug users or sex workers. Because of outdated beliefs about sexuality and gender, IRBs often designate projects involving LGBTQ+ participants as “sensitive” even when that same topic with cisgender or heterosexual participants wouldn’t be designated as such.
This turns into a catch-22 for researchers: to study the understudied and potentially unique experiences of “vulnerable” populations, researchers must comply with strict, protective IRB policies. But the strictness of these policies makes it more difficult to study those experiences, further stigmatizing those groups and thus making them more vulnerable.
Alba N. Villamil is an independent User Researcher who specializes in designing for the social sector. Her work focuses on making products and services more equitable for historically underserved and vulnerable populations like refugees, low-income parents, and domestic violence survivors. She is also a facilitator and partner at HmntyCntrd, where she teaches about research and design ethics to design practitioners.