How To Prevent Systemic Bias in Clinical Trials

Research experts know that some degree of systemic bias is inherent to clinical trials. It’s naive to think that any trial could be 100 percent free from it. In clinical trials, the question really isn’t whether bias is present, but how well it is prevented.

Proper study design and implementation help prevent systemic bias. And by understanding which biases exist, we can better mitigate them, whether they are introduced intentionally or unintentionally.

There are many different types of bias. According to clinician-educator Chris Nickson, more than 50 types of bias affecting clinical research have been described. Most, however, fall into three main categories: selection bias, measurement bias and reporting bias. In this post, we explore what can be done to overcome them.

Selection Bias

Selection bias is any type of bias that results in a sample population that is not representative of the target population. The individuals or groups in the study end up being different from the population of interest, leading to a systematic error in an association or outcome.

There are also many different types of selection bias, which Dr. Justin Morgenstern does an excellent job of cataloging in a post for First10EM. Types of selection bias include:

  • Attrition bias.
  • Diagnostic access bias.
  • Membership bias.
  • Non-respondent bias.
  • Prevalence-incidence bias.
  • Referral bias.
  • Sampling bias.
  • Unmasking bias.

Selection bias can happen during recruitment or during the analysis of the study. An example of selection bias that’s gained recent attention involves H.I.V. trials. There is a clear lack of women in clinical trials of potential H.I.V. treatments, cures and vaccines.

Over half of people living with H.I.V. are women, while most research subjects are men, writes journalist Apoorva Mandavilli. “A 2016 analysis by the charity AMFAR found that women represented a median of 11 percent in cure trials. Trials of antiretroviral drugs fared little better; 19 percent of the participants were women.”

Another study that recently drew criticism was led by corresponding author Dr. Ann McKee, a neuropathologist at VA Boston and the Boston University School of Medicine. The study examined the brains of 202 deceased American football players and found signs of chronic traumatic encephalopathy (CTE) in 177 of them, including 110 of 111 former NFL players.

Dr. McKee herself pointed out the study’s limitations. “The VA-BU-CLF brain bank is not representative of the overall population of former players of American football,” she writes. Additionally, she says “selection into brain banks is associated with dementia status, depression status, marital status, age, sex, race, and education.”

Despite this disclosure, headlines citing the study launched a wave of CTE hysteria. Some of the reporting did little to acknowledge the biases at play. Those who dug deeper into the study were less shocked by its results.

In an op-ed by the authors of “Brainwashed: The Bad Science Behind CTE and the Plot to Destroy Football,” Merril Hoge and Peter Cummings, M.D. wrote: “Many of the 111 NFL brains were donated by deceased players’ family members specifically because the players had displayed symptoms of mood, cognitive or behavioral disorders. That’s selection bias. If you only look at brains from people who seem to have neurological problems, don’t be surprised when you find signs of those problems.”
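A toy simulation shows how powerful this effect can be. In the Python sketch below, every number is made up for illustration: a 10 percent true prevalence, and affected individuals 25 times more likely to enter the sample. The apparent prevalence in the donated sample balloons far above the true rate.

```python
import random

random.seed(42)

# Hypothetical numbers for illustration only: assume the condition affects
# 10% of the full population, and that affected (symptomatic) individuals
# are far more likely to end up in the brain-bank sample.
TRUE_PREVALENCE = 0.10
P_DONATE_IF_AFFECTED = 0.50
P_DONATE_IF_HEALTHY = 0.02

population = [random.random() < TRUE_PREVALENCE for _ in range(100_000)]

donated = [
    affected
    for affected in population
    if random.random() < (P_DONATE_IF_AFFECTED if affected else P_DONATE_IF_HEALTHY)
]

print(f"True prevalence in the population:     {sum(population) / len(population):.1%}")
print(f"Apparent prevalence in donated brains: {sum(donated) / len(donated):.1%}")
# Roughly 10% vs. roughly 74%: when selection into the sample depends on
# the outcome being measured, the estimate is wildly inflated.
```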

Clearly, biases can exist with the data itself and its interpretation.

Preventing Selection Bias

Randomized controlled trials are the gold standard for testing new treatments. Medical News Today explains how proper RCTs are both randomized and controlled. “The researchers decide randomly as to which participants in the trial receive the new treatment and which receive a placebo, or fake treatment.”

The Catalogue of Bias also suggests that authors assess the probable degree of selection bias at different stages of the trial or study. This includes close attention to how intervention and exposure groups compare at baseline, to what extent potential participants are pre-screened, and what randomization methods are used.
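As a concrete illustration of one such randomization method, here is a minimal sketch of permuted-block randomization, which keeps group sizes balanced throughout enrollment. The arm names, block size and seed are illustrative assumptions, not a prescription.

```python
import random

def block_randomize(n_participants, arms=("treatment", "placebo"),
                    block_size=4, seed=2024):
    """Permuted-block randomization: within each block the arms appear
    equally often in random order, so group sizes stay balanced as
    enrollment proceeds."""
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

print(block_randomize(10))
# In practice the schedule is generated by an independent statistician and
# concealed from recruiters, so no one can anticipate the next assignment.
```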

Measurement Bias

Measurement bias, or “detection bias,” refers to any systematic or non-random error that occurs in the collection of data in a study. Again referencing Morgenstern’s bias catalog, below are bias types that roll up into the broader category of measurement bias.

  • Recall bias.
  • Observer bias.
  • Awareness bias (Hawthorne effect).
  • Expectation bias.
  • Verification or workup bias.
  • Insensitive measurement bias.
  • Lead-time bias.
  • Response bias.

Measurement bias can happen either during data collection or during data analysis.

Let’s take expectation bias, for example. Pete Foley at InnovationExcellence defines expectation bias as “the tendency for experimenters to believe data that agree with their expectations for the outcome of an experiment, and to disbelieve and discard data that appear to conflict with those expectations.”

In clinical trials, both researchers and patients may enter with expectations that can ultimately sway the outcome of the trial.

This past summer, a failure to account for expectation bias may have played a key role in the mixed results of Intra-Cellular Therapies’ trial of an experimental psychiatric drug. The drug is meant to decrease the severity of depressive episodes in patients with bipolar disorder, and the trial’s mixed outcome sent the company’s stock value down by 20 percent.

In a BioPharma Dive article, Andrew Dunn explains that the trial produced one successful study and one failed study. The failed study had three arms: the first received a higher dose of the drug, the second a lower dose and the third a placebo.

The CEO of Intra-Cellular Therapies admitted that the three-arm study could have increased expectation bias, “as patients know they have a two-thirds chance of receiving the drug.”
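The arithmetic behind that remark: with two equal-sized active arms and one placebo arm, a participant’s chance of receiving the drug is two in three, versus one in two in a classic two-arm trial. Below is a toy simulation of one way that can matter; every number in it is an assumption of ours, not trial data. If a higher believed chance of getting the drug inflates the nonspecific response in every arm, and improvement is capped, the measured drug-placebo separation shrinks.

```python
import random

random.seed(7)

CEILING = 12.0  # assumed cap on how much any patient's score can improve

def arm_mean(n, expectancy, drug_effect=0.0):
    """Mean improvement in one arm. Expectancy (driven by the believed
    chance of getting the drug) inflates everyone's nonspecific response;
    the ceiling leaves less headroom for the drug to separate."""
    return sum(min(random.gauss(5.0, 2.0) + expectancy + drug_effect, CEILING)
               for _ in range(n)) / n

for p_active, label in ((0.5, "1:1 design"), (2 / 3, "2:1 design")):
    expectancy = 4.0 * p_active  # assumed link: stronger belief -> bigger placebo response
    diff = arm_mean(20_000, expectancy, drug_effect=4.0) - arm_mean(20_000, expectancy)
    print(f"{label}: measured drug-placebo difference = {diff:.2f}")
# The 2:1 design shows a smaller measured separation than the 1:1 design,
# even though the drug's true effect is identical in both.
```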

There are many other types of measurement bias. For example, participants in any trial may take a treatment for different lengths of time and at different dosages, which can skew data sets. There’s also lead-time bias: when a patient’s disease is detected earlier, survival measured from diagnosis appears longer, even if earlier treatment does nothing to change the course of the disease.
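A toy model makes lead-time bias concrete (all numbers assumed). In the sketch below, every patient dies at the same time no matter when they are diagnosed, yet survival measured from diagnosis looks about four years longer in the early-detection group.

```python
import random

random.seed(1)

# Toy model: disease onset at time 0, and death occurs at the same time
# regardless of when the disease is diagnosed (treatment changes nothing here).
def measured_survival(diagnosis_delay_years):
    death_time = random.gauss(10.0, 2.0)  # years from onset to death
    return max(death_time - diagnosis_delay_years, 0.0)

n = 10_000
late = sum(measured_survival(6.0) for _ in range(n)) / n   # diagnosed late
early = sum(measured_survival(2.0) for _ in range(n)) / n  # screened, diagnosed early

print(f"Mean survival after late diagnosis:  {late:.1f} years")
print(f"Mean survival after early diagnosis: {early:.1f} years")
# Early diagnosis "adds" ~4 years of measured survival even though no one
# lives a day longer: that inflation is lead-time bias.
```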

Preventing Measurement Bias

Blinding is an important practice for ensuring the validity of clinical research and reducing measurement bias. Blinding refers to a practice where study participants are prevented from knowing information that may influence them and affect the results of a trial.

Blinding is typically used in the randomized controlled trials we discussed earlier. “To ensure to the highest degree possible that the intervention is responsible for any noted differences between the two groups, people involved in gathering or analyzing the data might also be blinded to knowing who is being given the treatment and who is not,” advises the Institute for Work & Health.

Most scientists agree that simply blinding the patient isn’t enough. “RCTs should be at least double-blinded, and should have more blinding where possible (this includes: patients, clinicians/researchers, data collectors, and statisticians),” says Saul Crandon, an academic foundation doctor at Oxford University Hospitals NHS Trust.
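One common mechanical implementation is coded kit labels: an independent party holds the key that maps codes to arms, and everyone else sees only neutral codes until the analysis is locked. Here is a minimal sketch of the idea; the function name and code format are our own illustration.

```python
import random

def make_blinded_schedule(participant_ids, seed=99):
    """Randomly assign arms, but expose only neutral kit codes; the key
    linking codes to arms is held by an independent party until unblinding."""
    rng = random.Random(seed)
    codes = rng.sample(range(10_000, 100_000), len(participant_ids))
    key, schedule = {}, {}
    for pid, code in zip(participant_ids, codes):
        kit_code = f"KIT-{code}"
        key[kit_code] = rng.choice(["treatment", "placebo"])
        schedule[pid] = kit_code
    return schedule, key

schedule, key = make_blinded_schedule(["P001", "P002", "P003"])
print(schedule)  # no arm information leaks to collectors or statisticians
# `key` is consulted only after the database is locked and analysis is done.
```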

Reporting Bias

Reporting bias occurs when information is selectively revealed or repressed. Reporting bias includes:

  • Publication bias.
  • Time lag bias.
  • Location bias.
  • Knowledge reporting bias.
  • Language bias.
  • Multiple publication bias.
  • Citation bias.
  • Selective outcome reporting.

There have been a number of studies indicating that reporting bias is present in RCTs. One study of note, published in Translational Psychiatry, specifically explores selective reporting in trials for psychotic disorders. It found significant discrepancies between prespecified and published outcomes.

And the issue isn’t limited to any particular field. A CenterWatch report estimates that roughly half of clinical trials go unreported, often because the results are negative. “Failure to report trial outcomes paints a distorted picture of the risks and benefits of drugs, vaccines, medical devices, and even diagnostics.”

Adding to the issue is publication bias. Positive results make headlines far more often than those supporting the null hypothesis. This is a real problem: the shortage of published null results further biases the evidence base that scientists and researchers rely on.

“Missing information undoubtedly reduces the precision of meta-analytic estimates, but it also introduces bias if the missing data systematically differ from the data available,” explains Christopher H. Schmid, professor and chair of Biostatistics at Brown University.
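Schmid’s point is easy to demonstrate with a toy meta-analysis (all numbers assumed): 200 small studies estimate a true effect of 0.10, but only clearly “positive” estimates get published, so averaging the published studies alone more than doubles the apparent effect.

```python
import random

random.seed(3)

TRUE_EFFECT = 0.10                # assumed real (standardized) effect size
N_STUDIES, N_PER_STUDY = 200, 100

# Each study's estimate is the true effect plus sampling noise (~1/sqrt(n)).
estimates = [random.gauss(TRUE_EFFECT, 1 / N_PER_STUDY ** 0.5)
             for _ in range(N_STUDIES)]

# Assumed publication rule: only clearly "positive" results see print.
published = [e for e in estimates if e > 0.2]

print(f"Mean effect across all {N_STUDIES} studies:      "
      f"{sum(estimates) / len(estimates):.2f}")
print(f"Mean effect across {len(published)} published studies: "
      f"{sum(published) / len(published):.2f}")
# The published-only average (~0.25) more than doubles the true effect (0.10):
# data missing in a systematic way biases the meta-analytic estimate.
```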

Preventing Reporting Bias

In a report for The BMJ, John Ioannidis, M.D. and colleagues advocate transparency in documenting clinical trial outcomes. “The proposed solution has been trial registration, including the explicit listing of prespecified outcomes before launch, and the transparent description of all changes that occur afterward,” they write.

While registering clinical trials is mandatory in the U.S., it is not required by law in every country. Making registration mandatory worldwide would certainly help with transparency, as there is an ethical responsibility to patients to make trial data public. A shift toward global mandatory registration would support fully informed decision making in healthcare. Until then, clinical decision making based on the “best evidence” will remain biased.
