Concerns about fake news have not escaped the fields of pharmacology or clinical trials. Far from it, in fact. The COVID-19 pandemic has revealed just how easily pseudoscience and bad medical advice can spread.
Against this backdrop of medical misinformation, however, are teams of researchers working around the clock to test new therapies, to get those therapies to market and to publicize their work.
Here is how those teams can cut through the noise when they report their findings and share their knowledge with the wider public.
Just as in the political realm, scientific misinformation gains traction for several overlapping reasons.
Brian Southwell, Ph.D., senior director of RTI International’s Science in the Public Sphere, understands this dynamic well, and part of his work has focused on helping clinical trial executives understand how medical misinformation impacts their work.
Southwell notes how various interests are competing for the public’s attention. There are intentional campaigns to “engender distrust of treatment,” there are businesses trying to sell questionable products, there are disreputable news sources, and there are reputable news sources that simply get their reporting wrong.
An accelerating factor is social media. Southwell notes how humans have a natural desire for connection, and people often share bad medical advice with their friends in good faith simply because they want to communicate a message of hope to those who might be feeling vulnerable.
The unfortunate consequence is that people grow suspicious of medical information, and for our industry in particular this suspicion can dissuade people from participating in trials, Southwell says.
In fact, the media ecosystem is contributing to what John P. A. Ioannidis and fellow researchers describe in the European Journal of Clinical Investigation as “the medical information mess.” This mess includes scholarly journals as well as mainstream media.
Ioannidis et al. argue that a significant amount of medical research “is not reliable or is of uncertain reliability, offers no benefit to patients, or is not useful to decision makers,” but even healthcare professionals fail to recognize this. Further, those who do recognize the problem “lack the skills necessary to evaluate the reliability and usefulness of medical evidence.”
So, the scope of this problem is huge. It’s not just politicians spit-balling COVID-19 treatments on live TV. Misinformation comes from grifters, reporters, researchers themselves and the audiences who share content for its perceived emotional value.
Clinical research teams don’t have a lot of control over those who knowingly peddle medical misinformation for personal gain.
They do, however, have some control over the actors in this media ecosystem who are trying to share real science in good faith, and they can exercise this control by making their findings as clear as possible, both for the media who report on those stories and for those publications’ audiences.
Raw facts alone cannot counter an audience’s suspicions. Audiences who are exposed to competing claims about a therapy will be quick to ask, “But what about...?”
To connect your good research with those audiences, you’ll have to handle the what abouts.
A good starting point would be the article “How to Read a Clinical Trial Paper” by researchers Dr. Shail M. Govani and Dr. Peter D. R. Higgins. Their rubric for scrutiny functions as a checklist of the objections researchers must handle when sharing their research.
Clinical trial teams understand how to account for these issues in the trial itself. When reporting results, then, it’s important to communicate to publishers what you did to address them.
Terms familiar to clinical trials teams (e.g. “confidence interval” or “double-blind”) might sound esoteric to wider audiences. It’s worth taking the time to unpack clinical terms so reporters and audiences alike can have a common understanding of the science.
If you’re unsure of which terms might be confusing, have a look at OncoLink’s article on interpreting a cancer research study. This piece was written for a general audience, and it has an exhaustive definition of terms you can check your own reporting against.
To use OncoLink’s own example, if your study reports “overall survival of 81% (95% CI 78%-83%),” be sure to help the reporter and the reader understand what those percentages mean. Spell out in conversational language that the researchers are 95 percent confident the true overall survival rate falls somewhere between 78 and 83 percent, and describe what makes that level of precision meaningful when your study is compared to others.
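For audiences comfortable with a little code, that interpretation can be sketched with a quick back-of-the-envelope calculation. The cohort size of 1,000 below is a hypothetical assumption chosen to reproduce figures close to the OncoLink example, and the normal-approximation (Wald) interval shown here is just one of several common ways to compute a CI.

```python
import math

def survival_ci(survivors: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval (normal/Wald method)
    for a survival proportion of `survivors` out of `n` patients."""
    p = survivors / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return (p - margin, p + margin)

# Hypothetical cohort: 810 of 1,000 patients surviving gives a figure
# close to the "81% (95% CI 78%-83%)" in the OncoLink example.
low, high = survival_ci(810, 1000)
print(f"81.0% (95% CI {low:.1%}-{high:.1%})")  # prints: 81.0% (95% CI 78.6%-83.4%)
```

The point to convey to a lay reader is that the single headline number (81 percent) is an estimate, and the interval around it expresses how much that estimate could plausibly move with a different sample of patients.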
In moments such as a pandemic, it’s tempting to want to push out results as quickly as possible.
But remember that reporting happens within a larger context, one in which other research organizations may be rushing to publish their own data. As Stacey McKenna at Scientific American notes, the first community-level COVID-19 serological surveys in the U.S. showed a wide range of results.
Those surveys showed an antibody presence in nearly a third of residents in Chelsea, Massachusetts, but in just 2.8 percent of people in Santa Clara County, California. That wide range of values sent up a red flag, and scientists (rightly) began to scrutinize the studies’ sampling methods and statistics.
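The statistical side of that scrutiny can be illustrated with a rough sketch. The sample sizes below are purely hypothetical, chosen only to show how the margin of error around a prevalence estimate depends on sample size; the real surveys also differed in recruitment methods and antibody-test accuracy, which this calculation ignores.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% normal-approximation CI for an
    estimated prevalence p from a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample sizes; the published surveys differed in many more
# ways (recruitment, test accuracy) than sample size alone.
for p, n in [(0.32, 200), (0.028, 3300)]:
    print(f"prevalence {p:.1%}, n={n:,}: margin of error ±{margin_of_error(p, n):.1%}")
```

A small sample leaves a wide band of uncertainty around the headline prevalence figure, which is one reason reviewers zero in on sampling methods when two surveys report numbers as far apart as these.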
Of note, too, is McKenna’s detail that some of the studies were “first announced as press releases rather than as peer-reviewed or even preprint studies.”
The lesson here: Scientific rigor is important, and cutting corners at any stage of the process will undercut your credibility.
Clinical research has audiences beyond everyday news consumers and the reporters who serve those audiences.
To Southwell’s point, there are pools of trial participants to think about, too, and that audience’s needs are quite different. Among people with difficult conditions, especially, “fear and uncertainty makes patients vulnerable to disinformation,” Dr. Guy Buyens and Saar Sinnaeve at the Anticancer Fund write.
The value of any trial must reflect the added value for the patient, they say. That way, vulnerable people feel the research system is responding to their needs rather than leaving them out. This is what patient-centric clinical trial design is all about.
An important aspect of patient centricity is a recognition that patients can represent a variety of cultures and contexts. This lesson was cast into relief during the 2018 Ebola outbreaks in the Democratic Republic of the Congo, “a multicultural country where heterogeneous models of health knowledge, healthcare systems and worldviews coexist, interact and merge,” researchers Arsenii Alenichev, Koen Peeters Grietens and René Gerrets write in Global Public Health.
How people understand medical information shapes everything, from how willing they are to participate in a study to how willing they are to give their attention to less scrupulous sources.
The DRC’s people struggled with Ebola information in ways that might seem more familiar now to people whose countries were hard-hit by COVID-19.
Emergency vaccinations were delivered to communities of people “[s]urrounded with mistrust, rumours and violence,” the researchers write. The key to winning over people who need vaccines in such a context is to understand the “epistemological pluralism” through which people learn about things like viral outbreaks, they argue.
The word “vaccine” connotes different things depending on a person’s cultural context. Colonialism and anti-vaxxer movements alike contribute to those perceptions. This is why researchers need to understand the objections, the “But what about...?”s, that various people might have about their research.
Case in point: Conspiracy theories in the United States regarding potential COVID-19 vaccines. “In recent weeks, vaccine opponents have made several unsubstantiated claims, including allegations that vaccine trials will be dangerously rushed or that Dr. Anthony Fauci, the nation’s top infectious diseases expert, is blocking cures to enrich vaccine makers,” AP reporters David Klepper and Beatrice Dupuy write. “They’ve also falsely claimed that Microsoft founder Bill Gates wants to use a vaccine to inject microchips into people — or to cull 15% of the world’s population.”
This is a familiar pattern. As Yuxi Wang at the Centre for Research on Health and Social Care at Bocconi University and fellow researchers note in their literature review, conspiracy theories proliferated during the Zika epidemic of 2015 and 2016 and the Ebola outbreak, too.
“Much of this misinformation comes from individuals who are highly active in influencing opinions, and rumours often garner higher popularity than evidence-based information,” they write.
Medical organizations have been looking for ways to circumvent these malicious influencers. In the UK, for example, the team at Cognitant have released an app, Healthinote, that sends reliable health information directly to users’ phones.
Furthermore, Dr. Bram Rochwerg and fellow researchers write in Critical Care Explorations that new pandemics will arrive after this one, and this puts the onus on researchers “to build infrastructure, [...] develop collaborative networks, initiate study protocols, and begin regulatory and ethical approval processes in anticipation of the next outbreak.” That way, researchers will have a head start in facilitating a response.
That infrastructure, however, depends on how well researchers can deliver sound science to disparate audiences who might feel overwhelmed by, duped by or dismissive of peer-reviewed data. Getting buy-in at a cultural level is what gets patients to participate in clinical trials, and it’s what gets whole populations to accept best-available treatments.
Images by: Minh Hằng, Viktoriya Krasovskaya/©123RF.com, Daniel Ernst/©123RF.com