Automation of data exchange and, more broadly, of clinical trial processes presents the clinical research industry with myriad possibilities. Streamlined trial management, improved data collection, analysis and sharing, better matching of eligible patients with trials, and an enhanced experience for all stakeholders are areas where success has already been achieved, with the promise of more to come.
As always, however, more needs to be done to maximize automation's benefits. Currently, electronic health records (EHRs) and electronic data capture (EDC) systems can seldom be integrated. The problems of interoperability and non-standardized data need to be solved for the industry to realize the full potential of automated processes.
In this post, we explore the automation pain points that need fixing and how doing so can streamline stakeholder collaboration.
Before data can be shared automatically, it needs to be standardized. Doing so will result in trial evidence being gathered faster and analyzed better, with data and processes becoming more predictable, write researchers in Applied Clinical Trials.
Semi-automation follows from standardization, and this is especially true when data is standardized from a trial's outset. Standardized data leads to faster start-up times, greater transparency and easier reuse of case report forms and validation documents across studies.
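To make the idea of standardization concrete, here is a minimal sketch of normalizing site-specific field names and units into one shared schema so records from different systems line up. The field names, mappings and units are invented for illustration; they are not taken from any real standard such as CDISC.

```python
# Hypothetical sketch: rename site-specific fields and convert units
# into a single shared schema. Mappings below are illustrative only.

FIELD_MAP = {
    "pt_dob": "birth_date",
    "dateOfBirth": "birth_date",
    "sysbp": "systolic_bp",
    "SBP_mmHg": "systolic_bp",
}

UNIT_CONVERSIONS = {
    # (field, source_unit) -> multiplier into the standard unit
    ("weight", "lb"): 0.453592,   # pounds -> kilograms
    ("weight", "kg"): 1.0,
}

def standardize_record(raw: dict) -> dict:
    """Rename fields and convert units into the shared schema."""
    out = {}
    for key, value in raw.items():
        out[FIELD_MAP.get(key, key)] = value
    weight_unit = out.pop("weight_unit", "kg")
    if "weight" in out:
        out["weight"] = round(
            out["weight"] * UNIT_CONVERSIONS[("weight", weight_unit)], 2)
    return out

record = standardize_record(
    {"pt_dob": "1980-04-02", "weight": 154, "weight_unit": "lb"})
print(record)  # {'birth_date': '1980-04-02', 'weight': 69.85}
```

Once every site emits records in this shared shape, downstream steps such as validation and reuse of case report forms become mechanical rather than manual.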
Current use of EHRs has yielded benefits through lower costs and less time spent on collecting data. And while their use has not been optimal, they carry significant possibilities for research, says Rachel Murkett, Ph.D., innovation consultant at Biochromex.
Indeed, the automation of data — collection, organization, cleaning and sharing — is reliant on EHRs being used effectively. But electronic health records suffer from poor interoperability and insufficient quality control and security of data. The way data is stored in these records often varies across institutions and organizations. Sharing it, therefore, is a struggle as there isn’t a centralized repository or standardized format for EHRs.
The potential for artificial intelligence to mine data to help identify eligible patients for clinical research opportunities is significant. But reality has not yet lived up to expectations. Part of the problem has been the failure to develop sufficiently sophisticated algorithms, according to Matthew A. Michela, president and CEO of medical information network Life Image.
But the other hindrance has been the unstructured format of data and the difficulty of integrating it into various stakeholders' clinical workflows. Take EHRs: Around 85 percent of EHR data is unstructured and thus useful only when humans analyze it. Machines cannot automate the process because of this structural inconsistency, says Michela.
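A small sketch illustrates why unstructured notes resist automation: a pattern that extracts a value from one phrasing silently misses the same fact written another way. The note text and the regular expression here are invented for illustration.

```python
# Illustrative only: extracting blood pressure from free-text notes.
# The numeric pattern matches one phrasing and misses an equivalent one.
import re

def extract_bp(note: str):
    """Return (systolic, diastolic) if a numeric BP pattern is present."""
    match = re.search(r"\b(\d{2,3})/(\d{2,3})\b", note)
    return (int(match.group(1)), int(match.group(2))) if match else None

notes = [
    "BP 120/80 recorded at visit.",
    "Blood pressure was one-twenty over eighty.",  # same fact, no match
]
print([extract_bp(n) for n in notes])  # [(120, 80), None]
```

A human reader recovers the measurement from both notes; a machine recovers it only from the structured phrasing, which is why unstructured records stall automated pipelines.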
Clinical trial stakeholders would all benefit from a data exchange network — especially one between sites and sponsors. The network would operate by collecting and analyzing data before channeling it to relevant stakeholders who would, in turn, use it to improve quality, explains Shree Kalluri, founder and CEO of Forte Research.
It should be thought of as an ecosystem in which sponsors are able to send important information to sites, such as protocol documents, draft budgets and coverage analyses, while sites can update sponsors in real time on all matters including activation statuses and patient registrations.
By using documents from electronic trial master files (eTMFs), electronic regulatory management systems and electronic case report form data, as well as data from EHRs, sites and sponsors would be in constant collaboration and cut out any source data verification requirements.
The result would be integrated systems and unhindered flow of information.
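As a rough sketch of what such an ecosystem could exchange, the following shows a structured status update a site might push to a sponsor. The message shape and field names are assumptions for illustration, not any vendor's actual API.

```python
# Hypothetical site-to-sponsor status message; all field names are
# illustrative assumptions, not a real integration schema.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class SiteUpdate:
    site_id: str
    protocol_id: str
    event: str           # e.g. "activation" or "patient_registered"
    effective_date: str  # ISO 8601 date string

update = SiteUpdate("site-014", "PROT-2021-07", "patient_registered",
                    date(2021, 3, 9).isoformat())
payload = json.dumps(asdict(update))  # what the network would transmit
print(payload)
```

Because both sides agree on the message shape in advance, a sponsor system can act on the update the moment it arrives, with no manual re-keying or source data verification of the transmitted fields.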
Yet not all information should flow freely, warns Shannon Roznoski, Forte's director of product management. For instance, sites need to be careful to share only protocol-specified data with sponsors, because EHRs also contain protected health information and data unrelated to the protocol. Were sites to automate the sharing of everything in the record, they'd be putting confidential patient information at risk.
The summary of a recent European Medicines Agency workshop warns that amid an evolving clinical trial landscape, where science, technology and legislation are changing, sharing data is becoming easier in some ways and more complex in others.
Technology makes data sharing more possible but there are security concerns. Equally, legislation related to data privacy presents another challenge. When trials operate across countries and continents, the hurdles continue to mount.
The viewpoints expressed in the summary assert that no data should be categorically excluded from sharing. Instead, each dataset should be assessed for risk and put through an anonymization framework, with the sensitivity and context of the data considered before sharing. Automated exchange, therefore, would have to be guided by clear parameters for what can and cannot be shared.
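A hedged sketch of that "assess, then anonymize before sharing" idea: keep only protocol-relevant fields, drop direct identifiers, and replace the patient ID with a one-way pseudonym. The field lists and salt handling are illustrative assumptions, not a compliance-reviewed de-identification procedure.

```python
# Illustrative only: filter a record down to shareable fields and
# pseudonymize the patient identifier before automated exchange.
import hashlib

SHAREABLE_FIELDS = {"visit_date", "systolic_bp", "adverse_event"}
SALT = "per-study-secret"  # would be generated and stored securely

def anonymize_for_sharing(record: dict) -> dict:
    shared = {k: v for k, v in record.items() if k in SHAREABLE_FIELDS}
    token = hashlib.sha256(
        (SALT + record["patient_id"]).encode()).hexdigest()[:12]
    shared["subject_token"] = token  # stable pseudonym, not reversible
    return shared

ehr_record = {
    "patient_id": "MRN-55821",
    "name": "Jane Doe",          # protected health information: never shared
    "visit_date": "2021-03-09",
    "systolic_bp": 128,
    "adverse_event": "none",
}
print(anonymize_for_sharing(ehr_record))
```

The allow-list, rather than a block-list, is the design choice that matters here: anything not explicitly cleared for sharing stays behind by default, which is the posture the workshop summary's risk-based framework implies.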
A hindrance to the exchange of data — whether automated or not — is the prevailing mindset that all data is proprietary and sharing it could be competitively disadvantageous.
Of course, some data is proprietary, writes Virginia Nido, global head of industry collaborations at Roche. But more should be shared in order to mitigate the rising costs and complexity of trials. By taking a collaborative approach to data, sponsors would be able to run more efficient trials with fewer patients, allowing faster enrollment. All of these outcomes would improve medical research and development, and bring new therapies to the market faster, she explains.
Nido recognizes too the need for data standardization and says both challenges can be overcome. In fact, the biopharmaceutical industry has already shown progress with initiatives that influence data-sharing.
Current automation of clinical trial data tends to happen in silos. In other words, trial sites, sponsors and CROs may have automated processes in place but they are not often integrated across stakeholder systems, explains Julie Ross, president of contract research organization Advanced Clinical.
Relevant parties should replace disconnected systems with technology that is interoperable. Doing so will remove data silos and allow the exchange of information at all trial stages. That would mean, contrary to current common practice, that the eTMF systems of CROs and sponsors would match identically. Productivity would increase while redundancies decrease.
These connected clinical networks would enable better automation of data, and facilitate exchange and collaboration between sponsors, CROs, sites and regulators, says Ross.
Automating data exchange from case receipt to reporting can reduce trial costs as well as human error. The result would be more accurate data processing, writes David J. Balderson, vice president of global safety operations at scientific process organization Sciformix. But there are also gains to be made when it comes to pharmacovigilance, “the process of identifying, tracking, evaluating and preventing negative outcomes from drug therapies.”
Balderson says automatically sharing data with regulators, prescribers and patients can build trust and confidence. At least as important is automating adverse event data, with added artificial intelligence capabilities to analyze it, which will update stakeholders on safety issues in real time. This collaborative power is essential for improved decision-making.
U.S.-based data repository provider Nurocor and Swiss consultancy Intilaris LifeSciences have joined forces to improve automation in clinical trials as well as data sharing among stakeholders. Intilaris will use Nurocor’s metadata repository and governance platform to manage data better, explains Allie Nawrat at Pharmaceutical Technology. The result, the organizations say, will be better drug development.
The metadata repository (MDR) platform essentially uses data standardization to make data simpler to understand, use and exchange.
Clinical data should be fluid and flow freely, but it often stagnates. Data's transformation from analog to digital has been a case of incremental change rather than true transformation, according to physician Michelle Longmire, cofounder and CEO of digitally enabled clinical trials platform Medable. Even today, data remains trapped on paper and in siloed databases.
What’s needed, and quickly, is data systems that connect patients, researchers, monitors, data managers and CROs and sponsors, she says. Data should be shared and made to flow automatically so the best clinical decisions can be made in real time.
This type of automated flow, Longmire explains, will enable quantitative analysis of data, which uses probabilistic data models to support high-throughput transactions via automation and data-driven decision-making. She says we should think of this the way stock trading relies on quantitative (quant) trading, which enables high-throughput trading and quantitative risk analysis to improve the buying and selling of stocks.
In a clinical context, quant trials would elevate clinical trials to new capability levels by using quantitative analysis for automated information exchange and better evidence-based decisions.
Again, the importance of standardizing data must be highlighted. Doing so will make data more accessible and usable, but also shareable among stakeholders. Indeed, this is a prerequisite for the automated exchange of data. Get this right and it will be a boon for stakeholder collaboration.
Images by: everythingpossible/©123RF Stock Photo, anyaberkut/©123RF Stock Photo, everythingpossible/©123RF Stock Photo