
The Transformative Role of Generative AI in Regulatory Compliance and MA Communications

Tools like ChatGPT and Midjourney have made generative AI a topic of continuous conversation. Today, generative AI stands to change the way Medical Affairs teams handle regulatory compliance demands and communication requests.

Generative AI differs from earlier AI models in how it works and what it can do. Some things remain the same, however: for optimal outcomes, generative AI requires careful integration into the existing workflows and relationships Medical Affairs teams have built. Teams that attend to both the abilities and the shortcomings of generative AI can integrate these tools more smoothly, helping Medical Affairs achieve its goals more effectively.

Gen AI Advances in Regulatory Compliance and Communications

“Artificial intelligence” has been a topic of discussion for many years. Until recently, most of its uses focused on discriminative AI, which classifies data into relevant categories, write Mary Lee, Harrison Liu, and Rachana Gollapudi at Blue Matter Consulting. For instance, discriminative AI used in self-driving vehicle systems works to distinguish among road signs, stoplights, lane markers, other vehicles, pedestrians, and other items in the roadway to allow the vehicle’s computer to make the right decisions about each identified item.

Generative AI is different. Instead of merely distinguishing among inputs, generative AI creates novel outputs that resemble its training data. Large language models (LLMs) are a form of generative AI that produces written text in response to prompts.

Generative AI already offers several use cases in the biopharma industry, such as identifying new therapeutic targets, clinical trial sites, and potential patient cohorts, write Lee, Liu, and Gollapudi. McKinsey’s Chaitanya Adabala Viswa et al. write that “the technology could generate $60 billion to $110 billion a year in economic value for the pharma and medical-product industries” by improving clinical trial and regulatory compliance and approval, as well as product launch processes.

Viswa et al. predict that generative AI could improve regulatory compliance particularly in the area of Health Authority Queries (HAQs). Here, generative AI can improve compliance by:

  • Predicting which HAQs will be requested for a given submission.
  • Creating appropriate sponsor responses.
  • Providing insight into submission strategies.

“Gen AI’s predictive analytics, for example, can help teams proactively anticipate HAQs, thus reducing their number, both initial and follow-up,” write Viswa et al.

Deloitte’s Stefano Orani identifies additional use cases for generative AI in regulatory compliance:

  • Aiding Medical Affairs teams in understanding and interpreting regulations more effectively.
  • Assessing the impact of regulatory changes on internal policies, standards, and procedures.
  • Updating policies, standards, and procedures promptly when regulatory changes render previous content obsolete, and providing necessary communication and training.

Like any new technology, generative AI requires care and effort to implement. Addressing major challenges early can help MA teams tailor their generative AI use to their needs.


Challenges for Implementing Gen AI

A recent MAPS white paper, to which Anju contributed, lists several challenges in incorporating generative AI into Medical Affairs efforts. These challenges include:

  • Resisting the urge to implement generative AI without a clear strategic purpose.
  • Implementing generative AI without first considering the full impact of the technology on Medical Affairs teams’ current efforts and processes.
  • Accounting for cross-functional changes related to the use of generative AI.
  • Integrating generative AI technologies with human work processes in a way that enhances human efforts instead of needlessly duplicating those efforts.

These challenges arise alongside, and may be complicated by, challenges inherent in using generative AI in any context: early models’ tendency to produce inaccurate or fabricated outputs (“hallucinations”), gaps in data privacy and security safeguards, and the fact that the quality of a generative AI model’s output continues to depend on the quality of its input.

Another major challenge for implementing generative AI in life sciences is the current lack of regulatory and legal guidance on the use of generative AI, particularly when handling patient data and other sensitive information, write Shashank Bhasker and fellow authors at McKinsey. Without this guidance, “the protection of safe use will fall on users,” note Bhasker et al.

Many of these challenges can be overcome with careful attention to implementation of generative AI tools. Medical Affairs teams that apply generative AI thoughtfully, using it where it can provide the most value and support, can benefit from the tool.


Strategies and Tools for a Gen AI Future in Medical Affairs

Generative AI tools cannot replace human effort in Medical Affairs. Implementing these tools alongside humans, however, can result in deeper insights, better decision-making, and more effective, streamlined responses to regulatory demands and communication requests.

To ensure generative AI is used effectively, MA teams will need to use the tools carefully. They’ll also need to choose the right tools for each job—whether or not these use generative AI directly. And they’ll need to keep humans in the loop at every stage of the process.

Training Generative AI

Training generative AI is essential if it is to produce meaningful, data-based insights. Tarun Mathur and Sameer Lal recommend the following steps to train generative AI for better regulatory compliance and Medical Affairs communications:

  • Use validated, domain-specific, contextualized data to fine-tune the generative AI model and to provide prompts for the AI’s response.
  • Ensure training data is “representative, diverse, and [does] not contain biases that the system may amplify.”
  • Perform regular testing and review the AI’s output to ensure the model stays on track.
  • Engineer prompts to elicit factual information rather than opinions, which are more likely to rest on hallucinations.

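As an illustration only (the article does not prescribe any implementation), the prompt-engineering and review steps above can be sketched as a simple pipeline. Every name and string here is hypothetical:

```python
# Hypothetical sketch of the steps above: a prompt template that grounds
# the model in validated reference text, plus a stand-in for the
# human-in-the-loop review gate. All names are illustrative only.

def build_prompt(question: str, context: str) -> str:
    """Engineer a prompt that constrains the model to factual,
    reference-grounded answers rather than open-ended opinions."""
    return (
        "Answer using ONLY the validated reference text below. "
        "If the answer is not in the text, reply 'Not found'.\n\n"
        f"Reference:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def passes_review(draft: str, approved_phrases: list[str]) -> bool:
    """Stand-in for human review: flag drafts that do not echo
    validated content so a reviewer can check them before release."""
    return any(phrase.lower() in draft.lower() for phrase in approved_phrases)

prompt = build_prompt(
    "What is the approved dosing interval?",
    "Product X is dosed once every 12 hours per the approved label.",
)
draft = "Product X is dosed once every 12 hours."
release_ready = passes_review(draft, ["once every 12 hours"])
```

In practice the draft would come from a fine-tuned model and the review step would be performed by a person; the point of the sketch is simply that grounding and review sit on either side of the model call.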
Attention to ongoing training can optimize generative AI’s responses and refine these responses over time, resulting in a more trustworthy tool.

Choosing the Right Array of Tools

Currently, only 25 percent of healthcare organizations are using generative AI solutions, writes Heather Landi in Fierce Healthcare. While that number is expected to increase, it is unlikely that life sciences companies will find generative AI to be the right tool for every job. Rather, most Medical Affairs teams will continue to use available software, platforms, and other digital tools alongside generative AI.

While generative AI can glean insights from data provided to it, it cannot yet organize that information or its outputs in readily searchable, shareable, or interoperable ways. For that task, Medical Affairs teams will likely continue to rely on tools like Anju’s IRMS MAX, iCare MAX, and Pubstrat MAX. These tools integrate with one another, allowing for efficient information capture, organization, and sharing — which meets current regulatory and legal requirements. Generative AI may provide ways to leverage this data more effectively, particularly when paired with these and other platforms.

Human-Technology Partnerships

“Generative AI may well disrupt the life sciences industry, but human supervision remains the key,” write Mathur and Lal.

“Human in the loop” is the term used to describe the participation of humans in engineering generative AI prompts and in studying the output to determine that it is useful and conforms to reality and known facts. Keeping a human in the loop is a must both while training generative AI models and while using them to improve regulatory compliance, generate communications, and assist Medical Affairs teams with similar tasks.

Human-technology partnerships can also help address the alienation some audiences feel when they realize they’re talking to an AI rather than a human being. In one study published in JAMA Internal Medicine by John W. Ayers and fellow researchers, a panel of healthcare professionals preferred ChatGPT-generated answers to common healthcare questions 79 percent of the time over human-generated answers, finding the AI-based answers “higher quality and more empathetic.”

Even when ChatGPT or other generative AI tools can draft more effective communications, these tools cannot deliver them with the same level of connection as a human being. Partnering generative AI tools with Medical Affairs and other team members offers the best of both worlds: the AI’s output is reviewed and shared by people.

No technological tool solves every problem, and generative AI is no exception; however, generative AI can help Medical Affairs teams anticipate and address regulatory compliance issues, improve their communication, and achieve deeper insights.


Authored by Reed McLaughlin, Senior Vice President, Customer Management

Reed McLaughlin has spent the last 14 years developing an unparalleled rapport with clients and customers to create environments of success. As Senior Vice President of Sales, Mr. McLaughlin has been able to continually grow the business and improve Anju’s outreach and strengths. A leader in Sales and Customer Service, Reed has helped elevate Anju toward a brighter future.
