Best practices for implementing pharmacovigilance automation amid anticipated regulatory changes

By Annette Williams, Vice President & Global Head, Lifecycle Safety, IQVIA, and Barry Mulchrone, Senior Director & Head of Pharmacovigilance Oversight and Analytics, IQVIA

Since the declaration of the COVID-19 pandemic on 11 March 2020, there has been a need to innovate and transform how biopharmaceutical products are developed and delivered to help rapidly alleviate the global public health and societal burden of the pandemic. Innovation across multiple stakeholders in public health was spurred by the urgent need for safe and effective vaccines and treatments.

The unprecedented global effort undertaken to develop COVID-19 vaccines, treatments and rapid testing has led organizations to consider automation and artificial intelligence (AI) technology to contend with the large volumes of safety data generated by mass vaccination programs. This has been especially true for processing adverse event data associated with COVID-19 vaccines and treatments. It is estimated that more than half of all adverse events reported in 2021 were associated with COVID-19 vaccines alone.

In response to the challenges presented by accelerated vaccine approval timelines and mass vaccination programs conducted over a short timeframe, regulatory authorities also had to look at innovative ways of processing safety data while ensuring compliance with legislation and guidelines. The authorities issued guidance to manufacturers and the public to streamline the reporting of safety data during the pandemic, which also required technology innovation (e.g. upgrading the MHRA Yellow Card scheme in the UK).

As organizations continue to evaluate and implement automation and AI technology, it is important to consider how increased use of technology should align with both industry best practices and regulations – and to proactively address how technology adoption will affect stakeholders involved in routine pharmacovigilance activities, while ensuring that deployment and implementation do not risk breaching guidelines or regulations.

Addressing the “black box” of AI in global regulations

Pharmacovigilance is a particularly compelling use case for AI: routine activities, such as handling high volumes of largely structured data and mining that data for signals, remain predominantly manual and repetitive. Given the growing volume of adverse events and the associated increase in adverse drug reaction reporting and safety data exchange between multiple stakeholders within the same period of time, traditional processes are increasingly unscalable and therefore unsustainable. In addition, AI holds promise for streamlining signal management activities, offering the potential to identify signals earlier in the product lifecycle (thus helping companies to prioritize higher-value assets), to eliminate noise (i.e. false positives) during routine signal management, and to allow new sources of data to be used to identify and validate signals (e.g. real-world data to identify signals and social media to further validate them).
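As a simplified illustration of the kind of structured-data mining involved in signal detection, the sketch below computes a proportional reporting ratio (PRR), one widely used disproportionality measure, for a single drug-event pair. The counts and the screening threshold are hypothetical and for illustration only; they are not drawn from any specific safety database.

```python
# Minimal sketch: proportional reporting ratio (PRR), a common
# disproportionality measure used in signal detection.
# All counts below are hypothetical and for illustration only.

def prr(a: int, b: int, c: int, d: int) -> float:
    """
    a: reports of the event of interest for the drug of interest
    b: reports of all other events for the drug of interest
    c: reports of the event of interest for all other drugs
    d: reports of all other events for all other drugs
    """
    drug_rate = a / (a + b)
    other_rate = c / (c + d)
    return drug_rate / other_rate

# Hypothetical 2x2 contingency counts from a safety database
a, b, c, d = 40, 960, 200, 49_800

ratio = prr(a, b, c, d)
print(f"PRR = {ratio:.2f}")

# A common screening heuristic (e.g. PRR >= 2 with at least 3 cases)
# flags the drug-event pair for further clinical review; it does not
# by itself confirm a signal.
if ratio >= 2 and a >= 3:
    print("Flag drug-event pair for clinical review")
```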

Although the use case benefits of AI are well recognized, there is a need to proceed with caution. Challenges to consider include validation of AI tools; oversight and assurance that output remains as expected and does not change; and assurance that machine learning does not jeopardize pharmacovigilance quality objectives. Rigorous pre- and post-implementation checks need to be conducted to ensure that AI is behaving as it should and that there are no unintended downstream consequences. Machine learning processes should form an integral part of any company's overall quality management system, and clear documentation will need to be in place to account for how the technology is deployed and maintained.
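One way such pre- and post-implementation checks can be operationalized is sketched below: a locked validation set of previously approved outputs is rerun through the current model, and any drop in agreement blocks the release for investigation. The function names, data structures and threshold are assumptions made for illustration, not any specific vendor's API.

```python
# Minimal sketch of a pre-/post-implementation consistency check:
# rerun a locked validation set through the current model and compare
# against previously approved outputs. Names and thresholds are
# illustrative assumptions.

from typing import Callable, Dict

def consistency_check(
    model: Callable[[str], str],
    locked_validation_set: Dict[str, str],
    min_agreement: float = 0.98,
) -> bool:
    """Return True if the model still reproduces the approved outputs."""
    matches = sum(
        1 for text, expected in locked_validation_set.items()
        if model(text) == expected
    )
    agreement = matches / len(locked_validation_set)
    print(f"Agreement with approved outputs: {agreement:.1%}")
    return agreement >= min_agreement

# Example usage with a stand-in model and two hypothetical case texts
validation_set = {
    "Patient developed rash after dose 2": "non-serious",
    "Patient hospitalised with anaphylaxis": "serious",
}

def dummy_model(text: str) -> str:
    return "serious" if "hospitalised" in text else "non-serious"

if not consistency_check(dummy_model, validation_set):
    raise RuntimeError("Model output drifted; block release and investigate")
```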

Last year, both the United States Food and Drug Administration (FDA) and the European Parliament laid out plans to regulate the use of AI-based software in healthcare. In particular, the FDA’s guidance focuses on ensuring transparency for changes to algorithms. The August 2021 report from the International Coalition of Medicines Regulatory Authorities (ICMRA), a consortium of regulatory bodies, likewise reflects this concern. Among its recommendations, ICMRA called for the exploration of new regulatory frameworks regarding access to algorithms and their datasets, along with guidelines for the use of AI in data provenance, reliability, transparency, and validity.

Using technology effectively amid regulatory uncertainty and employee skepticism

The pace of technology change has always been difficult for government bodies to regulate: it takes time to draft and implement new regulations, and by the time they take effect the technology has already advanced. Part of this stems from the way regulations are shaped, as draft rules are subject to public input before they are revised and finalized. Part of it is due to the difficult balance between encouraging technological innovation and protecting public and patient health. Moreover, the complexity created by the increasingly global footprint of life sciences organizations, namely meeting differing regulatory requirements across multiple jurisdictions, cannot be overstated.

Whatever the outcome of regulatory action, there are three clear steps that life science organizations can take to ensure that their use of automation and AI in pharmacovigilance stays a step ahead of new compliance standards while improving productivity.

Robust training. Training needs to begin with the scientific basics: what artificial intelligence is, how it works, and why there is still a critical need for humans to interact with its output, applying the clinical judgment and decision-making where humans excel and machines currently do not. When employees understand what the algorithms are meant to do and how they actually behave, and have clarity on what support they need to provide and how it will improve their work, they are much more likely to use the tools often and as intended. Furthermore, consideration needs to be given to how the machine learns over time, and records of how training is managed should be retained as part of the company's overall quality management system.

Cross-departmental engagement. As organizations begin to roll out AI systems, it's important for PV and technology teams to collaborate early and often. A DevOps approach is recommended, as it supports success in developing and deploying AI technology. With this approach, both teams are expected to assess and address gaps in real time to ensure that the product performs as expected. Frequent and ongoing engagement also enables IT teams to quickly communicate to pharmacovigilance any changes to the AI product, including algorithm updates, a critical step in ensuring that automated adverse event and adverse drug reaction reporting processes remain compliant. The frequency of AI performance reviews should be documented, and contingency measures put in place should there be any deviation from the minimum quality standards defined in the overall quality management system.
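As a simple illustration of a documented performance review, the sketch below compares a review period's observed quality metrics against pre-agreed minimum standards and surfaces any deviation so contingency measures can be triggered. The metric names and floors are illustrative assumptions, not prescribed values.

```python
# Minimal sketch of a periodic AI performance review against minimum
# quality standards agreed in the quality management system.
# Metric names and thresholds are illustrative assumptions.

MINIMUM_STANDARDS = {
    "case_classification_precision": 0.95,
    "case_classification_recall": 0.90,
    "narrative_acceptance_rate": 0.85,
}

def review_performance(observed: dict) -> list:
    """Return the metrics that fall below their agreed minimum standard."""
    return [
        metric for metric, floor in MINIMUM_STANDARDS.items()
        if observed.get(metric, 0.0) < floor
    ]

# Hypothetical results from this review period's quality sample
observed_metrics = {
    "case_classification_precision": 0.97,
    "case_classification_recall": 0.88,   # below the agreed floor
    "narrative_acceptance_rate": 0.91,
}

deviations = review_performance(observed_metrics)
if deviations:
    # Contingency measure: escalate to the PV and technology teams and
    # document the deviation per the quality management system.
    print("Deviation detected for:", ", ".join(deviations))
```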

Process harmonization. Any enterprise application rollout may come with unintended consequences, and AI is no exception. In addition, incomplete or erroneous data left unchecked, as well as certain types of analysis, may trigger anomalies that, if undetected and unaddressed, could have an adverse impact on the entire analytics environment, bringing rework, unnecessary delays and an increased risk of non-compliant results. Organizations should harmonize business processes prior to implementation and continually assess them afterwards to account for any nuances that may have been missed. Any AI deployment should include a mechanism for flagging where human intervention may be required (e.g. an adverse event case is flagged as an exception if the AI was unable to create an accurate case narrative).
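The sketch below shows one possible shape for such an exception mechanism, assuming the AI component returns a confidence score alongside its draft case narrative; the field names and threshold are illustrative assumptions rather than any particular product's design.

```python
# Minimal sketch of an exception-flagging step: route a case to human
# review when the AI's confidence in its draft narrative is too low.
# The structure and threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CaseResult:
    case_id: str
    draft_narrative: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

CONFIDENCE_THRESHOLD = 0.80

def route_case(result: CaseResult) -> str:
    """Decide whether a case proceeds automatically or goes to a human."""
    if result.confidence < CONFIDENCE_THRESHOLD or not result.draft_narrative:
        return "human_review"   # flag as an exception for manual narrative
    return "auto_processing"

# Example: a low-confidence case is flagged rather than auto-processed
case = CaseResult(case_id="2022-000123", draft_narrative="", confidence=0.42)
print(route_case(case))  # -> human_review
```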

Act now to avoid challenges in the future

It’s impossible to predict when governing bodies in the United States, the European Union, or elsewhere will issue new regulations for the use of AI in healthcare and life sciences. It’s also impossible to predict whether these regulations will embrace the recommendations of ICMRA, or whether there will be broad alignment among regulators in different jurisdictions.

The absence of regulatory standards shouldn't dissuade life science organizations from considering AI for pharmacovigilance. The increased volume and compressed timelines of COVID-19 treatment and vaccine development have underscored the value of automation technology for adverse event and adverse drug reaction reporting. Manual processes alone would have been insufficient to meet the monumental challenges that the industry faced.

Organizations that take the right steps today – thorough training, engagement and collaboration, ongoing business process harmonization, and rigorous testing and monitoring of technology deployments – will bolster the maturity of their AI and automation programs in pharmacovigilance. As a result of this work, they will be well positioned to meet whatever regulatory requirements emerge in 2022 and beyond.