A Risk-Based Approach to Compliance for AI/ML-Based Medical Devices
The healthcare artificial intelligence market is projected to reach $51.3 billion by 2027, growing at a CAGR of 41.4% from 2020. In recent years, growing AI/ML capabilities, combined with access to Big Data, have seen these technologies pervade every industry sector, including healthcare. There is growing demand for personalized healthcare services at lower cost, which responsible yet effective use of AI/ML technologies is expected to make a reality.
In its discussion paper on the Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD), the US FDA points out how AI/ML-based technologies can improve healthcare solutions and outcomes using the insights gained from the vast amount of data generated by healthcare delivery solutions.
Some of the key use cases would be in early disease detection, improved diagnosis, recognizing new observations or patterns on human physiology, as well as developing personalized diagnostics and therapeutics.
It also points out how the AI/ML embedded in the software can continually learn and improve from real-world use and experience, setting it apart from traditional, 'locked' software as a medical device (SaMD).
Medical Specialties Where AI Applications are Employed
AI/ML technologies are already being used in a variety of devices, which can be broadly classified under the following categories:
- Radiology, one of the specialties where AI is most widely used, whether for reading scans or other imaging data such as MRI or CT
- Oncology for a variety of uses including mammogram workflow
- Robotic surgery
- Neurology (for example, AI/ML is used in brain atrophy screening)
- Endocrinology and diabetes management, such as detection of diabetic retinopathy
- Cardiology (for example, AI/ML is used to estimate the fractional flow reserve score for coronary artery disease, and for ECG analysis)
- Internal Medicine (for example, AI/ML used for assessing liver iron concentration)
AI-Based SaMD – Creating a New Need
The current AI/ML-based devices have been approved using the framework already present for SaMD and require:
- Manufacturers to submit a marketing application prior to the initial distribution of their medical device, with the submission type and data requirements based on the risk of the SaMD (the 510(k) notification, De Novo, or premarket approval application (PMA) pathway)
- Changes to the design specific to the software to be reviewed and cleared under a 510(k) notification, based on guidance published by FDA's Center for Devices and Radiological Health (CDRH). This risk-based approach helps determine when a premarket submission is required
The limitation of this approach is that an AI/ML-based SaMD undergoes continuous learning: the algorithm may change, and with it the impact on health outcomes and the risk level. The current regulatory approval processes therefore need to be revisited to take cognizance of the evolving nature of the algorithm, as against the 'locked' nature of traditional SaMD. Under the existing framework, each algorithm change would require a new premarket submission, even though the software is designed to keep learning and evolving over time to improve patient care without compromising patient safety or treatment effectiveness.
Total Product Lifecycle Regulatory Approach
Because AI/ML is highly iterative, autonomous, and adaptive in nature, SaMD with AI/ML capability requires a new, total product lifecycle (TPLC) regulatory approach so that it can continually improve while providing effective safeguards. This approach has to take cognizance of the many types of changes possible to an AI/ML-based SaMD, which can be broadly categorized as:
- Performance, which may be clinical or analytical
- The inputs used by the algorithm and how they clinically impact the SaMD output
- Intended use
The intended use is defined by the IMDRF risk framework based on the following two major factors:
1. The significance of the information provided by the SaMD to the healthcare decision, i.e., whether it is used:
- To treat or diagnose
- To drive clinical management
- To inform clinical management
2. The state of the healthcare situation or condition (critical, serious, or non-serious)
Based on these factors, the risk level of the AI/ML-based SaMD can be established as lowest (I) to highest (IV).
SaMD IMDRF Risk Categorization
The columns represent the significance of the information provided by the SaMD to the healthcare decision; the rows represent the state of the healthcare situation or condition:

| State of healthcare situation or condition | Treat or diagnose | Drive clinical management | Inform clinical management |
|---|---|---|---|
| Critical | IV | III | II |
| Serious | III | II | I |
| Non-serious | II | I | I |
The equivalent description of intended use for FDA purposes can be referred to in 21 CFR 807.92(a)(5), 814.20(b)(3), and 860.7(b).
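The categorization above can be read as a simple two-key lookup. The sketch below is illustrative only: the key names and the function are hypothetical, while the category values follow the standard IMDRF matrix described above.

```python
# Illustrative lookup for the IMDRF SaMD risk categorization.
# Keys: (state of healthcare situation, significance of information).
# Values: IMDRF category, from I (lowest risk) to IV (highest risk).
IMDRF_RISK = {
    ("critical",    "treat_or_diagnose"):    "IV",
    ("critical",    "drive_clinical_mgmt"):  "III",
    ("critical",    "inform_clinical_mgmt"): "II",
    ("serious",     "treat_or_diagnose"):    "III",
    ("serious",     "drive_clinical_mgmt"):  "II",
    ("serious",     "inform_clinical_mgmt"): "I",
    ("non_serious", "treat_or_diagnose"):    "II",
    ("non_serious", "drive_clinical_mgmt"):  "I",
    ("non_serious", "inform_clinical_mgmt"): "I",
}

def samd_risk_category(state: str, significance: str) -> str:
    """Return the IMDRF risk category (I-IV) for a SaMD."""
    return IMDRF_RISK[(state, significance)]

# Example: software that drives clinical management of a critical
# condition falls in category III.
print(samd_risk_category("critical", "drive_clinical_mgmt"))  # III
```

A manufacturer would typically establish this categorization once for the intended use, then reassess it whenever a modification could shift either factor.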
The TPLC approach enables quality assurance of the AI/ML-based device and also pushes the organization to embrace an internal culture of quality excellence. It covers every stage of the product lifecycle, both pre-market and post-market, from software development and testing through performance monitoring.
This approach balances the benefits and risks of AI/ML-based SaMD by establishing clear expectations:
- On quality systems and good ML practices (GMLP)
- For continually managing patient risks throughout the lifecycle of the AI/ML-based SaMD
- On monitoring the AI/ML device and incorporating a risk management approach based on FDA’s Guidance: “Deciding When to Submit a 510(k) for a Software Change to an Existing Device”
- On increased transparency and post-market real-world performance reporting for continued assurance of safety and effectiveness
Good Machine Learning Practice (GMLP)
The FDA defines Good Machine Learning Practices (GMLP) based on AI/ML best practices that are akin to good software engineering or quality system practices, and include data management, feature extraction, training, and evaluation. The FDA is creating a framework for manufacturers of AI/ML-based SaMD to submit potential modifications along with their plans to manage and control the resultant risks. These could be of two types:
- SaMD Pre-Specifications (SPS), which outline the retraining and model update strategy and the associated methodology
- Algorithm Change Protocol (ACP), through which those changes are implemented in a controlled manner to manage risks to patients
Changes that fall within the pre-specified SPS and are implemented through the ACP can be managed in a controlled manner; other changes may require focused review of the modified SPS/ACP during premarket review. Where the actual risks and learning diverge drastically from the planned course of action, the change needs to be analyzed against those risks and a new premarket submission made.
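The handling of a proposed modification can be pictured as a simple decision flow. The sketch below is an assumption-laden illustration, not FDA process: the function name, parameters, and outcome labels are all hypothetical stand-ins for the SPS/ACP logic described above.

```python
# Illustrative decision flow for handling a proposed modification to an
# AI/ML-based SaMD under an SPS/ACP framework. All names here are
# hypothetical; they merely mirror the narrative above.

def review_modification(within_sps: bool, follows_acp: bool) -> str:
    """Decide how a proposed modification is handled.

    within_sps  -- the change falls inside the pre-specified SPS envelope
    follows_acp -- the change is implemented per the Algorithm Change Protocol
    """
    if within_sps and follows_acp:
        # Controlled, anticipated change: document it and keep monitoring
        # real-world performance post-market.
        return "document_and_monitor"
    if within_sps:
        # Anticipated change made outside the agreed protocol: the modified
        # SPS/ACP needs a focused review before proceeding.
        return "focused_review"
    # Change beyond the agreed SPS/ACP, e.g. risks or learning diverging
    # drastically from plan: a new premarket submission is needed.
    return "new_premarket_submission"
```

The point of the sketch is that only the third branch sends the manufacturer back to a full premarket submission; the first two keep routine evolution inside the controlled TPLC loop.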
Transparency and Performance Monitoring
For the TPLC approach to be successful, businesses will need to incorporate mechanisms for transparency and real-world performance monitoring.
Transparency would include the following:
- Updates to the FDA, device companies, collaborators, and the public
- Accurate changes to the labeling
- Updates to the specifications or compatibility of any impacted supporting devices, accessories, or non-device components
- Communication procedures, through various media, to keep stakeholders updated
This along with real-world performance monitoring can help mitigate risks and ensure continued safety and effectiveness of the AI/ML-based devices.
For more information on 'Artificial Intelligence and Machine Learning in Medical Technology – Fundamentals and Emerging Regulations', don't miss the webinar by our guest speaker and subject matter expert, Sundeep Agarwal. Watch the webinar recording.
At ComplianceQuest, our next-generation EQMS, built on the Salesforce Platform, is well-suited to automate your end-to-end quality and compliance workflow. Our solution is scalable and flexible, and can be custom-designed to manage all regulatory compliance related processes for your AI/ML-based medical device. It comes with an integrated risk management solution, a "must have" when running a quality management process for your next-generation medical device.
To find out more, write to email@example.com to schedule a demo/call with one of our subject matter experts.