With artificial intelligence gaining sway in numerous industries, the US Food and Drug Administration (FDA) has teamed up with its counterparts in Canada and the United Kingdom to draft a ten-point set of best practices for the development and deployment of AI in the health care sector.

FDA Digital Health Center of Excellence director Bakul Patel stated that the FDA and its counterpart agencies recognize that AI and machine learning technologies warrant careful consideration: they are complex, and their development draws on large amounts of data. The agencies therefore see a global opportunity to craft a set of quality standards tailored to the industry.

This collaborative endeavor proposes ten key principles that identify areas where international alignment could be pursued through organizations like the International Medical Device Regulators Forum (IMDRF), which includes regulatory bodies from Australia, Europe, and several Asian nations.

Finding Ways to Make Medical AI Better

The joint endeavor will also seek ways to align regulatory policies covering research into AI for medical use and any complementary tools.

The FDA also expressed hopes that this initiative could form the foundation of guidelines for developing AI technology specific to health care, or possibly lead to the adoption of best practices that have proven effective in other industries.

This is a continuation of the FDA's plan to apply risk-based strategies to device-related requirements. When the plan was first announced in September 2019, the agency stated that it had no plans to regulate a number of low-risk technologies, including those designed to help health care practitioners and their patients manage non-critical conditions.

What Do The New Best Practices Entail?

Multidisciplinary expertise is one of the key considerations in the new best practices. All three parties agree that such expertise is needed at every stage of the product life cycle, giving researchers and developers a better understanding of possible risks and of how AI models can be integrated into actual clinical workflows.

Likewise, the new principles call for training datasets that accurately represent a given AI's intended patient population. These would be kept separate from testing datasets so that performance evaluations remain independent and unbiased.
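In practice, that separation usually means partitioning patient records once, up front, so that no record used to train a model ever appears in the data used to evaluate it. A minimal sketch in Python (the patient records and field names here are hypothetical placeholders, not part of the FDA guidance):

```python
import random

def split_dataset(records, test_fraction=0.2, seed=42):
    """Shuffle once with a fixed seed, then partition the records
    so that no record appears in both the training and test sets."""
    rng = random.Random(seed)
    shuffled = records[:]          # copy; leave the original intact
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

# Hypothetical patient records keyed by an anonymized ID.
patients = [{"id": i} for i in range(100)]
train, test = split_dataset(patients)

# The two sets must be disjoint -- the whole point of the principle.
assert not {r["id"] for r in train} & {r["id"] for r in test}
```

Fixing the seed makes the split reproducible, which matters when regulators or auditors need to verify that the reported test performance really came from data the model never saw during training.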

Device testing also needs to reflect how the AI will be used in actual clinical conditions. Here, the criteria for success revolve around the performance of the human actors (physicians, caregivers, and technical personnel) using the AI, not around the technology itself. Those using the technology also need to be given clear, concise information, including notice of any modifications or updates.