Commercial AI medical devices lacking adequate clinical validation may pose risks to patient care. Surprisingly, almost half of the tools already cleared by the US Food and Drug Administration fall into that category.
In an August 26th commentary in Nature Medicine, researchers at the University of North Carolina at Chapel Hill presented an analysis of FDA AI device clearances to date and called on the FDA and AI developers to publish more clinical validation data and to prioritize prospective studies.
“Although AI device manufacturers boast of the credibility of their technology with FDA authorization, clearance does not mean that the devices have been properly evaluated for clinical effectiveness using real patient data,” said author Sammy Chouffani El Fassi, a medical student, in a news release.
The researchers looked at 521 AI or machine-learning device authorizations. They found that 144 were retrospectively validated, 148 were prospectively validated, and 22 were validated using randomized controlled trials. Most notably, 226 of 521 (43%) lacked published clinical validation data.
AI is able to learn and perform human-like functions by using combinations of algorithms. The technology is then fed a plethora of data and sets of rules to follow so that it can “learn” to detect patterns and relationships.
But from there, device manufacturers need to ensure that the technology does not simply memorize the data used to train it, and that it can produce accurate results on data it has never seen before.
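To illustrate that memorization concern, the minimal sketch below (a hypothetical example using a public dataset and an off-the-shelf model, not anything from the commentary) evaluates a model on a held-out test set rather than on its own training data; a large gap between the two scores is the classic sign of memorization rather than generalization.

```python
# Minimal sketch (hypothetical example): checking that a model performs on data
# it has never seen, not just on the data it was trained on.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Public dataset used here purely as a stand-in for real clinical data.
X, y = load_breast_cancer(return_X_y=True)

# Hold out 30% of the records; the model never sees them during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# A model that merely memorized its training data would score well on the first
# line but poorly on the second; a large gap signals overfitting.
print("Training accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

A held-out check like this is only the retrospective end of the validation spectrum the researchers describe; prospective studies and randomized controlled trials test the device on patients encountered after the model was built.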
Since 2016, the average number of medical AI device authorizations by the FDA per year has increased from 2 to 69, indicating tremendous growth in the commercialization of AI medical technologies.
The majority of approved AI medical technologies are used to assist physicians with diagnosing abnormalities in radiological imaging, analyzing pathology slides, dosing medication, and predicting disease progression.
However, even as AI booms, its implementation in healthcare has raised concerns about patient harm, liability, device accuracy, patient privacy, scientific acceptability, and lack of explainability, otherwise known as the “black box” problem.
According to the authors of the study, these concerns underscore the need for transparency in how AI technology is validated.
What’s more, the researchers found that the latest draft guidance, published by the FDA in September 2023, does not clearly distinguish between different types of clinical validation studies in its recommendations to manufacturers.
For its part, in an article published earlier this year in npj Digital Medicine, the FDA noted that it is currently working out details to help improve the transparency of AI product information.