AI-Powered Medical Devices and Software: Assessing the Need for Regulation
In the world of healthcare, Artificial Intelligence (AI) is making significant strides, and the journey is far from over.
Guildford Street Laboratories, a spin-off from University College London, recently received a major boost: FDA Breakthrough Device designation for its blood-based Parkinson's disease test, PD Predict. The diagnostic applies machine-learning analysis to multiple measured biomarkers, offering a promising step toward earlier detection of Parkinson's disease.
Meanwhile, in a recent episode of the Keeping Up with the Radiologists podcast, Tessa Cook, director of Penn Medicine's Center for Practice Transformation in Radiology, along with Saurabh (Harry) Jha and Hugh Harvey, discussed the future of AI in clinical care. They touched upon topics such as the need for evidence and rigor, transparency, trust, and managing AI in the real world.
One tool that aids in this endeavor is the American College of Radiology's (ACR) Assess-AI registry, which enables comparison of a site's local performance metrics with national benchmarks and with facilities of similar characteristics. It is currently being piloted at 15 sites, with four AI vendors and platforms engaged, for two FDA-cleared clinical use cases.
The conversation also turned to regulation. FDA Commissioner Robert Califf, MD, has expressed doubts about whether U.S. health systems are ready to validate an AI algorithm within a clinical care system. The future of regulation is a spectrum, according to Hugh Harvey, managing director of Hardian Health, a U.K.-based clinical digital consultancy, who discussed the challenges of CE marking versus FDA 510(k) and the difficulties of obtaining regulatory clearance.
The FDA itself is undergoing a restructuring, with changes under the Trump administration being closely watched. The agency's Digital Health Advisory Committee also recently held a public meeting on the regulation of generative AI and the use of narrow AI tools.
Bernardo Bizzo, MD, PhD, discussed the ACR's national AI quality assurance program, ARCH-AI, and its associated Assess-AI registry, which provides real-world monitoring of imaging-based AI models deployed in clinical workflows.
The path forward is not without challenges, however. Bizzo emphasized the need for clinically safe thresholds and for specific elements and metrics to monitor safety and efficacy. Generative AI looms as the field's next stage, and discussions about how to handle these aspects are ongoing.
AI is already commonly used in drugs and biologics, but in clinical care it introduces additional complexity and variability. The CEO of a risk consulting and algorithmic auditing firm, along with coauthors, recently discussed auditing algorithmic risk in the MIT Sloan Management Review. Yet, to date, not a single autonomous AI system has received Class III regulatory approval.
Nevertheless, the strides being made in AI-based diagnostics offer a promising future for early and accurate disease detection.
This article is brought to you by AuntMinnie.com in collaboration with Penn Radiology; the Keeping Up with the Radiologists podcast is available on Spotify, YouTube Music, and Apple Podcasts.