Regulatory Frameworks and Transparency Challenges in Artificial Intelligence-Enabled Medical Devices: A Global Perspective
Abstract
The integration of artificial intelligence (AI) into medical devices has transformed diagnostics, risk prediction, and treatment planning. However, the regulatory frameworks governing the safety, efficacy, and ethical deployment of AI-enabled medical devices remain fragmented and evolving. This article presents a comprehensive review of current regulatory approaches in major jurisdictions—the United States, the European Union, and the Asia-Pacific region—highlighting disparities, best practices, and emerging models such as Predetermined Change Control Plans (PCCPs). The discussion emphasizes persistent transparency issues, including the underrepresentation of minority populations in clinical datasets, lack of explainability, and inconsistent post-market surveillance. Recent guidance from the U.S. FDA, the EU AI Act, and the UK MHRA is analyzed to map regulatory trends and identify gaps. The findings underscore the need for harmonized standards, robust transparency requirements, and stakeholder-driven accountability. Recommendations for future regulatory alignment, improved data reporting, and the integration of ethical AI-by-design principles are provided to advance global health equity and patient safety. This article serves as a foundation for policymakers and researchers aiming to strengthen regulatory oversight of AI in healthcare.