AI & Health: Who Controls The Cure?—PART 4


Regulating the Invisible

Why Laws, Standards, and Oversight Fail Against Algorithmic Medicine.

By Prof. MarkAnthony Ujunwa Nze

The Regulatory Illusion

Regulation is supposed to make power visible. In healthcare, it has historically done so by demanding documentation, traceability, and accountability through clinical trials, post-market surveillance, malpractice law, and professional licensure. Artificial intelligence, however, has exposed the limits of this regulatory imagination. Across jurisdictions, AI systems now operate inside hospitals and health agencies with profound clinical influence but only partial legal clarity. They are regulated in fragments, audited intermittently, and governed largely by frameworks that assume transparency where none exists.

Morley et al. (2022) describe this condition as a structural mismatch: AI systems evolve faster than the regulatory instruments designed to contain them. Oversight regimes built for static medical devices struggle to address adaptive algorithms that change behavior over time. The result is not deregulation, but misregulation—rules that exist on paper but fail to engage the real locus of algorithmic power.

From Medical Devices to Moving Targets

Regulatory authorities have attempted to assimilate AI into existing medical-device frameworks. In the United States, the Food and Drug Administration (FDA) classifies many AI systems as Software as a Medical Device (SaMD), subject to premarket review and post-market monitoring. Yet this approach presumes that a product’s behavior at approval resembles its behavior in practice. With machine-learning systems, that presumption collapses.

Pantanowitz (2024) shows how adaptive algorithms challenge foundational regulatory concepts such as validation, version control, and risk classification. When a system learns from new data, it may drift from its approved state without triggering formal review. Regulators, in effect, approve a snapshot while the system continues to evolve. Oversight becomes retrospective, not preventative.
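To make the drift problem concrete, the sketch below shows one way a deployment team might compare the input data a model saw at approval with the data it sees in production, using a population stability index. The function, the synthetic data, and the 0.25 threshold are illustrative assumptions, not any regulator's method; the point is that nothing in current premarket review obliges anyone to run such a check.

```python
# Hypothetical sketch: detecting post-approval drift in a deployed model's
# input data. Names, data, and the 0.25 threshold are illustrative only.
import numpy as np

def population_stability_index(expected: np.ndarray, observed: np.ndarray,
                               bins: int = 10) -> float:
    """Compare the distribution seen at approval with the current one."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    obs_counts, _ = np.histogram(observed, bins=edges)
    # Convert counts to proportions, avoiding zero divisions.
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    obs_pct = np.clip(obs_counts / obs_counts.sum(), 1e-6, None)
    return float(np.sum((obs_pct - exp_pct) * np.log(obs_pct / exp_pct)))

# Feature values recorded at the time of regulatory approval vs. today.
approval_snapshot = np.random.normal(loc=0.0, scale=1.0, size=5_000)
current_inputs = np.random.normal(loc=0.4, scale=1.2, size=5_000)

psi = population_stability_index(approval_snapshot, current_inputs)
if psi > 0.25:  # a commonly used rule of thumb, not a statutory limit
    print(f"PSI={psi:.2f}: inputs have drifted from the approved snapshot; flag for re-review")
```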

The European Union’s AI Act attempts to address this by categorizing medical AI as “high risk,” imposing requirements for transparency, risk management, and human oversight. Yet Aboy et al. (2024) caution that even this ambitious framework struggles to operationalize enforcement in clinical settings where proprietary protections limit access to models and training data. High-risk classification does not automatically translate into high-visibility governance.

The Governance Gap Inside Institutions

Much of the regulatory burden is quietly shifted onto healthcare institutions themselves. Hospitals are expected to assess vendor claims, manage algorithmic risk, and ensure ethical deployment—often without the technical capacity to do so. Freeman et al. (2025) document how internal AI governance committees are emerging as ad hoc solutions, tasked with responsibilities traditionally reserved for regulators.

These committees, however, operate within institutional constraints. They lack subpoena power, cannot compel disclosure of proprietary models, and often rely on vendor-provided documentation. Hassan (2025) describes this as delegated governance without delegated authority. Responsibility is transferred downward, but power is not.

The consequence is predictable: institutions assume liability for systems they cannot fully evaluate. When failures occur, hospitals are scrutinized for oversight lapses, while vendors invoke trade secrecy. Regulation, in practice, becomes a shield for upstream actors and a burden for downstream users.

Risk Management Without Risk Visibility

Regulatory frameworks increasingly emphasize “risk-based” governance. The U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework provides a widely cited model for identifying, assessing, and mitigating AI risks across the lifecycle. Its language is rigorous, its taxonomy comprehensive. Yet its application in healthcare reveals a fundamental limitation: risk cannot be managed if it cannot be seen.

Schmidt et al. (2024) map the global regulatory landscape and find that most oversight regimes depend on self-reporting and documentation supplied by developers. This creates a circular logic: systems are deemed safe based on assurances from those with the strongest incentive to minimize perceived risk. Without independent audit rights, risk management frameworks become procedural rituals rather than substantive controls.
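As an illustration of what a risk register tied to the NIST AI RMF's four functions (Govern, Map, Measure, Manage) might look like inside a hospital, consider the minimal sketch below. The field names, scoring scale, and example entry are assumptions; the detail to notice is how often the only evidence behind an entry is the vendor's own documentation.

```python
# A minimal sketch of a hospital-side risk register keyed to the four NIST
# AI RMF functions. Fields, scales, and the example entry are assumptions.
from dataclasses import dataclass, field
from enum import Enum

class RmfFunction(Enum):
    GOVERN = "govern"    # policies, roles, accountability structures
    MAP = "map"          # context, intended use, affected populations
    MEASURE = "measure"  # metrics, testing, monitoring of identified risks
    MANAGE = "manage"    # prioritization, mitigation, incident response

@dataclass
class RiskEntry:
    system: str
    description: str
    rmf_function: RmfFunction
    likelihood: int          # 1 (rare) to 5 (almost certain), assumed scale
    impact: int              # 1 (negligible) to 5 (severe), assumed scale
    evidence_source: str     # who supplied the evidence for this assessment
    independently_verified: bool = False
    mitigations: list[str] = field(default_factory=list)

register = [
    RiskEntry(
        system="sepsis-risk-model-v2",          # hypothetical system name
        description="Sensitivity degrades for patients under 18",
        rmf_function=RmfFunction.MEASURE,
        likelihood=3,
        impact=4,
        evidence_source="vendor validation report",
        independently_verified=False,            # the gap the article describes
        mitigations=["restrict use to adult wards", "request raw test data"],
    ),
]

# Risks resting solely on vendor assurances are exactly the ones a
# risk-based framework cannot actually see.
unverified = [r for r in register if not r.independently_verified]
print(f"{len(unverified)} of {len(register)} recorded risks lack independent verification")
```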

Sun et al. (2024) argue that the translation of ethical principles into enforceable rules remains incomplete. Accountability, transparency, and fairness are widely endorsed, but rarely operationalized in ways that constrain real-world deployment. Regulation becomes aspirational rather than coercive.

Ethics Without Enforcement

International bodies have responded to AI’s rise with a proliferation of ethical guidelines. The World Health Organization’s Ethics and Governance of Artificial Intelligence for Health outlines principles of inclusiveness, transparency, and accountability. These principles are necessary. They are also insufficient.

Ethical guidance lacks legal force unless embedded in enforceable standards. Nouis et al. (2025) show that ethical review mechanisms often operate separately from procurement and deployment decisions. A system may pass an ethics checklist and still be deployed without clear accountability pathways or audit mechanisms. Ethics becomes a precondition for approval, not a condition of continued operation.

Papagiannidis (2025) notes that this separation allows organizations to claim ethical compliance while avoiding structural reform. The language of ethics is adopted; the distribution of power remains unchanged.

The Black Box Meets the Law

Legal systems are ill-prepared for algorithmic opacity. Traditional liability frameworks presume that causation can be traced and negligence established. With AI-assisted decisions, causation is probabilistic and distributed. Pham (2025) demonstrates that courts face profound difficulty assigning responsibility when harm results from interactions between clinicians, institutions, and opaque systems.

Cestonaro (2023) highlights that existing doctrines of medical liability assume a human decision-maker with discretion and explainable reasoning. When recommendations originate from black-box models, clinicians may follow guidance they cannot interrogate, while developers disclaim clinical responsibility. The law encounters a void where intention, foreseeability, and control become blurred.

Sabet (2025) warns that without proactive regulatory reform, liability will be resolved only after harm occurs—through litigation rather than prevention. Regulation arrives after midnight, not before.

Global Fragmentation, Local Consequences

AI regulation in healthcare is profoundly uneven. High-income jurisdictions experiment with comprehensive frameworks, while low- and middle-income countries often rely on imported systems governed elsewhere. Ghaffari Heshajin (2024) highlights how weak health information governance enables foreign entities to deploy AI tools without meaningful local oversight.

Geist (2025) frames this as a sovereignty problem. When predictive models are trained, updated, and controlled externally, states lose the ability to govern their own health intelligence. Regulatory dependency becomes a form of technical dependency, with long-term implications for public health autonomy.

The global governance landscape is thus fragmented: rules vary by jurisdiction, enforcement is inconsistent, and cross-border data flows escape effective oversight. Algorithms exploit these gaps with ease.

Why Regulation Keeps Failing

The failure to regulate medical AI effectively is not due to ignorance. It is due to structural incentives. Vendors benefit from opacity. Institutions benefit from efficiency. Regulators struggle with limited resources and rapidly evolving technologies. Each actor operates rationally within its constraints; the system as a whole produces regulatory blindness.

Singhal (2024) argues that fairness, accountability, and transparency must be embedded as enforceable obligations, not voluntary commitments. Without mandatory auditability, disclosure requirements, and shared liability frameworks, regulation will continue to trail innovation rather than shape it.
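What mandatory auditability could mean at the point of care is easy to sketch, even though no current rule requires it. The minimal example below logs every AI recommendation together with the model version, a fingerprint of the inputs, and the clinician's response; the field names, hashing scheme, and storage format are assumptions for illustration, not a prescribed standard.

```python
# Hypothetical sketch of an append-only audit trail for AI recommendations.
# Field names, hashing, and storage are assumptions; a real obligation would
# be defined by regulators, not by this illustration.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_recommendations.jsonl"  # one JSON object per line, append-only

def log_recommendation(model_id: str, model_version: str, patient_inputs: dict,
                       recommendation: str, clinician_action: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,           # the exact version that produced the output
        "input_hash": hashlib.sha256(
            json.dumps(patient_inputs, sort_keys=True).encode()
        ).hexdigest(),                            # fingerprint, not raw identifiable data
        "recommendation": recommendation,
        "clinician_action": clinician_action,     # followed, overridden, deferred
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a clinician overrides a discharge recommendation; the record
# survives regardless of which party later disclaims responsibility.
log_recommendation(
    model_id="readmission-risk",        # hypothetical model name
    model_version="3.1.4",
    patient_inputs={"age": 71, "hb_a1c": 8.2, "prior_admissions": 2},
    recommendation="discharge with outpatient follow-up",
    clinician_action="overridden: admitted for observation",
)
```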

The Forensic Conclusion

Regulating AI in healthcare has become an exercise in chasing shadows. Laws reference systems they cannot fully see. Standards assume cooperation where incentives discourage disclosure. Ethics articulate values without enforcement.

AI remains governable—not because it is benign, but because it is built by humans within institutions that respond to rules. The failure lies not in the technology, but in the reluctance to confront power directly. Until regulation compels visibility, accountability, and shared responsibility, algorithmic medicine will continue to operate beyond meaningful oversight.

 

Professor MarkAnthony Ujunwa Nze is an internationally acclaimed investigative journalist, public intellectual, and global governance analyst whose work shapes contemporary thinking at the intersection of health and social care management, media, law, and policy. Renowned for his incisive commentary and structural insight, he brings rigorous scholarship to questions of justice, power, and institutional integrity.

Based in New York, he serves as a full tenured professor and Academic Director at the New York Center for Advanced Research (NYCAR), where he leads high-impact research in governance innovation, strategic leadership, and geopolitical risk. He also oversees NYCAR’s free Health & Social Care professional certification programs, accessible worldwide at:
 https://www.newyorkresearch.org/professional-certification/

Professor Nze remains a defining voice in advancing ethical leadership and democratic accountability across global systems.

 

Selected Sources (APA 7th Edition)

Aboy, M., et al. (2024). Navigating the EU AI Act: Implications for regulated digital medical products. PMC.
https://pmc.ncbi.nlm.nih.gov/articles/PMC11379845/

Carapinha, J. L. (2024). Balancing innovation and ethics in AI governance for health technology assessment. Journal of Health Services Research & Policy.
https://www.tandfonline.com/doi/full/10.1080/13696998.2024.2352821

Freeman, S., et al. (2025). Developing an AI governance framework for safe and responsible use in health care. PMC.
https://pmc.ncbi.nlm.nih.gov/articles/PMC12340452/

Hassan, M. (2025). Artificial intelligence governance framework for healthcare. Healthcare Management Forum.
https://journals.sagepub.com/doi/10.1177/08404704241291226

National Institute of Standards and Technology. (2023). Artificial intelligence risk management framework (AI RMF 1.0). NIST.
https://www.nist.gov/ai-risk-management-framework

Pantanowitz, L. (2024). Regulatory aspects of artificial intelligence and machine learning in pathology and medical devices. Modern Pathology, 37, 112–125.
https://www.modernpathology.org/article/S0893-3952%2824%2900189-3/fulltext/

Schmidt, J., et al. (2024). Mapping the regulatory landscape for artificial intelligence in healthcare and population health. NPJ Digital Medicine.
https://www.nature.com/articles/s41746-024-01221-6

Sun, N., Miao, Y., Jiang, H., Ding, M., & Zhang, J. (2024). From principles to practice: A deep dive into AI ethics and regulations. arXiv.
https://arxiv.org/abs/2412.04683

World Health Organization. (2021). Ethics and governance of artificial intelligence for health. WHO.
https://www.who.int/publications/i/item/9789240029200

Xiong, H. (2025). Global artificial intelligence governance, trust, and policy frameworks. Verixiv.
https://verixiv.org/articles/2-187/pdf

Kumar, R. (2025). Navigating healthcare AI governance: Comprehensive algorithmic oversight and stewardship. Health Policy and Technology.
https://link.springer.com/article/10.1007/s10728-025-00537-y

OpenAI Policy & Governance Collaborative. (2025). Accountability Framework for Healthcare AI Systems. arXiv.
https://arxiv.org/abs/2509.03286

Ong, C. L., et al. (2025). Regulatory science innovation for generative AI and large language models in health and medicine. arXiv.
https://arxiv.org/abs/2502.07794

Momani, A. (2025). Implications of artificial intelligence on health data privacy and confidentiality. arXiv.
https://arxiv.org/abs/2501.01639

Papagiannidis, E. (2025). Responsible artificial intelligence governance: A scoping review and framework. Technology in Society.
https://www.sciencedirect.com/science/article/pii/S0963868724000672

U.S. Food and Drug Administration. (2025). Artificial intelligence in software as a medical device (AI/ML-SaMD): Regulatory considerations. FDA.
https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-software-medical-device

Africa Digital News, New York
