When medicine surrenders judgment to systems it cannot govern, harm becomes inevitable.
By Prof. MarkAnthony Nze
What Remains When the Hype Is Gone
Every technological epoch eventually outlives its slogans. The vocabulary of inevitability—disruption, intelligence, revolution—thins with time, leaving behind something more durable and far less forgiving: institutions, incentives, and consequences. Artificial intelligence in medicine has now crossed that threshold. The future it promised is no longer speculative. It is operational. And what now demands scrutiny is not what AI can do, but what it has already rearranged—often invisibly, and rarely democratically.
This investigation began with optimism and ends with power. Not the power of machines, but the power exercised through them: who controls medical decision-making, who bears risk when systems fail, and who is permitted to ask questions when outcomes harm rather than heal. AI did not simply enter healthcare as a tool. It entered as a governance event, reshaping authority, accountability, and trust in ways that were neither inevitable nor neutral.
What remains, after the hype is gone, is a reckoning.
Trust Is Not a Performance Metric
Trust has never been a function of accuracy alone. In medicine, trust is produced through legibility, responsibility, and the capacity for redress. Abrams (2023) makes clear that public confidence in artificial intelligence depends less on abstract ethical assurances than on whether individuals can identify who is answerable when systems err. This question becomes existential in healthcare, where errors are not theoretical but embodied—measured in delayed diagnoses, misallocated resources, and preventable harm.
Yet many clinical AI systems were deployed under a flawed assumption: that performance would generate trust retroactively. Instead, the opposite occurred. As algorithmic mediation expanded and systems became more opaque, trust eroded—first among clinicians asked to defer to tools they could not interrogate, and then among patients whose care pathways came to be shaped by recommendations no one could fully explain (McCradden et al., 2020).
Trust, the evidence now shows, cannot be automated. It must be governed.
The Illusion of Clinical Displacement
Despite years of speculation, medicine has not become algorithmic by default. It has become algorithmically mediated. The distinction is not semantic; it is structural. Algorithms increasingly influence diagnosis, triage, and resource allocation, but they do not replace the ethical burden of judgment. They displace it—often upward, away from the bedside, into institutional and commercial architectures that remain largely shielded from scrutiny.
Topol (2019) argued early that the most promising future for AI in medicine was one of augmentation rather than replacement. That future remains viable, but only if institutions resist the drift toward treating model outputs as authoritative rather than advisory. Cabitza et al. (2017) warned that uncritical reliance on machine learning systems invites unintended consequences precisely because models lack contextual awareness, moral reasoning, and an understanding of lived experience.
What this investigation reveals is that the erosion of human judgment was not imposed by technology. It was permitted—by procurement decisions, workflow redesigns, and incentive structures that privileged efficiency over deliberation.
Bias Was the Diagnostic Breakthrough
The most consequential revelation of the past decade was not that algorithms can be biased, but that they are biased systemically. Obermeyer et al. (2019) demonstrated that widely deployed health algorithms systematically disadvantaged Black patients—not due to malicious intent, but because cost was used as a proxy for need. Because historically less money was spent on Black patients than on equally sick white patients, the algorithm read lower spending as lower need and concluded that Black patients were healthier than they were. This single design choice encoded historical inequities into automated decision-making at scale.
That finding dismantled the myth of algorithmic neutrality. It reframed bias as a governance failure, not a technical anomaly. As Benjamin (2019) argues, technologies routinely inherit and harden existing social hierarchies unless they are deliberately redesigned to interrupt them. In healthcare, failure to do so does not merely reproduce inequality; it operationalizes it.
Bias, then, became the diagnostic moment—forcing AI medicine out of abstraction and into the moral foreground.
Explainability as Democratic Infrastructure
Calls for explainability are often caricatured as naïve demands for full transparency in systems too complex to unpack. This misrepresents the argument. Amann et al. (2020) clarify that explainability is not about revealing every line of code; it is about enabling meaningful oversight by clinicians, patients, and regulators. Without explanation, accountability collapses. Without accountability, legitimacy cannot survive.
The persistence of black-box systems in high-stakes medical contexts reflects a political choice: proprietary advantage has been prioritized over democratic control. Bender et al. (2021) warned that scale without understanding produces systems whose risks grow faster than our capacity to govern them. Healthcare has proven to be no exception.
Explainability is not a technical luxury. It is the minimum condition for ethical authority in a system that claims to care for human lives.
Read also: AI & Health: Who Controls The Cure?—PART 7
Regulation Arrived Late—But It Did Arrive
Regulatory responses to medical AI were initially hesitant, fragmented, and reactive. For years, deployment outpaced governance. Yet a convergence is now visible. The European Union’s Artificial Intelligence Act represents the most ambitious attempt to reverse the sequence—classifying medical AI as high-risk and subjecting it to enforceable, ex ante obligations (European Commission, 2024).
In parallel, the U.S. has begun to shift from episodic approval toward lifecycle oversight. The NIST AI Risk Management Framework and the FDA’s evolving approach to AI-based software as a medical device reflect growing recognition that safety cannot be assessed once and assumed thereafter (NIST, 2023; U.S. Food and Drug Administration, 2021).
These frameworks are imperfect. But they mark a decisive break from regulatory permissiveness. The central risk now is not overregulation, but regulatory capture—a condition in which standards exist formally, but enforcement bends toward commercial convenience.
Global Ethics, Unequal Power
International guidance has played a vital role in articulating shared values. The World Health Organization’s Ethics and Governance of Artificial Intelligence for Health provides a normative anchor grounded in transparency, inclusiveness, and accountability (World Health Organization, 2021). Yet ethics without enforcement remains aspirational.
Kerasidou (2021) warns that global ethical frameworks risk legitimizing unequal deployments if they fail to confront power asymmetries—between developers and deployers, between data-rich institutions and resource-constrained health systems. Ethics cannot substitute for political economy. Without attention to who controls infrastructure, data, and capital, principles float free of material reality.
Global AI ethics, then, faces its own credibility test: whether it can constrain power, or merely describe it.
Against Permissionless Medicine
One of the most persistent errors in AI governance has been the assumption that innovation requires permissiveness. Reddy (2023) dismantles this claim, arguing instead for precision regulation—rules calibrated to risk, context, and consequence. Healthcare AI, by definition, operates at the highest level of societal risk. It therefore demands the highest level of scrutiny.
Char et al. (2018) articulated this necessity early: ethical challenges in medical AI cannot be resolved through post hoc guidelines or voluntary principles. They require structural integration into design, deployment, and accountability. What has changed since then is not the argument, but the scale of evidence supporting it.
The record is now clear. Permissionless innovation in medicine does not liberate patients. It exposes them.
Re-Centering the Patient
Perhaps the most important shift now underway is the slow reassertion of patient agency. For too long, patients were treated as passive beneficiaries of algorithmic efficiency. That assumption has collapsed under scrutiny. Patients are not data points. They are rights-bearing participants in decisions that shape their bodies and lives.
Meaningful consent, the right to explanation, and the ability to contest algorithmic decisions are not obstacles to progress. They are safeguards against abuse. As Vayena et al. (2018) emphasize, respect for patient autonomy is not incompatible with technological advancement; it is the condition that makes advancement legitimate.
The future of AI in medicine will be judged not by its speed, but by whom it empowers.
What the Algorithm Cannot Decide
Algorithms can predict. They can classify. They can optimize. What they cannot do is decide what ought to matter. That responsibility remains irreducibly human. It belongs to institutions willing to confront the values encoded—often implicitly—into technological systems.
This investigation has shown that when those decisions are left to markets alone, outcomes favor scale over care, profit over equity, and opacity over trust. The central question now is whether that delegation of authority will continue.
A Final Reckoning
Artificial intelligence has forced medicine to confront questions it long deferred: who controls knowledge, who defines value, and who is accountable when systems fail. These questions were always present. AI merely made them unavoidable.
This epilogue is not a warning. It is a record. Of what happened when speed outran governance. Of how institutions responded once consequences became visible. And of the choices that now remain.
The cure, it turns out, was never just technological.
It was—and remains—political, ethical, and profoundly human.
Professor MarkAnthony Ujunwa Nze is an internationally acclaimed investigative journalist, public intellectual, and global governance analyst whose work shapes contemporary thinking at the intersection of health and social care management, media, law, and policy. Renowned for his incisive commentary and structural insight, he brings rigorous scholarship to questions of justice, power, and institutional integrity.
Based in New York, he serves as a tenured full professor and Academic Director at the New York Center for Advanced Research (NYCAR), where he leads high-impact research in governance innovation, strategic leadership, and geopolitical risk. He also oversees NYCAR’s free Health & Social Care professional certification programs, accessible worldwide at:
https://www.newyorkresearch.org/professional-certification/
Professor Nze remains a defining voice in advancing ethical leadership and democratic accountability across global systems.
Selected Sources (APA 7th Edition)
Abrams, Z. (2023). Ethics, transparency, and trust in artificial intelligence. Monitor on Psychology, 54(3), 48–55.
American Psychological Association.
https://www.apa.org/monitor/2023/03/ethics-transparency-trust-ai
Amann, J., Blasimme, A., Vayena, E., Frey, D., & Madai, V. I. (2020). Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20, Article 310.
Springer Nature.
https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-020-01332-6
Angus, D. C., et al. (2025). AI, health, and health care today and tomorrow. JAMA.
American Medical Association.
https://jamanetwork.com/journals/jama/fullarticle/2840175
Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity Press.
https://www.wiley.com/en-us/Race+After+Technology-p-9781509526406
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).
Association for Computing Machinery.
https://dl.acm.org/doi/10.1145/3442188.3445922
Cabitza, F., Rasoini, R., & Gensini, G. F. (2017). Unintended consequences of machine learning in medicine. JAMA, 318(6), 517–518.
American Medical Association.
https://jamanetwork.com/journals/jama/fullarticle/2648638
Char, D. S., Shah, N. H., & Magnus, D. (2018). Implementing machine learning in health care—Addressing ethical challenges. The New England Journal of Medicine, 378, 981–983.
Massachusetts Medical Society.
https://www.nejm.org/doi/full/10.1056/NEJMp1714229
European Commission. (2024). The EU Artificial Intelligence Act.
European Union.
https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
Kerasidou, A. (2021). Ethics of artificial intelligence in global health. Global Health, 17, Article 90.
BioMed Central.
https://pmc.ncbi.nlm.nih.gov/articles/PMC8449079/
McCradden, M. D., Joshi, S., Anderson, J. A., & Goldenberg, A. (2020). Clinical decision support and artificial intelligence: Ethical challenges. Journal of Medical Internet Research, 22(3), e16561.
JMIR Publications.
https://www.jmir.org/2020/3/e16561/
National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0).
U.S. Department of Commerce.
https://www.nist.gov/itl/ai-risk-management-framework
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
American Association for the Advancement of Science.
https://www.science.org/doi/10.1126/science.aax2342
Reddy, S. (2023). The case for precise regulation in health care artificial intelligence. Journal of Medical Internet Research, 25, e49989.
JMIR Publications.
https://www.jmir.org/2023/1/e49989
Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25, 44–56.
Nature Publishing Group.
https://www.nature.com/articles/s41591-018-0300-7
U.S. Food and Drug Administration. (2021). Artificial intelligence/machine learning–based software as a medical device action plan.
U.S. Department of Health and Human Services.
https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device
Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLOS Medicine, 15(11), e1002689.
Public Library of Science.
https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1002689
World Health Organization. (2021). Ethics and governance of artificial intelligence for health.
World Health Organization.
https://www.who.int/publications/i/item/9789240029200