AI & Health: Who Controls The Cure?—PART 5


By Prof. MarkAnthony Nze

Resistance, Reckoning, and the Future of Human-Centered Medicine

How Patients, Clinicians, and Institutions Are Pushing Back Against Algorithmic Control.

When Progress Meets Consequence

Technological waves don’t stop at the shoreline; they slam into it. Healthcare’s embrace of artificial intelligence—first hailed as efficiency at scale—has now hit that surf. The celebratory phase is over. In its place: questions from the very people most affected by machine-mediated decisions. Patients discovering their care pathways were quietly shaped by an algorithm. Clinicians realizing their judgement has been nudged, then subordinated. Institutions finding themselves liable for tools they cannot interrogate.

This pushback is not a romantic defense of the past. It is evidence-led. As Angus et al. (2025) argue, the first generation of healthcare AI was built to win on performance metrics—accuracy, speed, throughput—while governance, consent, and accountability were left for later. Later has arrived. As systems move from “advice” to infrastructure—triage gates, risk scoring backbones, treatment prioritization—the costs of that deferral are now visible.

What follows isn’t rejection of AI. It is a reckoning with how power, risk, and responsibility are distributed when software quietly becomes policy.

The Patient Who Was Never Consulted

For years, AI crept into clinical decision-making through small doors—order sets, risk scores, triage lists—until patients were being steered without being told. Disclosure, when it existed, lived in footnotes and euphemisms. Capella (2025) shows how often the fact of algorithmic influence is diluted by technical language or omitted entirely, even where model outputs materially shape care.

The consent problem runs deeper than poor wording. Informed consent presumes comprehensibility: a patient knows what is being proposed and why. Hjort (2025) shows how that premise collapses when the rationale for a decision is a probability distribution inside a proprietary model. Consent becomes ceremonial—agreement without understanding, acquiescence without agency.

Ploug and Holm (2025) push the point: this is not a communications lapse; it is a rights gap. Patients lack enforceable rights to explanation, contestation, and remedy when algorithmic systems are involved. Ethics talk is abundant, but patient standing is thin. The result is structural asymmetry: patients carry algorithmic risk without any corresponding power over algorithmic choice.


What resistance looks like: patients and advocates demanding to know when an AI is in the loop; insisting on plain-language explanations; asking for a path to second opinions that are genuinely independent of the same model.

Human-Centered Design as Resistance, Not Aesthetic

“Human-centered design” has been marketed as a nicer interface. In practice, done well, it is institutional resistance to automation bias. Sassi (2025) documents how systems built with active clinician and patient participation behave differently: they surface uncertainty, make overrides natural, and restore space for clinical discretion. They don’t just feel respectful; they change decisions.

Why, then, are such systems still the exception? Hassan, Kushniruk and Borycki (2024) give the pragmatic answer: participatory design slows timelines, complicates procurement, and dilutes vendor control. In short, it interferes with scale. Opaque defaults are not inevitable; they are chosen—because opacity is faster to ship and easier to defend in a sales deck.

What resistance looks like: clinicians refusing to adopt tools that hide provenance; procurement teams baking co-design and auditability into tenders; boards insisting that “explainability and override” are not features but entry requirements.

Accountability Deferred Is Accountability Denied

Medicine’s liability frameworks assume a discernible decision-maker. AI distributes that decision across code, data, clinician, and institution—and then hides the joins. Cestonaro (2023) shows how the traditional chain of accountability breaks under that dispersion. When harm occurs, blame pinballs: the vendor points to the clinician, the clinician points to the system, the institution points to regulators, and the regulator points to guidance.

Weiner et al. (2024) map the now-familiar pattern: vendors disclaim clinical responsibility, regulators set non-binding principles, and clinicians are left legally exposed for decisions they did not fully control. Institutions absorb reputational damage while lacking audit or modification rights to the tools in question.

Sabet (2025) calls this “regulation after midnight”—intervening only once something has gone wrong. Litigation substitutes for governance; discovery for transparency. Patients become plaintiffs not because malice was present, but because oversight was absent.

What resistance looks like: contracts that allocate shared liability; mandated third-party audits as a precondition of deployment; sunset clauses for models that drift; incident reporting that treats algorithmic misclassification like a medication error—document, learn, fix.

Bias as the Breaking Point

Opacity corrodes trust quietly. Bias breaks it publicly. Evidence that AI systems reproduce and amplify disparities—by race, gender, language, disability—has shifted the debate from hypothetical to operational. Ueda (2023) synthesizes results showing consistent performance gaps with clinical consequences across diagnosis, risk scoring, and access.

Reddy (2023) argues that once bias is measurable, it stops being a philosophical concern and becomes a compliance issue. No more comfort in aspirational fairness statements. Institutions must demonstrate parity—or withdraw systems that cannot achieve it in the populations they serve.

The practical implication is bracing: algorithmic equity isn’t a nice-to-have. It is a gating requirement. Fairness metrics and subgroup performance monitoring belong in the same category as hand-hygiene compliance and radiation safety.

What resistance looks like: bias impact assessments before go-live; continuous subgroup monitoring; targeted model retraining or constrained use where parity cannot be achieved; the courage to turn systems off.
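To make the "gating requirement" concrete, here is a minimal sketch of what continuous subgroup monitoring with a pre-committed rollback trigger could look like. It assumes a pandas DataFrame of scored predictions; the column names, the 0.05 AUROC-gap threshold, and the pass/fail rule are illustrative assumptions, not a published standard.

```python
# Illustrative sketch of subgroup performance monitoring with a pre-committed
# rollback trigger. Column names ("subgroup", "y_true", "y_score") and the
# 0.05 AUROC-gap threshold are hypothetical choices, not a standard.
import pandas as pd
from sklearn.metrics import roc_auc_score

MAX_AUROC_GAP = 0.05  # disparity threshold agreed with governance before go-live


def subgroup_auroc(scored: pd.DataFrame) -> pd.Series:
    """AUROC computed separately for each patient subgroup."""
    return scored.groupby("subgroup").apply(
        lambda g: roc_auc_score(g["y_true"], g["y_score"])
        if g["y_true"].nunique() > 1 else float("nan")  # skip degenerate groups
    )


def equity_gate(scored: pd.DataFrame) -> bool:
    """True if the model passes the parity gate; False means pause or roll back."""
    aurocs = subgroup_auroc(scored).dropna()
    gap = aurocs.max() - aurocs.min()
    return gap <= MAX_AUROC_GAP
```

The specifics matter less than the posture: the threshold and the consequence are decided before deployment, not negotiated after harm appears.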

Ethics Without Teeth

The language of AI ethics—transparency, beneficence, non-maleficence—has colonized slide decks and strategy papers. Enforcement has not kept pace. Pham (2025) notes that most jurisdictions still treat AI governance as an add-on to data protection or device regulation, rather than a regime with its own duties, rights, and remedies.

Nasir (2025) traces the gap between global consensus and national law: laudable principles with no binding obligations and no penalties for non-compliance. The result is regulatory symbolism—visible concern without structural constraint—and clinicians unwilling to stand as liability buffers for systems they cannot interrogate.

What resistance looks like: professional colleges setting binding practice standards for AI use; payers and insurers requiring audit trails for reimbursement; health systems adopting “no-deploy” lists for models that fail minimum governance thresholds.

Institutional Pushback and Collective Governance

Not all resistance is individual. Some of it is structural—and productive. Rozenblit (2025) documents multi-stakeholder governance models that bring clinicians, patients, ethicists, technologists, and legal experts into the same oversight room. These bodies assume AI deployment is not merely technical but distributive: it allocates benefits and burdens and is therefore a political decision.

Alelyani et al. (2025) provide empirical footing: where these frameworks exist, systems are deployed more cautiously, audited more frequently, and withdrawn more readily when harm emerges. Governance shifts from compliance theatre to collective control.

What resistance looks like: standing AI oversight committees with veto power; public registers of approved and retired models; routine model cards and data sheets; escalation paths when frontline staff detect drift or harm.
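As one illustration of what a routine model card entry in such a public register might record, here is a hypothetical sketch; the fields and example values are invented for illustration and do not reproduce any published model-card schema.

```python
# Hypothetical sketch of a register entry / minimal model card.
# Field names and example values are illustrative only.
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    data_lineage: str              # provenance summary, linked data sheet
    subgroup_performance: dict     # e.g. AUROC by subgroup, refreshed on a schedule
    known_limitations: list = field(default_factory=list)
    approved_by: str = ""          # oversight committee sign-off
    status: str = "approved"       # approved | paused | retired
    retirement_trigger: str = ""   # pre-committed off-ramp condition


register_entry = ModelCard(
    name="sepsis-early-warning",
    version="2.1",
    intended_use="Advisory inpatient sepsis risk flag; clinician override always available and logged",
    data_lineage="2019-2023 EHR extract, single health system; see accompanying data sheet",
    subgroup_performance={"overall": 0.81, "non-English-speaking": 0.74},
    known_limitations=["Performance gap for non-English-speaking patients under review"],
    approved_by="AI Oversight Committee, 2025-03",
    retirement_trigger="Subgroup AUROC gap above 0.05 for two consecutive quarters",
)
```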

Reclaiming Medicine from Automation

This chapter of resistance is not anti-innovation. It is anti-displacement. Patients and clinicians are rejecting systems that decide without explaining, optimize without accountability, and scale without consent. Bodnari (2025) shows that enterprise-level governance can work: clear accountability matrices, enforceable audit rights, and ongoing vendor oversight reduce risk without strangling innovation. But that requires tackling entrenched procurement habits: “black-box preferred” contracts, set-and-forget deployments, and growth targets that conflate scale with success.

The operational reset looks like this: start with the clinical decision and the equity outcome to be improved; choose the smallest effective tool; demand auditable data lineage; measure burden as well as benefit; define the off-ramp before go-live.

What resistance looks like: service lines that decline high-performing models whose explanations cannot be understood by patients; vendor scorecards that weight governance and equity as heavily as AUROC; executive dashboards that pair outcome gains with clinician workload and patient-reported experience.
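A vendor scorecard of that kind could be as simple as the following sketch, in which governance and equity carry the same numerical weight as discrimination performance; the criteria, weights, and scores here are invented for illustration.

```python
# Hypothetical vendor scorecard: governance and equity weighted as heavily as
# AUROC. Criteria, weights, and example scores are invented for illustration.
WEIGHTS = {
    "auroc": 0.25,            # model discrimination
    "equity_parity": 0.25,    # subgroup performance gaps
    "governance": 0.25,       # audit rights, explainability, override support
    "workload_impact": 0.25,  # clinician burden, alert fatigue, documentation load
}


def score_vendor(scores: dict) -> float:
    """Weighted total on a 0-1 scale; assumes each criterion is already scored 0-1."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)


# A model that wins on AUROC but is opaque and inequitable no longer wins the tender.
vendor_a = {"auroc": 0.92, "equity_parity": 0.55, "governance": 0.40, "workload_impact": 0.70}
vendor_b = {"auroc": 0.86, "equity_parity": 0.85, "governance": 0.90, "workload_impact": 0.80}
print(score_vendor(vendor_a), score_vendor(vendor_b))  # 0.6425 vs 0.8525
```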

The Reckoning Ahead

A recalibration is underway. Patients are asking to be told—plainly—when algorithms are in the room and why. Clinicians are insisting on remaining decision-makers, not button-pressers for opaque recommendations. Institutions are demanding clarity over ambiguity: audit rights, bias monitoring, liability that maps to control. Together, these demands surface a blunt truth: a system cannot retain legitimacy if its most consequential decisions are governed by mechanisms no one can fully see, explain, or challenge.

Human-centered medicine is not a retreat from intelligence; it is a demand that intelligence remain accountable to the people it touches. Progress without consent is not progress. Efficiency without explanation is not care. The question is no longer whether AI belongs in healthcare—it does—but on what terms and under whose authority.

Whether this reckoning reshapes the field or gets absorbed into the status quo will decide who governs medicine in the algorithmic age: systems, or human beings.

Practical Signals to Watch (for leaders, clinicians, and advocates)

● Consent with content: disclosures that specify which model influenced which decision, with a human-readable rationale (Capella, 2025; Hjort, 2025; Ploug and Holm, 2025).

● Override as default: interfaces that present uncertainty and make overrides routine, logged, and learnable (Sassi, 2025).

● Shared liability in writing: contracts that align responsibility with control; audit rights that bite (Cestonaro, 2023; Weiner et al., 2024; Sabet, 2025).

● Equity telemetry: continuous subgroup performance dashboards, with pre-committed triggers for rollback (Ueda, 2023; Reddy, 2023).

● Collective governance with teeth: cross-disciplinary oversight bodies with authority to approve, pause, or retire systems (Rozenblit, 2025; Alelyani et al., 2025).

● Enterprise stewardship: procurement and lifecycle management that reward transparency and penalize drift (Bodnari, 2025; Hassan, Kushniruk and Borycki, 2024).

 

Professor MarkAnthony Ujunwa Nze is an internationally acclaimed investigative journalist, public intellectual, and global governance analyst whose work shapes contemporary thinking at the intersection of health and social care management, media, law, and policy. Renowned for his incisive commentary and structural insight, he brings rigorous scholarship to questions of justice, power, and institutional integrity.

Based in New York, he serves as a full tenured professor and Academic Director at the New York Center for Advanced Research (NYCAR), where he leads high-impact research in governance innovation, strategic leadership, and geopolitical risk. He also oversees NYCAR’s free Health & Social Care professional certification programs, accessible worldwide at:
 https://www.newyorkresearch.org/professional-certification/

Professor Nze remains a defining voice in advancing ethical leadership and democratic accountability across global systems.

 

Selected Sources (APA 7th Edition)

Alelyani, T., et al. (2025). A validated framework for responsible AI in healthcare. Scientific Reports.

Angus, D. C., et al. (2025). AI, health, and health care today and tomorrow. JAMA.

Bodnari, A. (2025). Scaling enterprise AI in healthcare. PMC.

Capella, S. (2025). How does generative AI affect patients’ rights? Bioethics.

Cestonaro, C. (2023). Defining medical liability when artificial intelligence is involved. PMC.

Hassan, M., Kushniruk, A., & Borycki, E. (2024). Barriers to AI adoption. JMIR Human Factors.

Hjort, L. S. (2025). Informed consent to AI-based decisions. Oslo Law Review.

Nasir, M. (2025). Ethical–legal implications of AI-powered healthcare. Frontiers in AI.

Pham, T. (2025). Ethical and legal considerations in healthcare AI. Royal Society Open Science.

Ploug, T., & Holm, S. (2025). Patient rights in AI-driven healthcare. PMC.

Reddy, S. (2023). The case for precise regulation. JMIR.

Rozenblit, L. (2025). Multi-stakeholder perspectives. Health Policy and Technology.

Sabet, C. J. (2025). Regulating medical AI before midnight. PMC.

Sassi, Z. (2025). Human-centered AI in healthcare. PMC.

Singhal, A. (2024). Fairness, accountability, and ethics. JMIR.

Ueda, D. (2023). Fairness of AI in healthcare. PMC.

Weiner, E. B., et al. (2024). Ethical challenges in AI integration. arXiv.

Africa Digital News, New York
