AI & Health: Who Controls the Cure? (Introduction)

When innovation meets power, who ultimately decides the future of healing?

By Prof. MarkAnthony Nze

With its promises of precision and cures, artificial intelligence has captivated the imagination of technologists, clinicians, policymakers, and the public alike. Across the globe, from Silicon Valley boardrooms to academic medical centers, a compelling narrative has taken hold: that AI will transform medicine, eliminate inefficiencies, and usher in a new era of personalized, data-driven care. Yet beneath the glossy rhetoric of innovation lies a more complex truth, one that demands rigorous, evidence-based scrutiny. For AI in healthcare is not merely a story of computing power but a story of data governance, ethical constraints, institutional priorities, and human consequence.

At its best, AI harnesses computational might to sift through mountains of information, from electronic health records and imaging scans to genomic sequences, far more rapidly than any human ever could. Proponents claim these capabilities will accelerate diagnoses, forecast health crises before symptoms appear, and tailor treatments to the individual’s biological and environmental profile. Indeed, machine learning models have shown promise in pattern recognition tasks such as the early detection of diabetic retinopathy or lung anomalies on imaging, suggesting that AI can augment clinician judgment in meaningful ways (Chustecki et al., 2024). Yet this potential exists alongside unresolved tensions that may profoundly shape the trajectory of AI’s influence in medicine.

Central to the debate is the question of who holds authority when machines influence decisions about human health. Traditional clinical decision-making relies on the expertise of physicians and the trust placed in them by patients. But when an AI system recommends a course of action — a risk score here, a diagnostic suggestion there — whose judgment is being honored? The “black box” nature of many machine learning models — their opacity and lack of interpretability — means that neither clinician nor patient can always understand how conclusions were reached (Nouis et al., 2025). This lack of explainability raises serious concerns about accountability and undermines the foundational medical ethic of informed consent, wherein patients are entitled to comprehend the basis of decisions affecting their care.

The foundation of any AI system is data. But health data is among the most sensitive and regulated categories of personal information. How it is collected, stored, shared, and monetized carries profound implications for privacy, autonomy, and trust (Pham, 2025; Couture et al., 2023). Current regulatory frameworks — such as HIPAA in the U.S. and the GDPR in Europe — aim to protect patient information, but the sheer scale and complexity of data used in training AI models often outpace the clarity of existing governance mechanisms (Radanliev et al., 2025). Without robust oversight, sensitive health records can be commodified in ways that prioritize commercial interests over patient rights, with little transparency into downstream usage or consent.

Moreover, the very datasets that enable AI also risk entrenching existing biases and inequities in healthcare. Empirical research has documented how algorithms trained on non-representative data can underperform or make misleading predictions for certain population groups, perpetuating disparities rather than diminishing them (Cross, 2024; Dankwa-Mullan et al., 2024). These biases arise at multiple stages, from data collection and feature selection to model development and deployment, and can lead to differential treatment recommendations that mirror historical inequities. In the absence of fairness safeguards, AI may inadvertently reinforce systemic patterns of disadvantage, turning what was imagined as a democratizing technology into a mechanism of inequity.

The ethical dimensions extend beyond bias to encompass privacy, autonomy, and transparency. Healthcare AI systems depend on massive troves of personal data, yet patients often lack meaningful control over how their information is used, shared, or repurposed. Privacy advocates have warned that current practices frequently fall short of patients’ expectations, enabling third parties to access or analyze data without clear consent or full understanding on the patient’s part (Pham, 2025). In addition, the drive for commercial advantage can obscure data governance practices, making it difficult for individuals to know who ultimately holds and uses their personal health details.

These challenges are compounded by legal and regulatory ambiguity. As algorithmic decision-making becomes embedded in clinical workflows, questions of liability and accountability remain unresolved. If an AI-informed recommendation contributes to a misdiagnosis or a poor patient outcome, it is often unclear whether clinicians, developers, or health systems bear responsibility. Traditional frameworks for assigning liability, crafted in an era when humans held primary decision-making authority, have not yet adapted to scenarios in which machines share or influence that authority (Nouis et al., 2025). This legal uncertainty not only jeopardizes patient safety but also risks creating a “blame avoidance” culture among stakeholders.

The economic forces shaping AI adoption in healthcare are equally significant. Tech giants, venture capital firms, and analytics startups are pouring capital into healthcare applications, turning innovation into a high-stakes commercial arena. This infusion of capital has accelerated development, but it also aligns innovation incentives with profitability rather than equitable care outcomes. As a result, access to cutting-edge AI tools often reflects broader socioeconomic divides — with well-resourced institutions and affluent regions benefiting most, while low-income communities and resource-poor health systems lag behind (Brookings Institution, 2025). The growing “AI health divide” threatens to reshape healthcare into a tiered system where technological benefits accrue unequally.

This economic logic also extends to seemingly benign consumer health technologies. Wearables, symptom checkers, and smartphone apps now incorporate AI-driven insights, expanding the notion of healthcare beyond clinical spaces. While these tools improve convenience and personal engagement, they also raise fresh concerns about oversight, data commercialization, and validity. Many such applications operate outside traditional healthcare regulation, blurring lines between medical guidance and wellness advice and creating new arenas for ethical and legal uncertainty.

Despite these profound challenges, it would be inaccurate to characterize AI in healthcare as purely problematic. Thoughtfully designed AI systems can reduce administrative burden, help optimize resource allocation, and surface insights that may remain hidden in conventional analysis. In public health contexts, AI tools have been used to model disease spread patterns and anticipate resource needs — capacities that, when governed appropriately, can save lives. But unlocking this promise demands more than technological prowess; it requires ethical governance, structural transparency, and public engagement in shaping the norms that guide AI’s development and deployment.

The profound ethical concerns documented in the health AI literature — including privacy, bias, transparency, accountability, and equitable access — highlight the urgency of robust policy frameworks (Singhal, 2024; Ueda et al., 2023). Among these, fairness in algorithmic design and implementation is especially critical. Researchers assert that ensuring fairness involves not only statistical adjustments but deep collaboration among clinicians, policymakers, and communities most affected by AI systems (Ueda et al., 2023). Without such inclusion, AI tools risk replicating the very disparities they claim to remedy.

Regulatory responses are emerging, though unevenly. International guidelines — such as those promoted by global bodies like the World Health Organization — attempt to articulate standards for ethical and safe AI use in health, emphasizing principles such as data protection, equity, and patient autonomy (Bouderhem, 2024). Regional policy efforts similarly strive to harmonize innovation with rights protection, but these frameworks often lack operational specificity for real-world clinical settings. As a result, healthcare institutions and technology developers operate in a patchwork regulatory environment that can vary significantly across jurisdictions.

Ultimately, the question of who controls the cure is a question of power. It is not enough to ask whether AI can enhance health outcomes; we must interrogate whose values are embedded in its design, whose interests drive its adoption, and whose wellbeing is prioritized when trade-offs arise. These are not abstract philosophical debates but tangible determinants of health equity, justice, and human dignity in an increasingly data-mediated world.

This inquiry urges us to look beyond what AI might achieve and instead ask how it is governed and for whose advantage it truly operates. Over the next seven days, we will unpack this tension in a groundbreaking series — holding decision-makers accountable, exposing power dynamics, and demanding that technological innovation be anchored in human rights, public accountability, and shared stewardship of collective health. Only by confronting these questions head-on can we ensure that AI serves the health of all people — not just the privileged few.

Professor MarkAnthony Ujunwa Nze is an internationally acclaimed investigative journalist, public intellectual, and global governance analyst whose work shapes contemporary thinking at the intersection of health and social care management, media, law, and policy. Renowned for his incisive commentary and structural insight, he brings rigorous scholarship to questions of justice, power, and institutional integrity.

Based in New York, he serves as a tenured full professor and Academic Director at the New York Center for Advanced Research (NYCAR), where he leads high-impact research in governance innovation, strategic leadership, and geopolitical risk. He also oversees NYCAR’s free Health & Social Care professional certification programs, accessible worldwide at:
https://www.newyorkresearch.org/professional-certification/

Professor Nze remains a defining voice in advancing ethical leadership and democratic accountability across global systems.

Selected Sources (APA 7th Edition)

Bogdadi, R. S. (2025). Perception, awareness, and ethical use of AI in healthcare. Open Public Health Journal. https://openpublichealthjournal.com/VOLUME/18/ELOCATOR/e18749445415581/FULLTEXT/

Bouderhem, R. (2024). Shaping the future of AI in healthcare through ethics and governance. Humanities and Social Sciences Communications. https://www.nature.com/articles/s41599-024-02894-w

Capella, S. (2025). How does generative AI affect patients’ rights? Voices in Bioethics. https://journals.library.columbia.edu/index.php/bioethics/article/view/14212

Cestonaro, C. (2023). Defining medical liability when artificial intelligence is involved. PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC10711067/

Chamouni, G. (2025). Ethical and legal concerns in AI applications for lung cancer care. PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC12558840/

Chustecki, M. (2024). Benefits and risks of AI in healthcare: A narrative review. PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC11612599/

Cross, J. L. (2024). Bias in medical AI: Implications for clinical decision-making. PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC11542778/

Dankwa-Mullan, I., et al. (2024). Health equity and ethical considerations in using artificial intelligence in public health and medicine. Preventing Chronic Disease. https://www.cdc.gov/pcd/issues/2024/24_0245.htm

Garcia-Gómez, J. M., et al. (2023). Functional requirements to mitigate risk of harm from AI in healthcare. arXiv. https://arxiv.org/abs/2309.10424

IAPP. (2025). AI in health care: Balancing innovation with privacy and trust. https://iapp.org/news/a/ai-in-health-care-balancing-innovation-with-privacy-and-trust

Nouis, S. C. E. (2025). Evaluating accountability, transparency, and bias in AI-assisted healthcare decisions. BMC Medical Ethics. https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-025-01243-z

Radanliev, P., et al. (2025). Privacy, ethics, transparency, and accountability in AI. Frontiers in Digital Health. https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1431246/full

Sabet, C. J. (2025). Regulating medical AI before midnight strikes. PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC12360511/

Singhal, A. (2024). Toward fairness, accountability, transparency, and ethics in healthcare AI. JMIR Medical Informatics. https://medinform.jmir.org/2024/1/e50048/

Weiner, E. B., et al. (2025). Ethical challenges and evolving strategies in AI integration in clinical practice. arXiv. https://arxiv.org/abs/2412.03576

Africa Digital News, New York
