When care becomes code, and the body becomes a permanent data source
The Quiet Displacement of Care
The stethoscope did not disappear.
It was quietly displaced—without resistance, without debate—by something smaller, more intimate, and infinitely more persistent. A phone in the pocket. A sensor on the wrist. A platform that never sleeps. Health no longer waits for symptoms or appointments. It now unfolds continuously, translated into data points that stream into infrastructures most patients will never see and agreements they will never fully understand.
This transformation is marketed as empowerment.
It feels like convenience.
It operates as capture.
According to the World Health Organization’s framework on the ethics and governance of digital health, technological innovation in healthcare has advanced far faster than the systems designed to regulate it, leaving patients exposed to new forms of risk that are lawful, invisible, and largely unaccountable (World Health Organization, 2022; World Health Organization, 2023). What began as a solution to access gaps has matured into a model where data, not healing, is the central asset.
From Healing to Signals
Digital health promised to fix what traditional systems could not. Telemedicine would erase distance. Wearables would democratize monitoring. Artificial intelligence would reduce human error and inefficiency. In narrow clinical terms, some of these promises were fulfilled. Patients accessed clinicians faster. Signals were detected earlier. Information moved at unprecedented speed.
But progress is never neutral.
As Eric Topol argued in Nature Medicine, the convergence of human care and machine intelligence reshapes not only how medicine is practiced, but who holds authority within it (Topol, 2019). When health is translated into data, decision-making shifts away from relationships and toward systems optimized for prediction, scalability, and cost control. What matters most is no longer the clinical encounter itself, but the signal extracted from it.
Medicine becomes measurement.
The patient becomes a data stream.
The Ownership Illusion
The question most patients rarely ask—because the system is designed to discourage it—is deceptively simple: Who owns my health data?
Most assume the answer is themselves.
In reality, digital health platforms retain expansive rights over data collection, storage, analysis, and reuse. Legal and ethical analyses have shown that consent is typically broad, one-time, and functionally irreversible, allowing data to be repurposed far beyond direct care (Price & Cohen, 2019). Claims of anonymization offer diminishing protection in an era where datasets can be cross-referenced and identities reconstructed.
Ownership becomes symbolic.
Control becomes theoretical.
The interface feels personal.
The infrastructure is corporate.
Consent Reduced to Ritual
Consent, once the moral cornerstone of medicine, has been radically thinned by digital health.
One click to agree.
One swipe to authorize.
One update to expand permissions.
Consumer protection studies have repeatedly shown that health apps collect far more data than is clinically necessary, often without transparent disclosure of downstream use (Kaiser Family Foundation, 2023). The burden of understanding is shifted entirely onto users, while responsibility for misuse evaporates into legal disclaimers.
Consent becomes ritual rather than comprehension.
Compliance replaces understanding.
This is not informed consent.
It is procedural surrender.
Algorithms as Authorities
Digital health does not merely observe the body.
It increasingly governs it.
Algorithms now influence triage decisions, diagnostic pathways, treatment eligibility, and resource allocation. Proponents claim these systems reduce bias. Evidence suggests they often automate it. A landmark study published in Science demonstrated that a widely used healthcare algorithm systematically underestimated the health needs of Black patients because it relied on historical spending data rather than actual illness burden (Obermeyer et al., 2019).
Bias was not programmed intentionally.
It was inherited from inequitable systems and scaled efficiently.
When an algorithm denies access, responsibility dissolves. There is no clinician to question, no judgment to appeal, no explanation beyond “the model says no.” Authority shifts quietly from care to code.
Telemedicine and the Thinning of Care
Telemedicine, often celebrated as a triumph of access, illustrates this shift vividly.
Virtual care expanded rapidly, particularly during crises, filling gaps where in-person systems collapsed. Yet it also redefined the nature of care itself. Appointments grew shorter. Encounters became transactional. Continuity weakened. As commentators in The New England Journal of Medicine have warned, digital transformation risks prioritizing efficiency over relational depth, replacing care with throughput (Hartzband & Groopman, 2020).
Telemedicine works best for those who are already healthy, insured, digitally literate, and connected. For the elderly, the poor, and patients with complex needs, it often introduces new barriers rather than removing them.
Access without depth is not equity.
The Quantified and Audited Body
Wearable technologies promise self-knowledge.
They deliver continuous assessment.
Steps, sleep, heart rate, oxygen saturation, stress indicators—each framed as empowerment. Over time, these measurements become expectations. Deviations trigger alerts. Absence of data feels like failure. Health shifts from a lived experience into a performance measured against invisible benchmarks.
Ethical reviews of AI and digital health show how constant monitoring subtly transfers responsibility for health outcomes onto individuals, even when social and structural determinants dominate (Morley et al., 2020).
The body is no longer simply lived.
It is audited.
Prediction as Power
Prediction is the most lucrative promise of digital health.
Risk scores forecast future illness. Algorithms flag “high-cost” patients. Preventive analytics shape insurance models and care pathways. Used responsibly, such tools could save lives. Used commercially, they stratify populations.
High-risk profiles become liabilities.
Low-risk profiles become assets.
As Shoshana Zuboff has argued, predictive systems do not merely anticipate the future—they help construct it, channeling opportunity and exclusion along data-driven lines (Zuboff, 2019). Health becomes destiny, coded in advance.
The Global Digital Frontier
Digital health expansion is most aggressive where regulation is weakest.
Low- and middle-income countries are saturated with health apps, AI triage tools, and mobile diagnostics, often funded by donors, foundations, or venture capital seeking scale. According to OECD analyses of health data governance, oversight mechanisms frequently lag behind deployment, allowing data extraction to outpace protection (OECD, 2023).
Health data flows outward.
Value accumulates elsewhere.
This is not inclusion.
It is digital extraction.
The Myth of Neutral Technology
The sustaining myth of this system is neutrality.
Technology is presented as objective. Algorithms as impartial. Platforms as tools. In reality, every system embeds values about whose health matters, whose errors are tolerable, and whose suffering is acceptable. When harm occurs, responsibility fragments—developers blame data, clinicians blame tools, companies blame misuse.
Complexity becomes a shield.
Opacity becomes protection.
No one is held accountable.
Dependence by Design
As public health systems weaken, digital platforms rush to fill the gaps—not by rebuilding care, but by offering substitutes.
Apps replace clinics.
Chatbots replace nurses.
Dashboards replace trust.
Patients become dependent on platforms they cannot interrogate. Clinicians depend on systems they did not design. Health systems outsource judgment to vendors whose incentives are aligned with scale, not care.
Innovation hardens into dependence.
What Measurement Cannot Capture
What disappears in this transformation is rarely measured.
Listening.
Continuity.
Context.
Trust.
Digital health excels at efficiency and struggles with meaning. It captures signals and misses stories. It optimizes workflows while thinning relationships. The danger is not the presence of technology, but the quiet replacement of care with capture.
The Unspoken Trade-Off
The trade-offs are never debated openly.
In exchange for convenience, privacy is surrendered.
In exchange for access, control is ceded.
In exchange for efficiency, accountability dissolves.
These exchanges are embedded silently in software updates and consent boxes. The body becomes a data source. The patient becomes a profile. Care becomes a service tier.
Drawing the Line
Digital health could still serve humanity.
But only if ownership is clarified, consent restored to meaning, algorithms audited, and care placed above capture. Without these guardrails, digital health will not democratize medicine.
It will commodify it more efficiently than any system before it.
The Quiet Ending
Digital health promised liberation from broken systems.
What it built was a new dependence—one that follows people home, sleeps beside their beds, and watches while they believe they are being helped.
The future of medicine is not only being coded.
It is being claimed.
And unless societies decide where care ends and commerce begins, the most intimate aspects of being human will continue to be logged, analyzed, and sold—quietly, continuously, and one heartbeat at a time.
Professor MarkAnthony Ujunwa Nze is an internationally acclaimed investigative journalist, public intellectual, and global governance analyst whose work shapes contemporary thinking at the intersection of health and social care management, media, law, and policy. Renowned for his incisive commentary and structural insight, he brings rigorous scholarship to questions of justice, power, and institutional integrity.
Based in New York, he serves as a tenured full professor and Academic Director at the New York Center for Advanced Research (NYCAR), where he leads high-impact research in governance innovation, strategic leadership, and geopolitical risk. He also oversees NYCAR’s free Health & Social Care professional certification programs, accessible worldwide at:
👉 https://www.newyorkresearch.org/professional-certification/
Professor Nze remains a defining voice in advancing ethical leadership and democratic accountability across global systems.
Selected Sources
World Health Organization. (2022). Ethics and governance of artificial intelligence for health: WHO guidance. WHO.
https://www.who.int/publications/i/item/9789240029200
World Health Organization. (2023). Digital health and data governance: Framework for action. WHO.
https://www.who.int/teams/digital-health-and-innovation
Organisation for Economic Co-operation and Development. (2023). Health data governance: Privacy, monitoring and research. OECD Publishing.
https://www.oecd.org/health/health-data-governance/
Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44–56.
https://doi.org/10.1038/s41591-018-0300-7
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
https://doi.org/10.1126/science.aax2342
Sharon, T. (2020). Blind-sided by privacy? Digital contact tracing, the Apple/Google API and big tech’s newfound role in public health. Ethics and Information Technology, 22, 1–13.
https://doi.org/10.1007/s10676-020-09547-x
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Morley, J., Machado, C. C. V., Burr, C., Cowls, J., Joshi, I., Taddeo, M., & Floridi, L. (2020). The ethics of AI in health care: A mapping review. Social Science & Medicine, 260, 113172.
https://doi.org/10.1016/j.socscimed.2020.113172
Rieke, N., et al. (2020). The future of digital health with federated learning. NPJ Digital Medicine, 3, 119.
https://doi.org/10.1038/s41746-020-00323-1
Price, W. N., & Cohen, I. G. (2019). Privacy in the age of medical big data. Nature Medicine, 25(1), 37–43.
https://doi.org/10.1038/s41591-018-0272-7
Hartzband, P., & Groopman, J. (2020). Off-label use and the digital transformation of medicine. New England Journal of Medicine, 382(7), 635–638.
https://doi.org/10.1056/NEJMp1910846
Kaiser Family Foundation. (2023). Health apps and data privacy: Consumer risks in digital health. KFF.
https://www.kff.org
European Commission. (2022). European health data space: Proposal and impact assessment. European Union.
https://health.ec.europa.eu
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.