By W. Kyle Resurreccion

I. Introduction

In response to the COVID-19 pandemic, governments and healthcare systems worldwide sought the widespread adoption of digital health technologies.[1] This phenomenon has led to the development of various apps for contact tracing, social distancing, and quarantine enforcement, as well as the creation of artificial intelligence (AI) and machine learning algorithms to analyze the large datasets used and produced by these apps.[2] While arguably beneficial, these new tools come with potential harms that must be understood in order to facilitate their effective use and implementation in both public and private health systems.[3]

II. Risks of Digital Health Technologies

Data breaches in the healthcare industry are rampant. Of the 5,212 confirmed breaches included in Verizon’s report on global data breaches, 571 occurred in healthcare, making it the industry with the third-highest number of breaches, just behind finance (690) and professional services (681).[4] Furthermore, according to IBM’s report, healthcare is also the costliest industry for data breaches.[5] The average total cost of a healthcare breach in 2022 was USD 10.10 million, more than double the average cost of a breach across all industries (USD 4.35 million) and more than double the average for critical infrastructure (USD 4.82 million).[6] Data breaches in healthcare also harm the individual right to privacy.[7] In 2018, Anthem, one of the largest health insurance companies in the United States, agreed to pay USD 16 million to the U.S. Department of Health and Human Services to settle potential violations of the Health Insurance Portability and Accountability Act (HIPAA) following the largest health data breach in the nation’s history.[8] The breach exposed the protected health information of 79 million people to hackers, who stole names, social security numbers, addresses, dates of birth, and employment information, among other private electronic data.[9]

Digital health technologies also carry the risk of bias due to the AI and machine learning algorithms used in automated processes.[10] Since machine learning models are powered by data, biases can be encoded through the datasets on which an algorithm is trained or through the modeling choices made by the programmer.[11] This can compound the bias problem in healthcare, where, for instance, the gender and race of participants in randomized clinical trials for new medical treatments are often not representative of the population that ultimately receives the treatment.[12] For example, in one study, researchers built an AI model from the hospital notes of intensive care unit (ICU) patients.[13] The model was then used to predict the mortality of ICU patients based on their gender, race, and type of insurance (insurance serving as a proxy for socioeconomic status).[14] The study found differences in the model’s prediction accuracy for mortality based on gender and type of insurance, a sign of bias.[15] A difference based on race was also observed, but this finding may have been confounded because the original dataset was racially imbalanced to begin with.[16]
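The kind of disparity the study measured can be made concrete with a simple audit: evaluate a trained model’s accuracy separately for each demographic subgroup and compare the results. The following Python sketch is purely illustrative and is not the study’s actual methodology; the synthetic dataset, the logistic regression model, and the group labels are all assumptions chosen to show how under-representation in training data can surface as a per-group accuracy gap.

```python
# Illustrative fairness audit: compare a model's accuracy across subgroups.
# Synthetic data and model are assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))  # stand-in clinical features
group = rng.choice(["insured", "uninsured"], n, p=[0.8, 0.2])  # imbalanced groups

# Outcomes are noisier for the smaller group, mimicking the effect of
# under-representation in the data the model learns from.
noise = np.where(group == "uninsured", 1.5, 0.5)
y = (X[:, 0] + X[:, 1] + rng.normal(scale=noise) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)
model = LogisticRegression().fit(X_tr, y_tr)

# Per-group accuracy: a large gap signals potential bias.
for g in ["insured", "uninsured"]:
    mask = g_te == g
    acc = model.score(X_te[mask], y_te[mask])
    print(f"{g:>9} accuracy: {acc:.3f} (n={mask.sum()})")
```

Run as written, the smaller, noisier “uninsured” group scores measurably worse, which is precisely the kind of signal an auditor would look for before a model is deployed in a clinical setting.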

Discrimination in the accessibility of digital health technologies is also an immediate concern. This is especially relevant as many nations transition to “digital by default” or “digital by choice” models for providing welfare, which in practice amount to “digital only.”[17] The United Nations report on extreme poverty and human rights emphasized how a lack of digital literacy and of access to a reliable internet connection can contribute to inequality in accessing digital technologies.[18] This issue occurs in both the global North and the global South.[19] For example, in the wealthy nation of the United Kingdom, 4.1 million adults (8% of the population) are offline, with almost half of them coming from low-income households and almost half under the age of 60.[20] Failure to address these gaps can exacerbate inequalities, leaving underserved and vulnerable populations unable to receive healthcare because they cannot access and use digital health technologies.[21]

III. International Rights Framework to Mitigate Risks

Governments can find guidance for mitigating the concerns raised by digital health technologies in an ethics-based approach derived from the international human rights most relevant to this issue.[22] These are the rights to health, nondiscrimination, benefit from scientific progress, and privacy.[23] The international right to health is particularly critical: it is enshrined in both international and domestic law, and over 100 national constitutions guarantee it to individuals.[24]

In pursuing this ethics-based approach, the United Nations Educational, Scientific and Cultural Organization (UNESCO) published its Recommendation on the Ethics of Artificial Intelligence, which aims to guide stakeholders in making AI work for the good of humanity and in preventing harm.[25] The recommendation lists values and principles for the proper creation and implementation of AI systems, specifically including non-discrimination, safety and security, privacy, transparency, and accountability among its other provisions.[26]

Additionally, frameworks such as those established by the African Union (AU), the Asia-Pacific Economic Cooperation (APEC), the European Union (EU), and other regional organizations are also helpful in providing guidance on how best to regulate personal data ethically.[27] These regional frameworks enshrine a common set of positive rights: (1) the right to be informed about what data are and are not collected; (2) the right to access stored data; (3) the right to rectification; (4) the right to erasure (or the “right to be forgotten”); (5) the right to restriction of processing; (6) the right to be notified; (7) the right to data portability; (8) the right to object; and (9) other rights related to automated decision-making and profiling.[28]
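To give a sense of how such rights translate into concrete system behavior, the sketch below models a few of them (access, rectification, erasure, and restriction of processing) as operations on a hypothetical patient record store. This is an illustrative assumption, not a design any of these frameworks prescribes; every class and method name here is invented for the example.

```python
# Hypothetical mapping of data-subject rights onto record-store operations.
# Illustrative only; no regional framework mandates this particular design.
from dataclasses import dataclass


@dataclass
class PatientRecord:
    data: dict
    processing_restricted: bool = False  # flag for restriction of processing


class HealthDataStore:
    def __init__(self) -> None:
        self._records: dict[str, PatientRecord] = {}

    def access(self, patient_id: str) -> dict:
        """Right to access: return a copy of all data held on the patient."""
        return dict(self._records[patient_id].data)

    def rectify(self, patient_id: str, corrections: dict) -> None:
        """Right to rectification: correct inaccurate or incomplete fields."""
        self._records[patient_id].data.update(corrections)

    def erase(self, patient_id: str) -> None:
        """Right to erasure ("right to be forgotten"): delete the record."""
        del self._records[patient_id]

    def restrict(self, patient_id: str) -> None:
        """Right to restriction of processing: retain data but stop using it."""
        self._records[patient_id].processing_restricted = True
```

A production system would of course layer identity verification, audit logging, and statutory retention exceptions on top of these bare operations.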

This approach based on ethics and international human rights also envisions a pathway for holding private companies accountable for how they use, implement, and offer digital health technologies. Through a resolution, the United Nations Human Rights Council endorsed non-binding guidelines that would place on private companies the obligation to respect international human rights independently of nations and governments.[29] This responsibility would require businesses to avoid causing or contributing to adverse human rights impacts through their own activities and to address such impacts when they occur.[30]

IV. Conclusion

The risks of digital technologies are real and immediate. However, halting this progress out of fear would rob us of the extensive benefits these tools can offer, especially to our health and wellbeing. The intersection of digital technologies and the right to health is an inevitable development in the growth and progress of humanity. As with any tool, the onus is on the user to ensure that these advances continue to serve our goal of being healthy. We must also ensure that these tools enable others to pursue the same goal and that their license to intrude into the most private and intimate parts of our lives is kept in check. In this fast-evolving field, laws grounded in core human rights are needed to ensure that our progress does not turn these tools into weapons that divide and harm us.

[1] Nina Sun et al., Human Rights and Digital Health Technologies, 22 Health & Hum. Rts. J., no. 2, Dec. 2020, at 21, 22.

[2] Id.

[3] Id. at 23.

[4] Gabriel Bassett et al., Verizon, Data Breach Investigations Report 50 (2022).

[5] IBM, Cost of a Data Breach Report 11 (2022).

[6] Id. at 5, 11 (“critical infrastructure” means financial services, technology, energy, transportation, communication, healthcare, education and the public sector industries).

[7] Sun et al., supra note 1, at 23.

[8] Off. for Civ. Rts., Anthem Pays OCR $16 Million in Record HIPAA Settlement Following Largest U.S. Health Data Breach in History (2020).

[9] Id.

[10] Sun et al., supra note 1, at 23.

[11] Irene Y. Chen et al., Can AI Help Reduce Disparities in General Medical and Mental Health Care?, 21 AMA J. Ethics, no. 2, Feb. 2019, at E167, 168; James Zou & Londa Schiebinger, Comment, AI Can Be Sexist and Racist – It’s Time to Make It Fair, 559 Nature 324, 325 (2018).

[12] Chen et al., supra note 11, at 167.

[13] Id. at 169.

[14] Id. at 167, 171, 175.

[15] Id.

[16] Id. at 169, 173, 175.

[17] Philip Alston (Special Rapporteur on Extreme Poverty and Human Rights), Extreme Poverty and Human Rights, U.N. Doc. A/74/493, at 15 (Oct. 11, 2019).

[18] Id.

[19] Id.

[20] Id. at 16.

[21] Sun et al., supra note 1, at 25.

[22] Sun et al., supra note 1, at 24.

[23] Id. at 24-26.

[24] G.A. Res. 217 (III) A, Universal Declaration of Human Rights, art. 25(1) (Dec. 10, 1948) (“Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family . . . .”); Rebecca Dittrich et al., The International Right to Health: What Does It Mean in Legal Practice and How Can It Affect Priority Setting for Universal Health Coverage?, 2 Health Sys. & Reform, no. 1, Jan. 2016, at 23, 24.

[25] U.N. Educ., Sci. & Cultural Org. (UNESCO), Recommendation on the Ethics of Artificial Intelligence, U.N. Doc. SHS/BIO/PI/2021/1 (Nov. 23, 2021).

[26] Id. at 7-10.

[27] Sun et al., supra note 1, at 27.

[28] Id.

[29] John Ruggie (Special Representative of the Secretary-General on Human Rights and Transnational Corporations and Other Business Enterprises), annex, Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework, U.N. Doc. A/HRC/17/31, at 13 (Mar. 21, 2011); Human Rights Council Res. 17/4, ¶ 1 (July 6, 2011) (endorsing the guiding principles).

[30] Ruggie, supra note 29, at 14.

Image Source: https://www.thelancet.com/cms/attachment/e936ef82-9641-4796-9760-81d386e465a9/fx1.jpg