Artificial Intelligence and the Medical Humanities: The Ethical Concerns of Data Commodification in Medicine

by Alissa Williams, HI Program Assistant

Dr. Kirsten Ostherr, the Gladys Louise Fox Professor of English, Director of the Medical Futures Lab at Rice University, and an Adjunct Professor at the UT-Houston School of Public Health, gave her talk, “AI and the Medical Humanities: An Emerging Field of Critical Intervention,” at the October Health and Humanities Research Seminar. Discussing the past, present, and future role of artificial intelligence (AI) in medicine, Dr. Ostherr argued that AI and related “datafication” practices are coming to constitute a new social determinant of health, a term that refers to “conditions in the places where people live, learn, work, and play [that] affect a wide range of health risks and outcomes.” (Datafication is the process of collecting information about something that was previously invisible and turning it into data.) Dr. Ostherr’s lecture was an enlightening take on the potential positive impacts of AI, but also a warning about how dangerous its reach can become if left unchecked. The seminar began with a chronological mapping of AI’s adoption by the medical field and ended with a call to action for scholars across all disciplines, as well as the public, to participate in the advancement and regulation of AI as it relates to medicine and health.

Prior to 2015, the application of AI in the medical field involved “a lot of speculation” but “little action,” according to Dr. Ostherr. While AI was initially thought to be a threat to the job security of health care professionals such as radiologists, some now see it as a potential tool for making health care more efficient, more effective, and more accessible to historically neglected populations. But AI, especially when combined with datafication, also poses potential harm to patients and others. Dr. Ostherr argued that the health data individuals make available about themselves, both purposefully and inadvertently, on social media platforms, websites, and personal wellness devices (e.g., Fitbits) can be commodified for exploitative purposes, largely without the permission or even knowledge of the patient whose data is being commodified. While research into the social determinants of health can be used to promote health equity and health justice, it can also reinforce existing forms of bias and exclusion and even create new ones. For instance, companies can use data about the neighborhoods people live in, or even their degree of political participation, to deny them health insurance, charge them more for it, or deny them employment. Rather than using data to improve people’s health and health care, companies can use it to manage their own financial risks. In other words, instead of making health care and its infrastructure more accessible to all individuals, including those belonging to historically marginalized groups, AI and datafication can exacerbate existing gaps by amplifying the biases that helped create these inequities.

On the other hand, Dr. Ostherr mentioned two websites in her talk, Hu-manity.co and Doc.ai, that have begun to use AI as a means of enabling patients themselves to monetize the sharing of their data under privacy constraints that the patients choose. Although these websites have not yet garnered a prominent following, they serve as examples of AI’s potential to help develop more positive and transparent ways of using individuals’ health data. Dr. Ostherr believes that the health humanities have a crucial role to play in determining how AI and other modes of information technology are used in the fields of health and medicine. She emphasized that this must be a collaborative effort across disciplines in order to ensure the needs of the public are met from all angles, with diversity in mind.
