Tag Archives: technology

Computational and Biological Approaches in the Study of Literature

By Sarah Schuster, HI Graduate Research Assistant

Pramit Chaudhuri, Associate Professor of Classics at the University of Texas at Austin, led the Faculty Fellows seminar on November 7th. Dr. Chaudhuri presented his current work on Latin literary genre, using methodologies from the digital humanities. With collaborator T.J. Bolt, a Mellon postdoctoral researcher in the Classics department, and other researchers, Chaudhuri is exploring the stylistic boundaries between literary genres in Latin, such as the relationship between epic and drama. Bolt and Chaudhuri used quantitative methods to uncover the differences and similarities between genres of Latin poetry, seeking distinctive features that accurately describe a given genre. Building on this work, Chaudhuri expressed interest in other ways to apply computational analysis or to present the data to a scholarly audience.
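
To make the stylometric idea concrete, here is a minimal sketch of the kind of feature-based genre classification described above. It is not the Quantitative Criticism Lab’s actual pipeline: the toy corpus, the simple word-count features, and the logistic-regression classifier are all illustrative assumptions.

```python
# Illustrative sketch of stylometric genre classification.
# Assumptions: a toy corpus of labeled Latin lines, plain word-count
# features, and a logistic-regression classifier; this is not the
# actual method used by Bolt and Chaudhuri.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical mini-corpus: short passages labeled by genre.
texts = [
    "arma virumque cano troiae qui primus ab oris",   # epic (Vergil, Aeneid)
    "italiam fato profugus laviniaque venit litora",  # epic (Vergil, Aeneid)
    "di coniugales tuque genialis tori",              # drama (Seneca, Medea)
    "o magna vasti creta dominatrix freti",           # drama (Seneca, Phaedra)
]
labels = ["epic", "epic", "drama", "drama"]

# Word frequencies stand in for richer stylistic features; a real study
# would use a far larger corpus and more carefully chosen features.
model = make_pipeline(
    CountVectorizer(analyzer="word"),
    LogisticRegression(max_iter=1000),
)

# Cross-validation estimates how well the features separate the genres.
scores = cross_val_score(model, texts, labels, cv=2)
print("mean accuracy:", scores.mean())
```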

Chaudhuri opened the seminar by considering the lens of analysis other Fellows had used to discuss “Narrative Across the Disciplines.” Rather than focusing on the analysis or construction of individual narratives, Chaudhuri suggested that narrative across disciplines could be a research discussion in its own right. He encouraged Fellows to discuss primary and secondary narratives, to consider which narratives felt familiar to them, and to ask whether genre was a meaningful or valuable classification for work within their fields. Chaudhuri noted that these questions were meant to direct Fellows toward considerations of form rather than content.

After a brief discussion of these questions in small groups, the Fellows reconvened to discuss Chaudhuri’s project more broadly as part of his work as Co-Director of the Quantitative Criticism Lab. Given the range of disciplinary interests, Chaudhuri expressed curiosity about which considerations of form and genre might be most influential for the Fellows in their own work. Fellows responded with a variety of answers, but they also posed questions regarding Chaudhuri and Bolt’s computational method. Fellows were interested in the assumptions about machine learning embedded in the project, and in the extent to which computational approaches offer insights beyond those of more traditional methods. Some in the group wondered if the project could be expanded or combined with similar projects in linguistics, while others noted concerns about generalization across historical periods that might lead scholars in some disciplines to resist digital humanities projects. A lively discussion of Chaudhuri’s use of the term “cultural evolution” revealed how scholars in various disciplines deal with change. The seminar closed with the Fellows speculating on the implications of the project for Classics departments, from possible considerations (or reconsiderations) of genre to novel examinations of intertextuality at the level of syntax.

Artificial Intelligence and the Medical Humanities: The Ethical Concerns of Data Commodification in Medicine

by Alissa Williams, HI Program Assistant

Dr. Kirsten Ostherr, the Gladys Louise Fox Professor of English, Director of the Medical Futures Lab at Rice University, and an Adjunct Professor at the UT-Houston School of Public Health, gave her talk, “AI and the Medical Humanities: An Emerging Field of Critical Intervention,” at the October Health and Humanities Research Seminar. Discussing the past, present, and future role of artificial intelligence (AI) in medicine, Dr. Ostherr argued that AI and related “datafication” practices are coming to constitute a new social determinant of health, a term that refers to “conditions in the places where people live, learn, work, and play [that] affect a wide range of health risks and outcomes.” (Datafication is the process of collecting information about something that was previously invisible and turning it into data.) Dr. Ostherr’s lecture was an enlightening take on the potential positive impacts of AI, but also a warning about how dangerous its reach can become if left unchecked. The seminar began with a chronological mapping of AI’s adoption in the medical field and ended with a call to action for scholars across all disciplines, as well as the public, to participate in the advancement and regulation of AI as it relates to medicine and health.

Prior to 2015, the application of AI in the medical field involved “a lot of speculation” but “little action,” according to Dr. Ostherr. While AI was initially thought to be a threat to the job security of health care professionals such as radiologists, some now see it as a potential tool for making health care more efficient, more effective, and more accessible to historically neglected populations. But AI, especially when combined with datafication, also poses potential harm to patients and others. Dr. Ostherr argued that the health data individuals both purposefully and inadvertently make available about themselves on social media platforms, websites, and personal wellness devices (e.g., Fitbits) can be commodified for exploitative purposes, largely without the permission or even knowledge of the patients whose data is being commodified. While research into the social determinants of health can be used to promote health equity and health justice, it can also be used to reinforce existing forms of bias and exclusion and even create new ones. For instance, companies can use data about the neighborhoods people live in, or even their degree of political participation, to deny them health insurance, charge them more for it, or deny them employment. Rather than using data to improve people’s health and health care, companies can use it to manage their own financial risks. In other words, instead of making health care and its infrastructure more accessible to all individuals, including those belonging to historically marginalized groups, AI and datafication can exacerbate existing gaps by amplifying the biases that helped create these inequities.

On the other hand, Dr. Ostherr mentioned two websites in her talk, Hu-manity.co and Doc.ai, that have begun to use AI to enable patients themselves to monetize the sharing of their data under privacy constraints that the patients choose. Although these websites have not yet garnered a prominent following, they serve as examples of AI’s potential to help develop more positive and transparent ways of using individuals’ health data. Dr. Ostherr believes that the health humanities have a crucial role to play in determining how AI and other modes of information technology are used in the fields of health and medicine. She emphasized that this must be a collaborative effort across disciplines in order to ensure that the needs of the public are met from all angles and with diversity in mind.

Training the Medical Eye through Art History

Dr. Susan Rather discusses art in medical education in HI’s Faculty Fellows Seminar on Health, Well-Being, Healing
By Saralyn McKinnon-Crowley

In the late 1990s, Irwin M. Braverman, Professor of Dermatology at the Yale School of Medicine, concluded that his medical students were relying too much on high-tech imaging and not enough on their own visual skills to diagnose skin conditions. Dr. Braverman wanted to ensure that reliance on technology did not supplant traditional diagnostic skills, and he hoped that better doctor-patient interaction and keener medical observation would diminish the need for so many diagnostic tests. To help students develop their visual skills, Dr. Braverman designed, in collaboration with Linda Krohner Friedlaender, the education curator at the Yale Center for British Art, an elective course in which students would study narrative paintings (paintings that tell a story) and describe the works of art as thoroughly and objectively as possible. Students received no external information about the paintings, not even their titles. In 2001, Jacqueline Dolev, an alumna of the course, along with Ms. Friedlaender and Dr. Braverman, published research on the course’s effectiveness in the Journal of the American Medical Association. Students who completed the course, they reported, had improved observational skills and perceived more details about their patients compared to students who had not enrolled. These results suggest that the students’ visual diagnostic abilities improved as well, which in turn could make their patient care more efficient.


The Technology of Living and Dying

Dr. John Robertson discusses aging and decline in John Updike’s writing in our Faculty Fellows Seminar on Health, Well-Being, Healing
By Saralyn McKinnon-Crowley

Last week’s Faculty Fellows Seminar in “Health, Well-Being, Healing” focused on questions of dying and, specifically, on how new life-prolonging technologies compel one to rethink what it means to die. Dr. John Robertson of the School of Law presented his current research on Left Ventricular Assist Devices (LVADs) and the later poetry and prose of John Updike. Dr. Robertson is especially interested in Updike’s short story “The Full Glass,” a story about aging and decline written shortly before Updike’s own death in 2009. Updike’s protagonist reflects on a small detail of his daily life, filling his bedtime glass of water, to think about the end of life without directly confronting the experience of dying. Dr. Robertson’s work-in-progress on this material is entitled “Writers at the End—John Updike’s ‘The Full Glass,’” which he hopes to publish in the journal Literature and Medicine. Although “The Full Glass” does not address machines or surgical implants (such as LVADs), Updike’s writing reflects on the quality of life from the perspective of an elderly man.
