AI in the Humanities Profile Series by Colin Phillips
Recycling robots, Roombas, and much more are covered in this series of profiles highlighting the fascinating work UT professors are conducting in the humanities. Learn how three interdisciplinary researchers have adapted their work with artificial intelligence and where it’s taking them.
Dr. Samantha Shorey | Dr. Simone Browne | Dr. S. Scott Graham
Dr. S. Scott Graham teaches in the Department of Rhetoric and Writing in the College of Liberal Arts. Working in both traditional and computational rhetoric, Dr. Graham’s research focuses on the rhetorical analysis of conflicts of interest in medicine. His book, Where’s the Rhetoric? Imagining a Unified Field, is slated to be published in November of this year, and he is currently teaching two classes: RHE 321 – Principles of Rhetoric and RHE 328 – Technical Communication and Wicked Problems.
Rhetoric is a tried-and-true pillar of the humanities, one of the three ancient arts of discourse. Dr. S. Scott Graham is working at the frontier of this age-old discipline, combining rhetorical analysis with computation driven by artificial intelligence. “My training is in finding patterns in language. Our traditional disciplinary training is kind of one text at a time, so we read [someone’s speech] and try to get a really deep and clear understanding of the points they’re making. What computational rhetoric allows me to do, what machine learning allows me to do, is expand that out,” says Dr. Graham.
With the rise of natural language processing (NLP) technologies, Dr. Graham is applying machine learning to the field of medicine to bring transparency to biomedical research. “There’s a lot of money in health and medicine … The reality is when there’s that much money on the line, even people with the best intentions can sometimes be subtly influenced into making questionable decisions,” says Dr. Graham. Currently, biomedical research papers are supposed to list all companies that sponsor their research, but there is no standard for displaying this information. For Dr. Graham’s lab, this is where machine learning comes in. Says Dr. Graham, “My current work has been to develop a system that can read those conflicts of interest so that we can test whether there is any relationship between those conflicts of interest and the way that doctors write or argue about drugs.” His hope is to use this transparency tool to develop a better understanding of research networks at large. “Being transparent about an individual conflict of interest is an important first step,” according to Dr. Graham, “but it doesn’t help you understand how funding circulates through the bigger research systems.”
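The article doesn’t detail the implementation behind Dr. Graham’s system, but a minimal sketch of the kind of supervised text classification he describes, built here with scikit-learn, might look like the following. Every sentence, label, and company name in it is invented for illustration:

```python
# Hypothetical sketch: flagging conflict-of-interest (COI) disclosures in
# article text. This is not Dr. Graham's actual system; it only illustrates
# the general supervised-learning approach described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = contains a sponsorship/COI disclosure, 0 = does not.
sentences = [
    "Dr. A has received consulting fees from PharmaCo.",
    "This trial was funded by an unrestricted grant from BioCorp.",
    "The authors declare no competing interests.",
    "Patients were randomized in a double-blind design.",
]
labels = [1, 1, 0, 0]

# TF-IDF features feed a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

# Score a new, unseen disclosure-like sentence.
print(model.predict(["B.C. reports personal fees from MedTech Inc."]))
```

A real system would train on thousands of hand-coded statements, like the FDA transcripts described below, rather than four toy sentences.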
To make this circulation visible, Dr. Graham created Conflict Metrics, a data visualization tool for these research funding networks. The tool combines network analysis and graphics to display conflicts of interest, from drug trial sponsorship to medical publishing. His hope is for medical researchers and practitioners to use these tools to “make more sophisticated decisions about where to research or what treatment decisions to make.” Data-driven policy development may not be far off, either. Says Dr. Graham, “I’m cautiously optimistic that there might be a cool future here where this gets folded into decision making … but it’s just not there yet; it’s promising, but more work needs to be done.”
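Conflict Metrics itself is a bespoke tool, but the underlying pairing of network analysis with graphics can be sketched in a few lines of Python. The sponsors, researchers, and funding links below are invented for illustration:

```python
# Hypothetical sketch of a research funding network: edges connect sponsors
# to the researchers they fund. Conflict Metrics is far more sophisticated;
# this only illustrates the network-analysis-plus-graphics idea.
import matplotlib.pyplot as plt
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("PharmaCo", "Dr. A"), ("PharmaCo", "Dr. B"),
    ("BioCorp", "Dr. B"), ("BioCorp", "Dr. C"),
])

# Degree centrality hints at which sponsors sit at the center of the network.
print(nx.degree_centrality(G))

# A quick visual of how funding circulates through the network.
nx.draw_networkx(G, with_labels=True, node_color="lightblue")
plt.show()
```

Plots like this make it visible when one sponsor’s funding touches many ostensibly independent research groups.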
Dr. Graham first became interested in computational rhetoric five years ago, for a project that eventually evolved into Conflict Metrics. “I got involved initially as a step to try to save time and money. I was interested in understanding if conflicts of interest changed the way that the FDA argued about what drugs to approve or not to approve,” he says, “and the first time I did this project, I did it by hand.” A year of poring over hundreds of FDA transcripts resulted in the processing of four years’ worth of content from just one of 18 different regulatory committees. “That’s a ton of people, devoting a ton of time. That year provides the training set for the machine learning I’m doing now, but the reality is, you can’t stay on top of it,” says Dr. Graham. He quickly realized that a computational system was a necessity, and Conflict Metrics was born.
Dr. Graham is now hoping to bring his interdisciplinary approach to the field of rhetoric at large. “I’ve been told repeatedly throughout my career,” he says of his upcoming book, “that each of these things [computer systems] don’t necessarily belong within the discipline of rhetoric. And so the book is my response.” The book, Where’s the Rhetoric? Imagining a Unified Field, presents a vision of rhetoric that incorporates computation alongside traditional methods. For him, “the computer should never replace traditional rhetorical inquiry, but sometimes traditional rhetorical inquiry can lead you to make claims about the world that aren’t true. And so if you use the two together, you can get the best of both worlds.”
Dr. Samantha Shorey is an Assistant Professor in the Moody College of Communication’s Department of Communication Studies. With work experience in both industry and academia, she specializes in design research and maintains a particular interest in narratives about technological innovation. She is currently teaching two classes: CMS 346 – Using Communication Technology at Work and CMS 346C – The Cultural Impact of Innovation.
Quilts don’t usually come to mind when most people think of space. We assume yarn and rockets don’t mix, but folk fabrics are exactly what Dr. Samantha Shorey of the Moody College of Communication used to highlight the oft-neglected contributions of the women who made computer hardware for the Apollo missions. This historical inquiry into how early computers were literally woven together, the Making Core Memory Project, was one of Dr. Shorey’s first efforts to shift the narrative toward the individuals who drive innovation. Through an award-winning series of weaving workshops and an audio-enabled quilt, Dr. Shorey brought the experiences of these women to life, building on her own professed love of material technologies. Now, Dr. Shorey is looking to do much the same by bringing the focus back to the people who craft artificial intelligence.
“Many lose sight of the fact that there are always people, always humans, always designers that are involved in the design and production of artificially intelligent technology,” says Dr. Shorey. “When it comes to the way AI is talked about in the media, it’s really often devoid of people.” She’s looking to change this tendency, incorporating the human element into our vision of the field. Her first stop: recycling robots.
In a trend intensified by the ongoing pandemic, companies are reducing human contact through robotic automation, which often relies on AI. That’s exactly what’s happening at a local recycling plant where Dr. Shorey is conducting an NSF-funded study. “This company is introducing robots into recycling sorting, which is an almost invisible infrastructure. The people that work in waste management … they’re not what we think of as highly technical people in the sense of being like an engineer. But as these technologies are being introduced, the people that work on the floor are doing highly technical work,” Dr. Shorey argues. Her goal is to observe the rapid innovation on the ground and highlight the roles of these human workers alongside their new robotic counterparts. For most of us, the credit for artificial intelligence systems goes to the computer scientists who design the algorithms and interfaces. Dr. Shorey focuses instead on the everyday workers who adapt their routines around these systems and, in turn, implement innovations of their own, but who, like the women of the Apollo missions, are usually forgotten. Says Dr. Shorey, “I want to try to spread the value out a little bit” to both encourage these workers’ participation in technology development and emphasize the value of their contributions.
When asked about the importance of these workers, she says that “one of the big things we think about for ethical frameworks when building technology is really a question about participation.” To Shorey, empowering these workers and focusing on them allows the group to become a valid stakeholder in the design process. With this, “more and more people are able to participate in that technological process, enact what their view on things is, and build technology that can respond to their needs.” Her interviews with these workers have shown that excluding them from the design process is an oversight, and she’s hoping that her coverage of innovations will bring that to light. Dr. Shorey is taking that first step, bringing people back into the fold; where we go from there will be one of the defining questions of an AI future.
Dr. Simone Browne is an Associate Professor in the African and African Diaspora Studies Department in the College of Liberal Arts. The award-winning author of Dark Matters: On the Surveillance of Blackness, Browne currently focuses her research on surveillance studies and the lifecycle of electronic waste. She is currently teaching UGS 302 – Surveillance: An Introduction.
There are not many groups in the world like Deep Lab. A feminist collective of artists, researchers, engineers, and more, the group provides critical assessments of digital culture, including artificial intelligence and what lies beyond it. “Did you ever watch Voltron?” asks Dr. Browne, when asked to describe this accomplished interdisciplinary team. “It’s this idea that these individual robots join together to become a super robot.” One of the Deep Lab projects Dr. Browne describes is a painting display made by Roombas, the robotic vacuum cleaners: at an exhibition, Deep Lab founder Addie Wagenknecht would lie down among the Roombas to offer a “critique around the work that gets marked as women’s labor in the home and automation.”
With Deep Lab and on campus, Dr. Browne brings into focus the combination of biometrics and surveillance to contextualize key issues with modern policing. “I wanted to really put surveillance studies in conversation with black studies by historicizing some of the key concepts,” says Dr. Browne. “I want to question how these similar types of technologies have been thought about in earlier ways … and how that can lead us to ask important questions around rights, ownership, and ethics.”
Her work with biometrics first started with an interest in US-Canadian border security post-9/11. “That got me really into reading and being in conversation with surveillance studies, which is very multidisciplinary, a segment of sociology, geography, political science, and more,” says Dr. Browne. “I became interested in … techno-solutionism.” Her experience exploring modern surveillance, including the use of facial recognition with biometric scanners, drove her to question these growing systems.
For Dr. Browne, ethics is increasingly defining the conversation around artificial intelligence. Says Dr. Browne, “the ethics have to be determined by the community, not just the creators but the communities that experience the real material effects of some of these systems.” Dr. Browne notes that “we’re at an interesting point now where there’s a pushback, or at least a halting in many places, on the use of facial recognition technology being used by government agencies … that are about recognizing, and sometimes criminalizing, the human body.” Dr. Browne is also bringing a conversation about ownership to the forefront, particularly in regard to the training of some of these AI technologies. “You might not give consent to have your face recorded in public,” but a great deal of other information is often derived from our bodies. A question Dr. Browne hopes to answer is whether “we have a right of refusal or a rate of return from that intellectual property, from that data collected.”
Beyond surveillance, Dr. Browne has conducted extensive ethnographic research on electronic waste. “I wanted to ask a different question, on the afterlife of the material components that artificial intelligence is dependent on,” says Dr. Browne. Her ongoing research observes the bootstrap culture of self-taught engineers in Ghana who use spare parts to make computers and explores the health effects of this waste, which originates from North America, Europe, and elsewhere around the world. She’s hoping to bring electronic waste back into the conversation about artificial intelligence by rooting it in material objects. “The project,” she says, “is focused on what happens when Alexa becomes electronic waste.”
Dr. Browne is looking to expand the scope of her current research, in part through her recent appointment as a Research Director in the Good Systems Grand Challenge. “One of the key things I’m interested in is how artists often allow us to ask and give us a roadmap to a different world when it comes to technology,” says Dr. Browne of her upcoming work. “I’m really interested in artists who have been using or critiquing troubling AI in their own creative works and practices.” Perhaps we’ll soon see thought-provoking painting Roombas here in Austin.
Colin Phillips is a fourth-year Electrical and Computer Engineering (ECE) Honors and Mathematics major. Outside the classroom, he researches human-AI interaction and is collaborating with the Strauss Center for International Security and Law to explore the regulation of AI in defense. He’s passionate about the intersection of humanitarianism and technology and is a former project manager for Projects with Underserved Communities – India. He’s helping launch the Humanitarian Engineering Institute at UT and is an avid water polo player in his free time. Colin is pursuing a career in machine learning and product management at socially focused enterprises.