On February 21, 2024, we held the fifth session of the AI Interest Group. We were honored to have Dr. Kenneth Fleischmann join us to present on the Ethics of Artificial Intelligence and how they relate to libraries, archives, and museums. Dr. Fleischmann is a Professor at the School of Information and Founding Chair of the Executive Team of Good Systems. He is also the Founding Director of Undergraduate Studies for the iSchool’s undergraduate programs in Informatics. His focus is on the role of human values and ethics in the design and use of information technologies.
It was an illuminating talk and discussion that touched on many topics, including data privacy, AI bias, transparency, and alignment with our institutional values. In particular, Dr. Fleischmann looked at AI through the lens of the American Library Association’s Core Values.
Below, you will find a selection of the materials we distributed in advance of the Ethics meeting.
Our next session will focus on Generative AI, Art, and Design, with Professor Julie Schell, Director of the Office of Academic Technology. If you are interested in attending, please fill out a form on our Events page.
Videos
- How I’m fighting bias in algorithms
- Speaker: Joy Buolamwini
- Hosted by: TED
- Duration: 00:08:43
- Date: March 29, 2017
- Description: “MIT grad student Joy Buolamwini was working with facial analysis software when she noticed a problem: the software didn’t detect her face — because the people who coded the algorithm hadn’t taught it to identify a broad range of skin tones and facial structures. Now she’s on a mission to fight bias in machine learning, a phenomenon she calls the “coded gaze.” It’s an eye-opening talk about the need for accountability in coding … as algorithms take over more and more aspects of our lives.”
- Mission, Team and Story – The Algorithmic Justice League
- Algorithmic Bias and Fairness: Crash Course AI #18
- Speaker: Jabril Ashe
- Hosted by: Crash Course, produced in association with PBS Digital Studios
- Duration: 00:11:19
- Date: December 13, 2019
- Description: “Today, we’re going to talk about five common types of algorithmic bias we should pay attention to: data that reflects existing biases, unbalanced classes in training data, data that doesn’t capture the right value, data that is amplified by feedback loops, and malicious data. Now bias itself isn’t necessarily a terrible thing, our brains often use it to take shortcuts by finding patterns, but bias can become a problem if we don’t acknowledge exceptions to patterns or if we allow it to discriminate.”
- Responsible AI: Tools for values-driven AI in libraries and archives
- Speaker: Jason A. Clark, Head of Research Optimization, Analytics, and Data Services at Montana State University Library
- Hosted by: Code4Lib 2023
- Duration: about 20 minutes
- Date: March 16, 2023
- Website: Responsible AI – MSU Library | Montana State University
- Description from website: “Responsible AI is an IMLS-funded project aiming to support ethical decision-making for AI projects in libraries and archives.”
- {key: value}: Algorithmic Debiasing in Practice
- Speaker: Andromeda Yelton
- Hosted by: Code4Lib 2023
- Duration: about 12 minutes
- Date: March 16, 2023
- Description: This presentation explores practical applications of coding in library settings, delving into the complexities of implementing ethical values in real-world scenarios.
- Intro to Large Language Models
- Creator: Andrej Karpathy (works at OpenAI, specializes in deep learning and computer vision)
- Date: November 22, 2023
- Duration: 00:59:44
- Description from video: “This is a 1 hour general-audience introduction to Large Language Models: the core technical component behind systems like ChatGPT, Claude, and Bard. What they are, where they are headed, comparisons and analogies to present-day operating systems, and some of the security-related challenges of this new computing paradigm.”
- Highly recommended: it gives a great overview of the structure and constraints of LLMs, and the security issues discussed at the end represent a critical type of ethical issue that needs careful consideration.
Websites, Papers, Other
- Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models
- Author(s): Myra Cheng, Esin Durmus, Dan Jurafsky
- Publication: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers, pages 1504–1532
- Date: July 9–14, 2023
- Abstract: To recognize and mitigate harms from large language models (LLMs), we need to understand the prevalence and nuances of stereotypes in LLM outputs. Toward this end, we present Marked Personas, a prompt-based method to measure stereotypes in LLMs for intersectional demographic groups without any lexicon or data labeling. Grounded in the sociolinguistic concept of markedness (which characterizes explicitly linguistically marked categories versus unmarked defaults), our proposed method is twofold: 1) prompting an LLM to generate personas, i.e., natural language descriptions, of the target demographic group alongside personas of unmarked, default groups; 2) identifying the words that significantly distinguish personas of the target group from corresponding unmarked ones. We find that the portrayals generated by GPT-3.5 and GPT-4 contain higher rates of racial stereotypes than human-written portrayals using the same prompts. The words distinguishing personas of marked (non-white, non-male) groups reflect patterns of othering and exoticizing these demographics. An intersectional lens further reveals tropes that dominate portrayals of marginalized groups, such as tropicalism and the hypersexualization of minoritized women. These representational harms have concerning implications for downstream applications like story generation.
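The paper's two-step method lends itself to a compact illustration. The sketch below is a simplified stand-in, not the authors' actual implementation: the function name, the smoothing, and the toy persona texts are assumptions of mine, and the paper uses a more rigorous statistical test. It ranks the words that most distinguish one set of generated personas from an unmarked baseline via a smoothed log-odds ratio:

```python
from collections import Counter
import math

def distinguishing_words(target_texts, default_texts, prior=0.5, top_k=3):
    """Rank words that most distinguish target-group personas from
    unmarked/default personas, via a smoothed log-odds ratio.
    (Simplified stand-in for the paper's statistical method.)"""
    tgt = Counter(w for t in target_texts for w in t.lower().split())
    dft = Counter(w for t in default_texts for w in t.lower().split())
    vocab = set(tgt) | set(dft)
    n_t, n_d = sum(tgt.values()), sum(dft.values())
    score = {}
    for w in vocab:
        p_t = (tgt[w] + prior) / (n_t + prior * len(vocab))
        p_d = (dft[w] + prior) / (n_d + prior * len(vocab))
        score[w] = math.log(p_t / p_d)  # > 0: over-represented in target personas
    return sorted(vocab, key=score.get, reverse=True)[:top_k]

# Toy inputs standing in for LLM-generated personas:
marked = ["she is exotic and vibrant", "an exotic vibrant culture"]
unmarked = ["she is kind and smart", "a kind smart person"]
print(distinguishing_words(marked, unmarked))
```

In the paper, words like these surfacing for marked groups (but not for defaults) are the signal of othering the authors analyze.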
- AI policies across the globe: Implications and recommendations for libraries
- Author: Leo S. Lo
- Publication: IFLA Journal
- Pub date: August 27, 2023
- Abstract: This article examines the proposed artificial intelligence policies of the USA, UK, European Union, Canada, and China, and their implications for libraries. As artificial intelligence revolutionizes library operations, it presents complex challenges, such as ethical dilemmas, data privacy concerns, and equitable access issues. The article highlights key themes in these policies, including ethics, transparency, the balance between innovation and regulation, and data privacy. It also identifies areas for improvement, such as the need for specific guidelines on mitigating biases in artificial intelligence systems and navigating data privacy issues. The article further provides practical recommendations for libraries to engage with these policies and develop best practices for artificial intelligence use. The study underscores the need for libraries to not only adapt to these policies but also actively engage with them, contributing to the development of more comprehensive and effective artificial intelligence governance.
- Podcast: Making Machines Mindful: NYU Professor Talks Responsible AI
- The AI Podcast
- Hosted by Noah Kravitz
- Presented by Nvidia
- Release date: October 18, 2023
- Description from website: “Artificial intelligence is now a household term. Responsible AI is hot on its heels. Julia Stoyanovich, associate professor of computer science and engineering at NYU and director of the university’s Center for Responsible AI, wants to make the terms “AI” and “responsible AI” synonymous. In the latest episode of the NVIDIA AI Podcast, host Noah Kravitz spoke with Stoyanovich about responsible AI, her advocacy efforts and how people can help.”
AI Ethics in the News
- The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work
- Authors: Michael M. Grynbaum and Ryan Mac
- Publication: The New York Times
- Publication Date: December 27, 2023
- “The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit, filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.” (Link to quote)
- OpenAI Response: OpenAI and journalism (Blog post, January 8, 2024)
- The Unsettling Lesson of the OpenAI Mess
- Author: Ezra Klein
- Publication: The New York Times (Opinion section)
- Pub date: November 22, 2023
- “I don’t know whether the board was right to fire Altman. It certainly has not made a public case that would justify the decision. But the nonprofit board was at the center of OpenAI’s structure for a reason. It was supposed to be able to push the off button. But there is no off button.”
- For anyone who didn’t keep an eye on this: the board of OpenAI’s nonprofit arm fired Sam Altman, the company’s CEO, on Friday, November 17. Its explanation, that Mr. Altman was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” was derided as insufficient by many observers. Investors and employees applied massive pressure to reinstate him, and nearly the entire company threatened to decamp (probably to Microsoft, where they were offered their own division). On Tuesday, November 21, the board reinstated Altman, and three of the four remaining board members stepped down.
- In somewhat related news, Meta disbanded its Responsible AI team.
Examples of AI Harm
- Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks
- Authors: Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner
- Publication: ProPublica
- Publication date: May 23, 2016
- “In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.
- The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
- White defendants were mislabeled as low risk more often than black defendants.” (Link to quote)
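The disparity ProPublica describes is a gap in group-wise false positive rates. A minimal sketch of that metric follows; the record fields and toy numbers are my own illustration, not ProPublica's data:

```python
def false_positive_rate(records, group):
    """Share of people in `group` who did NOT re-offend but were
    still flagged high-risk: FP / (FP + TN)."""
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in negatives if r["high_risk"]]
    return len(flagged) / len(negatives) if negatives else 0.0

# Toy records illustrating the shape of the disparity (not real statistics):
records = (
    [{"group": "A", "high_risk": True,  "reoffended": False}] * 4
    + [{"group": "A", "high_risk": False, "reoffended": False}] * 6
    + [{"group": "B", "high_risk": True,  "reoffended": False}] * 2
    + [{"group": "B", "high_risk": False, "reoffended": False}] * 8
)
print(false_positive_rate(records, "A"))  # group A is flagged at twice B's rate
print(false_positive_rate(records, "B"))
```

Note that a tool can match overall error rates across groups while still failing in different directions for each, which is exactly the pattern the investigation found.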
- Why Amazon’s Automated Hiring Tool Discriminated Against Women
- Author: Rachel Goodman
- Publication: American Civil Liberties Union
- Pub date: October 12, 2018
- “In 2014, a team of engineers at Amazon began working on a project to automate hiring at their company. Their task was to build an algorithm that could review resumes and determine which applicants Amazon should bring on board. But, according to a Reuters report this week, the project was canned just a year later, when it became clear that the tool systematically discriminated against women applying for technical jobs, such as software engineer positions.”
- In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation: The bot learned language from people on Twitter—but it also learned values
- Author: Oscar Schwartz
- Publication: IEEE Spectrum
- Pub date: November 25, 2019. Updated January 4, 2024.
- “In 2016, Microsoft’s chatbot Tay—designed to pick up its lexicon and syntax from interactions with real people posting comments on Twitter—was barraged with antisocial ideas and vulgar language. Within a few hours of it landing in bad company, it began parroting the worst of what one might encounter on social media.”
- How Wrongful Arrests Based on AI Derailed 3 Men’s Lives | WIRED
- Author: Khari Johnson
- Publication: Wired
- Pub date: March 7, 2022
- “Law enforcement in nearly every US state now has access to facial recognition software. The Georgetown Law Center on Privacy and Technology says images of one in two US adults are in facial recognition databases used to identify criminal suspects. Critics say police rely too heavily on the technology, particularly since research has shown it misidentifies women and people of color more often than white men. Yet in most of the US, neither police nor prosecutors are required to tell people accused of crimes if facial recognition has played a role in an investigation.” (link to quote)
Upcoming Events and Conferences
- Good Systems Annual Symposium
- Dates and location: March 27 (AT&T Conference Center) and March 28 (William C. Powers SAC), Austin, TX
- Host: Good Systems
- Description from the website:
- Day 1: Join us for an opening keynote by Dr. Kristian Hammond (Northwestern University), a panel on how generative AI is transforming work across disciplines and sectors, and a poster session and reception.
- Day 2: Join us for interactive demos and research presentations by Good Systems researchers and partners, and a closing banquet with a keynote by Dr. Helen Nissenbaum (Cornell Tech).
Policies, Values, Ethics
United States Guidelines and Executive Order
- Artificial Intelligence Risk Management Framework (AI RMF 1.0)
- January 2023
- NIST: National Institute of Standards and Technology, U.S. Department of Commerce
- FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
- October 30, 2023
- Sections in order:
- New Standards for AI Safety and Security
- Protecting Americans’ Privacy
- Advancing Equity and Civil Rights
- Standing Up for Consumers, Patients, and Students
- Supporting Workers
- Promoting Innovation and Competition
- Advancing American Leadership Abroad
- Ensuring Responsible and Effective Government Use of AI
- Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
- October 30, 2023
- “Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.”
- Blueprint for an AI Bill of Rights
- October 2022
- Sections:
- Safe and Effective Systems
- Algorithmic Discrimination Protections
- Data Privacy
- Notice and Explanation
- Human Alternatives, Consideration, and Fallback
- “This framework applies to (1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”
European Policy and Guidelines
- E.U. Agrees on Landmark Artificial Intelligence Rules
- Publication date: December 8, 2023
- Author: Adam Satariano
- Publisher: New York Times
- “European policymakers focused on A.I.’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements.”
- EU’s AI Act could exclude open-source models from regulation
- Pub date: December 7, 2023
- Authors: Martin Coulter, Supantha Mukherjee and Foo Yun Chee
- Publisher: Reuters
- “According to the document, which circulated among lawmakers on Thursday morning, the AI Act would not apply to free and open-source licences unless, for example, they were deemed high-risk or being used for already banned purposes.”
- EU AI Act: first regulation on artificial intelligence
- August 6, 2023
- European Parliament News
- “Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”
- Completely banned:
- “Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
- Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
- Real-time and remote biometric identification systems, such as facial recognition” (there are some exceptions to this for emergency situations)
- Expected to pass in early 2024
- Link to briefing on this legislation in progress (June 2023)
- Many data privacy issues are not in this bill because they were already addressed in the GDPR (General Data Protection Regulation), in effect as of May 25, 2018.
- This legislation was and is controversial: Power grab by France, Germany and Italy threatens to kill EU’s AI bill – POLITICO
- Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment
- Publication date: July 17, 2020
- Presented by: The European Union High Level Expert Group on AI
- Published by: The European Commission
- Ethics guidelines for trustworthy AI
- Publication date: April 8, 2019
- Presented by: The European Union High Level Expert Group on AI
- Published by: The European Commission
- “The Guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy:
- Human agency and oversight
- Technical Robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental well-being
- Accountability”
Cultural Heritage Ethics
- Revisiting Core Values:
- ACRL’s Core Organizational Values
- Visionary leadership, transformation, new ideas, and global perspectives
- Exemplary service to members
- Equity, diversity, and inclusion
- Integrity and transparency
- Continuous learning
- Responsible stewardship of resources
- The values of higher education, intellectual freedom, the ALA Ethics policy, and “The Library Bill of Rights”
- AAM Code of Ethics for Museums
- SAA Core Values Statement and Code of Ethics
- Training Generative AI Models on Copyrighted Works Is Fair Use – Association of Research Libraries
- Publication date: January 23, 2024
- Authors: Katherine Klosek, Director of Information Policy and Federal Relations, Association of Research Libraries (ARL), and Marjory S. Blumenthal, Senior Policy Fellow, American Library Association (ALA)
- “LCA is not involved in any of the AI lawsuits. But as champions of fair use, free speech, and freedom of information, libraries have a stake in maintaining the balance of copyright law so that it is not used to block or restrict access to information.”