Hello World

By Julia Angwin

Confronting the Biases Embedded in Artificial Intelligence

#101

Dispatches from our founder
This Week
Hello, friends,
Hardly a day goes by without another revelation of racial, gender, and other biases embedded in artificial intelligence systems.
Just this month, for example, Silicon Valley’s much-touted AI image generation system DALL-E disclosed that its system exhibits biases including gender stereotypes and tends “to overrepresent people who are White-passing and Western concepts generally.” For instance, it produces images of women for the prompt “a flight attendant” and images of men for the prompt “a builder.”
In the disclosure, OpenAI, the entity that trained DALL-E, says it is only releasing the program to a limited group of users while it works on mitigating bias and other risks. 
Meanwhile, researchers using machine learning to examine electronic health records found that Black patients were more than twice as likely to be described in derogatory terms (like “resistant” or “noncompliant”) in their patient records. And those are the types of records that often make up the raw material for future AI programs, like the one that aimed to predict patient-reported pain from X-ray data but was only able to make successful predictions for White patients.
Regulators, racing to catch up with the explosion of new AI technologies, are just starting to try to provide best practices for artificial intelligence applications. In March, the National Institute of Standards and Technology released a voluntary framework for examining AI systems for issues including fairness and equity. In February, the Consumer Financial Protection Bureau said it would start studying automated home valuation algorithms for bias. And last year, the Federal Trade Commission warned companies to test their algorithms for discriminatory outcomes. But there are no federal laws in the U.S. that require the examination and auditing of AI systems.
So while policymakers scramble to keep up, journalists and researchers are on the algorithmic front lines investigating and monitoring these complex systems. At The Markup, of course, we spend a lot of time auditing algorithms. Our most recent work revealed that popular predictive policing software disproportionately targeted Black and Latino neighborhoods. 
To understand the greater landscape of how researchers and fellow journalists are investigating this space, I turned to Meredith Broussard, an associate professor at the Arthur L. Carter Journalism Institute of New York University, research director at the NYU Alliance for Public Interest Technology, and the author of “Artificial Unintelligence: How Computers Misunderstand the World.”
Our conversation, edited for brevity and clarity, is below.
Meredith Broussard • Credit: Devin Curry
Angwin: To start, how did you get into this field?
Broussard: When I was studying computer science at Harvard, I was one of very few women. At a university of 20,000 students, there were only six women majoring in computer science, and I only knew two of them. I couldn’t find the other ones. I wondered why I felt so alone, why I felt so out of step. After graduation, when I went into a career as a professional computer scientist—I worked at Bell Labs and in the MIT Media Lab—I never saw anybody ahead of me who looked like me, or who had the same interests that I did. 
I left computer science for all of the typical textbook reasons that women leave STEM careers. I was a journalist for many years. Eventually, I realized that I didn’t need to give up on computer science entirely; I could combine CS and journalism as a data journalist. Once I discovered algorithmic accountability reporting, a computationally intensive subfield of data journalism, I fell in love and never looked back.
Angwin: Your first book focused on artificial intelligence—or as you call it, unintelligence. Can you give us your definition of AI and talk about some of its limitations?
Broussard: One thing I focus on in my work is the reality of AI: clarifying the confusion around what AI is and isn’t. Oftentimes, when people think about AI, they think about “The Terminator” or other Hollywood portrayals of it. These descriptions are extremely fun to talk about, but they are not real. AI is math. It is really beautiful, cool, complicated math, but it is not “The Terminator.”
The narrative that technology will change the world has been in place for my entire professional career—literally, because my adulthood coincided with the launch of the web. Despite the staying power of this narrative, there has never been, nor will there ever be, a computer that gets us away from the essential problems of being human. Every time there is some supposedly new, world-changing AI system, it turns out that the problems of humanity are just reflected inside the computational system. Honestly, I’m a little tired of the narrative that computers are going to deliver us. I think the narrative itself is tired.
Angwin: How do we keep these narratives in check and tune into the actual impacts of algorithms? 
Broussard: Something that I’m doing in my new book, which is called “More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech,” is looking at different computational systems and investigating how deep-rooted issues like racism, sexism, or ableism are coded into them. In the book, which will be out next year, I demonstrate how technological neutrality is a myth, and why these systems need to be held accountable.
For example: Certain 1950s ideas about gender are actually still embedded in our database systems. When I was taught how to build databases in the 1990s, I was taught to code gender as a Boolean, a binary zero or one value. This was because gender was thought to be fixed, and people thought there were only two genders. Today, this thinking has evolved, and we have realized that gender is a spectrum and it should not be represented as a binary. Actually, gender needs to be a string. There needs to be enough room in the database for the full expression of people’s genders—and we need to update our legacy systems to accommodate this. Scholars like Mar Hicks, Anna Lauren Hoffmann, and Sasha Costanza-Chock helped me learn about the intersection of technology and gender identity.
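[Ed. note: The schema change Broussard describes can be sketched concretely. The example below is a minimal, hypothetical illustration (the table names, value labels, and migration are our own, not from any real system) of moving a legacy Boolean-style gender column to a free-text one.]

```python
import sqlite3

# Hypothetical migration sketch: a legacy table stores gender as a 0/1
# integer; the replacement stores it as a string with room for
# self-description. Names and value mappings are illustrative only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Legacy design: gender as a Boolean-style integer (0 or 1).
cur.execute("CREATE TABLE users_legacy (id INTEGER PRIMARY KEY, gender INTEGER)")
cur.executemany("INSERT INTO users_legacy (gender) VALUES (?)", [(0,), (1,)])

# Updated design: gender as a string.
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, gender TEXT)")
cur.execute(
    """
    INSERT INTO users (id, gender)
    SELECT id, CASE gender WHEN 0 THEN 'female' WHEN 1 THEN 'male' END
    FROM users_legacy
    """
)
# New records are no longer constrained to two values.
cur.execute("INSERT INTO users (gender) VALUES ('nonbinary')")

print([row[0] for row in cur.execute("SELECT gender FROM users ORDER BY id")])
# ['female', 'male', 'nonbinary']
```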
I came to think about gender in databases because I think about race in databases. As a Black woman with a White parent, I used to get upset when there was no box on a form that reflected my multiracial identity. I realized that people who are nonbinary or trans have that same experience of feeling unseen when they fill out the gender box. That small moment of empathy walloped me over the head and made me think about how to be a better ally. I think that, in general, we can bring more empathy to our thinking about how we build technology and how we investigate technology. 
Angwin: Can you give us an example of how racial biases get baked into technical systems?
Broussard: In medical diagnostics, there is a measure called estimated glomerular filtration rate (eGFR) that doctors use as a measure of kidney health. This number determines when you get onto the kidney transplant list. However, the way your eGFR is calculated depends on your race because of the incorrect assumption that Black people have greater muscle mass than White people. Therefore, medical systems use a different calculation when estimating the eGFR for Black people, such that White people end up on the kidney transplant list faster. This is a historical artifact of racism. We can trace the narrative about muscle mass back to slavery and the fetishization of Black male strength. Dorothy Roberts’s work on race-based medicine is the best resource for understanding the dimensions of the problem.
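[Ed. note: The adjustment Broussard describes appeared in the 2009 CKD-EPI creatinine equation, which multiplied estimated GFR by 1.159 for Black patients. The sketch below is illustrative only, with coefficients reproduced from the published 2009 equation to the best of our knowledge; it is not medical software.]

```python
# Illustrative sketch of how a race coefficient changes a clinical score,
# based on the 2009 CKD-EPI creatinine equation. Not medical software.

def egfr_ckd_epi_2009(scr, age, female, black):
    """Estimated GFR (mL/min/1.73 m^2) from serum creatinine (mg/dL)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr / kappa, 1) ** alpha
            * max(scr / kappa, 1) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient: same labs, higher score
    return egfr

# Identical labs, identical age: the only difference is the race flag.
base = egfr_ckd_epi_2009(scr=1.5, age=50, female=False, black=False)
adjusted = egfr_ckd_epi_2009(scr=1.5, age=50, female=False, black=True)
print(round(adjusted / base, 3))  # 1.159 -- the Black patient's kidneys
# look ~16% healthier on paper, which can delay transplant eligibility
```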
If you build an AI system that is based on a racist GFR calculation, then you’re just perpetuating that problem. In medicine, people are building algorithmic tools without critically reflecting on existing problematic systems. We need to look at the existing systems in the world and try to fix the problems and inequalities there before we start building AI. We must cut through the AI hype and not be so quick to create computational systems that are going to replicate historical problems.
Angwin: Where do you think we’re headed? Do we have some hope of reining in these AI systems?
Broussard: The last chapter of my new book was originally called “Hope for the Future,” and for a really long time I couldn’t write it because I was too depressed after writing the rest of the book. Fortunately, I remembered that there are a few things out there that do give me hope. One of them is the emerging field of public interest technology. This is exactly what it sounds like: creating technology in the public interest. This could be building better government technology or conducting algorithmic accountability investigations. There are a lot of jobs in public interest technology for young people who are interested in building technology for social good. 
There are also a lot of scholars and thinkers who are doing really interesting work. I was in a movie called “Coded Bias” that came out on Netflix a year ago. It features a lot of the other women who are doing impactful work in this field, like Joy Buolamwini, Safiya Noble, Cathy O'Neil, and Virginia Eubanks. Ruha Benjamin’s book “Race After Technology” had a big impact on me as I was thinking about my new book. Since 2018, we’ve had an explosion of critical work on technology. There are a couple of groups that I admire, like the Algorithmic Justice League, run by Joy Buolamwini, and the policy group Data for Black Lives, run by Yeshimabeit Milner. I am also really optimistic about the worker organizing efforts happening in Silicon Valley and beyond.
Angwin: Do you advocate for a particular solution or policy recommendation to mitigate these harms?
Broussard: I am a big fan of algorithmic audits. I’ve been teaching and consulting on them for some time. They are fantastic, and they need to happen both internally and externally. We also need more computational literacy in the general public and among policymakers so that we can develop more policies that protect people from algorithmic harms. 
Finally, I think that we need to stop expecting that there’s one single answer. This is something that comes from startup methodology. It is the “let’s formulate a problem statement, map out pain points, write code against it, and scale it up” mentality. These are big, complicated problems that took us centuries to get into, so I think we need to stop expecting that there is going to be a one-size-fits-all solution. If there is one thing that we can do, it’s listen to algorithmic accountability reporters. I think it’s really important to examine the claims being made about technology in order to validate them and to not be surprised or disappointed when the claims are overblown. It is time to be much more honest about what technology can and can’t do.
As always, thanks for reading.
Best,
Julia Angwin
The Markup
Additional Hello World research by Eve Zelickson.
From The Markup
Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them
The Secret Bias Hidden in Mortgage-Approval Algorithms
P.S. For more from Julia Angwin and Hello World, look here. And so you can keep up on all the news from The Markup, sign up here, and we’ll email you every time we publish about the ways powerful actors are using technology to change society, usually twice a week.
Support The Markup
This email doesn't track you when you open it or click on any links. To learn more read our Privacy Policy.
The Markup, P.O. Box 1103, New York, NY 10159