
AI Weekly | June 26, 2020

I’ve lost track of the number of times I’ve heard somebody say recently that Timnit Gebru is saving the world. Margaret Mitchell, her co-lead of AI ethics at Google, said as much a few days ago, when Gebru led events at Google around race. Her work with Joy Buolamwini documenting race and gender bias in facial recognition is part of why lawmakers in Congress want to prohibit federal government use of the technology. That work also played a major role in Amazon, IBM, and Microsoft agreeing to halt or end facial recognition sales to police.
Earlier in the week, organizers of the Computer Vision and Pattern Recognition (CVPR) conference, one of the biggest AI research conferences in the world, took the unusual step of calling her CVPR tutorial about how bias in AI goes far beyond data “required viewing for us all.”
That’s what made the situation with Facebook chief AI scientist Yann LeCun this week so perplexing.
The entire episode between two of the best-known AI researchers in the world grew out of a conversation that started about a week ago with the release of PULSE, a computer vision model created by Duke University researchers that they claim can generate realistic, high-resolution images of people from a pixelated photo.
The controversial system combines generative adversarial networks (GANs) with self-supervised learning. It was trained on the Flickr-Faces-HQ (FFHQ) data set compiled last year by a team of Nvidia researchers, the same data set used to create the StyleGAN model. It seemed to work fine on White and Asian people, but when one observer fed it a pixelated photo of President Obama, PULSE produced a photo of a White man. Other inputs yielded images that gave Samuel L. Jackson blond hair or turned Muhammad Ali into a White man.
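To see why a system like this can go wrong, it helps to know how the underlying approach works: PULSE doesn’t sharpen the pixelated photo directly. It searches the latent space of a pretrained generator (here, StyleGAN) for a high-resolution face that, once downscaled, matches the low-res input. The Python sketch below illustrates that idea under some assumptions; the generator G, the downscale function, and the loop details are hypothetical stand-ins, not PULSE’s actual code, which among other refinements constrains the search to a sphere in latent space.

    import torch

    def pulse_style_upsample(G, downscale, low_res, steps=200, lr=0.1):
        # G: hypothetical pretrained generator (e.g., StyleGAN) mapping a
        # 512-dim latent vector to a high-resolution face image.
        # downscale: a differentiable resize down to low_res's resolution.
        z = torch.randn(1, 512, requires_grad=True)  # random starting latent
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            candidate = G(z)  # a plausible face under the learned prior
            # The only constraint: the candidate must match the input
            # after downscaling. Identity is never part of the objective.
            loss = torch.nn.functional.mse_loss(downscale(candidate), low_res)
            loss.backward()
            opt.step()
        return G(z).detach()

The sketch makes the failure mode concrete: many different high-resolution faces downscale to the same pixelated image, and the optimizer settles on whichever one the generator’s learned prior favors, a prior shaped by the demographics of the FFHQ training data. That is where data bias enters, though, as Gebru’s tutorial argues, data is only part of the story.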
In response to a colleague calling the Obama photo an example of the dangers of AI bias, LeCun asserted that “ML systems are biased when data is biased.” Analysis of a portion of the data set found far more White women and men than Black women, but people quickly took issue with the characterization that bias is about data alone. Gebru then suggested LeCun watch her tutorial, whose central message is that AI bias cannot be reduced to data alone, or explore the work of other experts who say the same.
In the tutorial, Gebru says that evaluating whether an AI model is fair must take more than just data into consideration, and she challenges the computer vision community to “understand just how pervasively our technology is being used to marginalize many groups of people.”
“I think my take home message here is fairness is not just about data sets, and it’s not just about math. Fairness is about society as well, and as engineers, as scientists, we can’t really shy away from that fact,” Gebru said in the tutorial.
There’s no shortage of resources that explain why bias is about more than data. As Gebru was quick to point out, LeCun is president of the ICLR conference, where earlier this year Ruha Benjamin asserted in a keynote address that “computational depth without historic or sociological depth is superficial learning.”
Debate raged on Twitter until Monday, when LeCun shared a 17-tweet thread about bias in which he said he hadn’t meant that ML systems are biased due to data alone, only that in the case of PULSE the bias comes from the data. LeCun finished the thread by suggesting that Gebru avoid getting emotional in her response – a comment many female AI researchers interpreted as sexist.
Many Black researchers and women of color in the Twitter conversation expressed disappointment and frustration at LeCun’s position. UC Berkeley PhD student Devin Guillory, who published a paper this week about how AI researchers can combat anti-Blackness in the AI community, accused LeCun of “gaslighting Black women and dismissing tons of scholarly work.” Other prominent AI researchers made similar accusations.
Gaslighting is an act of psychological manipulation intended to make someone question their own sanity. Gaslighting Black female researchers is especially cruel given how many of them describe colleagues’ failure to cite their work as part of a phenomenon of erasure.
Gebru wasn’t the only Google AI leader to confront LeCun this week. Google AI researcher and CIFAR AI chair Nicolas Le Roux suggested LeCun listen to criticism, especially when it comes from a person representing a marginalized community, and urged him not to engage in tone policing and other tactics associated with maintaining the balance of power. Google AI chief Jeff Dean also urged people to recognize that bias goes beyond data.
Rather than taking Le Roux’s advice, LeCun responded to his criticism with a Facebook post on Thursday championing the opinions of an anonymous Twitter user who says social justice movements will take away people’s ability to engage in constructive discourse.
Later in the day, LeCun tweeted that he admires Gebru’s work and hopes they can work together to fight bias. Facebook VP of AI Jerome Pesenti also apologized for how the conversation escalated and said it’s important to listen to people who have experienced racial injustice. At no point in the series of posts did LeCun appear to engage with Gebru’s research.
All of this is happening as Facebook is days away from the start of a growing economic boycott over its willingness to profit from hate, with supporters ranging from the NAACP to Patagonia. On Thursday Verizon agreed to pull its advertising from Facebook, and on Friday Unilever halted its ad spending on Facebook, Instagram, and Twitter. Shortly thereafter, CEO Mark Zuckerberg announced Facebook will no longer run political ads that assert that people of a specific race, gender, or other group are a threat to people’s safety or survival.
Black former Facebook employees have complained about mistreatment, and controversy has swirled over Facebook’s decision to leave up a Trump post that Twitter labeled as glorifying violence and observers called a racist dog whistle. A Wall Street Journal report last month found that Facebook executives were notified that the company’s recommendation algorithms were dividing people and stoking hatred but declined to act, in part for fear of conservative backlash. Even employees at the Chan Zuckerberg Initiative have said the nonprofit has diversity issues, needs to decide what side of history it wants to be on, and must change how it deals with race.
What’s noticeably missing from LeCun’s assessment of AI bias and Pesenti’s apology Thursday is the role of hiring and building diverse teams. LeCun’s comments come a little over a week after Facebook CTO Mike Schroepfer told VentureBeat that AI bias is generally the result of biased data. Schroepfer went on to champion diversity as a way to mitigate bias, but he could not offer evidence of diverse hiring practices at FAIR. Facebook collects and publicly reports some diversity statistics but does not measure diversity at Facebook AI Research, which LeCun founded in 2013.
A Facebook AI spokesperson told VentureBeat that all employees are required to participate in training to identify personal bias.
It’s unsettling to see someone with as much privilege as LeCun argue technical matters while ignoring the work of a Black colleague at a time when issues of racial inequality have sparked protests of historic size around the world. Those protests are still happening.
Maybe Yann LeCun needs better friends. Maybe he should step away from the keyboard. And maybe, as LeCun argued, that first tweet left out bias beyond data because of the sort of brevity Twitter demands. But it’s worth remembering that LeCun built FAIR, and one analysis last year found that Facebook AI Research has no Black employees.
This story isn’t over. Analysis and opinion about the exchange between Gebru and LeCun may percolate through the wider AI community for a while, and Pesenti promises Facebook AI will change, but something about the series of events and related news points to a systemic problem. If FAIR valued diversity, or if Facebook had a more diverse workforce or made listening to marginalized communities a priority, maybe none of this would have happened. Or at least it wouldn’t have taken nearly a week for Facebook executives to intervene and apologize.
In an article published last month, days before the death of George Floyd, I wrote that there’s a battle happening now for the soul of machine learning, and that part of that work involves building pluralistic teams.
Yann LeCun is one of the most powerful men in the AI community today. He wouldn’t be a Turing Award winner or neural network pioneer if he couldn’t grasp complicated subjects, but carrying on this debate while people demand equal rights in the streets comes off as juvenile. You can describe the Gebru-LeCun episode as sad, unfortunate, and a range of other adjectives, but two things stick with me: 1) AI researchers – many of them Black or women – shouldn’t have to dedicate time to convincing LeCun of established facts, and 2) this was a missed opportunity for a leader to demonstrate leadership.
In his apology to Gebru Thursday, Pesenti said the episode will result in change and education at Facebook. No specifics were offered, but let’s hope that change involves meaningful action, not just words.
For AI coverage, send news tips to Khari Johnson, Kyle Wiggers, and Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.
Thanks for reading,
Khari Johnson
Senior AI Staff Writer

From VentureBeat
Amazon acquires autonomous vehicle startup Zoox
Detroit's fight over policing and facial recognition is a microcosm of the nation
Congress introduces bill that bans facial recognition use by federal government
Boston Dynamics will ship Spot with a robot arm 'in a few months' and for home use 'someday'
Apple's Core ML now lets app developers update AI models on the fly
Autonomous farm robot Burro assists human workers with grape harvest
SqueezeBERT promises faster mobile NLP while maintaining BERT levels of accuracy
DoNotPay's legal bots help consumers fight the system during lockdown
Beyond VB
AI robot cast in lead role of $70M sci-fi film
The U.S. is catching up with China in AI adoption, Kai-Fu Lee says
A robot sloth will (very slowly) survey endangered species
A furry social robot can reduce pain and increase happiness
 