
AI Weekly | June 12, 2020

In a paper titled “The ‘Criminality From Face’ Illusion,” posted this week on arXiv.org, a trio of researchers representing the IEEE surgically debunked recent research claiming that AI can determine criminality from people’s faces. Their primary target is a paper in which researchers claim they can do just that, boasting accuracy as high as 97%.
But the authors representing the IEEE – Kevin Bowyer and Walter Scheirer of the University of Notre Dame and Michael King of the Florida Institute of Technology – argue that this sort of facial recognition technology is “necessarily doomed to fail,” and that the strong claims are primarily an illusory result of poor experimental design.
In their rebuttal, the authors show the math, so to speak, but you don’t have to comb through their arguments to know that claims of being able to detect a person’s criminality from their facial features are bogus. It’s just modern-day phrenology and physiognomy.
Phrenology is an old idea that the bumps on a person’s skull indicate what sort of person they are and what type and level of intelligence they can attain. Physiognomy is essentially the same idea, but it’s even older and infers who a person is from their overall physical appearance rather than the shape of their skull. Both are inherently, deeply racist ideas, used for “scientific racism” and as justification for atrocities such as slavery.
And both ideas have been widely and soundly debunked and condemned; yet they’re not dead. They were just waiting for some sheep’s clothing, which they found in facial recognition technology.
The problems with accuracy and bias in facial recognition are well documented. The landmark Gender Shades work by Joy Buolamwini, Dr. Timnit Gebru, Dr. Helen Raynham, and Deborah Raji showed how major facial recognition systems performed worse on women and people with darker skin. Dr. Ruha Benjamin, author, Princeton University associate professor of African American Studies, and director of the Just Data Lab, said in a talk earlier this year that those who create AI models must take social and historical contexts into consideration.
Her assertion is echoed and unpacked by cognitive science researcher Abeba Birhane in her paper “Algorithmic Injustices: Towards a Relational Ethics,” which won the Best Paper Award at the Black in AI workshop at NeurIPS 2019. Birhane wrote in the paper that “concerns surrounding algorithmic decision making and algorithmic injustice require fundamental rethinking above and beyond technical solutions.”
This week, as protests continue all around the country, the social and historical contexts of white supremacy and racial inequality are on full display. And the dangers of facial recognition use by law enforcement are front and center. In a trio of articles, VentureBeat senior AI writer Khari Johnson detailed how IBM walked away from its facial recognition tech, Amazon put a one-year moratorium on police use of its facial recognition tech, and Microsoft pledged not to sell its facial recognition tech to police until there’s a national law in place governing its use.
Which brings us back to the IEEE paper. Like the work done by the aforementioned researchers in exposing broken and biased AI, these authors are performing the commendable and unfortunately necessary task of picking apart bad research. In addition to some historical context, they explain in detail why and how the data sets and research design are flawed.
Though they do discuss it in their conclusion, the authors do not engage directly with the fundamental moral problem of criminality-from-face research. In taking a technological and research-methodology approach to debunking the claims, they leave room for someone to argue that future technological or scientific advances could make this phrenology and physiognomy nonsense possible. Ironically, their approach carries a danger of legitimizing these ideas.
This is not a criticism of Bowyer, Scheirer, and King. They’re fighting (and winning) a battle here. There will always be battles, because there will always be charlatans who claim to know a person from their outward appearance, and you have to debunk them in that moment in time with the tools and language available.
But the long-running war is about that question itself. It’s a flawed question, because the very notion of phrenology comes from a place of white supremacy. Which is to say, it’s an illegitimate question to begin with.
For AI coverage, send news tips to Khari Johnson, Kyle Wiggers, and Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.
Thanks for reading,
Seth Colaner
Editorial Director

From VentureBeat
Some essential reading and research on race and technology
Researchers find racial discrimination in 'dynamic pricing' algorithms used by Uber, Lyft, and others
Microsoft won't sell police facial recognition until there's 'a national law in place'
Amazon imposes one-year moratorium on police use of its facial recognition technology
IBM walked away from facial recognition. What about Amazon and Microsoft?
Uber researchers investigate whether AI can behave ethically
Beyond VB
IBM gets out of facial recognition business, calls on Congress to advance policies tackling racial injustice
The activist dismantling racist police algorithms
Facial recognition tech developed by Clearview AI could be illegal in Europe, privacy group says
Save 80% on Lingvanex's suite of language translation apps