
Microsoft sounds an alarm over facial recognition tech

December 6 · Issue #260
The Interface
Sophisticated facial-recognition technology is at the heart of many of China’s more dystopian security initiatives. With 200 million surveillance cameras, more than four times as many as in the United States, China’s facial-recognition systems track members of the Uighur Muslim minority, block the entrances to housing complexes, and shame debtors by displaying their faces on billboards.
I often include these stories here because it seems inevitable that they will make their way to the United States, at least in some form. But before they do, a coalition of public and private interests is attempting to sound the alarm.
AI Now is a group affiliated with New York University that counts among its members employees of tech companies including Google and Microsoft. In a new paper published Thursday, the group calls on governments to regulate the use of artificial intelligence and facial recognition technologies before they can undermine basic civil liberties. The authors write:
Facial recognition and affect recognition need stringent regulation to protect the public interest. Such regulation should include national laws that require strong oversight, clear limitations, and public transparency. Communities should have the right to reject the application of these technologies in both public and private contexts. Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance.
The AI Now researchers are particularly concerned about what’s called “affect recognition”: an attempt to identify people’s emotions, and possibly manipulate them, using machine learning.
“There is no longer a question of whether there are issues with accountability,” AI Now co-founder Meredith Whittaker, who works at Google, told Bloomberg. “It’s what we do about it.”
Later in the day, Microsoft’s president, Brad Smith, echoed some of those concerns in a speech at the Brookings Institution:
We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.
In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.
The paper comes a day after news that the Secret Service plans to deploy facial recognition outside the White House. Presumably, what the agency calls a “test” will not stop there:
The ACLU says that the current test seems appropriately narrow, but that it “crosses an important line by opening the door to the mass, suspicionless scrutiny of Americans on public sidewalks” — like the road outside the White House. (The program’s technology is supposed to analyze faces up to 20 yards from the camera.) “Face recognition is one of the most dangerous biometrics from a privacy standpoint because it can so easily be expanded and abused — including by being deployed on a mass scale without people’s knowledge or permission.”
Perhaps Americans’ enduring paranoia about big government will prevent more Chinese-style initiatives from ever taking root. But I can also imagine a scenario in which a populist, authoritarian leader, constantly invoking the twin specters of terrorism and unchecked illegal immigration, rallies popular support around surveillance technology.
It feels like a conversation worth having.

Democracy
The long, tortured quest to make Google unbiased
Facebook fends off new anti-monopoly questions after UK email release
A Mysterious Imposter Account Was Used On Facebook To Drum Up Support For The Migrant Caravan
Top FTC consumer protection official has 120 corporate conflicts of interest
How France’s ‘Yellow Vests’ Differ From Populist Movements Elsewhere
Australian Government Passes Contentious Encryption Law
Elsewhere
Google is shutting down Allo
TikTok, the App Super Popular With Kids, Has a Nudes Problem
Milo Yiannopoulos lasted a single day on Patreon before getting banned
Tumblr’s adult content ban means the death of unique blogs that explore sexuality
Tumblr’s porn ban could be its downfall — after all, it happened to LiveJournal
YouTube creators blindsided by major network’s collapse
FAIR at 5: Facebook Artificial Intelligence Research accomplishments
Launches
FB QVC? Facebook tries Live video shopping
Increasing Ad Transparency Ahead of India’s General Elections
Byte from Vine creators opens creator program, clearly targeting YouTube creators - 9to5Google
Takes
The Facebook emails show the company never really cared about connecting the world.
Foreign Trolls Are Targeting Veterans on Facebook
Finally, here is a thought from Cher on this week’s big document dump:
Cher
Facebook Gave Some Companies Special Access to Users’ Data, Documents Show via @NYTimes
How Long Are We going to Let Zuckerberg Get away With This...
“Aw Shucks,Im Just a Kid”
🐂💩⁉️ https://t.co/QYVvwI7iUE
11:30 AM - 5 Dec 2018
And finally ...
Remembering Tumblr’s wildest community drama
Talk to me
Send me tips, comments, questions, and draft privacy legislation: casey@theverge.com