How AI fails Facebook — and YouTube and Pinterest

April 13 · Issue #119
The Interface
After a long week of hearings, I went on The Vergecast this week to talk at length about Mark Zuckerberg’s testimony. One topic we dug into was the Facebook CEO’s repeated invocation of artificial intelligence as an eventual solution to many of the company’s problems. As AI improves, Zuckerberg says, Facebook will more quickly and accurately identify bad actors, and bad posts, and purge them from the system.
Zuckerberg mentioned AI 33 times over 10 hours of hearings this week. Here he is talking to the House on Wednesday:
We need to rely on and build sophisticated A.I. tools that can help us flag certain content. And we’re getting good in certain areas. One of the areas that I mentioned earlier was terrorist content, for example, where we now have A.I. systems that can identify and — and take down 99 percent of the al-Qaeda and ISIS-related content in our system before someone — a human even flags it to us. I think we need to do more of that.
It’s an appealing idea, at least to Facebook, where it promises to ward off regulation while preserving the company’s profit margins. (It “scales,” in other words.) But AI brings pitfalls of its own, argues Jessica Lessin: it has a terrible time explaining its actions, making it highly vulnerable to charges of bias, and it can’t process new kinds of threats, making it ineffective during an ever-changing information war. Lessin writes:
Imagine Zuckerberg telling a senator that a controversial post was mistakenly pulled because the AI found it offensive—but then not being able to explain why. The ammunition for those who want to prove Facebook is systematically biased could be endless.
There’s a third reason it’s strange to invoke AI as a kind of magic talisman in this moment: the underlying machine-learning technology is arguably responsible for some of the most pressing problems in social media. It’s “AI,” after all, that powers the News Feed, which continues to promote viral hoaxes and misinformation around the world. Recommendation algorithms, which use machine-learning techniques to understand our interests and serve us fresh posts, have wreaked havoc — and the issue goes far beyond Facebook.
Here’s Renee DiResta on a night spent browsing Pinterest:
When a recent disinformation research project led me to a Pinterest board of anti-Islamic memes, one night of clicking through those pins—created by fake personas affiliated with the Internet Research Agency—turned my feed upside down. My babies-and-recipes experience morphed into a strange mish-mash of videos of Dinesh D’Souza, a controversial right-wing commentator, and Russian-language craft projects.
And here’s Craig Silverman this week on a new paper documenting how YouTube algorithms united the far right:
In one example, a user starting on a large mainstream channel such as TEDx ended up being suggested the channel of conspiracy theorist Alex Jones after just three steps of recommendations. In this case, the recommendations on the TEDx channels page listed CNN, which in turn suggested Fox News, which led to Alex Jones.
Kaiser and Rauchfleisch found that channels with little or no political content, such as one that creates video remixes and mashups, saw the algorithm suggest conspiracy channels like Jones’ and accused Holocaust-denier Styxhexenhammer666. 
DiResta continues:
Today, recommendation engines are perhaps the biggest threat to societal cohesion on the internet—and, as a result, one of the biggest threats to societal cohesion in the offline world, too. The recommendation engines we engage with are broken in ways that have grave consequences: amplified conspiracy theories, gamified news, nonsense infiltrating mainstream discourse, misinformed voters. Recommendation engines have become The Great Polarizer.
I’m not being a nihilist here — machine-learning techniques already help keep terrorism and other bad stuff off the internet, and I’m sure they will improve. But it feels a little glib to hold up AI as a savior waiting in the wings while it causes actual harm in the meantime — something tech executives are generally loath to discuss.
Still, it seemed to serve its purpose this week, writes my colleague Sarah Jeong, who calls Zuckerberg’s use of AI “a dodge deployed on a group of laypeople who, for the most part, regrettably swallowed it part and parcel.”
“The point isn’t just that Facebook has failed to scale for content moderation. It’s failed to detect entire categories of bad behavior to look out for — like intentional misinformation campaigns conducted by nation-states, the spread of false reports (whether by nation-states or mere profiteers), and data leaks like the Cambridge Analytica scandal. It’s failed to be transparent about its moderation decisions even when these decisions are driven by human intelligence. It’s failed to deal with its increasing importance in the media ecosystem, it’s failed to safeguard users’ privacy, it’s failed to anticipate its role in the Myanmar genocide, and it’s possible it’s even failed to safeguard American democracy.
Artificial intelligence cannot solve the problem of not knowing what the hell you’re doing and not really caring one way or the other. It’s not a solution for shortsightedness and lack of transparency. It’s an excuse that deflects from the question itself: whether and how to regulate Facebook.”
AI will surely do a lot of good in the years to come. But if the platforms are going to tout its promise, they ought to reflect more on the harm it’s already doing.

An Apology for the Internet — From the People Who Built It
Senators Had a Lot to Say About Facebook. That Hasn’t Stopped Them From Using It.
Many believe Facebook is having a negative impact on society around the world
Russia orders immediate block of Telegram messaging app
Google loses landmark 'right to be forgotten' case
What Hearings? Advertisers Still Love Facebook
Facebook isn’t tapping your microphone
Why Parents Are Fans of Games Like ‘Fortnite’
The Personal Data of 346,000 People, Hung on a Museum Wall
Facial recognition used to catch suspect in crowd of 60,000 concertgoers
The Facebook hearings remind us: information warfare is here to stay
On Privacy
And finally ...
Medium’s latest iOS release notes are all about Facebook CEO Mark Zuckerberg
Talk to me
Questions? Comments? AI-generated responses?