
Facebook puts its money where its moderation is

May 15 · Issue #137
The Interface
Last month I attended a meeting with Monika Bickert, who runs global policy management at Facebook. The company was making its community standards public for the first time, and those standards covered a range of behavior that surprised some of the reporters in the room. One of them expressed shock at the extreme detail the standards go into when describing, for example, sex acts that are prohibited on Facebook.
Bickert, a former federal prosecutor, responded coolly. “The community we have using Facebook and other large social media mirrors the community we have in the real world,” she said. “So we’re realistic about that. The vast majority of people who come to Facebook come for very good reasons. But we know there will always be people who will try to post abusive content or engage in abusive behavior. This is our way of saying these things are not tolerated. Report them to us, and we’ll remove them.”
The question has been the extent to which Facebook really does remove those posts. Today, it offered an answer:
Facebook took enforcement action on 1.9 million posts related to terrorism by Al Qaeda and ISIS in the first quarter of this year, the company said, up from 1.1 million posts in the last quarter of 2017. The increased enforcement, which typically results in posts being removed and accounts being suspended or banned from Facebook, resulted from improvements in machine learning that allowed the company to find more terrorism-related photos, whether they were newly uploaded or had been on Facebook for longer.
Facebook found 99.5 percent of terrorism-related posts before they were flagged by users, it said. In the previous quarter, 97 percent of posts were found by the company on its own. Facebook made the data available as part of its first ever Community Standards Enforcement Report, which documents content moderation actions taken by the company between October and March.
Check out the full report for more data, including stuff like this:
Fake accounts. Of Facebook’s monthly users, 3 to 4 percent are fake accounts, the company said. It removed 583 million fake accounts in the first quarter of the year, down from 694 million in the previous quarter.
Facebook is generally catching more violations of its content policy thanks to a major new investment in the field, Deepa Seetharaman reported today:
The annual budget for some of Facebook’s content-review teams has ballooned by hundreds of millions of dollars for 2018, according to people familiar with the figures. Much of the additional outlay goes to hiring thousands of new content moderators, they said. Facebook says it is hiring 10,000 people—including staffers and contractors—by the end of the year to work on safety and security issues including content review, roughly doubling the total in place this past fall.
Facebook also plucked two executives from its respected growth team to oversee its expansion of content-review operations and to build technical tools that help measure the prevalence of hate speech and track how well its moderators uphold its content rules, the company says. The company outlined some of those measures in a blog post Tuesday.
These are welcome moves. Even the EFF was (mostly) enthusiastic:
“This is a great first step,” said Jillian York of the Electronic Frontier Foundation. “However, we don’t have a sense of how many incorrect takedowns happen – or how many appeals result in content being restored. We’d also like to see better messaging to users when an action has been taken on their account, so they know the specific violation.”
In March, Hunter Walk asked whether the real thing holding tech companies back from addressing abuse on their platforms was a devotion to maintaining “software margins,” the kind of profit margins that get VCs investing in the first place.
Today’s news suggests at least a partial retreat from those margins.

Democracy
Justice Department and F.B.I. Are Investigating Cambridge Analytica
Twitter will hide more bad tweets in conversations and searches
See Which Facebook Ads Russians Targeted to People Like You
Zuckerberg won’t go to UK for data privacy testimony, despite threat of future arrest
Elsewhere
Facebook's big threat isn't Cambridge Analytica, it's advertisers questioning ROI
Facebook launches Youth Portal to educate teens on the platform, how their data is being used
Twitch emotes list: the meaning of Twitch characters, explained
Launches
WhatsApp revamps Groups to fight Telegram
Instagram has an unlaunched “time spent” Usage Insights dashboard
Takes
Can GDPR Create a Better Internet?
My Twitter Crush
Changing your Facebook relationship status still means something in 2018
And finally ...
‘Love Stories of Tumblr’: How the Entire Internet Became a Dating Site
Talk to me
Questions? Comments? Enforcement actions? casey@theverge.com