Facebook has more than 2 billion users, and at that scale comes lots of wrongdoing. The company’s latest quarterly transparency report, which quantifies violations of its community standards, conveys just how much.
In the second and third quarters of 2019, Facebook said it removed or labeled more than 54 million pieces of content it deemed violent and graphic, 11.4 million posts that broke its rules prohibiting hate speech, 5.7 million uploads that ran afoul of bullying and harassment policies, and 18.5 million items determined to be child nudity or sexual exploitation.
The company also detailed for the first time its efforts to police Instagram, revealing that over the past six months it took aim at 1.2 million photos or videos involving child nudity or exploitation and 3 million that ran afoul of its policies prohibiting sales of illegal drugs.
The numbers are all large and growing, which is bad. Even a single incident can cause havoc for the company’s content moderation teams. The Christchurch shooting, which is covered in this quarter’s report, generated 4.5 million pieces of content that Facebook had to remove between March 15th, when it happened, and September 30th.
But Facebook is catching more of these issues via automated systems, which is good. That includes progress made in automatically detecting hate speech — typically the hardest kind of violation for machine learning systems to pick up on, given the nuances of human language. Guy Rosen, Facebook’s vice president of integrity, described Facebook’s progress in a blog post:
Starting in Q2 2019, thanks to continued progress in our systems’ abilities to correctly detect violations, we began removing some posts automatically, but only when content is either identical or near-identical to text or images previously removed by our content review team as violating our policy, or where content very closely matches common attacks that violate our policy. We only do this in select instances, and it has only been possible because our automated systems have been trained on hundreds of thousands, if not millions, of different examples of violating content and common attacks. In all other cases when our systems proactively detect potential hate speech, the content is still sent to our review teams to make a final determination. With these evolutions in our detection systems, our proactive rate has climbed to 80%, from 68% in our last report, and we’ve increased the volume of content we find and remove for violating our hate speech policy.
The faster that Facebook can detect hate speech, drug and weapon sales, child exploitation, and other issues, the likelier it is that the company can alert law enforcement and civil society groups in time to address them. That’s the positive story conveyed in this quarter’s report.
But there’s a darker story, too — one about how often governments compel Facebook to release user data, typically without informing the target, or even shut down service in a country altogether.
Facebook received more government demands for user data in this period than in any other since it published its first transparency report in 2013.
The U.S. government led the way with the most requests: 50,741 demands for user data, which resulted in Facebook handing over some account or user data in 88 percent of cases. Facebook said two-thirds of the U.S. government’s requests came with a gag order preventing the company from telling the user about the request for their data.
Moreover, the report found that 15 countries had disrupted Facebook service 67 times in the first half of the year, compared with nine countries disrupting service 53 times in the previous half-year. Disrupting Facebook service can sometimes be a desperate measure taken by countries worried that fast-spreading hate speech is leading to real-world violence. But more often it serves as a pretext to quash anti-government dissent.
In any case, I appreciate the now-standard transparency reports we get from Facebook, Google, and the other big platforms. (And Facebook offers much more granular information than its peers, as CEO Mark Zuckerberg was quick to point out on a press call about the report.) And yet while they highlight some of the important work done to keep people safe, these reports also illustrate how little recourse people have if they are falsely caught up in a machine-learning dragnet. The appeals process is limited and opaque, and human language and social norms can change faster than machine learning systems can catch up to them.
If what you want from a platform is something like justice, transparency reports are necessary — but not sufficient. The average user still has no way of holding a platform accountable when it makes a mistake.
For that, you might want something like … an oversight board. Here’s hoping Facebook has more to say on that subject soon.