
The limits of Facebook’s transparency report

November 13 · Issue #417
The Interface
Facebook has more than 2 billion users, and at that scale comes lots of wrongdoing. The company’s latest quarterly transparency report, which quantifies violations of the company’s community standards, conveys just how much wrongdoing.
Here’s Tony Romm with a summary at the Washington Post:
In the second and third quarter of 2019, Facebook said it removed or labeled more than 54 million pieces of content it deemed violent and graphic, 11.4 million posts that broke its rules prohibiting hate speech, 5.7 million uploads that ran afoul of bullying and harassment policies and 18.5 million items determined to be child nudity or sexual exploitation.
The company also detailed for the first time its efforts to police Instagram, revealing that it took aim at 1.2 million photos or videos involving child nudity or exploitation and 3 million that ran afoul of its policies prohibiting sales of illegal drugs over the past six months.
The numbers are all large and growing, which is bad. Even a single incident can cause havoc for the company’s content moderation teams. The Christchurch shooting, which is covered in this quarter’s report, generated 4.5 million pieces of content that Facebook had to remove between March 15th, when it happened, and September 30th.
But Facebook is catching more of these issues via automated systems, which is good. That includes progress made in automatically detecting hate speech — typically the hardest kind of violation for machine learning systems to pick up on, given the nuances of human language. Guy Rosen, Facebook’s vice president of integrity, described Facebook’s progress in a blog post:
Starting in Q2 2019, thanks to continued progress in our systems’ abilities to correctly detect violations, we began removing some posts automatically, but only when content is either identical or near-identical to text or images previously removed by our content review team as violating our policy, or where content very closely matches common attacks that violate our policy. We only do this in select instances, and it has only been possible because our automated systems have been trained on hundreds of thousands, if not millions, of different examples of violating content and common attacks. In all other cases when our systems proactively detect potential hate speech, the content is still sent to our review teams to make a final determination. With these evolutions in our detection systems, our proactive rate has climbed to 80%, from 68% in our last report, and we’ve increased the volume of content we find and remove for violating our hate speech policy.
The faster that Facebook can detect hate speech, drug and weapon sales, child exploitation, and other issues, the likelier it is that the company can alert law enforcement and civil society groups in time to address them. That’s the positive story conveyed in this quarter’s report.
But there’s a darker story, too — one about how often governments compel Facebook to release user data, typically without informing the target, or even shut down service in a country altogether.
Zack Whittaker reports on the spiking number of government requests for Facebook user data at TechCrunch. Requests were up 16 percent for the first half of this year, rising to 128,617:
That’s the highest number of government demands it has received in any reporting period since it published its first transparency report in 2013.
The U.S. government led the way with the most requests — 50,741 demands for user data, with Facebook handing over some account or user data in 88% of cases. Facebook said two-thirds of the U.S. government’s requests came with a gag order, preventing the company from telling the user about the request for their data.
Moreover, the report found that 15 countries had disrupted Facebook service 67 times in the first half of the year, compared with nine countries disrupting service 53 times in the previous half-year. Disrupting Facebook service can sometimes be a desperate measure taken by countries worried that fast-spreading hate speech is leading to real-world violence. But more often it serves as a pretext to quash anti-government dissent.
In any case, I appreciate the now-standard transparency reports we get from Facebook, Google, and the other big platforms. (And Facebook offers much more granular information than its peers, as CEO Mark Zuckerberg was quick to point out on a press call about the report.) And yet while they highlight some of the important work done to keep people safe, these reports also illustrate how little recourse people have if they are falsely caught up in a machine-learning dragnet. The appeals process is limited and opaque, and human language and social norms can change faster than machine learning systems can catch up to them.
If what you want from a platform is something like justice, transparency reports are necessary — but not sufficient. The average user still has no way of holding a platform accountable when it makes a mistake.
For that, you might want something like … an oversight board. Here’s hoping Facebook has more to say on that subject soon.

The Ratio
Today in news that could affect public perception of the big tech platforms.
🔼 Trending up: Facebook included Instagram in its transparency report for the first time. The more transparency we get around these things, the better.
🔃 Trending sideways: In a press call related to the report, Mark Zuckerberg stuck by his policy to let politicians lie in ads on Facebook, but said that he’s “continuing to look at how it might make sense to refine it in the future.”
🔽 Trending down: Google fired an employee for leaking information to the press and placed two more on leave for allegedly violating company policies. It’s evidence of rising tensions between management and personnel engaged in employee activism.
Governing
Google reached a settlement with the US National Labor Relations Board to allow more open discussions on campus. The agreement came after former employee Kevin Cernekee filed a complaint last year, alleging the company restricted free speech and fired him for expressing conservative views. Jennifer Elias at CNBC has more:
As part of the arrangement, Google is required to let employees speak with the media about their employment without getting permission, which marks a change for a company that has exercised tight restrictions over conversations with the press.
The company also has to say that it will comply with federal law, allowing employees to form, join or assist a union as well as “act together with other employees” for their “benefit and protection.” Former employees have claimed that they faced retaliation for speaking out about workforce issues, including organizing the companywide walkout last year to protest Google’s handling of sexual harassment.
Mark Zuckerberg took a shot at competitors during a press call related to the company’s latest transparency report. He said that other tech companies aren’t releasing data related to account takedowns, making it difficult to gauge how much harmful content is out there. (Tony Romm / Twitter)
A pro-Trump media network is building a Facebook empire using fake accounts and groups. The strategies are a coordinated effort to amplify partisan content while avoiding the burdensome rules associated with advertising on Facebook. (Alex Kasprak and Jordan Liles / Snopes)
Pro-Trump conservatives are getting trolled at live events by a far-right group pushing an even more conservative message. They call themselves Groypers (a reference to a popular 4chan meme) and try to take over the question-and-answer portion of events with anti-gay, anti-Semitic and racist questions. (Ben Collins / NBC)
Industry
In 2016, Mark Zuckerberg seriously considered buying Musical.ly, the app that would eventually become TikTok. Now, he’s demonizing it to make the case against regulating Facebook. Ryan Mac at BuzzFeed has the scoop:
Sources said the talks were serious, though a deal never materialized. Some 14 months later, Chinese conglomerate ByteDance acquired Musical.ly for around $800 million. It later merged the app with the existing TikTok to form the popular video platform that Zuckerberg has recently been demonizing as a threat to Western tech supremacy.
“Until recently, the internet in almost every country outside China has been defined by American platforms with strong free expression values. There’s no guarantee these values will win out,” Zuckerberg said in a speech last month at Georgetown University. “While our services, like WhatsApp, are used by protesters and activists everywhere due to strong encryption and privacy protections, on TikTok, the Chinese app growing quickly around the world, mentions of these protests are censored, even in the US.”
Facebook ultimately passed on the deal due to privacy and regulatory concerns. The unrealized moment was a missed opportunity to jump aboard a short-video phenomenon that’s gone viral across the US and China. (Sarah Frier and Zheping Huang / Bloomberg)
Australian teens are using TikTok to show the world how bad the bushfires are. The fires have claimed the lives of three Australians and destroyed hundreds of homes, but haven’t been widely reported on internationally. (Cameron Wilson / BuzzFeed)
Google is going to start offering checking accounts to consumers. It’s the latest Silicon Valley tech giant to push into finance, after Apple launched its credit card last summer. Google’s project, code-named Cache, is expected to launch next year with accounts run by Citigroup. (Peter Rudegeair and Liz Hoffman / The Wall Street Journal)
Some of the UK’s most popular health websites are sharing people’s sensitive data — including medical symptoms, diagnoses, drug names and menstrual and fertility information — with dozens of companies around the world, including Google, Amazon, Facebook and Oracle. (Madhumita Murgia and Max Harlow / The Financial Times)
Google executives said the company isn’t misusing health data from one of the biggest US health-care providers, pushing back against news reports that have triggered criticism from lawmakers and prompted a federal inquiry. The company said it’s building a search tool for digital medical records. (Gerrit De Vynck / Bloomberg)
The average price brands pay Instagram influencers for sponsored posts has surged this year, according to a new report. The average cost is now $1,643 per post, and more brands are requesting sponsored stories. (Amanda Perelli / Business Insider)
TikTok recently began running ads on Google targeting people curious about Facebook’s advertising and influencer business. A TikTok spokesperson said the ads are “small tests.” (Shoshana Wodinsky / AdWeek)
Third quarter earnings from Facebook, Twitter, Snap and Pinterest show Pinterest trails behind the other social networks in terms of how much money it makes off overseas users. The company deliberately rolled out international ad sales slowly, which suggests it has the most growth potential. (Tom Dotan / The Information)
And finally ...
This is just a sweet story about a Facebook ad that went viral for the best of reasons, and immediately sold out the product it was selling, bringing untold joy to dogs around the country.
Goodnight.
Talk to us
Send us tips, comments, questions, and transparency reports: casey@theverge.com and zoe@theverge.com.