The Interface

By Casey Newton

"What they’re trying to do is to resolve human nature fundamentally"




August 23 · Issue #193
It’s been a big week for tales of content moderation. On Friday, Radiolab posted a fascinating episode about Facebook’s evolving approach to deciding what stays on the platform. The episode used a series of historical debates inside the company to illuminate the many challenges faced by a company seeking to regulate speech globally.
Today, Motherboard’s Jason Koebler and Joseph Cox posted their own history of content moderation on Facebook, but with an eye turned to the future. Drawing on interviews with Facebook executives, current and former moderators, and academic researchers, Koebler and Cox set out to define the scope of the problem and Facebook’s efforts to wrap its arms around it.
It’s a piece that credits Facebook for the thoughtful work it has done on the subject, while expressing deep skepticism about the project overall. A passage midway through the piece has stayed with me:
Size is the one thing Facebook isn’t willing to give up. And so Facebook’s content moderation team has been given a Sisyphean task: Fix the mess Facebook’s worldview and business model has created, without changing the worldview or business model itself.
“Making their stock-and-trade in soliciting unvetted, god-knows-what content from literally anyone on earth, with whatever agendas, ideological bents, political goals and trying to make that sustainable — it’s actually almost ridiculous when you think about it that way,” Roberts, the UCLA professor, told Motherboard. “What they’re trying to do is to resolve human nature fundamentally.”
Facebook likely wouldn’t say it’s attempting to “resolve” human nature. But it’s true that the company’s efforts all begin from the notion that it can and should connect everyone on earth — despite having little certainty about the consequences of doing so.
The piece draws new attention to the role Sheryl Sandberg has played in content moderation disputes, reporting that Facebook’s chief operating officer does weigh in on difficult calls from time to time. (CEO Mark Zuckerberg does as well.) It examines the difficulty in retaining content moderators, who are tasked with an often terrible job, and — due to an ever-evolving set of standards — must constantly be retrained.
One hate speech presentation obtained by Motherboard has a list of all the recent changes to the slide deck, including additions, removals, and clarifications of certain topics. In some months, Facebook pushed changes to the hate speech training document several times within a window of just a few days. In all, Facebook tweaked the material over 20 times in a five-month period. Some policy changes are slight enough not to require any retraining, but other, more nuanced changes require moderators to be retrained on that point. Some individual presentations obtained by Motherboard stretch into the hundreds of slides, stepping through examples and bullet points on why particular pieces of content should be removed.
Mostly, like Radiolab before it, the piece conveys the enormity of the challenge. The reaction online was largely skeptical. (New York’s Max Read: “All these supposed Nietzsche fans joined together in the unifying project of tediously applying a ‘consistent’ set of rules to all discourse and human relations. Best of luck!!!”)
Perhaps future events will complicate Facebook’s project of connecting everyone. But if they don’t, I return to the idea that the company ought to proceed with Zuckerberg’s idea of a Supreme Court for content moderation. Like I said earlier this week: ultimately, the question of what belongs on Facebook can’t be decided solely by the people who work there.

Yesterday, I promised I’d let you know if the authors of that study on Facebook and refugee violence in Germany got back to me. My question was how they could account for chronology in their study: it seemed impossible, given the data presented, that they could say with certainty that posts on Facebook led to violence, rather than the other way around.
I heard back from one of the authors, Karsten Müller. He made two key points. One, the study takes pains not to say with any certainty that it proves causality. The authors’ exact line is: “The results in this section should be interpreted as purely suggestive and do not allow for causal inference.”
The second point rests on a detailed description of the survey’s elaborate methodology, which relies on developing models of Facebook usage based on interactions on public pages, and then correlating them with instances of anti-refugee violence. Müller told me what allows the authors to make some claims of causation is the study’s use of internet outages to determine when German municipalities had less exposure to Facebook:
As it turns out, we find that the correlation between the interaction of local social media penetration and a measure of anti-refugee sentiments on Facebook on one hand, and hate crimes on the other, appears to vanish in weeks such outages occur. A graph that visualizes these results can be found in the newest version of the paper, available on SSRN (look for “binned scatter plot”). If one wants to claim that social media does not have any propagating effect on hate crimes when tensions are already high (which is what we are measuring), one would need to explain the mediating effect of these outages.
Tyler Cowen, for his part, remains unpersuaded:
Even if internet or Facebook outages do have a predictive effect on attacks in some manner, it likely shows that Facebook is a communications medium used to organize gatherings and attacks (as the telephone once might have been), not, as the authors repeatedly suggest, that Facebook is somehow generating and whipping up and controlling racist sentiment over time. Again, compare such a possibility to the broader literature. There is good evidence that anti-semitic violence across German regions is fairly persistent, with pogroms during the Black Death predicting synagogue attacks during the Nazi time. And we are supposed to believe that racist feelings dwindle into passivity simply because the thugs cannot access Facebook for a few days or maybe a week?
Cowen’s post is worth reading in full. As I mentioned yesterday, much of the confusion around the German study stems from the fact that most of the relevant data is unavailable to us. Anonymized data, shared securely with well-vetted academics, would likely bring us much closer to the truth.
Google deletes accounts with ties to Iran on YouTube and other sites
How FireEye Helped Facebook Spot a Disinformation Campaign
Attempted Hacking of Voter Database Was a False Alarm, Democratic Party Says
Fake news war gets sophisticated before 2018 midterm elections
Memphis police used fake Facebook account to monitor Black Lives Matter, trial reveals
Facebook Bans Quiz App That Captured Data of Four Million Users
Russian Trolls Are Spreading Confusion About Vaccine Safety On Twitter
Tech Giants Are Becoming Defenders of Democracy. Now What?
Suspect who fatally stabbed black man in Pennsylvania 'liked' nearly 50 racist alt-right Facebook pages
Apple removes Facebook Onavo app from App Store
Facebook poaches new CMO Antonio Lucio from HP
Facebook business executive Dan Rose leaves
The News Literacy Project is teaching kids to stop fake news
Venmo Considers Making it Harder to See What Other People Are Buying
Wickr has a new plan for dodging internet blocks
NewsGuard Fights Fake News With Humans, Not Algorithms
The NYTimes shouldn’t have relied so heavily on that Facebook and anti-refugee study.
Can Facebook, or Anybody, Solve the Internet’s Misinformation Problem?
WhatsApp has a fake news problem—that can be fixed without breaking encryption
I Want To Log Off
And finally ...
LinkedIn Message Generator
Talk to me
Send me tips, comments, or questions: Or be like T., the nice young man who approached me at Blue Bottle in San Francisco today to introduce himself and discuss the past week of this very newsletter. Meeting newsletter readers in real life has been one of the great joys of writing The Interface. Say hi sometime!