
Twitter fixes the wrong problem

July 9 · Issue #354
The Interface
Last August, as it inched toward banning Alex Jones from its platform, Twitter invited the New York Times to sit in on a meeting about why it was taking so long. It would later emerge that Jones had already violated the company’s rules at least seven times, but CEO Jack Dorsey still hesitated to pull the trigger. By the meeting’s end, Dorsey had instructed his underlings to create a new policy banning “dehumanizing speech.”
The underlings spent the next year trying to figure out what that meant.
A sweeping draft policy was posted in September. Today, the company unveiled the finished product: an update to its rules on hateful conduct narrowly banning speech that dehumanizes others on the basis of religion. It is no longer kosher to call people maggots, or vermin, or viruses, for keeping kosher. Any existing tweet that breaks the rule will have to be deleted if it gets reported — which has already tripped up Louis Farrakhan — and tweeting dehumanizing anti-religious sentiment in the future could lead to account suspensions or even outright bans.
All of this was a somewhat unexpected outcome: the original Times story had not even mentioned religion. In a new piece, the Times’ Kate Conger says Twitter ultimately decided that religion was the easiest place to start in implementing the policy:
“While we have started with religion, our intention has always been and continues to be an expansion to all protected categories,” Jerrel Peterson, Twitter’s head of safety policy, said in an interview. “We just want to be methodical.”
The scaling back of Twitter’s efforts to define dehumanizing speech illustrates the company’s challenges as it sorts through what to allow on its platform. While the new guidelines help it draw starker lines around what it will and will not tolerate, it took Twitter nearly a year to put together the rules — and even then they are just a fraction of the policy that it originally said it intended to create.
That’s all fine as far as it goes, and yet you can still read it and think — really? Twitter banned saying “Jews are vermin” on a Tuesday in 2019? Even for a company that is notorious for moving at a geologic pace, today’s update feels overdue.
It also feels redundant.
Read the Twitter rules and you’ll see that they already ban “inciting fear about a protected category,” citing the example “all [religious group] are terrorists.” They also ban “hateful imagery,” including swastikas. And yet, as most Twitter users will tell you, vicious anti-Semites and open Nazis still appear in the timeline all too often, to the point that Jack Dorsey spent much of his winter podcast tour taking questions about Nazis’ durable presence on the service.
New policies will always be needed to account for the ever-evolving nature of human speech and shifting cultural norms. But they will never be sufficient to keep users feeling safe. Far more important is that the policies are actually applied.
The Times story does include comments from Twitter about how it will train its force of content moderators to apply the new rules. And the company has begun reporting high-level data about its enforcement activities, giving us a sense of the scale of the problem that Twitter faces.
The most recent such report found that Twitter users reported 11 million unique accounts between July and December 2018, up 19 percent from the previous reporting period. And yet Twitter took action against just 250,806 accounts — which was down 4 percent from the previous period.
The data doesn’t get any more granular than that, so it’s impossible to judge the efficacy of Twitter’s moderation from the report alone. But do the arithmetic and Twitter took action on roughly 2 percent of the accounts that users reported. The numbers suggest that Twitter users’ frustration with the product greatly exceeds moderators’ willingness, or ability, to do anything about it. Viewed that way, Twitter doesn’t have a problem writing policies; it has a problem acting on them.

Democracy
The true origins of the Seth Rich conspiracy theory
President Trump can’t block his critics on Twitter, appeals court rules
Amazon Workers Plan Prime Day Strike Despite Wage Pledge
FTC Said to Ask About Disabling YouTube Ads for Kids’ Privacy
As Cameras Track Detroit’s Residents, a Debate Ensues Over Racial Bias
Exclusive: The Harvard professor behind Facebook’s oversight board defends its role
Why China's Social-Credit Systems Are Surprisingly Popular
Elsewhere
Facebook Diversity: Looks to Double Female Workforce by 2024
Facebook’s ex-security chief on disinformation campaigns: 'The sexiest explanation is usually not true'
Mark Zuckerberg's security chief Liam Booth leaves after misconduct allegations
Exclusive Investigation: Sex, Drugs, Misogyny And Sleaze At The HQ Of Bumble’s Owner
GitHub Removed Open Source Versions of DeepNude
When The Times First Says It, This Twitter Bot Tracks It
Launches
Facebook says it will launch experimental apps under NPE Team name
Facebook is trying to entice creators with more monetization options
YouTube is making it easier for creators to deal with copyright claims
How to run a small social network site for your friends
Takes
If You’re Going to Brag About Trolling, You Should at Least Be Good at It
And finally ...
Juggalo Makeup Blocks Facial Recognition Technology
Talk to me
Send me tips, comments, questions, and photos of you as a Juggalo: casey@theverge.com.