🆕 More moderation excuses, Twitter's transparency report and 'middleware' barriers

Everything in Moderation
Welcome to Everything in Moderation, a weekly newsletter about the policies, products, platforms and people that shape content moderation and online speech. It’s curated and produced by me, Ben Whitelaw.
This week has seen the biggest uptick in subscribers since I started writing EiM in 2018, which is really exciting. I’m guessing it had something to do with the thread I wrote about the online abuse of England’s football players, although you never really know.
New subscribers from Areto Labs, the University of Bristol, Twitter, the Wall Street Journal, DCMS, TaskUs, TikTok and elsewhere — I’d love to know how you got here. And, whether you’re a longtime recipient or fresh this week, thanks for letting me occupy a space in your inbox.
I got a positive COVID-19 result this morning (reminder: stay safe and get tested) so today’s update is a little later than usual. Here’s what you need to know…

📜 Policies - emerging speech regulation and legislation
The independent-but-Facebook-funded Oversight Board has published its first transparency report, giving more background on the cases it has taken up and detailing exactly where the recommendations from the 20-person board have got to. Broadly speaking, the 32-page document is full of good intentions — the word ‘commit’ appears no fewer than 40 times — but, as Mnemonic’s Dia Kayyali points out, there are a lot of difficult questions that remain untouched.
Middleware — that is, curation services that could give users more control over the platform content they see — is a “path forward in a neighborhood full of dead ends”, according to a new essay by Stanford professor Daphne Keller. The former Google general counsel also notes four issues that need resolving before the idea can properly take hold, including the time and cost of compiling a combined “celestial reference book” of all platform issues ever. I love the term and will be returning to it in the future.
💡 Products - the features and functionality shaping speech
Discord has announced that it has bought Sentropy, an AI company whose software is designed to flag and remove online hate, and will integrate its technology into Discord’s own tools and products.
The company had initially launched last year with two enterprise solutions — Detect, an API-driven abuse detection tool, and Defend, a browser-based interface for mods — before also launching a consumer-facing filtering tool called Protect in February (now withdrawn). The combination of platform- and user-focused tools is likely to have attracted Discord, which lets server admins set and manage their own safety policies.
💬 Platforms - efforts to enforce company guidelines
The major platforms have all fared badly in light of the racist abuse directed towards England’s players but my reading is that Instagram has come off worst. The BBC ran a piece about its “moderation mistakes” after technology reporter Cristina Criddle reported orangutan emojis on Bukayo Saka’s profile, only to be told it “probably doesn’t go against our guidelines”. Considering that 29% of abusive messages sent to professional footballers are emoji, according to a 2020 study cited in this Bloomberg piece, Instagram — and others — should have seen this coming (EiM #10).
[Fwiw, I exchanged tweets with Adam Mosseri, head of Instagram, and asked whether he would consider no longer using ‘technical glitch’ (something I wrote about in EiM #67) as an excuse. I’m still waiting for a response.]
Screengrab from BBC coverage with edits
Here’s some news we may look back on with regret: automated moderation is coming to TikTok. Until now, the video platform has used human moderators to pull offending content but it will soon start weeding out nudity, graphic content, illegal activities and other violations without mod intervention. Its false positive rate is a whopping 5% which, as The Verge points out, could lead to “tens or hundreds of thousands of videos pulled by mistake”. Yikes.
The Guardian has published a friendly piece about Reddit’s community moderation ethos to mark the platform opening an office Down Under. Chief Operating Officer Jen Wong said the decision to let users “write their own rules and enforce those rules” was “unique and different”. If you’ve been following closely, Reddit is on a run of positive moderation-related PR (EiM #118).
I haven’t properly delved into Twitter’s latest transparency report, featuring data from July to December 2020, but there are some sharp rises in hateful content and in the number of accounts actioned. The aforementioned Daphne Keller also points out a rise in the number of government takedown requests, just 29% of which were valid according to Twitter.
👥 People - folks changing the future of moderation
I wish I could justifiably put the whole England team forward here but sadly, the team’s white players have lagged behind their black counterparts, and in particular Tyrone Mings, in speaking out against racist online abuse.
The Aston Villa defender tweeted that the abuse was something “that sickens, but doesn’t surprise me”, but he is no stranger to talking about online abuse and racism generally: in April, he tweeted about his experience of being racially abused on Instagram and, as David Allen Green points out, he took part in the protests following the death of George Floyd.
What was powerful was that Mings didn’t call out platforms, name execs or identify anonymity as the root problem; his message was strong enough that anyone taking note — and the dominant digital platforms will have done — should be spurred into action.
🐦 Tweets of note
  • “WARNING: Dangerous to speech” - Graham Smith, better known as @cyberleagle, imagines what label the Online Safety Bill would carry if legislation were subject to a duty of care.
  • “I am currently looking for Indigenous participants (18+) for my research on online violence” - Dr Bronwyn Carlson embarks on an important research project on the experience of online violence.
  • “They have been flagging comments for 12+ hours.” - Tech reporter Ryan Mac on the fallout at Facebook as a result of the online abuse towards Marcus Rashford, Bukayo Saka and Jadon Sancho.
Everything in Moderation is a weekly newsletter about content moderation on the web and the policies, products, people and platforms that make it happen. It is written by journalist Ben Whitelaw and supported by loyal subscribers like you.
If you value the newsletter and want to help cover its costs, you can contribute here. Thanks for your support.