
🆕 Moderating your chat-up lines, QAnon's disappearance and all hail Hannah

Everything in Moderation
Welcome to Everything in Moderation, your weekly newsletter about the policies, products, platforms and people shaping content moderation. It’s curated and produced by me, Ben Whitelaw.
New subscribers from Google, the Hertie School and elsewhere, thanks for joining the team. If you enjoy EiM, share it with your colleagues, connections and industry peers to spread the word.
Let’s get on with what happened in content moderation this week.

📜 Policy and regulation - company guidelines and state laws
Worrying developments once again in India, where police served Twitter with a notice this week seeking an explanation of why a government spokesperson’s tweet was tagged as “manipulated media”. EiM subscribers will know that tensions have been brewing between India’s ruling party and tech platforms since last year (EiM #65) and show no sign of calming down any time soon.
The Oversight Board’s latest judgment has overturned another Facebook decision — a comment from an Alexei Navalny supporter initially removed for bullying and harassment — because “Facebook’s rules are contrary to international human rights”. As far as reasons go, it’s a pretty good one.
The Online Safety Bill (covered extensively in EiM #112) is “far worse than any of us imagined it would be” according to Open Rights Group policy manager (and EiM subscriber) Heather Burns in an op-ed for politics.co.uk. Go for the intro, stay for the indignation.
I love featuring the work of EiM subscribers here - if there’s anything you’re working on or a paper or article that you’ve recently published, drop me an email. 
Photo via Wikimedia/YouTube/Навальный LIVE
💡 Products - features and functionality
Tinder announced this week that, since January, it has been scanning private messages for inappropriate language and sending prompts to both senders and recipients of creepy messages. The change has huge privacy implications that I’ll leave experts to dig into but, from an online abuse perspective, the results are interesting:
  1. The “Are you sure you want to send this?” prompt saw a 10% decrease in inappropriate messages sent by those users.
  2. The “Does this bother you?” message, which triggers a step-by-step guide to reporting a user, saw a 46% rise in reports in the month after it debuted.
The downside, as this Quartz article notes, is that swipers aren’t warned upfront that their messages are being processed this way, and it’s impossible to opt out. Safety, but at what cost?
💬 Platforms - dominant digital platforms
New research from the Atlantic Council’s Digital Forensic Research Lab shows that mentions of QAnon on social networks have all but disappeared as a result of removals by Facebook and Twitter following the January 6 riots (EiM #82). Interestingly, the report also notes that conspiracy fans didn’t migrate to Parler or Gab as expected, in part due to the silence of the mysterious Q since the start of the year. As Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights and EiM subscriber, notes: “Concerted moderation works”.
Thomson Reuters Foundation’s Maya Gebeily has written a good report on the moderation and speech tensions playing out online in the Israel/Gaza conflict. Worth a read, especially if your interest was piqued by last week’s newsletter (EiM #113).
👥 People - those shaping the future of content moderation
I try to make a point of not lauding reporters and editors for doing their job. At the same time, the standard of reporting on content moderation and related issues is so often basic that it’s important to call out good work when it happens (see Casey goes solo, EiM #45).
Hannah Murphy, the Financial Times’ tech correspondent, is one journalist who deserves a nod for her careful coverage of how San Francisco’s platforms are dealing with online speech globally. Her latest piece on the imperfect nature of the Oversight Board and its positioning as the “least worst option” cites sources and raises questions that online speech watchers are not always accustomed to, let alone FT business types. Something for other publications to aim for.
🐦 Tweets of note
  • “She calls on Silicon Valley to pay attention” - Justin Hendrix notes what it’s like to be in the middle of the Muslim-Hindu online clashes that are increasingly widespread in Indian politics.
  • “I did the analysis and no change was detectable in abuser behavior” - Engineering manager Roja highlights the condescending tech bro culture that contributes to platforms failing to deal with abuse. Must-read thread.
  • “A handy visual guide” - Graham Smith wins no points for design but scores highly for content with this handy diagram of Online Safety Bill provisions.
Everything in Moderation is a weekly newsletter about content moderation on the web and the policies, products, people and platforms that make it happen. It is written by journalist Ben Whitelaw and supported by loyal supporters like you.
If you value the newsletter and want to help cover its costs, you can contribute here. Thanks for your support.
Ben from Everything in Moderation

If you were forwarded this newsletter and you like it, you can subscribe here.