
Everything in Moderation
🆕 Transparency is not a physical place
Hello everyone and a warm welcome to new subscribers from the Financial Times, Deutsche Welle and Khoros. Sending elbow bumps to you, and to affected moderators all over the world, during this difficult and contagious time. I’ve included a special COVID-19 section this week to guide you to relevant virus-related moderation stories.
As part of going freelance, I’ve made my calendar open to anyone who wants to have a chat (about work opportunities or otherwise). If you’re self-isolating or want to discuss EiM/moderation, put some time in — I’d love to find out more about you and what you’re working on.
Onto this week’s newsletter.
Thanks for reading,
Ben

🚖 Taken for a ride?
It’s probably the closest we’re going to get to a content moderation theme park. TikTok this week revealed it will open a transparency centre in its LA offices where the public can examine its moderation practices and watch how its team operates.
As the announcement sets out, the centre will:
operate as a forum where observers will be able to provide meaningful feedback on our practices.
Although it is focused primarily on attracting outside experts and policymakers, anyone can walk in off the street in theory (it’s unclear at this stage whether there will be queues or height restrictions). The centre will open in early May.
Although it’s the first time that a company has done this for content moderation, it isn’t a new tactic for Chinese tech companies under pressure from regulators. Huawei opened a similar bricks-and-mortar centre in Brussels in early 2019 to ‘facilitate communication between Huawei and key stakeholders on cybersecurity strategies and end-to-end cybersecurity’. 
This week saw another slew of headlines about TikTok’s content policies — Wired reported that users were being shown pro-anorexia content — so any oversight of how its algorithm and human moderators surface content is welcome.
However, let’s not be lulled into thinking this is a big step forward. It is tech blog fodder at best and regulator seduction at worst. Transparency is not a place; it is a process. It is seen in removal explanations, openness about algorithmic decision-making, localised moderators and a regular review of the takedown data that Facebook, Twitter, YouTube and Reddit now report publicly.
Anything less is a sham.
⛔️ Banned but for what?
Mohamed El Dahshan, an economist and writer tweeting critically about the Egyptian government, was banned from Twitter for 11 days for, wait for it, profanity. Considering that most of the site is swear words, I found that surprising. His thread explains what happened and why there is cause for concern from a moderation perspective. 
Mohamed El Dahshan
I also learned a ton of fun facts, from common friends, about *some* of the nice folks in the MENA team and the US-based Arabic content moderators:
Let's say, their politics are, umm, not very pro-freedom of expression.
Or anything that's anti gov or status quo really.
> https://t.co/jZQMCnwtkW
🏥 Public health platforms?
If we started off 2020 thinking that the US election was going to pose the biggest content moderation challenge of this year, it was because we had no idea that coronavirus was coming.
Here are a few stories from this week related to the spread of information about the COVID-19 virus:
Anything I’ve missed here? Hit reply and I’ll include it in next week’s edition.
⏰ Not forgetting...
Jonathan Zittrain (whose work I’ve linked to before here) outlines his Rights vs Public Health era framing in relation to Facebook’s recent white paper on content moderation (EiM #52 - Facebook is pro thresholds) and advocates for a new Process era. Worth a read.
I was going to write about The Bristol Post highlighting trolls on its Facebook page (and I still might), but good friend Adam Tinworth has written about it so succinctly that I don’t know if I need to.
It is only right that the first use of Twitter’s manipulated media labels was reserved for President Trump. 
Twitter flags video retweeted by President Trump as ‘manipulated media’
A timely op-ed over on OpenGlobalRights argues that Facebook talks a good game when it comes to human rights but does little else.
Facebook’s new recipe: too much optimism, not enough human rights | OpenGlobalRights
Chinese AI moderation systems can be bought relatively cheaply and could perpetuate censorship around the world, reports the WSJ.
Made-in-China Censorship for Sale - WSJ
Everything in Moderation is a weekly newsletter about content moderation on the web and the policies, people and platforms that make it happen.
If you like what you’re reading and want to help me cover the costs of Everything in Moderation (£7 a month for Revue plus my time), why not get me a Ko-Fi :) (16 and counting). Thanks! - @benwhitelaw
Ben from Everything in Moderation
Sign up at everythinginmoderation.co
