Everything in Moderation
🆕 Indian censorship, leaked Facebook report and new transparency centre
Welcome to Everything in Moderation, your weekly newsletter about content moderation by me, Ben Whitelaw.
My goal is for EiM to be the must-read weekly digest on content moderation and I’m always glad when subscribers tell me they make time for the newsletter. But I also want to hear if there are improvements you’d like to see.
Drop me a line, message me on Twitter or, on the off-chance you’re completely satisfied, share it with your networks or support the newsletter financially.
On to this week’s news.

📜 Policy - company guidelines and speech regulation
The Indian government this week ordered Twitter to withhold some posts about Covid-19 in the name of maintaining public order and protecting “the sovereignty and integrity of India”. As many as 100 tweets, including some from politicians, actors and journalists, were restricted under the country’s Information Technology Act of 2000, and they had one thing in common: they were critical of Prime Minister Narendra Modi and his government’s handling of the pandemic. Around the same time, Facebook also hid posts with the #ResignModi hashtag, which the company later said was a mistake (EiM #67).
Long-time readers of EiM know India has a fractious history when it comes to online speech (EiM #65) and recent stories of policy staff at the platforms allegedly giving favourable treatment to ruling politicians should be cause for worry (EiM #85).
There is an upside to this, though: the story only came to light because Twitter disclosed the government order on the Lumen database, a Harvard University project which logs up to 40,000 platform takedowns a week and allows researchers and journalists to better understand the sources of removals. I expect it to be used a lot more after this.
💡 Products - features and functionality
This week’s Senate Judiciary subcommittee hearing was meant to be a more substantive discussion about how the dominant digital platforms amplify harmful content via algorithms. I shouldn’t have got my hopes up. Lauren Culbertson, head of U.S. public policy at Twitter; Alexandra Veitch, YouTube’s director of government affairs and public policy; and Monika Bickert, VP for content policy at Facebook, mostly batted back criticism and pointed to public information about how their algorithms work. There’s always next time, I guess.
Here’s one that I missed last week: Hive, which claims to have developed “human-like interpretation” of images and text, has raised $85 million in Series D funding to expand its tools into new languages and develop the APIs that its clients, including Reddit and Chatroulette, use.
💬 Platforms - dominant digital platforms
Probably the biggest story of the week has been the leaked Facebook report about its role in the January 6 insurrection, which was published in full by BuzzFeed News. The report, produced by an internal working group, notes how the company’s enforcement against Stop the Steal was ‘piecemeal’ because it failed to spot tell-tale signs of a coordinated approach (for example, 30% of invites came from just 0.3% of group inviters). I read this and felt I was looking into the sad, black soul of the biggest platform on the planet.
First in Los Angeles, now in Dublin: TikTok announced that it will open a European version of its Transparency Centre in the Irish capital sometime in 2022 (EiM #55). Before then, it will run virtual tours as it has been doing in the US since the start of the pandemic. I, for one, am looking forward to watching some content review operations over a Guinness sometime next year.
Photo courtesy of Flickr/Jim Nix
👥 People - those shaping the future of content moderation
Despite founding the web’s largest encyclopedia and owning a social network with 500k users, Jimmy Wales doesn’t talk very often about online speech or moderation (I found this podcast from December 2020 but not a lot else).
So, while he’s not an expert in the way that many others I’ve featured in this slot are, he does know a thing or two. And this interview he did with Yahoo News this week raised one or two interesting points. In particular:
“Our fundamental premise has always been, we’re here to write an encyclopedia. That defines everything that we do. It defines the kinds of conversations we have, the kinds of behavioural rules we have. We don’t have a little box that says, type here whatever you think”
Maybe our problem, in part, is with the little box? There’s a thought.
🐦 Tweets of note
  • “Regulate us!” - Jay Rosen, NYU journalism prof, charts the wild ride that platforms have been on over the last decade.
  • “I just think the tradeoff needs to be acknowledged and understood, particularly by external policymakers and pundits.” - Facebook data scientist Colin Fraser handily explains the precision-recall tradeoff.
  • “To be a meaningful metric, they need to give us specifics” - Becca Lewis, wearer of many hats including Stanford PhD candidate, takes umbrage with YouTube’s borderline content pronouncements in this thread.
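For readers less familiar with the precision-recall tradeoff Fraser is talking about, a toy example makes it concrete. The sketch below is mine, not from his thread, and all the scores and labels are made up: raising a moderation classifier’s decision threshold means fewer benign posts get wrongly flagged (higher precision), but more genuinely harmful posts slip through (lower recall).

```python
def precision_recall(scores, labels, threshold):
    """Precision and recall when flagging every post scoring >= threshold.

    scores: model confidence that a post is harmful (0.0-1.0)
    labels: ground truth, 1 = harmful, 0 = benign
    """
    flagged = [label for score, label in zip(scores, labels) if score >= threshold]
    tp = sum(flagged)               # harmful posts correctly flagged
    fp = len(flagged) - tp          # benign posts wrongly flagged
    fn = sum(labels) - tp           # harmful posts missed
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

# Hypothetical model outputs: 1 = genuinely harmful, 0 = benign
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.5, 0.3, 0.2]

for t in (0.3, 0.5, 0.8):
    p, r = precision_recall(scores, labels, t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```

With these made-up numbers, the lowest threshold catches every harmful post but flags several benign ones, while the highest flags only harmful posts but misses half of them. This is exactly the tradeoff Fraser wants policymakers to grasp: you can’t demand platforms maximise both at once.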
Everything in Moderation is a weekly newsletter about content moderation on the web and the policies, products, people and platforms that make it happen. It is written by journalist Ben Whitelaw and supported by loyal subscribers like you.
If you value the newsletter and want to help cover its costs, you can contribute here. Thanks for your support.
Ben from Everything in Moderation

A weekly newsletter about the policies, products, platforms and people shaping content moderation, now and in the future.

Sign up at everythinginmoderation.co
