Everything in Moderation
🆕 The 'malign creativity' of trolls, Modmail fail and going Birdwatching
Welcome to Everything in Moderation, your weekly newsletter about content moderation, packaged and posted by me, Ben Whitelaw.
Hello to new subscribers from Internet Lab, Delanyco, Al Hudood and Tencent. Long-term recipients: if you’re a regular reader, don’t forget to forward EiM to friends and colleagues. And thank you for your ongoing support.
I’ve read dozens of articles and tweets this week on everything from the Oversight Board to online gender abuse. Here’s what you might have missed.

📜 Policies - company guidelines and speech regulation
We know the introduction of new speech legislation could impact free speech. But what does that mean in practice? Justitia, a Danish judicial think tank, has looked into the duration of national legal proceedings of recent hate speech cases in five countries considering passing new laws.
It found that it took states between 393 and 1,273 days to rule on such cases, and subsequently warned that:
“When governments impose intermediary liability on private platforms through laws prescribing punishments for non-removal, platforms are essentially required to assess the legality of user content as national authorities.”
China’s Cyberspace Administration has followed up a significant update to its internet regulation (EiM #95) with new rules that specify how public individuals and organisations must behave on social media. The new guidance, which prohibits ‘extreme emotions, plagiarism, cyberbullying, blackmailing’ among other things, is expected to force WeChat and Weibo to increase the size of their moderation teams. The rules come into force on 22 February. 
💡 Products - features and functionality
Word-banning filters, improved image recognition and a larger moderation team: these are the product improvements that Wimkin, a social network dedicated to free speech, has added after its ban from the Apple and Google app stores over incidents of “gratuitous violence or other dangerous activities.” It follows the removal of Parler from both stores after the riots at the Capitol on January 6. It won’t be the last: Clapper, a TikTok clone which claims to have ‘zero tolerance’ for violence but no discernible moderation strategy, looks likely to be next.
AI moderation software is ineffective in the face of trolls adopting ‘malign creativity’ to spread gendered and sexualised disinformation narratives about female politicians, according to a new paper. The research tracked 13 narratives across six platforms between September and December 2020 and found that the likes of US Vice President Kamala Harris were subject to up to four posts a minute of undetectable hate speech.
💬 Platforms - dominant digital platforms
Four overturned decisions (one upheld), seven days to reinstate the removed content and nine recommendations that must be responded to within 30 days: that’s the outcome of the first cases from the Oversight Board, the group of experts funded by Facebook to arbitrate moderation decisions.
Twitter this week announced Birdwatch, a US-only pilot which will allow users to add notes to tweets to give greater context. Although designed as a means of combating misinformation, the structured data gathered from questions and ratings provides more inputs for flagging systems and opens up the potential for more wiki-esque moderation structures.
I’m sure you’ve heard about GameStop and the carnage inflicted on the stock market by Redditors from r/WallStreetBets. Well, the posting volumes and traffic also caused some of Reddit’s tools, including Modmail, to fail, according to Mashable.
Source: Flickr/Bryan Ledgard
👥 People - those shaping the future of content moderation
There are lots of academics, policy folks and human rights experts in content moderation (I’ve put together a Twitter list of over 140 experts and would love more suggestions).
Benedict Evans, the tech analyst and part-time venture partner, has touched on moderation regularly in his annual trends presentations and comes at the issue from a technological and more media-focused angle than others.
This week Ben did an online chat with Columbia Journalism Review’s Mathew Ingram and, among other things, questioned whether Cloudflare blocking content was like “Barnes & Noble refusing to sell James Joyce” or, perhaps, Mein Kampf. Great question; answers on a postcard.
🐦 Tweets of note
  • “It is not enough to focus only on illegal and harmful content but also on the biz model of the platforms like amplifying certain contents”: Privacy and data ethics expert Pernille Tranberg quotes Marietje Schaake (who incidentally wrote for the Financial Times this week).
  • “Here’s the thing: there are two sides of the same coin.” - Evan Hamilton, Reddit’s director of community, on the false distinction between curation and moderation (Side note: Evan is on a podcast released this week that I plan to get round to listening to soon).
  • “The board’s principal achievement will be to build pressure to write better content rules” - Former journalism professor George Brock gives his first impression on the Oversight Board’s first cases.
Everything in Moderation is a weekly newsletter about content moderation on the web and the policies, products, people and platforms that make it happen. It is written by me, Ben Whitelaw, and supported by loyal subscribers like you.
If you value the newsletter and want to help cover its costs, you can contribute here. Thanks for your support.
Ben from Everything in Moderation

A weekly newsletter about the policies, products, platforms and people shaping content moderation, now and in the future.

If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.