
Everything in Moderation
🆕 Why regulation is hard, Pao on Reddit's mod efforts and #RightsCon roundup
Welcome to Everything in Moderation, your weekly newsletter on how content moderation is changing the world, now and in the future. It’s curated and produced by me, Ben Whitelaw.
I didn’t manage to register in time for #RightsCon — the digital rights jamboree that finishes today — but I’ve made myself feel better about it by including the best tweets at the bottom of today’s newsletter.
Welcome to new subscribers from Facebook, CNN, Invision and elsewhere. If you enjoy the newsletter, forward it to one lucky colleague to gain kudos/co-worker points.
Here’s your weekly roundup of news and must-reads.

📜 Policies - company guidelines and speech regulation
A very readable essay by Stanford professor Daphne Keller, published this week, examines potential legal models for limiting amplification in order to reduce harm. In it, the former Google associate general counsel notes the difficulty of creating laws that restrict the distribution of content (search rankings, recommendation systems etc) because i) they face the same strict First Amendment scrutiny as laws banning content outright and ii) they would make users’ experience much worse. I had recently come round to the idea that harm caused by amplification was the least challenging problem to solve, but Keller has reminded me that nothing is simple.
Following criticism from the Oversight Board about the slapdash way that it banned Donald Trump, Facebook announced this week that it would suspend the former US president’s account for two years. The news didn’t get as much coverage as the initial judgement but is notable because Trump will be “subject to enhanced penalties” if he missteps after returning in January 2023, including facing a permanent ban. And Facebook even has a nice graphic with pretty colours to make its new policy seem authoritative and real.
If you have time, check out Lawfare’s expert-filled podcast about what this means for the Oversight Board.
💡 Products - features and functionality
In-app notifications about posts that were taken down are one way Instagram plans to be more transparent about moderation decisions, according to a new blog post published this week. Although light on detail, Adam Mosseri — head of the photo-sharing app — promises more explanation about its algorithm and takedown rules so people can “better understand what’s going on.” I’ll believe it when I see it.
Photo via Flickr/Techcrunch (with edits)
This piece from Wired looks like it is about the battle for email newsletter supremacy but it’s actually about Substack’s and Ghost’s divergent policies on moderation. It comes on the back of a mini-exodus from the newsletter subscription platform following some muddy announcements about its speech guidelines (EiM #93). As one of the people quoted explains, “If you’re going to have a policy, you should actually enforce it”.
💬 Platforms - dominant digital platforms
I hate hitting Facebook over the head every week in this section but it’s impossible to ignore the glut of stories about the poor enforcement of its guidelines outside the US.
For a shot of something more positive, listen to Clara Tsao, co-founder of the Trust & Safety Professional Association (TSPA), on how it is bringing together trust and safety professionals to combat some of the challenges mentioned in EiM every week.
👥 People - those shaping the future of content moderation
Ellen K. Pao’s credentials are rock solid. She was Reddit’s CEO when it banned unauthorised nude photos (EiM #69) and last year called out the platform and its current CEO Steve Huffman for not banning r/The_Donald, which it eventually did.
What I didn’t know was the abuse Pao herself received when she removed the five revenge-porn subreddits. She talks to the New York Times’ Kara Swisher about her colleagues being doxxed, the countless memes that were upvoted to the front page and the need for 24/7 security. It sounds truly awful and yet she’s magnanimous about the whole thing. And her view on Reddit now? “A mess”, just like the other platforms.
🐦 Tweets of note (via #RightsCon)
  • “Sorry to the white guys developing the natural language processing - it’s not working!”: Digital Action campaigner Bissan Fakih quotes wise words from Dia Kayyali.
  • “Rights have to be carefully balanced and no one has been really successful at this”: Canadian Digital Service’s Michael Karlin helpfully livetweets a great session with Twitter’s Vijaya Gadde and Berhan Taye from Access Now.
  • “Come join a badass discussion about the spread of health misinformation on social media”: you’ve still got time to join this session featuring Meedan’s Kat Lo at 5pm BST.
  • Bonus tweet: “It’s a community moderator role, but, really, you’ll be helping us build the future of news”: great role going with Bassey Etim and a great team at CNN.
Everything in Moderation is a weekly newsletter about content moderation on the web and the policies, products, people and platforms that make it happen. It is written by journalist Ben Whitelaw and funded by loyal supporters like you.
If you value the newsletter and want to help cover its costs, you can contribute here. Thanks for your support.