Everything in Moderation
🆕 ‘Technical glitch’ is no longer an excuse
It’s been the kind of week that has made me think a newsletter on content moderation isn’t what the world needs right now. I’ve found it hard to motivate myself, which is why today’s edition is dropping into your inbox later than usual.
At the same time, I recognise that the issues of free speech, online abuse and platform regulation are at the heart of the George Floyd protests and the political response to them, both in the US and elsewhere. It doesn't make any sense to stop highlighting the inconsistencies of online platforms and their guidelines and policies now; if anything, that scrutiny is needed more than ever. So here it is, your weekly content moderation roundup.
Stay safe and thanks for reading, 
Ben
PS I’m looking to interview EiM subscribers over the coming weeks — read on for more details…

🔎 Platforms should investigate, not just apologise
Enough was already happening in the US this week even before a tweet went viral accusing TikTok of censoring the #blacklivesmatter hashtag.
damian•bIm
TikTok blocked the #blacklivesmatter tag and every single tag related to #GeorgeFloyd.
This is disgusting.
Fuck TikTok. https://t.co/HVDCFNuVyc
This is the very same video app, don't forget, that has been heavily criticised for allowing racist users and content to propagate on its platform (EiM #61) and for lacking transparency about its moderation processes. Now millions of users were seeing 0 views for hashtags relating to the George Floyd protests and, perhaps understandably, presuming the worst.
The reality was different. All hashtags, as TikTok pointed out, were showing 0 views as a result of a 'technical glitch' that only affected the Compose screen of the app. In a blog post published on Monday, it reiterated the diagnosis and acknowledged how it may have looked to supporters of the movement.
TikTok, however, wasn’t the only platform to pass off a fuck-up as a glitch this week. 
Facebook also resorted to blaming a ‘technical error’ for deactivating the accounts of 60 high-profile Tunisian journalists and activists without warning. Facebook is huge in the north African country and was a vital communication tool during the 2011 revolution. Haythem El Mekki, a political commentator whose account was deactivated, told The Guardian: “It would be flattering to believe that we had been targeted, but I think it’s just as likely that an algorithm got out of control.”
There has been a worrying rise in these 'out of control' algorithms in recent months, driven by an industry-wide move towards more automated moderation, and with it a growing list of failures put down to so-called 'glitches'. For example:
  • Last week, YouTube blamed ‘an error in our enforcement systems’ for deleting comments containing certain Chinese-language phrases related to the country’s government.
  • In March, Facebook accidentally removed user posts including links from reputable news organisations because of an ‘issue with an automated system’.
  • Even in January, before COVID-19, Chinese leader Xi Jinping’s name appeared as ‘Mr Shithole’ on Facebook when translated from Burmese into English. Again, it was put down to a ’technical issue’.
It’s clearer than ever that platforms are using ‘technical error’ as a free pass when content moderation issues arise. It has become a way of sweeping issues that affect user speech under the carpet, of passing the blame to an anonymous engineer or product manager. The suggestion seems to be that if it’s a ‘technical error’, then the platforms can’t be blamed.
This is no longer good enough. With more automated systems being used to flag and deal with content that violates platform rules, the ‘technical glitch’ get-out doesn’t wash. Such ‘errors’ affect users’ speech in real time and have real-world implications. If we are going to have more auto-moderated content (and it doesn’t look like we have a choice in the matter), we also deserve better responses to breakdowns of those systems than ‘computer says no’.
Tech company typical response (via screengrab)
So let’s stop underestimating the effect that ‘display issues’ (TikTok’s words) have on people’s health and their trust in platforms. Let’s make PR teams give details about the cause of blackouts and takedowns, rather than just bland apologies. Let’s ensure engineering teams conduct investigations, the findings of which should be made public. And let’s pressure platforms like TikTok to ship only features that won’t affect users’ speech in the way this week’s glitch did.
✊🏽 EiM needs you
Can you spare 30 minutes for a video call? I’m hoping to chat with five EiM subscribers over the coming weeks about the newsletter and what it could do better. I can offer a $15 Amazon voucher, or I’ll donate $20 to an anti-racism charity of your choice. Reply to me if that’s you.
🇺🇸 The fallout of the Executive Order
Last week’s newsletter (EiM #66) was perhaps overly doom and gloom — we’re still here after all. To put that right, here are some good reads on Trump’s Executive Order, what it means and whether it will go anywhere:
  • A piece on the Lawfare blog explains that, even though the Order will not withstand judicial scrutiny, the mere act of producing it is enough to pressure companies into giving his content preferential treatment.
  • Over on The Quint, a strong case is made by Rahul Matthan to replace the Good Samaritan moderation protection with a Bad Samaritan prosecution option.
  • EFF continue their good work on Section 230 in the form of a series of essays, including this one on how the Order gets the Federal Trade Commission’s job all wrong.
  • Trump and Twitter continue to go head-to-head, this time over a copyright complaint about one of the images in a George Floyd tribute video posted from his account.
  • Over on The Verge, Facebook said it will re-examine its policies after staff staged a walkout over the platform’s laissez-faire attitude to Donald Trump’s remarks.
🕯 Not forgetting...
One of the co-chairs of Facebook’s Oversight Board was involved in a race speech controversy this week. Casey Newton of The Verge has tried to read the runes on what it might mean for the board.
The Oversight Board and the N-word | Revue
The rumour that US authorities jammed communications during this week’s protests is reportedly false and Twitter have removed the accounts that started it.
Twitter suspends hundreds of accounts over fake protest claims
Snapchat have stopped promoting Donald Trump’s account via its Discover tab after his remarks last week, even though his posts didn’t violate its community guidelines 🤔
Snapchat stops promoting Trump’s posts, saying they ‘incite racial violence and injustice’ | The Independent
On any other week, if the world wasn’t burning, I would have written at length about this crazy Australian legal judgement and what it means for publishers. 
Moderation but for TV? Roku found that a QAnon channel was live on its platform for two weeks after slipping through its review processes.
Roku removes dedicated QAnon channel that launched last month - The Verge
Everything in Moderation is a weekly newsletter about content moderation on the web and the policies, people and platforms that make it happen. It is written by journalist Ben Whitelaw and funded by loyal subscribers like you.
If you value the newsletter and want to help cover its costs, you can contribute here. Thanks in advance for your support.
