This is the very same video platform, don’t forget, that has been heavily criticised for allowing racist users and content to propagate (#EiM 61) and for lacking transparency about its moderation processes. Now, millions of users were seeing 0 views for hashtags relating to the George Floyd protests and, perhaps rightly, presuming the worst.
The reality was different. All hashtags, as TikTok pointed out, were at 0 as a result of a ‘technical glitch’ which only occurred in the Compose screen of the app. In a blog post published on Monday, it reiterated the diagnosis and acknowledged how it may have looked to supporters of the movement.
TikTok, however, wasn’t the only platform to pass off a fuck-up as a glitch this week.
Facebook also resorted to blaming a ‘technical error’ for deactivating the accounts of 60 high-profile Tunisian journalists and activists without warning. The platform is huge in the north African country and was a vital communication tool during the 2011 revolution. Haythem El Mekki, a political commentator whose account was deactivated, told The Guardian: “It would be flattering to believe that we had been targeted, but I think it’s just as likely that an algorithm got out of control.”
There has been a worrying rise in these ‘out of control’ algorithms in recent months, driven by an industry-wide move towards more automated moderation, and with it a rise in excuses put down to so-called ‘glitches’. For example:
- Last week, YouTube blamed ‘an error in our enforcement systems’ for deleting comments containing certain Chinese-language phrases related to China’s government.
- In March, Facebook accidentally removed user posts including links from reputable news organisations because of an ‘issue with an automated system’.
- Even in January, before COVID-19, Chinese leader Xi Jinping’s name appeared as ‘Mr Shithole’ on Facebook when translated from Burmese to English. Again, this was put down to a ‘technical issue’.
It’s clearer than ever that platforms are using ‘technical error’ as a free pass when content moderation issues arise. It has become a way of sweeping problems that affect user speech under the carpet, of passing the blame to an anonymous engineer or product manager. The suggestion seems to be that if it was a ‘technical error’, then the platform can’t be blamed.
This is no longer good enough. With more automated systems being used to flag and deal with content that violates platform rules, the ‘technical glitch’ get-out doesn’t wash. Such ‘errors’ are impacting users’ speech in real time and with real-world implications. If we are going to have more auto-moderated content (and it doesn’t look like we have a choice in the matter), we also deserve better responses to breakdowns of those systems than ‘computer says no’.