
Everything in Moderation
🆕 India is having a content moderation moment
Welcome to another Everything in Moderation, the weekly newsletter about online content moderation and the policies, people and platforms that make it happen. 
A special socially-distanced welcome to new subscribers from Allefonti, The Times, Seven League, and Sky EU. Hope you like your first EiM and, if you do, please tell friends, families and your mortal enemies. Word of mouth is everything.
This week, I take a look at a part of the world that is often neglected in our US-centric worldview of content moderation but presents both unique and difficult challenges.
Stay safe and thanks for reading,
Ben

🇮🇳 India on the cusp of change?
You can understand a lot about the state of content moderation in India at the moment by the reply ratio of TikTok India’s recent tweet and the ferocity of the responses to it. People aren’t happy.
The tweet came, as these prepared statements often do, on the back of a crisis. Let me take you through it.
On Saturday, a screenshot emerged of an email showing that Indian TikTok moderators had been told to “drop those content (sic) which is against Chinese government” including mentions of Dalai Lama and Tibet.  Even though the email was from February, Indian Twitter exploded with #BanTikTokIndia and #TikTokExposed and TikTok’s app was bombarded with 1-star reviews in protest.
On top of this, several other incidents made this feel like a significant moment for India and how it deals with online abuse:
  • The leader of the ruling party filed a public interest litigation in the Supreme Court seeking to create a mechanism for the government to check tweets for ‘seditious, instigative, separatist, hate-filled, divisive’ content and anything that is ‘against the society at large and against the spirit of the Union of India’. Wow.
  • Faizal Siddiqui, a popular Indian social media influencer with 13.4m followers, posted a video of a woman having liquid thrown in her face and suffering scarring (it was make-up). TikTok India banned him on the basis that the content 'risks safety of others, promotes physical harm or glorifies violence against women', and the head of India’s National Commission for Women publicly sought to have the app banned altogether.
These mini-dramas are notable because they come at a time when India is in the process of revising the Intermediary Liability rules of the Information Technology Act, which governs how social media companies monitor and take down content at the request of the government. This is not an easy endeavour: India's internet user base is second only to China's, and the diversity of its citizens makes creating country-wide policy very complex. This week’s collective outpouring, at least from afar, felt like Indian users were waking up to the possible impact of such legislation, both positive and otherwise.
At the same time, more moderators’ stories are also making their way out into the open, albeit anonymously. Money Control this week wrote about the experience of ‘Ramya’, an engineer who took a content moderator job at Cognizant for 8,700 rupees a year (around £93) before being laid off when the company left the content moderation market last year. Her quotes are telling.
 “It was horrible. We were treated worse than dogs, with low pay and more importantly no respect.”
Back in March, LiveMint also published an article about moderators being refused leave and barred from taking more than seven sick days a year. Ulcers from skipped meals, and financial stress because the bank can’t reach you to confirm your loan (phones are banned at most BPOs), are not uncommon.
Such demand for change, from users and moderators alike, in a country as large and as significant in the commercial moderation ecosystem as India is notable. We may look back on this week as the week when momentum started to really shift in favour of regulation.
🕸 Is Google Drive a social network?
Alex Stamos, formerly of Facebook and now at Stanford, makes a point that gets to the heart of every debate about online regulation: how do you slow the effect of one platform without affecting a whole load more?
Alex Stamos
I am very disturbed by the media assumption that every service that hosts user-generated content should be moderated like a social network.

There should be a sliding scale that weighs amplification/virality against privacy against harm.

https://t.co/7EbvXFYNL6
Elsewhere:
🐦 Mark Zuckerberg won’t accept an invite to join UK parliamentarians but he will give an interview to the BBC’s Simon Jack. It’s his first UK broadcast interview in 5 years, apparently. (Thanks Julie for the tip)
🐦 Graham Smith (aka @Cyberleagle) on why Ofcom is being set up to fail in its regulation of UK online harms.
🐦 Twitter’s new settings about who can reply to a tweet are bad for reply guys and good for others, as Evelyn Douek makes clear. More on this next week, I hope.
📯 Not forgetting...
An interesting, in-depth piece by academic Michael Kwet from Yale on federated social networks and their potential. A good read if you’re not fully up to speed on Twitter’s Bluesky project.
To fix social media, we need to introduce digital socialism (Al Jazeera)
Richard Allan, who worked for Facebook’s policy team until last year, notes that the company has a ‘cognitive diversity problem’ in this Reuters Institute interview. A welcome and rational voice in the debate on regulation.
Good detailed analysis of France’s new legislation designed to minimise the spread of harmful content online (mentioned in last week’s EiM).
Aspen Institute ran a Q&A with the four Oversight Board chairs earlier this week. I didn’t get round to watching it but you can watch it back here.
Truth and Trust Online (virtual, October 16-17) has launched a call for papers. Registration will open in July. 
Three news organisations express scepticism about Facebook’s Oversight Board, all for different reasons: The Guardian, Washington Post and, wait for it, Catholic News Agency.
Everything in Moderation is a weekly newsletter about content moderation on the web and the policies, people and platforms that make it happen. It is written by journalist Ben Whitelaw and funded by loyal subscribers like you.
If you value the newsletter and want to help cover its costs, you can contribute here. Thanks for your support.
Sign up at everythinginmoderation.co
