The short: #SecuretheTribe – a controversial Twitter Spaces conversation bordering on hate towards African & Caribbean immigrant groups topped Twitter trends last week. The big question was – does immigration benefit “Foundational Black Americans”? We aren’t here to discuss the content of the debate. But should Twitter have done something about this conversation?
What’s more: The xenophobic conversation, co-hosted by Tariq Nasheed, a “conspiracy buff” per the NYTimes, was dominated by harmful rhetoric, including intermittent breaks for calls to arms against immigrants (read: Africans). Central to the discussion was the false narrative that black immigrant populations decimate the chances of foundational black Americans succeeding in the U.S. Yikes.
Which raises the question: how does Twitter handle harmful content?
Big picture: Twitter has a hate speech problem. Company insiders raised concerns about how the audio platform would moderate content ahead of its hurried launch in May 2021, warning it would become a significant issue in the months ahead. Newsflash: Twitter had no plan.
According to a Washington Post report in December 2021, Spaces has become home to all manner of hateful content, from the Taliban to white nationalists and anti-vaxxers. Beyond the technical limitations that keep Twitter from listening to audio conversations and flagging harmful content automatically, the company also lacks human moderators to listen in on Spaces conversations.
In December, Twitter rolled out a new “symptoms-first” reporting process to make reporting harmful content more efficient. However, the feature is limited, and it is too early to tell whether it has been effective.
Defending Free Speech: Content moderation remains a significant point of debate between free speech activists and Big Tech companies. Hardline “free speech” proponents would argue that Twitter should not have censored the Tariq-hosted conversation, since both sides had a chance to get their points across. And, lest we forget, just earlier this week, Substack published its decision to resist demands for censorship, upholding the need to let “hard conversations” exist on social platforms.
Final thoughts: Despite several users reporting Tariq’s Space, Twitter kept the conversation going for 19+ hours, which suggests one of two things: either the platform’s reporting system is broken, or Twitter’s reviewers decided the conversation did not constitute a threat. I find either possibility troubling.
What do you think Twitter should have done? Respond to this email or tweet at me {@fatuogwuche} with your strong opinions. I can take it.