Where platforms go after Christchurch

March 20 · Issue #301
The Interface
Welcome new readers, particularly those who found me after Mike Isaac’s nice piece about the newsletter gold rush in the New York Times. I invite you to check out this guide to what we cover around here, and let me know what you think by replying to this email.
After last week’s horrific terrorist attack in New Zealand, early commentary focused on how the shootings at two Christchurch mosques seemed to be purpose-built for spreading on social media. “A mass shooting of, and for, the internet,” Kevin Roose called it in the New York Times:
The details that have emerged about the Christchurch shooting — at least 49 were killed in an attack on two mosques — are horrifying. But a surprising thing about it is how unmistakably online the violence was, and how aware the shooter on the videostream appears to have been about how his act would be viewed and interpreted by distinct internet subcultures.
In some ways, it felt like a first — an internet-native mass shooting, conceived and produced entirely within the irony-soaked discourse of modern extremism.
As Roose notes, the alleged killer promoted the attack on Twitter and 8chan, and broadcast it live on Facebook. Facebook took down the original video, but not before it could be copied and widely shared. Over the next 24 hours, it would be uploaded to Facebook another 1.5 million times — 1.2 million of which Facebook was able to block at the time of upload. The same thing was happening simultaneously on YouTube, but the company would not share any numbers that might describe the scale of its challenge.
The wide availability of videos of the attacks, both on and off the big tech platforms, has drawn widespread condemnation. On Tuesday, Rep. Bennie Thompson, chairman of the House Homeland Security Committee, called on tech companies to explain themselves at a briefing on March 27th:
“Studies have shown that mass killings inspire copycats — and you must do everything within your power to ensure that the notoriety garnered by a viral video on your platforms does not inspire the next act of violence,” Thompson wrote.
But at the same time the platforms come in for another stern lecture from Congress, others are calling for a deeper look at the bigotry that makes such terrorist attacks possible. Here’s Caroline Haskins in a piece titled The Christchurch Terror Attack Isn’t an ‘Internet’ Terror Attack:
Whitney Phillips, a professor of communications at Syracuse University, said that the ideas we choose to tolerate on the internet are a result of broad cultural forces, not just the actions of people in fringe corners of the internet. If the kind of attack we saw at Christchurch could be neatly blamed on a small white supremacist forum alone, it would be a far less difficult problem to solve. Sadly, the reality is much more complicated.
“The shifting of the Overton window is not the result of just a small group of extremists,” Phillips said. “The window gets shifted because of much broader cultural forces.”
The general theme is that the internet platforms don’t care about this stuff, and that they optimize for profits over the good of society. And, while that may have been an accurate description a decade ago, it has not been true in a long, long time. The problem, as we’ve been discussing here on Techdirt for a while, is that content moderation at scale is impossible to get right. It is not just “more difficult,” it is difficult in the sense that it will never be acceptable to the people who are complaining.
Part of that is because human beings are flawed. And some humans are awful people. And they will do awful things. But we don’t blame “radio” for Hitler (Godwin’d!) just because it was a tool the Nazis used. We recognize that, in every generation, there may be terrible people who do terrible things, using the technologies of the day.
Given the opposing views, how do we move ahead? In my view, the debate highlights a distinction that we make all too rarely in discussing these issues. There are platform problems, and there are internet problems. And we have to consider them separately if we’re going to move beyond the finger-pointing stage of post-disaster conflict.
Platform problems include the issues endemic to corporations that grow audiences of billions of users, apply a light layer of content moderation, and allow the most popular content to spread virally using algorithmic recommendations. Uploads of the attack that collect thousands of views before they can be removed are a platform problem. Rampant Islamophobia on Facebook is a platform problem. Incentives are a platform problem. Subreddits that let you watch people die were a platform problem, until Reddit axed them over the weekend.
Internet problems include the issues that stem from the existence of a free and open network connecting all of humanity together. The existence of forums that allow white supremacists to meet, recruit new believers, and coordinate terrorist attacks is an internet problem. The proliferation of free file-sharing sites that allow users to post copies of gruesome videos is an internet problem. The rush of some tabloids to publish their own clips of the shooting, or to analyze the alleged killer’s manifesto, is an internet problem.
Some problems, of course, are a little bit of both.
And in all cases, these “problems” have their upside. A free and open internet — and speech-friendly tech platforms — have been a boon to all sorts of causes, businesses, and artists. What really has Silicon Valley uneasy at the moment is the total uncertainty about how you address the bad that the internet does without crippling the good it does, too.
In the meantime, we are seeing a surge in far-right, white nationalist violence, and it increasingly resembles a coordinated terror campaign. Platforms, to their credit, have begun to treat it this way. The Global Internet Forum to Counter Terrorism, which includes Facebook, Microsoft, Twitter, and YouTube, acted over the weekend to share information about more than 800 distinct videos around the attack.
The forum formed in 2017 after platforms faced widespread criticism for failing to recognize how ISIS and other terrorist groups were using them to recruit new members. Platforms acted in concert to remove terrorist content, and the effort appears to have been successful. As Ryan Broderick and Ellie Hall wrote on Tuesday:
Google and Facebook have also invested heavily in AI-based programs that scan their platforms for ISIS activity. Google’s parent company created a program called the Redirect Method that uses AdWords and YouTube video content to target kids at risk of radicalization. Facebook said it used a combination of artificial intelligence and machine learning to remove more than 3 million pieces of ISIS and al-Qaeda propaganda in the third quarter of 2018.
These AI tools appear to be working. ISIS members and supporters’ pages and groups have almost been completely scrubbed from Facebook. Beheading videos are pulled down from YouTube within hours. The terror group’s formerly vast network of Twitter accounts have been almost completely erased. Even the slick propaganda videos, once broadcast on multiple platforms within minutes of publication, have been relegated to private groups on apps like Telegram and WhatsApp.
A similar approach is needed here. Not every problem related to the Christchurch shooting should be laid at the platforms’ feet. But nor can we throw up our hands and say well, that’s the internet for you. Platforms ought to fight Islamophobia with the same vigor that they fight Islamic extremism. Hatred kills, after all, no matter the form it takes.
We also shouldn’t ask the platforms to solve this problem alone. Fighting terrorism has not traditionally been the province of for-profit corporations, and for good reason. When terrorist groups are organizing in plain view on public web forums, governments are responsible for intervening. They won’t stop every attack, but we should ask that they put at least as much pressure on themselves as they’re putting on tech companies.

The Case for Investigating Facebook
Facebook Takes Steps to Prevent Bias in the Way It Shows Ads
Sen. Josh Hawley is making the conservative case against Facebook
Trump's campaign secret weapon: Facebook
Google hit with €1.5 billion antitrust fine by EU
Russia's Putin Signs Into Law Bills Banning 'Fake News,' Insults
Locating The Netherlands' Most Wanted Criminal By Scrutinising Instagram
Life After Facebook: The Untold Story Of Billionaire Eduardo Saverin’s Highly Networked Venture Firm
Can a Facebook Post Make Your Insurance Cost More?
Facebook says service hindered by lack of local news
Facebook’s Top Representative in China Leaves Firm ($)
Appeals Court Dismisses Freedom of Speech Claim Against Tech Giants
Kidfluencers' Rampant YouTube Marketing Creates Minefield for Google
Maricopa woman, sons, accused of abusing 7 adopted children featured in popular YouTube series
Devin Nunes sues Twitter for letting "Devin Nunes’ Mom" and "Devin Nunes’ Cow" insult him
Did Twitter Help Ground the Boeing 737 MAX?
After the porn ban, Tumblr users have ditched the platform as promised
China's new social media craze: Paying random people to shower you with over-the-top compliments
WhatsApp tests in-app reverse image searches to prevent the spread of hoaxes
Facebook is adding quoted replies to Messenger conversations
Oculus unveils the Rift S, a higher-resolution VR headset with built-in tracking
The Attack That Broke the Net’s Safety Net
Instagram just took advantage of Amazon’s biggest weakness
And finally ...
Facebook: ‘Identifying Hate Speech Is Difficult Because Some Posts Actually Make Pretty Interesting Points’
Talk to me
Send me tips, comments, questions, and your favorite internet problems: