
How extremism took over YouTube

April 2 · Issue #308
The Interface
Thanks to everyone who wrote in to answer yesterday’s mystery: the University of Oregon class reading this newsletter is Fact or Fiction, Prof. Seth Lewis’ course “about making sense of information in the digital age.” Thank you for making The Interface required reading, professor, and if any one of your students misses more than two classes, please let me know and I will call them out by name in this space.
A system built to attract the maximum amount of user attention succeeds beyond all expectation, only to wind up promoting dangerous misinformation and hate speech around the world. It’s a story we have considered often in the context of Facebook, which has responded to the criticism with a promise to change the very nature of the company. And it’s a story we have not discussed nearly enough in the context of YouTube, which has promoted a similarly disturbing network of extremists and foreign propagandists and has tended to intervene cautiously and with underwhelming force.
Certainly YouTube has received its share of criticism since the broader reckoning over social networks began in 2016. Google CEO Sundar Pichai was compelled to answer questions about the video giant when he appeared before Congress last year. But we have generally had little insight into how YouTube makes high-level decisions about issues surrounding its algorithmic recommendations and its inadvertent cultivation of a new generation of extremists. What do employees say about the phenomenon of “bad virality” — YouTube’s unmatched ability to take a piece of misinformation or hate speech and, using its opaque recommendation system, find it the widest possible audience?
In a major new report for Bloomberg, Mark Bergen begins to give us some answers. Over nearly 4,000 words, he outlines how YouTube pursued users’ attention with single-minded zeal, quashed internal criticism, and even discouraged employees from searching for videos that violate its rules, for fear it would cause the company to lose its safe harbor protections under the Communications Decency Act. As late as 2017, YouTube CEO Susan Wojcicki was reportedly pushing a revamp of the company’s business model to pay creators based on how much attention they attracted — despite mounting internal evidence that these engagement-based metrics incentivize the production of videos designed to outrage people, raising the risk of real-world violence.
Bergen reports:
In response to criticism about prioritizing growth over safety, Facebook has proposed a dramatic shift in its core product. YouTube still has struggled to explain any new corporate vision to the public and investors – and sometimes, to its own staff. Five senior personnel who left YouTube and Google in the last two years privately cited the platform’s inability to tame extreme, disturbing videos as the reason for their departure. […]
YouTube’s inertia was illuminated again after a deadly measles outbreak drew public attention to vaccinations conspiracies on social media several weeks ago. New data from Moonshot CVE, a London-based firm that studies extremism, found that fewer than twenty YouTube channels that have spread these lies reached over 170 million viewers, many who were then recommended other videos laden with conspiracy theories.
Bergen’s story is, in a way, a mirror of the New York Times’ November story on how Facebook first ignored, then sought to minimize warning signs about the platform’s unintended consequences. Both pieces illustrate the ugly fashion in which our social networks have developed: Phase one is an all-out war to gain user attention and build an advertising business; phase two is a belated effort to clean up the many problems that come with global scale faster than new ones can arise.
Like Facebook, YouTube has begun to address some of the concerns raised by those departed employees. Most importantly, in January the company said it would stop recommending what it calls “borderline content” — videos that come close to violating its community guidelines, but stop just short. Last year, it also began adding links to relevant Wikipedia entries on some common hoaxes, such as videos declaring that the Earth is flat.
At South by Southwest, before announcing the Wikipedia feature, YouTube CEO Susan Wojcicki compared the service to a humble library — a neutral repository for much of the world’s knowledge. It is a definition that attempts to cast YouTube as a noble civic institution while misrepresenting its power — most libraries do not, after all, mail members a more radical version of the book they were just reading as soon as they finish the last one.
One extremist who has used the platform nimbly over the past several years is Tommy Robinson, a far-right activist who leads an anti-immigration party in the United Kingdom. Robinson’s anti-Islam posts were sufficiently noxious to get him banned last week from Instagram and Twitter. YouTube decided today to let him keep his account and his 390,000 subscribers, Mark DiStefano reports:
While YouTube is stopping short of an outright ban, the restrictions will mean Robinson’s new videos won’t have view counts, suggested videos, likes, or comments. There’ll be an “interstitial,” or black slate, that appears before each video warning people that it might not be appropriate for all audiences.
Robinson will also be prevented from livestreaming to his channel. 
These tools may remind you of Pinterest’s approach to anti-vaccine misinformation, which I wrote about in February. Robinson will get his freedom of speech — he can still upload videos — but will be denied what Aza Raskin has called “freedom of reach.” It’s an approach I generally favor. And yet I still shudder at another revelation from Bergen’s report — that an internal YouTube tool built by one dissident showed that far-right creators like Robinson have become a pillar of the community:
An employee decided to create a new YouTube “vertical,” a category that the company uses to group its mountain of video footage. This person gathered together videos under an imagined vertical for the “alt-right,” the political ensemble loosely tied to Trump. Based on engagement, the hypothetical alt-right category sat with music, sports and gaming as the most popular channels at YouTube, an attempt to show how critical these videos were to YouTube’s business.
And while some of YouTube’s initiatives to reduce the spread of extremism are in their early stages, there remains a worrying amount of it on the platform. Here’s Ben Makuch in Motherboard today:
But even in the face of those horrific terror attacks, YouTube continues to be a bastion of white nationalist militancy. Over the last few days, Motherboard has viewed white nationalist and neo-Nazi propaganda videos on the website that have either been undetected by YouTube, have been allowed to stay up by the platform, or have been newly uploaded.
When examples were specifically shown to YouTube by Motherboard, the company told us that it demonetized the videos, placed them behind a content warning, removed some features such as likes and comments, and removed them from recommendations—but ultimately decided to leave the videos online. The videos remain easily accessible via search.
Last month, writing about the difference between platform problems and internet problems, I noted that the ultimate answer we are groping for is how free the internet should be. The openness of YouTube has benefited a large and diverse group of creators, most of whom are innocuous. But reading today about Cole and Savannah LaBrant, internet-famous parents who tricked their 6-year-old daughter into believing they were giving away her puppy and filmed her reaction, it’s fair to ask why YouTube so often leads its creators to madness.
Extremism in all its forms is not a problem that YouTube can solve alone. What makes Bergen’s report so disturbing, though, is the way YouTube unwittingly promoted extremists until they had become one of its most powerful constituencies. In very real ways, extremism is a pillar of the platform, and unwinding the best of YouTube from its rotting heart promises to be as difficult as anything the company has ever done.

Democracy
As India Votes, False Posts and Hate Speech Flummox Facebook
What happens next in the housing discrimination case against Facebook?
Facebook’s new tools to block discriminatory ads will not apply outside the United States
Googlers protest AI advisory board member over anti-LGBT, anti-immigrant comments
Google staff condemn treatment of temp workers in 'historic' show of solidarity
Elsewhere
Inside Grindr, fears that China wanted to access user data via HIV research
Quibi Taps Tom Conrad, a Snap and Pandora Alum, as Chief Product Officer
Launches
WhatsApp launches fact-checking service in India ahead of elections
You’ve heard of fake news — how about fake gadgets? My colleague Ashley Carman has a great new series on YouTube, and in the first episode she writes about the wild world of knockoffs. Check it out:
A gadget maker’s worst nightmare...
Takes
Google’s constant product shutdowns are damaging its brand
And finally ...
Google begins shutting down its failed Google+ social network
Talk to me
Send me tips, comments, questions, and your YouTube fixes: casey@theverge.com.