Oh hey, today is our 500th issue! Thanks to everyone who has been with us from the beginning, and each of you who has joined along the way. We can’t imagine a better use of our time during this weird era than bringing you news and analysis of the day’s big moments in tech, democracy, and the pandemic.
One result of the COVID-19 pandemic has been that big tech companies, which long have been reluctant to intervene in questions of content moderation, have quickly become much more aggressive. At Google, for example, the company began showing news stories from trusted sources to anyone who searched for information about the virus. It stepped up efforts to remove videos containing misinformation about the pandemic from YouTube.
YouTube also added a “shelf” of high-quality breaking news videos, along with links to the World Health Organization, Centers for Disease Control and Prevention, and local health authorities. As a result, YouTube says, news consumption is up 75 percent from this time last year, and links to the WHO and CDC have received more than 20 billion impressions.
YouTube will begin adding informational panels containing information from its network of fact-checkers to videos in the United States, the company said. The panels, which were introduced last year in Brazil and India, appear on searches for topics where fact-checkers have published relevant articles on the subject. The move comes at a time when platforms have seen a surge in misinformation related to COVID-19 and its origin, possible cures, and other subjects. […]
YouTube says “more than a dozen” US publishers are already participating in its network of fact-checkers, including FactCheck.org, PolitiFact, and The Washington Post Fact Checker. The network is open to any publisher that is a member of the International Fact-Checking Network (IFCN) and signs its code of principles. Google recently announced that it would donate $1 million to the IFCN.
On Monday afternoon, I spoke with YouTube’s head of product, Neal Mohan, about how the company is navigating several challenges related to the pandemic. We talked about evolving advice from public health organizations, scrubbing bad content from the platform, and the company’s increasing reliance on automated systems for moderation.
“When users are searching on YouTube around a specific claim, we want to give an opportunity for those fact checks to show up right then and there, when our users are looking for information — especially around fast-moving, quickly changing topics like COVID-19,” Mohan told me.
Highlights from our interview are below, edited lightly for clarity and length.
Casey Newton: You rolled out fact checks in Brazil and India last year. What did you learn?
Neal Mohan: We happened to roll it out in India right around the time of their elections last year. Those are the largest elections in the world, and as a result of the number of people that vote, the election itself occurs over the course of a month. So there was time for the potential spread of a lot of misinformation between one election date and another.
We have information panels that we triggered in the case of more evergreen conspiracies, like flat earth and anti-vaccine. But what about fast-moving, changing news events where there might not be a robust Wikipedia article or a CDC entry or an Encyclopedia Britannica article to link to? And so that’s why we leaned on this concept of actually bringing professional fact checkers into our YouTube search results and triggering them there.
And our experience both in India and Brazil was positive. We think that we did our job in terms of curtailing the spread of misinformation in an otherwise sort of pretty flammable environment. Most importantly from my standpoint, we felt that we did right by our users in terms of doing our best to try to prevent this happening in those countries. And that sort of positive result for our users led us to expand it here in the US. And our goal is not to just stop at these three countries — we want to continue to roll it out in other parts of the world as well.
One challenge of policing information about COVID-19 is that the disease itself is new, and the advice we get keeps evolving. In some cases, advice like “don’t wear masks” has changed to “everyone please wear masks.” How should a big tech platform approach that problem?
My perspective there is that we really do have to rely on sources — and in our case, that means channels — that have a track record of being relevant and credible in this space. Yes, lots of guidelines are changing, every single day, every single week. You’re literally seeing science being created on an hourly or daily basis. And so the reason why surfacing authoritative results feels like the best thing that we can do is because even if there’s a change, an authoritative source is going to give the context behind it.
So let’s say there’s a change in mask guidance. I would expect an authoritative news outlet, or a medical authority like the CDC, to give context on it and say, ‘this used to be our guidance, and our new guidance is this, and here’s the reason why.’ Or a news publication covering it says ‘CDC changes its guidelines: this is what they used to say, now they’re saying this, and this is the science that led them to change that.’ And by surfacing authoritative results, I think we’re doing what we can as a platform to deliver the most timely, but also the most credible information to our users.
I know YouTube has also been relying on more automated systems during the past couple months due to challenges with being able to bring third-party vendors into offices. How are you measuring the effect on your moderation decisions?
A lot of this was really very, very simple, which was protecting the health of our extended workforce. And for me, and I think for everybody else here at YouTube and Google, that was really the number one consideration, and frankly everything else we were going to do was going to be secondary.
You and I have talked before about the way that [content moderation] works best is through a combination of machines and machine learning, and the nuanced judgment of well trained raters who do this for a living. Without that second part, we’ve had to rely much more on handling things through appeals. Because there’s a lot of action taken by these machines, sometimes those appeals are impacted in terms of our response time. But generally speaking, we’ve been able to manage this.
Finally, we’re in a situation in which some of the people spreading misinformation about COVID-19 are elected officials. How is YouTube approaching that when it comes to moderation?
Just to be very clear, our community guidelines are based on the content. That applies to the content within the videos, and it also applies to comments and any other surface, if you will, on the YouTube platform. And so they’re not about the speaker. The policies apply equally, whether you or I say something, an elected official does, or a national leader does. This crisis is no different.
One of the enforcement examples that we gave around medical misinformation was explicitly encouraging somebody to flout state or national guidance around a stay-at-home order. And this happened in the case of the Brazilian president.
We removed a couple of videos that happened when there was an explicit call to flout those orders. Of course, you have to strike the right balance. If there are people who have different opinions or would like to express an opinion — in terms of economic trade-offs versus health trade-offs — then that discourse needs to be allowed and protected on our platform. But something that explicitly says, through false information, that stay-at-home doesn’t actually do anything, that would be an example of a policy violation, regardless of who the speaker is.