But given the circumstances, the hilariously titled Indoctornation was arguably low-hanging fruit for trust and safety teams. What about the larger ecosystem around health-related misinformation? This week, a pair of reports offered us some things to think about on that front.
“Facebook’s Algorithm: A Major Threat to Public Health” was published on Wednesday by the watchdog nonprofit group Avaaz. Using public data from Facebook-owned CrowdTangle, Avaaz researchers attempted to determine the scope of health-related misinformation on Facebook. They found examples of misinformation that lacked labels from fact-checkers, documented how bad actors evade fact-checking by re-posting flagged content to different pages and in different languages, and explored the apparent connections between some of the biggest publishers of health misinformation on the platform.
There are some important caveats here. CrowdTangle only shares engagement data — how many likes, comments, and shares a post gets — forcing researchers to use engagement as a proxy for the number of views a post received. Avaaz assumed that every interaction translated into roughly 29.7 views, and it’s basically impossible to know how accurate that figure is. And Avaaz’s investigation spanned May 2019 to May 2020, meaning that some of the more recent steps Facebook has taken to crack down on health misinformation don’t factor into its analysis.
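To see how much weight that single figure carries, here is the arithmetic as a minimal sketch, assuming the estimate is a flat multiplication of interactions by the views-per-interaction ratio (Avaaz’s actual methodology may be more involved):

```python
# A minimal sketch of the engagement-to-views proxy described above.
# The flat 29.7 multiplier is Avaaz's figure; applying it uniformly to every
# post is an assumption for illustration, not Avaaz's exact methodology.

VIEWS_PER_INTERACTION = 29.7

def estimate_views(interactions: int) -> int:
    """Estimate views from CrowdTangle interactions (likes + comments + shares)."""
    return round(interactions * VIEWS_PER_INTERACTION)

# Example: a post with 50,000 interactions would be credited with ~1.5 million views.
print(estimate_views(50_000))  # 1485000
```

The fragility of the estimate is obvious from the sketch: every view count in the report scales linearly with that one multiplier.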
“We share Avaaz’s goal of limiting misinformation, but their findings don’t reflect the steps we’ve taken to keep it from spreading on our services,” Facebook told me today. “Thanks to our global network of fact-checkers, from April to June, we applied warning labels to 98 million pieces of COVID-19 misinformation and removed 7 million pieces of content that could lead to imminent harm.”
As the coronavirus pandemic has raged across the United States, misinformation about vaccines and other health topics has been viewed an estimated 3.8 billion times on Facebook — four times the views received by authoritative content from institutions such as the World Health Organization and the Centers for Disease Control and Prevention, according to Avaaz, a left-leaning global human rights group.
The group also found that Facebook pages promulgating misleading health information got even more traffic during the pandemic than at other times — reaching a one-year peak in April — despite Facebook’s policy of removing dangerous coronavirus-related misinformation and reducing the spread of other questionable health claims. In addition, the group found that articles identified as misleading by Facebook’s own network of independent third-party fact-checkers were inconsistently labeled: the vast majority of posts in Avaaz’s sample, 84 percent, carried no warning label.
Even if you assume the 3.8 billion figure is at least somewhat inflated, the raw CrowdTangle data showed that these posts generated a cumulative 91 million interactions across the 82 publishers tracked in the study. That suggests the problem described here is real, even if its scale remains somewhat uncertain.
Facebook has taken some notable steps to reduce COVID-related misinformation, starting with a COVID “information center” that prominently links to high-quality resources within the app. More than 2 billion people have seen the center, by Facebook’s count, and 660 million have opened it. And in April alone, the company put warning labels on 50 million posts that were rated as false by third-party fact-checkers.
This reflects Facebook’s favored approach to content moderation issues: whenever possible, fight bad speech with more speech. The benefit of this approach is that it allows the widest range of discussion, which can be particularly useful in times when the basic science around an issue is still poorly understood and constantly evolving. The downside, as the Avaaz report shows, is that it creates a platform that is effectively at war with itself: the News Feed algorithm relentlessly promotes irresistible clickbait about Bill Gates, vaccines, and hydroxychloroquine; the trust and safety team then dutifully counters it with bolded, underlined doses of reality.
If this approach were effective, you might expect skepticism around vaccines and other health issues to decline over time as people came to trust institutions more. But that doesn’t seem to be the case, and it’s hard to lay the problem entirely at the feet of platforms — our entire information sphere is now full of hucksters telling people to ignore their eyes and ears, sowing confusion across every social network, media publication, and cable news show.
Still, tech platforms probably have more tools to manage the spread of misinformation than they’re using today. In its report “Fighting Coronavirus Misinformation and Disinformation,” the Center for American Progress lays out a trio of suggestions for doing just that. They are:
Virality circuit breakers. Platforms should detect, label, suspend algorithmic amplification, and prioritize rapid review and fact-checking of trending coronavirus content that displays reliable misinformation markers, which can be drawn from the existing body of coronavirus mis/disinformation.
Scan-and-suggest features. Platforms should develop privacy-sensitive features to scan draft posts, detect drafts discussing the coronavirus, and suggest quality information to users or provide them cues about being thoughtful or aware of trending mis/disinformation prior to publication. (A rough sketch of this idea appears after the list.)
Subject matter context additions. Social media platforms should embed quality information and relevant fact checks around posts on coronavirus topics. Providing in-post context by default can help equip users with the information they need to interpret the post content for themselves.
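For what it’s worth, here is a rough sketch of what the second suggestion, scan-and-suggest, could look like in practice. The keyword matcher, function name, and suggested resources are illustrative assumptions on my part, not a description of any platform’s actual feature:

```python
import re

# Hypothetical keyword matcher for detecting coronavirus-related drafts.
CORONAVIRUS_TERMS = re.compile(
    r"\b(covid|coronavirus|vaccine|hydroxychloroquine)\b", re.IGNORECASE
)

# Illustrative pointers to the authoritative sources mentioned elsewhere in this piece.
QUALITY_SOURCES = [
    "World Health Organization COVID-19 resource page",
    "Centers for Disease Control and Prevention COVID-19 page",
]

def scan_and_suggest(draft_text: str) -> list:
    """If a draft post appears to discuss the coronavirus, return authoritative
    resources to surface to the author before they hit publish. Only the draft
    text itself is examined -- a nod to the report's privacy-sensitive framing."""
    if CORONAVIRUS_TERMS.search(draft_text):
        return QUALITY_SOURCES
    return []

# Example: a draft repeating a vaccine rumor would trigger the suggestion prompt.
print(scan_and_suggest("Heard the new vaccine changes your DNA -- anyone else?"))
```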
The first suggestion, the virality circuit breaker, is the one I really like. Facebook can already see which articles are going viral on the platform, but the company often removes them only after they have received tens of millions of views. This was evident recently when the mysterious group America’s Frontline Doctors managed to get 20 million views on Facebook in less than a day promoting an unproven COVID cure.
Facebook already shares information about the virality of new articles on the platform with its fact-checking partners, who use that data to help determine which articles to check first. The CAP report asks whether the company itself might want to set thresholds at which its own teams review fast-spreading content against its community standards. If a video on Facebook gets 5 million views in an hour, shouldn’t someone at Facebook take a look at it?
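To make that concrete, here is a minimal sketch of what a circuit breaker tied to such a threshold might look like. The threshold, data structures, and review queue are all hypothetical, not a description of Facebook’s systems:

```python
from dataclasses import dataclass, field

# Hypothetical threshold -- the CAP report leaves the exact numbers to platforms.
VIEWS_PER_HOUR_THRESHOLD = 5_000_000

@dataclass
class TrendingPost:
    post_id: str
    topic: str
    views_last_hour: int

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

def virality_circuit_breaker(post: TrendingPost, queue: ReviewQueue) -> bool:
    """Trip the breaker when coronavirus-related content spreads unusually fast:
    queue it for rapid human review, where a real system would also suspend
    further algorithmic amplification and apply a temporary label while
    fact-checkers weigh in. A sketch of the CAP idea, not Facebook's pilot."""
    if post.topic == "coronavirus" and post.views_last_hour >= VIEWS_PER_HOUR_THRESHOLD:
        queue.pending.append(post.post_id)
        return True
    return False

# Example: a video at 6.2 million views in the past hour trips the breaker.
queue = ReviewQueue()
tripped = virality_circuit_breaker(TrendingPost("abc123", "coronavirus", 6_200_000), queue)
print(tripped, queue.pending)  # True ['abc123']
```

The hard part, of course, isn’t the threshold check; it’s classifying topics at Facebook’s scale and staffing the review queue fast enough for the intervention to matter.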
The good news is that it seems Facebook agrees. The company told me late Thursday that it is piloting a new effort that resembles CAP’s suggestion, and plans to roll it out broadly soon.