New ideas for fighting COVID-19 misinformation

August 20 · Issue #555
The Interface
Earlier this week, I wrote about how platforms proved they can stop a piece of health-related misinformation in its tracks. Given sufficient attention to the problem, and a helpful announcement of a premiere date from the hucksters, Facebook, YouTube, and Twitter all succeeded in preventing a sequel to the hit hoax “Plandemic” from going viral.
But given the circumstances, the hilariously titled Indoctornation was arguably low-hanging fruit for trust and safety teams. What about the larger ecosystem around health-related misinformation? This week, a pair of reports offered us some things to think about on that front.
“Facebook’s Algorithm: A Major Threat to Public Health” was published on Wednesday by the watchdog nonprofit group Avaaz. Using public data from Facebook-owned CrowdTangle, Avaaz researchers attempted to determine the scope of health-related misinformation on Facebook. They found examples of misinformation that lacked labels from fact-checkers, documented how bad actors evade fact-checking by re-posting flagged misinformation to different pages and in different languages, and explored the apparent connections between some of the biggest publishers of health misinformation on the platform.
There are some important caveats here. CrowdTangle only shares data around engagement — how many likes, comments, and shares a post gets — forcing researchers to use that as a proxy for the number of views a post received. Avaaz assumed that every interaction correlated to 29.7 views, and it’s basically impossible to know how accurate that figure is. And Avaaz’s investigation spanned from May 2019 to May 2020, meaning that some of the more recent steps Facebook has taken to crack down on health misinformation don’t factor into their analysis.
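To make that methodology concrete, here’s a minimal sketch of the interactions-to-views estimation. The 29.7 multiplier is the figure Avaaz used; the per-post interaction counts below are entirely made up for illustration.

```python
# Sketch of the Avaaz-style estimation: CrowdTangle reports interactions
# (likes, comments, shares), not views, so views are estimated by scaling
# total interactions by an assumed multiplier. The 29.7 figure is Avaaz's
# assumption; the posts below are purely hypothetical.
INTERACTIONS_TO_VIEWS = 29.7

posts = [
    {"page": "example-health-page", "interactions": 120_000},
    {"page": "another-example-page", "interactions": 45_500},
]

total_interactions = sum(p["interactions"] for p in posts)
estimated_views = total_interactions * INTERACTIONS_TO_VIEWS

print(f"Interactions: {total_interactions:,}")
print(f"Estimated views: {estimated_views:,.0f}")
```

The whole analysis leans on that one multiplier, which is why the 3.8 billion figure below should be read as an order-of-magnitude estimate rather than a measurement.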
“We share Avaaz’s goal of limiting misinformation, but their findings don’t reflect the steps we’ve taken to keep it from spreading on our services,” Facebook told me today. “Thanks to our global network of fact-checkers, from April to June, we applied warning labels to 98 million pieces of COVID-19 misinformation and removed 7 million pieces of content that could lead to imminent harm.”
That said, I still found my eyebrows raised several times as I read through the report. Elizabeth Dwoskin summed up some of the key findings in the Washington Post:
As the coronavirus pandemic has raged across the United States, misinformation about vaccines and other health topics has been viewed an estimated 3.8 billion times on Facebook — four times more than authoritative content from institutions such as the World Health Organization and the Centers for Disease Control and Prevention, according to a study by the left-leaning global human rights group Avaaz.
The group also found that Facebook pages promulgating misleading health information got even more traffic during the pandemic than at other times — reaching a one-year peak in April — despite Facebook’s policy of removing dangerous coronavirus-related misinformation and reducing the spread of other questionable health claims. In addition, the group found, articles that had been identified as misleading by Facebook’s own network of independent third-party fact-checkers were inconsistently labeled, with the vast majority, 84 percent, of the posts in Avaaz’s sample not including a warning label from fact-checkers.
Even if you assume the 3.8 billion pageview number is at least somewhat inflated, the raw CrowdTangle data showed that these posts generated a cumulative 91 million interactions across 82 publishers tracked in the study. That suggests the problem described here is real, even if the scale remains somewhat uncertain.
Facebook has taken some notable steps to reduce COVID-related misinformation, starting with a COVID “information center” linking to high-quality resources prominently within the app. More than 2 billion people have seen the center, by Facebook’s count, and 660 million opened it. And in April alone, the company put warning labels on 50 million posts that were rated as false by third-party fact-checkers.
This reflects Facebook’s favored approach to content moderation issues: whenever possible, fight bad speech with more speech. The benefit of this approach is that it allows the widest range of discussion, which can be particularly useful in times when the basic science around an issue is still poorly understood and constantly evolving. The downside, as the Avaaz report shows, is that it creates a platform that is effectively at war with itself: the News Feed algorithm relentlessly promotes irresistible clickbait about Bill Gates, vaccines, and hydroxychloroquine; the trust and safety team then dutifully counters it with bolded, underlined doses of reality.
If this approach were effective, you might expect skepticism around vaccines and other health issues to decline over time as people came to trust institutions more. But that doesn’t particularly seem to be the case, and it’s hard to lay that problem at the feet of platforms — our entire information sphere is now full of hucksters telling people to ignore their eyes and ears, sowing confusion across every social network, media publication, and cable news show.
Still, tech platforms probably have more tools to manage the spread of misinformation than they’re using today. In its report “Fighting Coronavirus Misinformation and Disinformation,” the Center for American Progress lays out a trio of suggestions for doing just that. They are:
Virality circuit breakers. Platforms should detect, label, suspend algorithmic amplification, and prioritize rapid review and fact-checking of trending coronavirus content that displays reliable misinformation markers, which can be drawn from the existing body of coronavirus mis/disinformation.
Scan-and-suggest features. Platforms should develop privacy-sensitive features to scan draft posts, detect drafts discussing the coronavirus, and suggest quality information to users or provide them cues about being thoughtful or aware of trending mis/disinformation prior to publication.
Subject matter context additions. Social media platforms should embed quality information and relevant fact checks around posts on coronavirus topics. Providing in-post context by default can help equip users with the information they need to interpret the post content for themselves.
The second suggestion strikes me as a little too intrusive, even if it has significant comic potential. (“Hmmm, looks like you’re being stupid,” I imagine this mythical scan-and-suggest feature telling a racist uncle as he goes to share the latest piece of agitprop from FreedomEagle.biz.) As for the third, Facebook actually moved just last week to add a new contextual pop-up when people go to share articles about COVID-19.
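Jokes aside, the report doesn’t describe how such a scan would actually work. A privacy-preserving version could be as simple as an on-device keyword check that runs before a post is published; the keyword list, function name, and messaging below are entirely hypothetical.

```python
# Hypothetical sketch of a "scan-and-suggest" check: match a draft post
# against a small keyword list on the user's device and, if it appears to
# discuss the coronavirus, surface a prompt pointing to quality information
# before publication. The keywords and wording are illustrative only.
CORONAVIRUS_TERMS = {"covid", "covid-19", "coronavirus", "vaccine", "hydroxychloroquine"}

def suggest_quality_info(draft_text):
    """Return a prompt if the draft appears to discuss the coronavirus, else None."""
    words = {word.strip(".,!?\"'").lower() for word in draft_text.split()}
    if words & CORONAVIRUS_TERMS:
        return ("It looks like this post mentions COVID-19. "
                "You may want to check who.int or cdc.gov before sharing.")
    return None

print(suggest_quality_info("New miracle covid cure the doctors won't tell you about!"))
```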
The first one, though, I really like. Facebook can already see which articles are going viral on the platform, but the company often removes them only after they have received tens of millions of views. This was evident recently when the mysterious group America’s Frontline Doctors managed to get 20 million views on Facebook in less than a day promoting an unproven COVID cure.
Facebook already shares information about the virality of new articles on the platform with its fact-checking partners, who use that data to help determine which articles to check first. The CAP report asks whether the company itself might want to set thresholds at which its own teams evaluate the content for community standards. If a video on Facebook gets 5 million views in an hour, shouldn’t someone at Facebook take a look at it?
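Neither CAP nor Facebook has published an implementation, but the core of a virality circuit breaker is just a velocity threshold. Here’s a minimal sketch that uses the 5-million-views-an-hour figure above as an illustrative trigger; the data structures, names, and numbers are all invented for the example.

```python
# Minimal sketch of a "virality circuit breaker": once a piece of content
# crosses a views-per-hour threshold, pause algorithmic amplification and
# queue it for human review. The threshold, class, and review queue are
# hypothetical, echoing the 5-million-views-in-an-hour example above.
from dataclasses import dataclass

VIEWS_PER_HOUR_THRESHOLD = 5_000_000

@dataclass
class ContentItem:
    content_id: str
    views_last_hour: int
    amplification_paused: bool = False

def apply_circuit_breaker(item, review_queue):
    """Pause amplification and flag the item for review once it crosses the threshold."""
    if item.views_last_hour >= VIEWS_PER_HOUR_THRESHOLD and not item.amplification_paused:
        item.amplification_paused = True
        review_queue.append(item.content_id)

review_queue = []
video = ContentItem("video-123", views_last_hour=6_200_000)
apply_circuit_breaker(video, review_queue)
print(review_queue)  # ['video-123'] -> escalate to a trust and safety reviewer
```

The hard part, of course, isn’t the threshold check; it’s deciding what “pause amplification” means in practice and staffing the review queue fast enough to matter.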
The good news is that it seems Facebook agrees. The company told me late Thursday that it is piloting a new effort that resembles CAP’s suggestion, and plans to roll it out broadly soon.

The Ratio
Today in news that could affect public perception of the big tech platforms.
🔼 Trending up: Jack Dorsey donated $10 million to Boston University’s Center for Antiracist Research. The center was started six weeks ago by renowned scholar Ibram X. Kendi. (Boston University)
Governing
Kamala Harris’s close ties to the tech industry could mean a return to the friendly relationship between Silicon Valley and the White House enjoyed under President Obama. Harris’s family, friends and former staff members are part of the revolving door between government and the tech industry. Daisuke Wakabayashi, Stephanie Saul and Kenneth P. Vogel at The New York Times have the story:
Although vice presidents rarely set policy, as a former state attorney general Ms. Harris is expected to have a say in Mr. Biden’s political appointments at the Justice Department, including officials who oversee antitrust enforcement. She could also have a significant influence on tech policy in a Biden administration, since Mr. Biden has largely focused on other issues.
“This is good news” for tech companies, said Hal Singer, an economist who specializes in antitrust and a managing director at Econ One, a consulting firm. “They probably feel like they have one of their own and that at the margin this is going to help push back against any reform.”
Mark Zuckerberg testified at a Federal Trade Commission hearing this week as part of the agency’s antitrust investigation into Facebook. The FTC faced significant criticism for not interviewing Zuckerberg in the wake of the Cambridge Analytica scandal. (Leah Nylen and Betsy Woodruff Swan / Politico)
Trump is asking the Supreme Court to let him block critics on Twitter. The move would reverse a lower court ruling that found Trump’s Twitter account constitutes a public forum. Which it obviously is! (John Kruzel / The Hill)
Trump’s campaign ads at the top of YouTube’s homepage have garnered more than 40 million views. One ad falsely depicts moments of violence from largely peaceful protests earlier this year as evidence of a violent leftist mob. Great stuff all around. (Nick Corasaniti / The New York Times)
Inside President Trump’s relationship with Sean Hannity, his “shadow chief of staff.” While the two are unusually close, this article says: “Hannity would tell you, off-off-off the record, that Trump is a batshit crazy person.” I’d tell you that on the record! That’s the Interface promise. (Brian Stelter / Vanity Fair)
Trump isn’t the only prominent Republican to embrace QAnon. The conspiracy theory is becoming a political movement, with more than a dozen Republicans running for Congress who have signaled varying degrees of support. (Matthew Rosenberg and Maggie Haberman / The New York Times)
Former Trump adviser Steve Bannon, along with three other associates, was arrested and charged with fraud over a crowdfunded wall along the US-Mexico border. The “We Build the Wall” campaign collected over $25 million to construct the border wall. But prosecutors say the four men funneled hundreds of thousands of dollars toward personal expenses. If you can imagine. (Adi Robertson / The Verge)
European Union privacy regulators are clashing over how much to fine Twitter for its handling of a data breach disclosed in 2019. The dispute is one of the first major tests for the privacy law known as GDPR, and may point to future delays for cases involving Facebook and Google. (Sam Schechner / The Wall Street Journal)
Three-quarters of US adults believe social media platforms intentionally censor political views they disagree with. This belief is, unsurprisingly, especially common among Republicans. (Pew Research Center)
Correction: Yesterday we included a link reporting that Taiwan is planning to ban mainland Chinese streaming services iQiyi and Tencent Holdings. In reality, Taiwan plans to stop local sales of the services, but won’t block them from operating. (Reuters)
Industry
Reddit saw an 18 percent drop in hateful content after banning nearly 7,000 subreddits starting in June. The progress is part of a sea change at Reddit after the site introduced new policies that explicitly ban hate speech and promised to enforce them rigorously. Here’s James Vincent at The Verge:
As part of its new stance, Reddit says it will be studying the spread of hate speech on the site more closely. Prior to the “ban wave” that began in June, the company says approximately 40,000 potentially hateful pieces of content were shared each day, making up around 0.2 percent of all content. These posts, comments, and messages accumulated some 6.47 million views or around 0.16 percent of total daily views. (The view percentage is smaller because moderation bots remove some content before anyone sees it.) The company did not say how these figures had changed as a result of the ban wave.
The company also says that half (48 percent) of all hateful content on the site was targeting a person’s ethnicity or nationality. That was followed by their class or political affiliation (16 percent), their sexuality (12 percent), their gender (10 percent), or their religion (6 percent), while 1 percent of hate content targeted ability and 7 percent had an unclear target.
Moderators of the banned subreddit “r/The_Donald” are helping other subreddits that have been kicked off the platform for hateful content find new homes. These include subreddits dedicated to the QAnon conspiracy theory and linked to the founder of the Proud Boys. (Alex Kaplan / Media Matters)
Facebook’s plan to require a Facebook login for future Oculus headsets is angering the VR world. Critics have raised concerns about intrusive data collection, targeted advertising, and being forced to use a service that they hate. (Adi Robertson / The Verge)
Why has Facebook’s TikTok clone Reels failed to take off in its early days? Because most of the posts are coming from the platform’s verified users, who aren’t nearly as weird or creative, this piece argues. (Rebecca Jennings / Vox)
A small liberal arts college in Michigan is forcing students to download and install a contact-tracing app in order to return to campus in the fall. The app is designed to track students’ real-time locations around the clock, with no way to opt out, and it has also had known security flaws. (Zack Whittaker / TechCrunch)
Those good tweets
Wendy Molyneux
I very much apologize to everyone for whispering “I wish I could somehow work full time AND be a full time stay at home mom,” as I tossed one of my teeth into a haunted fountain during a lightning storm. This is not what I meant, lesson learned!
ceo of antifa
tshirts in facebook ads:

🆈🅴🆂
I’m a 𝓱𝓸𝓻𝓷𝔂 𝕆𝕃𝔻 𝕄𝔸ℕ
I was born in 𝐌𝐀𝐘
ᴵ ᵖᵃˢˢᵉᵈ ³⁴ ᵏⁱᵈⁿᵉʸ ˢᵗᵒⁿᵉˢ
I’m 𝖙𝖔𝖔 𝖔𝖑𝖉 to fight
𝓉𝑜𝑜 𝓈𝓁𝑜𝓌 𝓉𝑜 𝓇𝓊𝓃
𝗜’𝗹𝗹 𝗷𝘂𝘀𝘁 𝘀𝗵𝗼𝗼𝘁 𝘆𝗼𝘂
𝘼𝙉𝘿 𝘽𝙀 𝘿𝙊𝙉𝙀 𝙒𝙄𝙏𝙃 𝙄𝙏
𝕸𝖆𝖎𝖒𝖆 🧚
Too late i deleted your city off my weather app
Patrick
like 75% of viral tweets now are just stuff someone needs to tell a therapist
Talk to us
Send us tips, comments, questions, and virality circuit breakers: casey@theverge.com and zoe@theverge.com.